Event-driven triggers
Event-driven triggers let workflows react to events elsewhere in Kibana. Two trigger families are available:
- `workflows.failed` — Fires when another workflow's execution fails.
- Cases triggers — Fire when cases change (created, updated, status changed, attachments added, comments added).
The event-driven trigger system is in technical preview, including the triggers documented on this page. The schema and semantics can change in future releases.
Fires when any workflow execution reaches the failed terminal state. Use this trigger to build handler workflows that react to failures in your production workflows, for example by paging on-call, opening a case, or logging to a dedicated index for observability.
| Parameter | Location | Type | Required | Description |
|---|---|---|---|---|
| `type` | top level | string | Yes | Must be `workflows.failed`. |
| `condition` | `on` | KQL string | No | Optional KQL predicate evaluated against the event payload. The trigger fires only when the condition matches. |
```yaml
triggers:
  - type: workflows.failed
```
Use `on.condition` to narrow which failed executions trigger the handler. The value is a KQL predicate evaluated against the event payload.
Fire only on failures from a specific workflow:
```yaml
triggers:
  - type: workflows.failed
    on:
      condition: "event.workflow.name : 'ops--rollback-deployment'"
```
Ignore failures that came from another error handler:
```yaml
triggers:
  - type: workflows.failed
    on:
      condition: "event.workflow.isErrorHandler : false"
```
Combine conditions with KQL's `and` to filter on multiple fields:
```yaml
triggers:
  - type: workflows.failed
    on:
      condition: "event.workflow.isErrorHandler : false and event.workflow.spaceId : 'production'"
```
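Conditions can also match on the error fields of the payload. As a sketch, fire only when the failure occurred in a specific step (`notify-slack` is a hypothetical step name):

```yaml
triggers:
  - type: workflows.failed
    on:
      # Fires only when the named step was the point of failure.
      condition: "event.error.stepName : 'notify-slack'"
```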
When a failed workflow triggers your handler, the handler runs with an event context that describes the failure. The payload groups fields under `event.workflow`, `event.execution`, and `event.error`, alongside the top-level `event.timestamp` and `event.spaceId`.
| Field | Contains |
|---|---|
| `event.spaceId` | The Kibana space where the failure occurred. |
| `event.timestamp` | ISO timestamp of when the event fired. |
| `event.workflow.id` | The failed workflow's ID. |
| `event.workflow.name` | The failed workflow's name. |
| `event.workflow.spaceId` | The Kibana space where the failed workflow ran. |
| `event.workflow.isErrorHandler` | `true` if the failed workflow was itself an error handler. Use this to prevent cascading handler loops. |
| `event.execution.id` | The failed execution's ID. |
| `event.execution.startedAt` | ISO timestamp of when the execution started. |
| `event.execution.failedAt` | ISO timestamp of when the execution failed. |
| `event.error.message` | The error message. |
| `event.error.stepId` | Identifier of the step where the failure occurred, when available. |
| `event.error.stepName` | Name of the step where the failure occurred, when available. |
| `event.error.stepExecutionId` | ID of the step execution where the failure occurred, when available. |
Reference these fields with Liquid templating inside the handler:
```yaml
- name: log_failure
  type: console
  with:
    message: |
      Workflow {{ event.workflow.name }} (id: {{ event.workflow.id }}) failed
      at step {{ event.error.stepName }}: {{ event.error.message }}
```
```yaml
name: handle-critical-workflow-failures
description: Page on-call and open a case whenever a critical workflow fails.
enabled: true
triggers:
  - type: workflows.failed
steps:
  - name: skip_if_handler
    type: if
    condition: "event.workflow.isErrorHandler : true"
    steps:
      - name: no_op
        type: console
        with:
          message: "Skipping: the failure came from another error handler."
  - name: page_oncall
    if: "not event.workflow.isErrorHandler"
    type: pagerduty.triggerIncident
    connector-id: "platform-pagerduty"
    with:
      dedup_key: "{{ event.workflow.id }}-{{ event.execution.id }}"
      summary: "Workflow {{ event.workflow.name }} failed"
      severity: "critical"
      details:
        failed_step: "{{ event.error.stepName }}"
        error: "{{ event.error.message }}"
        workflow_id: "{{ event.workflow.id }}"
        execution_id: "{{ event.execution.id }}"
  - name: open_case
    if: "not event.workflow.isErrorHandler"
    type: cases.createCase
    with:
      title: "[Auto] Workflow failure: {{ event.workflow.name }}"
      description: |
        Step `{{ event.error.stepName }}` failed.
        Error: `{{ event.error.message }}`
      severity: "high"
      tags: ["workflow-failure", "auto-triage"]
```
Cases triggers fire when cases change. Use them to react to case lifecycle events without polling the Cases API.
**Shared payload.** Every cases trigger event includes:

- `event.caseId` — The case ID, the alphanumeric identifier that is unique to each case.
- `event.owner` — The solution that owns the case. It can be `securitySolution` for Elastic Security cases, `observability` for Observability cases, or `cases` for Stack cases.
**Schema convention.** In each trigger's schema table below, the Location column indicates where each parameter sits in the trigger YAML: "top level" means the parameter sits alongside `type`; `on` means it sits inside the `on:` block, parallel to how `workflows.failed` uses `on:` for its condition.
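As a quick sketch of the convention, here are both locations annotated in one trigger definition:

```yaml
triggers:
  - type: cases.caseUpdated                      # "top level": sits alongside the other trigger keys
    on:
      condition: 'event.owner: "observability"'  # "on": sits inside the on block
```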
Use `event.owner` in `on.condition` to filter by solution. For example, a workflow that only fires for Elastic Security cases:
```yaml
triggers:
  - type: cases.caseCreated
    on:
      condition: 'event.owner: "securitySolution"'
```
Individual trigger sections below document any additional payload fields specific to that event.
Fires when a case is created.
| Parameter | Location | Type | Required | Description |
|---|---|---|---|---|
| `type` | top level | string | Yes | Must be `cases.caseCreated`. |
| `condition` | `on` | KQL string | No | Optional KQL predicate evaluated against the event payload. |
| Field | Contains |
|---|---|
| `event.caseId` | The new case's ID. |
| `event.owner` | The case owner (`securitySolution`, `observability`, or `cases`). |
Fire only for Elastic Security cases:
```yaml
triggers:
  - type: cases.caseCreated
    on:
      condition: 'event.owner: "securitySolution"'
```
Fires when a case is updated. The `event.updatedFields` array lists which fields changed.
This trigger also fires when a case's status changes; the dedicated `cases.caseStatusUpdated` trigger fires alongside it and carries the previous status for easier filtering. For bulk updates, `cases.caseUpdated` fires once per case.
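Because status changes also fire this trigger, you can use KQL's `not` to ignore them and handle only other kinds of updates. A sketch, assuming status changes are reported as `status` in `event.updatedFields`:

```yaml
triggers:
  - type: cases.caseUpdated
    on:
      # Skip events where the only interesting change is the status;
      # cases.caseStatusUpdated covers those separately.
      condition: 'not event.updatedFields: "status"'
```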
| Parameter | Location | Type | Required | Description |
|---|---|---|---|---|
| `type` | top level | string | Yes | Must be `cases.caseUpdated`. |
| `condition` | `on` | KQL string | No | Optional KQL predicate evaluated against the event payload. |
| Field | Contains |
|---|---|
| `event.caseId` | The updated case's ID. |
| `event.owner` | The case owner (`securitySolution`, `observability`, or `cases`). |
| `event.updatedFields` | Array of field names that changed in this update. |
Fire when an Elastic Security case's title changes:

```yaml
triggers:
  - type: cases.caseUpdated
    on:
      condition: 'event.owner: "securitySolution" and event.updatedFields: "title"'
```
Fires when a case's status changes.
| Parameter | Location | Type | Required | Description |
|---|---|---|---|---|
| `type` | top level | string | Yes | Must be `cases.caseStatusUpdated`. |
| `condition` | `on` | KQL string | No | Optional KQL predicate evaluated against the event payload. |
| Field | Contains |
|---|---|
| `event.caseId` | The case ID. |
| `event.owner` | The case owner (`securitySolution`, `observability`, or `cases`). |
| `event.previousStatus` | The previous status (`open`, `in-progress`, or `closed`). |
| `event.status` | The current status (`open`, `in-progress`, or `closed`). |
Fire when an Elastic Security case is closed:

```yaml
triggers:
  - type: cases.caseStatusUpdated
    on:
      condition: 'event.owner: "securitySolution" and event.status: "closed"'
```
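The previous status makes it easy to catch specific transitions. For example, a sketch that fires when a case is reopened:

```yaml
triggers:
  - type: cases.caseStatusUpdated
    on:
      # Matches only the closed -> open transition, not every status change.
      condition: 'event.previousStatus: "closed" and event.status: "open"'
```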
Fires when attachments are added to a case. If attachments of multiple types are added in one operation (for example, three alerts and two comments), the trigger fires once per type, with one event for each type.
Adding a comment fires both this trigger (with `event.attachmentType: "comment"`) and the dedicated `cases.commentsAdded` trigger. Both exist because users don't always think of comments as attachments.
| Parameter | Location | Type | Required | Description |
|---|---|---|---|---|
| `type` | top level | string | Yes | Must be `cases.attachmentsAdded`. |
| `condition` | `on` | KQL string | No | Optional KQL predicate evaluated against the event payload. |
| Field | Contains |
|---|---|
| `event.caseId` | The case ID. |
| `event.owner` | The case owner (`securitySolution`, `observability`, or `cases`). |
| `event.attachmentIds` | Array of attachment IDs added in this operation, all of `event.attachmentType`. |
| `event.attachmentType` | The type of attachments added, for example `"comment"` or `"alert"`. |
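Inside a handler, these fields are available through Liquid templating, just as with `workflows.failed`. A sketch that logs what was attached (the `size` filter is standard Liquid):

```yaml
steps:
  - name: log_attachments
    type: console
    with:
      message: >-
        {{ event.attachmentIds | size }} {{ event.attachmentType }}
        attachment(s) added to case {{ event.caseId }}
```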
Fire only for Elastic Security cases:
```yaml
triggers:
  - type: cases.attachmentsAdded
    on:
      condition: 'event.owner: "securitySolution"'
```
Fire only when a comment-type attachment is added:
```yaml
triggers:
  - type: cases.attachmentsAdded
    on:
      condition: 'event.attachmentType: "comment"'
```
Fires when comments are added to a case.
| Parameter | Location | Type | Required | Description |
|---|---|---|---|---|
| `type` | top level | string | Yes | Must be `cases.commentsAdded`. |
| `condition` | `on` | KQL string | No | Optional KQL predicate evaluated against the event payload. |
| Field | Contains |
|---|---|
| `event.caseId` | The case ID. |
| `event.owner` | The case owner (`securitySolution`, `observability`, or `cases`). |
| `event.commentIds` | Array of comment IDs added in this operation. |
Fire only for Elastic Security cases:
```yaml
triggers:
  - type: cases.commentsAdded
    on:
      condition: 'event.owner: "securitySolution"'
```
If a handler workflow itself fails, it can re-trigger itself. Two safeguards help you avoid infinite loops:
- Every event includes `event.workflow.isErrorHandler`, which is `true` when the failing workflow is itself a handler. Filter on this in your handler's logic to skip handling your own failures.
- The execution engine enforces a chain-depth limit on cascading event-driven triggers as a safety net.
In practice, keep handler workflows simpler than the workflows they monitor. A handler that only logs, opens a case, and notifies is less likely to fail than the automation it's handling.
- Triggers overview: All trigger types.
- Pass data and handle errors: Per-step `on-failure` strategies complement event-driven handlers.
- Cases steps: Open cases from your handler.