﻿---
title: Classify and route mixed items with AI
description: Build a workflow that classifies incoming items with ai.classify, routes each item down a different branch, and summarizes the result with ai.summarize.
url: https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/use-cases/ai-augmented-workflows/classify-and-route-alerts
products:
  - Elastic Cloud Enterprise
  - Elastic Cloud Hosted
  - Elastic Cloud Serverless
  - Elastic Cloud on Kubernetes
  - Elastic Stack
  - Kibana
applies_to:
  - Elastic Cloud Serverless: Generally available
  - Elastic Stack: Planned
---

# Classify and route mixed items with AI

This guide walks through building a workflow that takes a stream of mixed items (alerts, tickets, log entries) and routes each one down a different branch based on an AI classification. The workflow pairs the [`ai.classify`](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/ai-steps#ai-classify) step with [`foreach`](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/foreach) and [`if`](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/if) or [`switch`](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/switch), so each item gets exactly the handling it needs.

The workflow is adapted from [`ai-steps-demo.yaml`](https://github.com/elastic/workflows/blob/main/workflows/observability/ai-steps-demo.yaml) in the `elastic/workflows` repository.

If you're new to workflows, complete [Build your first workflow](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/get-started/build-your-first-workflow) first.

## Before you begin

- **Permissions.** `All` on **Analytics > Workflows**. Refer to [Kibana privileges](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges).
- **AI connector.** A configured LLM connector (Azure OpenAI, OpenAI, Anthropic, or Bedrock). Refer to [Connectors](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/deploy-manage/manage-connectors). Note the connector ID.
- **A set of items to classify.** For this walkthrough, the workflow generates sample items with `ai.prompt`. In production, you'd read items from an alert trigger (`event.alerts`), an Elasticsearch search, or an upstream workflow.


## How it works

The workflow runs manually during development and can be switched to an alert trigger once you're happy with the routing:

1. **Gather items.** For the demo, two `ai.prompt` steps produce a mix of sample observability and security alerts. In production, replace this with your real data source.
2. **Iterate with `foreach`.** Each item is processed independently.
3. **Classify with `ai.classify`.** The step returns the category (for example, `observability alert` or `security alert`) and an optional rationale.
4. **Route with `if` (or `switch`).** Each branch runs the right follow-up: severity classification for observability, malicious-or-not classification for security.
5. **Summarize with `ai.summarize`.** The summary is attached to the routed item.
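
Condensed into a skeleton, that flow looks like this (step bodies omitted here; the next section fills them in):

```yaml
triggers:
  - type: manual          # swap for an alert trigger in production

steps:
  - name: gather_observability_items   # ai.prompt (demo data source)
    type: ai.prompt
  - name: gather_security_items
    type: ai.prompt
  - name: route_each_item
    type: foreach
    steps:
      - name: identify_type            # ai.classify into broad categories
        type: ai.classify
      - name: summarize_item
        type: ai.summarize
      - name: handle_observability     # one branch per category
        type: if
      - name: handle_security
        type: if
```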


## Build the workflow

<stepper>
  <step title="Declare the AI connector as a constant">
    Hold the connector ID in a constant so you can swap environments without touching step bodies:
    ```yaml
    consts:
      llm_connector: "your-connector-id"

    triggers:
      - type: manual
    ```
  </step>

  <step title="Gather items to classify">
    For development, generate a mix of sample items with two `ai.prompt` calls. Each call uses a JSON schema so the output is strongly typed and iterable:
    ```yaml
    steps:
      - name: gather_observability_items
        type: ai.prompt
        connector-id: "{{ consts.llm_connector }}"
        with:
          prompt: "Generate two sample observability alerts."
          schema:
            items:
              type: object
              required: [id, severity, message]
              properties:
                id: { type: string }
                severity: { type: string, enum: [critical, high, medium, low] }
                message: { type: string }

      - name: gather_security_items
        type: ai.prompt
        connector-id: "{{ consts.llm_connector }}"
        with:
          prompt: "Generate three sample security alerts."
          schema:
            items:
              type: object
              required: [id, severity, category]
              properties:
                id: { type: string }
                severity: { type: string, enum: [critical, high, medium, low] }
                category: { type: string }
    ```
    In a production workflow, replace these two steps with a real data source. For example, read `event.alerts` from an alert trigger or run an `elasticsearch.search` step.
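    As a sketch of the production variant, a search-based source could replace both steps. The index pattern and query below are placeholders, and the exact `elasticsearch.search` parameters may differ; check the Elasticsearch step reference:
    ```yaml
    steps:
      - name: gather_items
        type: elasticsearch.search
        with:
          index: "alerts-*"          # placeholder index pattern
          query:
            range:
              "@timestamp":
                gte: "now-15m"       # only recent items
    ```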
  </step>

  <step title="Loop through the combined stream">
    Concatenate the two sample arrays and loop over the combined stream. Use `${{ ... }}` when passing arrays so they aren't stringified:
    ```yaml
      - name: route_each_item
        type: foreach
        foreach: "${{ steps.gather_observability_items.output.content | concat: steps.gather_security_items.output.content }}"
        steps:
          # Classification and branching steps go here. Use foreach.item.
    ```
  </step>

  <step title="Classify each item">
    Call `ai.classify` with the categories you want to route on. Set `includeRationale: true` during development so you can see why the model picked a category. Turn it off in production for lower token cost:
    ```yaml
          - name: identify_type
            type: ai.classify
            connector-id: "{{ consts.llm_connector }}"
            with:
              input: "${{ foreach.item }}"
              includeRationale: true
              categories:
                - "security alert"
                - "observability alert"
              fallbackCategory: "other"
    ```
    The category ends up at `steps.identify_type.output.category`.
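    With `includeRationale: true`, the step output looks roughly like this (illustrative values):
    ```yaml
    category: "observability alert"
    rationale: "The item has a severity field and a service health message."
    ```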
  </step>

  <step title="Branch on the classification">
    Use `if` steps for two branches, or `switch` for three or more. The following pattern uses `if` for clarity:
    ```yaml
          - name: handle_observability
            type: if
            condition: "steps.identify_type.output.category : 'observability alert'"
            steps:
              - name: classify_severity
                type: ai.classify
                connector-id: "{{ consts.llm_connector }}"
                with:
                  input: "${{ foreach.item }}"
                  categories: ["critical", "high", "medium", "low"]

              - name: store_observability_result
                type: data.set
                with:
                  type: "observability"
                  item: "${{ foreach.item }}"
                  severity: "${{ steps.classify_severity.output.category }}"

          - name: handle_security
            type: if
            condition: "steps.identify_type.output.category : 'security alert'"
            steps:
              - name: classify_safety
                type: ai.classify
                connector-id: "{{ consts.llm_connector }}"
                with:
                  input: "${{ foreach.item }}"
                  categories: ["malicious", "not malicious"]
                  fallbackCategory: "unknown"

              - name: store_security_result
                type: data.set
                with:
                  type: "security"
                  item: "${{ foreach.item }}"
                  status: "${{ steps.classify_safety.output.category }}"
    ```
    For more than two branches, use a [`switch` step](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/switch), which reads more cleanly than chained `if`/`else`.
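    A `switch` version of the same routing might look like the following sketch; treat the branch syntax as illustrative and confirm it against the `switch` step reference:
    ```yaml
          - name: route_by_category
            type: switch
            # Illustrative branch syntax; confirm against the switch step reference.
            steps:
              - name: observability_branch
                condition: "steps.identify_type.output.category : 'observability alert'"
                steps:
                  # observability follow-up steps
              - name: security_branch
                condition: "steps.identify_type.output.category : 'security alert'"
                steps:
                  # security follow-up steps
    ```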
  </step>

  <step title="Summarize the item">
    Add an `ai.summarize` call to produce a human-readable summary. Run it after classification so later steps can include both the category and the summary:
    ```yaml
          - name: summarize_item
            type: ai.summarize
            connector-id: "{{ consts.llm_connector }}"
            with:
              input: "${{ foreach.item }}"
    ```
    `steps.summarize_item.output.content` is the summary string.
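    Because the summary runs inside the same `foreach` iteration, later steps in either branch can attach it alongside the category, for example:
    ```yaml
              - name: store_observability_result
                type: data.set
                with:
                  type: "observability"
                  item: "${{ foreach.item }}"
                  severity: "${{ steps.classify_severity.output.category }}"
                  summary: "${{ steps.summarize_item.output.content }}"
    ```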
  </step>
</stepper>


## Complete workflow

<dropdown title="Full workflow YAML">
  ```yaml
  name: ai--classify-and-route
  description: Classify a stream of mixed items and route each one down the right branch.
  enabled: true
  tags: ["ai", "classify", "route"]

  consts:
    llm_connector: "your-connector-id"

  triggers:
    - type: manual

  steps:
    - name: gather_observability_items
      type: ai.prompt
      connector-id: "{{ consts.llm_connector }}"
      with:
        prompt: "Generate two sample observability alerts."
        schema:
          items:
            type: object
            required: [id, severity, message]
            properties:
              id: { type: string }
              severity: { type: string, enum: [critical, high, medium, low] }
              message: { type: string }

    - name: gather_security_items
      type: ai.prompt
      connector-id: "{{ consts.llm_connector }}"
      with:
        prompt: "Generate three sample security alerts."
        schema:
          items:
            type: object
            required: [id, severity, category]
            properties:
              id: { type: string }
              severity: { type: string, enum: [critical, high, medium, low] }
              category: { type: string }

    - name: route_each_item
      type: foreach
      foreach: "${{ steps.gather_observability_items.output.content | concat: steps.gather_security_items.output.content }}"
      steps:
        - name: identify_type
          type: ai.classify
          connector-id: "{{ consts.llm_connector }}"
          with:
            input: "${{ foreach.item }}"
            includeRationale: true
            categories:
              - "security alert"
              - "observability alert"
            fallbackCategory: "other"

        - name: summarize_item
          type: ai.summarize
          connector-id: "{{ consts.llm_connector }}"
          with:
            input: "${{ foreach.item }}"

        - name: handle_observability
          type: if
          condition: "steps.identify_type.output.category : 'observability alert'"
          steps:
            - name: classify_severity
              type: ai.classify
              connector-id: "{{ consts.llm_connector }}"
              with:
                input: "${{ foreach.item }}"
                categories: ["critical", "high", "medium", "low"]

            - name: store_observability_result
              type: data.set
              with:
                type: "observability"
                item: "${{ foreach.item }}"
                severity: "${{ steps.classify_severity.output.category }}"
                summary: "${{ steps.summarize_item.output.content }}"

        - name: handle_security
          type: if
          condition: "steps.identify_type.output.category : 'security alert'"
          steps:
            - name: classify_safety
              type: ai.classify
              connector-id: "{{ consts.llm_connector }}"
              with:
                input: "${{ foreach.item }}"
                categories: ["malicious", "not malicious"]
                fallbackCategory: "unknown"

            - name: store_security_result
              type: data.set
              with:
                type: "security"
                item: "${{ foreach.item }}"
                status: "${{ steps.classify_safety.output.category }}"
                summary: "${{ steps.summarize_item.output.content }}"
  ```
</dropdown>


## Extend this workflow

- **Trigger from real alerts.** Replace the `manual` trigger and the two `gather_*` steps with an [alert trigger](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/triggers/alert-triggers) and a `foreach` over `event.alerts`.
- **Use `switch` for many categories.** When you have three or more branches, replace the `if` pair with a [`switch` step](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/switch) for cleaner YAML.
- **Follow each branch with a real action.** Replace the `data.set` calls with `cases.createCase`, `http` (Slack, PagerDuty), or [composition](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/composition) calls that invoke a dedicated child workflow for each category.
- **Persist the enriched stream.** Write the classified items back to Elasticsearch with [`elasticsearch.request`](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/elasticsearch) for dashboarding.
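
For example, the alert-triggered variant of the first bullet could start like the following sketch (the trigger `type` value is an assumption; check the alert triggers reference):

```yaml
triggers:
  - type: alert

steps:
  - name: route_each_item
    type: foreach
    foreach: "${{ event.alerts }}"
    steps:
      # identify_type, branching, and summarize steps as above
```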


## Related pages

- [AI-augmented workflows](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/use-cases/ai-augmented-workflows): The outcome this workflow supports.
- [AI steps reference](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/ai-steps): Parameters for `ai.prompt`, `ai.classify`, `ai.summarize`, and `ai.agent`.
- [Flow control steps](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/6103/explore-analyze/workflows/steps/flow-control-steps): `foreach`, `if`, `switch`, and others.
- [`elastic/workflows` observability folder](https://github.com/elastic/workflows/tree/main/workflows/observability): More observability workflow examples.