---
title: Elastic AI Assistant for Observability and Search
description: The AI Assistant is an integration with a large language model (LLM) that helps you understand, analyze, and interact with your Elastic data. You can...
url: https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/observability/ai/observability-ai-assistant
products:
  - Elastic Observability
applies_to:
  - Elastic Cloud Serverless: Generally available
  - Elastic Stack: Generally available
---

# Elastic AI Assistant for Observability and Search
The AI Assistant is an integration with a large language model (LLM) that helps you understand, analyze, and interact with your Elastic data.
You can [interact with the AI Assistant](#obs-ai-interact) in two ways:
- **Contextual insights**: Embedded assistance throughout Elastic UIs that explains errors and messages with suggested remediation steps.
- **Chat interface**: A conversational experience where you can ask questions and receive answers about your data. The assistant uses function calling to request, analyze, and visualize information based on your needs.

The AI Assistant integrates with large language model (LLM) providers using [connectors](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/gen-ai-connectors). Refer to the [LLM performance matrix for Observability](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/observability/ai/llm-performance-matrix) to find performance information for various models and use cases.

## Use cases

The Elastic AI Assistant for Observability and Search helps you:
- **Decode error messages**: Interpret stack traces and error logs to pinpoint root causes.
- **Identify performance bottlenecks**: Find resource-intensive operations and slow queries in Elasticsearch.
- **Generate reports**: Create alert summaries and incident timelines with key metrics.
- **Build and execute queries**: Build Elasticsearch queries from natural language, convert Query DSL to ES|QL syntax, and execute queries directly from the chat interface.
- **Visualize data**: Create time-series charts and distribution graphs from your Elasticsearch data.


## Requirements

To set up or use the AI Assistant, you need the following:
- An appropriate [Elastic subscription](https://www.elastic.co/subscriptions)
- The `Observability AI Assistant: All` Kibana privilege
- An [LLM connector](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/ai-features/llm-guides/llm-connectors)
- (Optional) To use [knowledge base](#obs-ai-add-data):
  - A 4 GB machine learning node
  <note>
  In Elastic Cloud or Elastic Cloud Enterprise, if machine learning autoscaling is enabled, machine learning nodes start automatically when you use the knowledge base and AI Assistant. Therefore, using these features incurs additional costs.
  </note>
- If you want to use [content connectors](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/search-connectors) to add external data to the knowledge base: A self-deployed connector service


## Manage access to AI Assistant

<applies-to>
  - Elastic Cloud Serverless: Generally available
  - Elastic Stack: Generally available since 9.2
</applies-to>

The [**GenAI settings**](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/ai-features/manage-access-to-ai-assistant) page allows you to:
- Manage which AI connectors are available in your environment.
- Enable or disable AI Assistant and other AI-powered features in your environment.
- <applies-to>Elastic Stack: Generally available since 9.2</applies-to> <applies-to>Elastic Cloud Serverless: Unavailable</applies-to> Specify in which Elastic solutions the `AI Assistant for Observability and Search` and the `AI Assistant for Security` appear.


## Your data and the AI Assistant

It's important to understand how your data is handled when using the AI Assistant. Here are some key points:
<definitions>
  <definition term="Data usage by Elastic">
    Elastic does not use customer data for model training, but all data is processed by third-party AI providers.
  </definition>
  <definition term="Anonymization">
    Data sent to the AI Assistant is *not* anonymized, including alert data, configurations, queries, logs, and chat interactions. If you need to anonymize data, use the [anonymization pipeline](#obs-ai-anonymization).
  </definition>
  <definition term="Permission context">
    When the AI Assistant performs searches, it uses the same permissions as the current user.
  </definition>
  <definition term="Third-party processing">
    Any data submitted may be used by the provider for AI training or other purposes with no guarantee of security or confidentiality.
  </definition>
</definitions>

**Telemetry collection**: Your AI provider may collect telemetry during usage. Contact them for details on what data is collected.

## Set up the AI Assistant

The AI Assistant connects to one of these supported LLM providers:

| Provider                    | Configuration                                                                                                                                    | Authentication                                                                             |
|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|
| Preconfigured LLM (default) | No configuration needed                                                                                                                          | N/A                                                                                        |
| OpenAI                      | [Configure connector](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/openai-action-type)  | [Get API key](https://platform.openai.com/docs/api-reference)                              |
| Azure OpenAI                | [Configure connector](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/openai-action-type)  | [Get API key](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference) |
| Amazon Bedrock              | [Configure connector](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/bedrock-action-type) | [Get auth keys](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html)    |
| Google Gemini               | [Configure connector](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/gemini-action-type)  | [Get service account key](https://cloud.google.com/iam/docs/keys-list-get)                 |

**Setup steps**:
1. **Create authentication credentials** with your chosen provider using the links in the previous table.
2. **Create an LLM connector** for your chosen provider by going to the **Connectors** management page in the navigation menu or by using the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/find-and-organize/find-apps-and-objects).
3. **Authenticate the connection** by entering:
   - The provider's API endpoint URL.
   - Your authentication key or secret.
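If you prefer to script the setup, connectors can also be created through Kibana's connector API (`POST {KIBANA_URL}/api/actions/connector` with the `kbn-xsrf: true` header). The sketch below builds a request body for an OpenAI connector; the connector name, model, and API key are placeholders, and the exact `config` fields may vary by connector type and version:

```python
import json

# Request body for Kibana's connector API.
# POST {KIBANA_URL}/api/actions/connector  (header: kbn-xsrf: true)
# The endpoint URL, model, and API key below are illustrative placeholders.
payload = {
    "name": "my-openai-connector",
    "connector_type_id": ".gen-ai",  # OpenAI / Azure OpenAI connector type
    "config": {
        "apiProvider": "OpenAI",
        "apiUrl": "https://api.openai.com/v1/chat/completions",
        "defaultModel": "gpt-4o",
    },
    # Keep secrets out of source control; load from a vault or env var instead.
    "secrets": {"apiKey": "<your-api-key>"},
}

print(json.dumps(payload, indent=2))
```

The same body works with `curl` or any HTTP client authenticated against Kibana.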

<admonition title="Recommended models">
  While the Elastic AI Assistant for Observability and Search is compatible with many different models, refer to the [Large language model performance matrix](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/observability/ai/llm-performance-matrix) to select models that perform well with your desired use cases.
</admonition>


### Elastic Managed LLMs

[Elastic Managed LLMs](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/elastic-managed-llm) are available in Kibana deployments and serverless projects with an appropriate [subscription or feature tier](https://www.elastic.co/pricing). They provide immediate access to generative AI features without requiring any setup or external model integration.
Elastic Managed LLMs are available out of the box and do not require manual configuration or API key management. Alternatively, you can configure and use third-party LLM connectors, such as OpenAI, Azure, or Amazon Bedrock.
To learn more about security and data privacy, refer to [Elastic Managed LLMs](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/elastic-managed-llm), which provides details on the models.
<important>
  Using Elastic Managed LLMs incurs additional costs. Refer to [Elastic Cloud pricing](https://www.elastic.co/pricing/serverless-search) for more information.
</important>


### Connect to a custom local LLM

<applies-to>
  - Elastic Cloud Serverless: Generally available
  - Elastic Stack: Generally available since 9.2
</applies-to>

[Connect to LM Studio](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/ai-features/llm-guides/connect-to-lmstudio-observability) to use a custom LLM deployed and managed by you.

## Add data to the AI Assistant knowledge base

The AI Assistant uses one of the following text embedding models to run semantic search against the internal knowledge base index. The top results are passed to the LLM as context (retrieval‑augmented generation), producing more accurate and grounded responses:
- [ELSER](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/machine-learning/nlp/ml-nlp-elser): Recommended for English-only use cases.
- [E5](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/machine-learning/nlp/ml-nlp-e5): <applies-to>Elastic Stack: Generally available since 9.1</applies-to> Recommended for non-English use cases.

Adding data such as Runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.
Add data to the knowledge base with one or more of the following methods:
- [Use the knowledge base UI](#obs-ai-kb-ui) available on the [AI Assistant Settings](#obs-ai-settings) page.
- [Use content connectors](#obs-ai-search-connectors).

You can also add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.

### Use the knowledge base UI

To add external data to the knowledge base in Kibana:
1. To open AI Assistant settings, find `AI Assistants` in the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/find-and-organize/find-apps-and-objects).
2. Under **Elastic AI Assistant for Observability and Search**, click **Manage settings**.
3. Switch to the **Knowledge base** tab.
4. Click the **New entry** button, and choose either:
   - **Single entry**: Write content for a single entry in the UI.
   - **Bulk import**: Upload a newline-delimited JSON (`ndjson`) file containing a list of entries to add to the knowledge base. Each object should conform to the following format:

     ```json
     {
       "id": "a_unique_human_readable_id",
       "text": "Contents of item"
     }
     ```
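A bulk-import file is simply one such object per line. A minimal sketch for generating one (the entry IDs and text below are made-up examples):

```python
import json

# Hypothetical knowledge base entries; each becomes one line of the ndjson file.
entries = [
    {"id": "runbook-restart-service", "text": "To restart the service, run the restart playbook."},
    {"id": "oncall-escalation-policy", "text": "Escalate to the on-call SRE after 15 minutes."},
]

with open("kb_entries.ndjson", "w") as f:
    for entry in entries:
        # Newline-delimited JSON: one complete object per line, no enclosing array.
        f.write(json.dumps(entry) + "\n")
```

Upload the resulting `kb_entries.ndjson` file through the **Bulk import** dialog.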


### Use content connectors

[Content connectors](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/search-connectors) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.

#### Requirements and limitations

- For Elastic Stack 9.0.0+ or Serverless, connectors must be [self-managed](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/search-connectors/self-managed-connectors).
- Manage connectors through the Search Solution in Kibana (pre-9.0.0) or with the [Connector APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-connector).


#### Knowledge base data sources

By default, the AI Assistant queries all search connector indices. To customize which indices are used in the knowledge base, set the **Search connector index pattern** setting on the [AI Assistant Settings](#obs-ai-settings) page.
<note>
  You're not limited to search connector indices in the **Search connector index pattern** setting. You can specify any index pattern.
</note>


##### Space awareness

The **Search connector index pattern** setting is [space](https://www.elastic.co/elastic/docs-builder/docs/3028/deploy-manage/manage-spaces) aware. This means you can assign different values for different spaces. For example, a "Developers" space may include an index pattern like `github-*,jira*`, while an "HR" space may include an index pattern like `employees-*`.

##### Custom index field name requirements

Field names in custom indices have no specific requirements. Any `semantic_text` field is automatically queried. Documents matching the index pattern are sent to the LLM in full, including all fields. It's not currently possible to include or exclude specific fields.
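Because any `semantic_text` field in a matching index is queried automatically, a custom index only needs a mapping along these lines (the field names here are illustrative, not required):

```json
{
  "mappings": {
    "properties": {
      "title": { "type": "text" },
      "content": { "type": "semantic_text" }
    }
  }
}
```

Keep in mind that matching documents are sent to the LLM in full, so avoid storing fields in these indices that you don't want forwarded.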

#### Setup process

1. **Create a connector**
   **Use the UI**:
   - Navigate to `Content / Connectors` in the global search field.
   - Create a connector for your data source (example: [GitHub connector](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/search-connectors/es-connectors-github)).
   - If your Space lacks the Search solution, either create the connector from a different space or change your space **Solution view** to `Classic`.
   **Use the API**:
   - Create a connector using the [Connector APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-connector)
2. **Create embeddings** (choose one method):
   - [`semantic_text` field](#obs-ai-search-connectors-semantic-text): Recommended workflow which handles model setup automatically. Allows the use of any available ML model (ELSER, E5, or custom models).
   - [ML pipeline](#obs-ai-search-connectors-ml-embeddings): Requires manual setup of the ELSER model and inference pipeline.


#### Option 1: Use a `semantic_text` field type to create embeddings (recommended)

To create the embeddings needed by the AI Assistant using a [`semantic_text`](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/elasticsearch/mapping-reference/semantic-text) field type:
1. Open the previously created connector, and select the **Mappings** tab.
2. Select **Add field**.
3. Under **Field type**, select **Semantic text**.
4. Under **Reference field**, select the field you want to use for model inference.
5. Under **Select an inference endpoint**, select the model you want to use to add the embeddings to the data.
6. Add the field to your mapping by selecting **Add field**.
7. Sync the data by selecting **Full Content** from the **Sync** menu.

The AI Assistant will now query the connector you’ve set up using the model you’ve selected. Check that the AI Assistant is using the index by asking it something related to the indexed data.

#### Option 2: Use machine learning pipelines to create embeddings

This is a more complex method that requires you to set up the ELSER model and inference pipeline manually.
To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field) using an **ML Inference Pipeline**:
1. Open the previously created content connector in **Content / Connectors**, and select the **Pipelines** tab.
2. Select **Copy and customize** under `Unlock your custom pipelines`.
3. Select **Add Inference Pipeline** under `Machine Learning Inference Pipelines`.
4. Select the **ELSER (Elastic Learned Sparse EncodeR)** ML model to add the necessary embeddings to the data.
5. Select the fields that need to be evaluated as part of the inference pipeline.
6. Test and save the inference pipeline and the overall pipeline.

After creating the pipeline, complete the following steps:
1. Sync the data.
   Once the pipeline is set up, perform a **Full Content Sync** of the connector. The inference pipeline will process the data as follows:
   - As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a [sparse vector field](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/query-languages/query-dsl/query-dsl-sparse-vector-query)) are added to capture semantic meaning and context of the data.
   - When you look at the ingested documents, you can see the embeddings are added to the `predicted_value` field in the documents.
2. Check that the AI Assistant can use the index (optional).
   Ask the AI Assistant something related to the indexed data.


### Add user-specific system prompts

User-specific prompts customize how the AI assistant responds by appending personalized instructions to built-in system prompts. For example, you could specify "Always respond in French", and all subsequent responses will be in French.
A user-specific prompt only applies to the user that sets it.
To edit the **User-specific System Prompt**:
1. Go to the **Elastic AI Assistant for Observability and Search** management page. You can find it in the **Management** menu or by using the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/find-and-organize/find-apps-and-objects).
2. Switch to the **Knowledge base** tab.
3. Select **Edit User-specific Prompt**.


#### User-specific prompt example

User-specific prompts are useful when configuring specific workflows. For example, if you want the assistant to respond in a consistent, readable format when asked about Kubernetes metadata, you might add the following **user-specific system prompt**:
```
<kubernetes_info>
If asked about a Kubernetes pod, namespace, cluster, location, or owner, return the info in this format.  Use the field names to find the relevant information requested.  Don't mention the field names, just the results.
- Pod: agent.name
- Namespace: data_stream.namespace
- Cluster Name: orchestrator.cluster.name
- Owner: cloud.account.id
</kubernetes_info>
```


### Choose the Knowledge Base language model

<applies-to>
  - Elastic Stack: Generally available since 9.1
</applies-to>

Choose the default text embedding model for the AI Assistant knowledge base in the AI Assistant settings under **Set text embeddings model**.
- [ELSER](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/machine-learning/nlp/ml-nlp-elser): recommended for English-only use cases.
- [E5](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/machine-learning/nlp/ml-nlp-e5): supports multilingual use cases.

Select the model and click **Update**.
When switching models, all existing Knowledge Base entries must be reindexed. Entries will be unavailable until reindexing is complete.
To have the AI Assistant respond in a language other than English, set a [user-specific prompt](#obs-ai-assistant-user-prompt).

## Interact with the AI Assistant

<important>
  The AI Assistant uses large language models (LLMs) which are probabilistic and liable to provide incomplete or incorrect information. Elastic supports LLM configuration and connectivity but is not responsible for response accuracy. Always verify important information before implementing suggested changes.
</important>

Chat with the AI Assistant or interact with contextual insights located throughout the UI. Check the following sections for more on interacting with the AI Assistant.
<tip>
  After every answer the LLM provides, let us know if the answer was helpful. Your feedback helps us improve the AI Assistant!
</tip>


### Chat with the assistant

Select the **AI Assistant** icon (![AI Assistant icon](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-ai-assistant-icon.png)) at the upper-right corner of the Serverless or Kibana UI to start the chat.
This opens the AI Assistant flyout, where you can ask the assistant questions about your instance:
![Observability AI assistant chat](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-obs-ai-chat.png)

<important>
  Asking questions about your data requires `function calling`, which enables LLMs to reliably interact with third-party generative AI providers to perform searches or run advanced functions using customer data. When the Elastic AI Assistant for Observability and Search performs searches in the cluster, the queries are run with the same level of permissions as the user.
</important>


#### Suggest functions

<applies-to>
  - Elastic Cloud Serverless: Preview
  - Elastic Stack: Preview
</applies-to>

The AI Assistant uses functions to include relevant context in the chat conversation through text, data, and visual components. Both you and the AI Assistant can suggest functions. You can also edit the AI Assistant’s function suggestions and inspect function responses.
Main functions:
<definitions>
  <definition term="alerts">
    Get alerts for Observability.
  </definition>
  <definition term="changes">
    Get change points like spikes and dips for logs and metrics data.
  </definition>
  <definition term="elasticsearch">
    Call Elasticsearch APIs on your behalf.
  </definition>
  <definition term="execute_connector">
    Call a Kibana connector on your behalf.
  </definition>
  <definition term="get_alerts_dataset_info">
    Get information about alerts data within a specified time range.
  </definition>
  <definition term="get_data_on_screen">
    Get the structured data of content currently visible on the user's screen. Use this function to provide more accurate and context-aware responses to your questions.
  </definition>
  <definition term="get_dataset_info">
    Get information about available indices and datasets and their fields.
  </definition>
  <definition term="kibana">
    Call Kibana APIs on your behalf.
  </definition>
</definitions>

<important applies-to="Self-managed Elastic deployments: Generally available">
  For self‑managed deployments, you must configure [`server.publicBaseUrl`](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/configuration-reference/general-settings#server-publicbaseurl) in your `kibana.yml` to use the `kibana` function.
</important>
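For example, in `kibana.yml` (the hostname below is a placeholder for the URL your users use to reach Kibana):

```yaml
# kibana.yml — publicly reachable Kibana URL, with no trailing slash
server.publicBaseUrl: "https://kibana.example.com:5601"
```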

<definitions>
  <definition term="query">
    Generate, execute, and visualize queries based on your request.
  </definition>
  <definition term="retrieve_elastic_doc">
    Get relevant Elastic documentation. This function is only available if the product documentation is installed.
  </definition>
  <definition term="summarize">
    Store information and facts in the knowledge base for future use. This function is only available if the [knowledge base](#obs-ai-add-data) has already been installed.
  </definition>
</definitions>

Additional functions are available when your cluster has APM data:
<definitions>
  <definition term="get_apm_dataset_info">
    Get information about APM data.
  </definition>
  <definition term="get_apm_downstream_dependencies">
    Get the downstream dependencies (services or uninstrumented backends) for a service. Map the downstream dependency name to a service by returning both `span.destination.service.resource` and `service.name`. Use this to drill down further if needed.
  </definition>
  <definition term="get_apm_services_list">
    Get the list of monitored services, their health statuses, and alerts.
  </definition>
</definitions>


#### Share conversations

Conversations that you start with AI Assistant are private by default and not visible to other users in the space. Sharing conversations lets everyone in the space see what you've already asked or learned, making it easier to collaborate.
To change the visibility of a conversation:
1. Select the **Private** / **Shared** badge next to the conversation's title.
2. Use the dropdown menu to define the conversation's visibility.

![AI Assistant chat with the sharing status dropdown open](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/obs-ai-assistant-shared-conversations.png)

After sharing a conversation, you can copy its URL and send the link to your team:
1. Open an AI Assistant chat.
2. Open the **Conversation actions** (`boxes_vertical`) menu.
3. Select **Copy URL**.

When someone shares a chat with you, you can view it, but you can't edit or continue the conversation. To continue the conversation where your colleague left off, duplicate the conversation.
To duplicate a conversation:
1. Open an AI Assistant chat.
2. Open the **Conversation actions** (`boxes_vertical`) menu.
3. Select **Duplicate**.


#### Archive conversations

The owner of a conversation can archive it by selecting **Archive** from the **Conversation actions** (`boxes_vertical`) menu. Once archived, a conversation can't be continued or edited unless it is unarchived. Unarchive a conversation by selecting **Unarchive** from the **Conversation actions** (`boxes_vertical`) menu.

#### Export conversations for support cases

When you open a support case or report an issue, include conversation data from Elastic AI Assistant for Observability and Search so Elastic can review the full context.
To export conversation data:
1. Open an AI Assistant chat.
2. Open the **Conversation actions** (`boxes_vertical`) menu. Conversation actions become available only after the first successful interaction with the AI Assistant.
3. If **Export as JSON** is available, select it and save the downloaded file.
4. If **Export as JSON** is not available, use your browser's developer tools:
   1. Open the **Network** tab.
   2. Refresh the chat or navigate to the conversation.
   3. Find the request to the Observability AI Assistant conversations API (for example, a request path that includes `api/observability_ai_assistant/conversations/`).
   4. Copy the JSON response body and save it as a `.json` file.

<note>
  If something fails or the expected API call is missing, record the full HAR file by following the steps in [Generating a browser HAR file for Kibana troubleshooting](https://www.elastic.co/blog/generating-browser-har-file-kibana-troubleshooting).
</note>

The exported JSON file can include:
- Conversation messages
- Timestamps and user metadata
- Function calls and function responses
- Conversation-level metadata

<tip>
  Include the exported JSON file when you open support cases or report issues to help the support team understand what happened in your conversation.
</tip>


### Use contextual prompts

AI Assistant contextual prompts throughout Observability provide the following information:
- **Universal Profiling**: explains the most expensive libraries and functions in your fleet and provides optimization suggestions.
- **Application performance monitoring (APM)**: explains APM errors and provides remediation suggestions.
- **Infrastructure Observability**: explains the processes running on a host.
- **Logs**: explains log messages and generates search patterns to find similar issues.
- **Alerting**: provides possible causes and remediation suggestions for log rate changes.

For example, in the log details, you’ll see prompts for **What’s this message?** and **How do I find similar log messages?**:
![Observability AI assistant logs prompts](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-obs-ai-logs-prompts.png)

Clicking a prompt generates a message specific to that log entry:
![Observability AI assistant example](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-obs-ai-logs.gif)

Continue a conversation from a contextual prompt by clicking **Start chat** to open the AI Assistant chat.

### Add the AI Assistant connector to alerting workflows

Use the [Observability AI Assistant connector](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/obs-ai-assistant-action-type) to add AI-generated insights and custom actions to your alerting workflows as follows:
1. Navigate to **Observability / Alerts** to [create (or edit) an alerting rule](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/observability/incident-management/create-manage-rules) that uses the AI Assistant connector. Specify the conditions that must be met for the alert to fire.
2. Under **Actions**, select the **Observability AI Assistant** connector type.
3. In the **Connector** list, select the AI connector you created when you set up the assistant.
4. In the **Message** field, specify the message to send to the assistant:
   ![Add an Observability AI assistant action while creating a rule in the Observability UI](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-obs-ai-assistant-action-high-cpu.png)

You can ask the assistant to generate a report of the alert that fired, recall any information or potential resolutions of past occurrences stored in the knowledge base, provide troubleshooting guidance and resolution steps, and also include other active alerts that may be related. As a last step, you can ask the assistant to trigger an action, such as sending the report (or any other message) to a Slack webhook.
<note>
  Currently only Slack, email, Jira, PagerDuty, or webhook actions are supported. Additional actions will be added in the future.
</note>

When the alert fires, contextual details about the event—such as when the alert fired, the service or host impacted, and the threshold breached—are sent to the AI Assistant, along with the message provided during configuration. The AI Assistant runs the tasks requested in the message and creates a conversation you can use to chat with the assistant:
![AI Assistant conversation created in response to an alert](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-obs-ai-assistant-output.png)

It might take a minute or two for the AI Assistant to process the message and create the conversation.
Overly broad prompts may result in the request exceeding token limits. For more information, refer to [Token limits](#obs-ai-token-limits). Also, attempting to analyze several alerts in a single connector execution may cause you to exceed the function call limit. If this happens, modify the message specified in the connector configuration to avoid exceeding limits.
When asked to send a message to another connector, such as Slack, the AI Assistant attempts to include a link to the generated conversation.
<tip>
  The `server.publicBaseUrl` setting must be correctly specified under Kibana settings, or the AI Assistant is unable to generate this link.
</tip>

![Message sent by Slack by the AI Assistant includes a link to the conversation](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-obs-ai-assistant-slack-message.png)

The Elastic AI Assistant for Observability and Search connector is called both when the alert fires and when it recovers.
To learn more about alerting, actions, and connectors, refer to [Alerting](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/observability/incident-management/alerting).

## AI Assistant Settings

To access the AI Assistant Settings page, you can:
- Find `AI Assistants` in the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/find-and-organize/find-apps-and-objects).
- Use the **More actions** menu inside the AI Assistant window.

The AI Assistant Settings page contains the following tabs:
- **Settings**: Configures the main AI Assistant settings, which are explained directly within the interface.
- **Knowledge base**: Manages [knowledge base entries](#obs-ai-kb-ui).
- **Content connectors**: Provides a link to the Kibana **Search** → **Content** → **Connectors** UI for connector configuration.


### Add Elastic documentation

You can make the official Elastic documentation available to the AI Assistant, which significantly improves its ability to accurately answer questions about the Elastic Stack and Elastic products.
Enable this feature from the **Settings** tab in AI Assistant Settings by using the **Install Elastic Documentation** action.
<important>
  For air-gapped environments or environments where outbound traffic must go through an HTTP proxy, installing product documentation requires special configuration. Refer to the [Knowledge base artifact settings for AI Assistants](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/configuration-reference/ai-assistant-settings) for `kibana.yml` settings and proxy options. To mirror the artifact repository to your own infrastructure (S3-compatible bucket, CDN, or local path), refer to [Host a knowledge base artifact repo for AI Assistant](https://www.elastic.co/elastic/docs-builder/docs/3028/explore-analyze/ai-features/ai-chat-experiences/ai-assistant-host-doc-artifacts).
</important>


## Anonymization

<applies-to>
  - Elastic Cloud Serverless: Preview
  - Elastic Stack: Preview since 9.1
</applies-to>

Anonymization masks personally identifiable or otherwise sensitive information before chat messages leave Kibana for a third-party LLM.
Enabled rules substitute deterministic tokens (for example `EMAIL_ee4587…`) so the model can keep context without ever seeing the real value.
When all rules are disabled (the default), data is forwarded unchanged.

### How it works

When an anonymization rule is enabled in the [AI Assistant settings](#obs-ai-settings), every message in the request (system prompt, message content, function call arguments/responses) is run through an *anonymization pipeline* before it leaves Kibana:
1. Each enabled **rule** scans the text and replaces any match with a deterministic token such as
   `EMAIL_ee4587b4ba681e38996a1b716facbf375786bff7`.
   The prefix (`EMAIL`, `PER`, `LOC`, …) is the *entity class*; the suffix is a deterministic hash of the original value.
2. The fully masked conversation is sent to the LLM.
3. After the LLM responds, the original values are restored so the user sees deanonymized text and any persisted conversation history stores the original content. Deanonymization information is stored with the conversation messages to enable the UI to highlight anonymized content.
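
The token format can be illustrated with a minimal sketch. This is not Elastic's implementation: the function names and the use of SHA-1 are assumptions for illustration only, chosen because the token suffix in the example above is 40 hex characters.

```python
import hashlib
import re

def token_for(entity_class: str, value: str) -> str:
    # Deterministic: the same original value always maps to the same token,
    # so the LLM can refer to one entity consistently across the conversation.
    digest = hashlib.sha1(value.encode("utf-8")).hexdigest()
    return f"{entity_class}_{digest}"

def anonymize(text: str, pattern: str, entity_class: str):
    """Replace every regex match with its token; return masked text and the mapping."""
    mapping = {}
    def repl(match):
        token = token_for(entity_class, match.group(0))
        mapping[token] = match.group(0)
        return token
    return re.sub(pattern, repl, text), mapping

def deanonymize(text: str, mapping: dict) -> str:
    # Restore original values after the LLM responds.
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = anonymize(
    "Contact alice@example.com about the incident.",
    r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}",
    "EMAIL",
)
print(masked)                         # the address is replaced with an EMAIL_<hash> token
print(deanonymize(masked, mapping))   # the original text is restored
```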


### Rule types

**RegExp**: Runs a JavaScript-style regular expression. Use it for fixed patterns such as email addresses or host names.
```jsonc
{
  "type": "RegExp",
  "pattern": "([a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,})",
  "entityClass": "EMAIL",
  "enabled": true
}
```

**NER**: Runs a named entity recognition (NER) model on free text.
```jsonc
{
  "type": "NER",
  "modelId": "elastic__distilbert-base-uncased-finetuned-conll03-english",
  "allowedEntityClasses": ["PER", "ORG", "LOC"],
  "enabled": true
}
```

Rules are evaluated top-to-bottom with `RegExp` rules processed first, then `NER` rules; the first rule that captures a given entity wins. Rules can be configured in the [AI Assistant Settings](#obs-ai-settings) page.
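
First-match-wins ordering can be sketched as follows. This is a simplification: a second regex stands in for an NER rule, and the token format is shortened for readability.

```python
import re

# Rules in evaluation order: RegExp rules first, then NER rules.
# A value captured by an earlier rule is tokenized before later rules run,
# so the first rule to capture a given entity wins.
rules = [
    ("EMAIL", r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}"),
    # Stand-in for an NER rule that would also flag person-like names:
    ("PER", r"\balice\b"),
]

text = "alice@example.com pinged alice"
for entity_class, pattern in rules:
    text = re.sub(pattern, f"{entity_class}_TOKEN", text)

# The "alice" inside the email address was already consumed by the EMAIL rule,
# so the PER rule only matches the standalone occurrence.
print(text)  # EMAIL_TOKEN pinged PER_TOKEN
```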

### Example

The following example uses a `RegExp` rule to mask GKE hostnames; the screenshot below shows the anonymized content highlighted in the chat window:
```jsonc
{
  "entityClass": "GKE_HOST",
  "type": "RegExp",
  "pattern": "(gke-[a-zA-Z0-9-]+-[a-f0-9]{8}-[a-zA-Z0-9]+)",
  "enabled": true
}
```
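
To sanity-check what such a rule would capture, you can test the pattern directly. The node name and log line below are hypothetical examples, not real data:

```python
import re

# The rule's pattern, verbatim.
pattern = r"(gke-[a-zA-Z0-9-]+-[a-f0-9]{8}-[a-zA-Z0-9]+)"

# A hypothetical log line containing a GKE-style node name.
log_line = "OOMKilled on node gke-prod-pool-1-2c6fa31e-x7kq at 14:02"

match = re.search(pattern, log_line)
print(match.group(1))  # gke-prod-pool-1-2c6fa31e-x7kq
```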

![AI Assistant chat showing hostname anonymization in action](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/images/observability-obs-ai-assistant-anonymization.png)


### Requirements

Anonymization requires the following:
- **Advanced Settings privilege**: Necessary to edit the configuration and enable rules.
  Once saved, *all* users in the same **Space** benefit from the anonymization (the setting is [space-aware](https://www.elastic.co/elastic/docs-builder/docs/3028/deploy-manage/manage-spaces)).
- **ML privilege and resources**: If you enable a rule of type NER, you must first [deploy and start a named-entity-recognition model](/elastic/docs-builder/docs/3028/explore-analyze/machine-learning/nlp/ml-nlp-ner-example#ex-ner-deploy) and have sufficient ML capacity.

<important>
  The anonymization pipeline has only been validated with Elastic’s English model
  [elastic/distilbert-base-uncased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-uncased-finetuned-conll03-english).
  Results for other languages or models may vary.
</important>


### Limitations

Anonymization has the following limitations:
- **Non-string fields**: <applies-to>Elastic Stack: Generally available since 9.1</applies-to> Anonymization only applies to string values. Booleans, numbers, image types, and other non-string values are ignored.
- **Performance (NER)**: Running an NER model can add latency depending on the request. To improve performance of the model, consider scaling up your ML nodes by adjusting deployment parameters: increase `number_of_allocations` for better throughput and `threads_per_allocation` for faster individual requests. For details, refer to [start trained model deployment API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-trained-model-deployment).
- **Structured JSON**: The NER model we validated (`elastic/distilbert-base-uncased-finetuned-conll03-english`) is trained on natural English text and often misses entities inside JSON or other structured data. If thorough masking is required, prefer regex rules and craft them to account for JSON syntax.
- **False negatives / positives**: No model or pattern is perfect. Model accuracy may vary depending on model and input.
- **JSON malformation risk** <applies-to>Elastic Cloud Serverless: Removed, Elastic Stack: Removed in 9.1</applies-to>: Both NER inference and regex rules can potentially create malformed JSON when anonymizing JSON data such as function responses. This can occur when a replacement crosses character boundaries, breaking the JSON structure and causing the whole request to fail. If this occurs, you may need to adjust your regex pattern or disable the NER rule.
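
As a sketch, the deployment parameters mentioned above are passed as query parameters when starting the model; the allocation and thread values here are illustrative, so choose them based on your ML node capacity:

```console
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?number_of_allocations=2&threads_per_allocation=4
```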


## Known issues

For troubleshooting steps to capture AI Assistant conversation data for support, refer to [Export conversations for support cases](#obs-ai-export-conversations).

### Token limits

Most LLMs have a set number of tokens they can manage in a single conversation. When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error in Kibana. The exact number of tokens that the LLM can support depends on the LLM provider and model you’re using. If you use an OpenAI connector, monitor token utilization in the **OpenAI Token Usage** dashboard. For more information, refer to the [OpenAI Connector documentation](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/openai-action-type#openai-connector-token-dashboard).

## AI Assistant for Security

The capabilities and ways to interact with AI Assistant can differ for each solution. For more information about how AI Assistant works in Elastic Security, refer to [AI Assistant for Security](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/security/ai/ai-assistant).