AI-powered features
Applies to: Elastic Stack and Serverless
AI is a core part of the Elastic platform. It augments features and helps you analyze your data more effectively. This page lists the AI-powered capabilities and features available to you in each solution, and provides links to more detailed information about each of them.
To learn about enabling and disabling these features in your deployment, refer to Manage access to AI features.
For pricing information, refer to pricing.
- To use Elastic's AI-powered features, you need an appropriate license and feature tier. These vary by solution and feature. Refer to each feature's documentation to learn more.
- Most features require at least one working LLM connector. To learn about setting up large language model (LLM) connectors used by AI-powered features, refer to Enable large language model (LLM) access. Elastic Managed LLM is available by default if your license supports it.
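A connector can also be created programmatically through Kibana's connector API. The following is a minimal sketch, assuming an OpenAI-backed connector; the Kibana URL and keys are placeholders:

```python
import requests

KIBANA_URL = "https://my-deployment.kb.example.com"  # placeholder Kibana endpoint
HEADERS = {
    "Authorization": "ApiKey <your-elastic-api-key>",  # placeholder credentials
    "kbn-xsrf": "true",  # required by Kibana HTTP APIs
}

# Create an OpenAI connector; ".gen-ai" is the connector type for
# OpenAI-compatible LLM providers.
resp = requests.post(
    f"{KIBANA_URL}/api/actions/connector",
    headers=HEADERS,
    json={
        "name": "my-openai-connector",
        "connector_type_id": ".gen-ai",
        "config": {
            "apiProvider": "OpenAI",
            "apiUrl": "https://api.openai.com/v1/chat/completions",
        },
        "secrets": {"apiKey": "<your-openai-api-key>"},
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # connector ID that AI-powered features can use
```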
The following AI-powered features are available across the Elastic platform. These are core Elasticsearch capabilities that you can use regardless of your chosen solution or project type.
Elastic Inference enables you to use machine learning models to perform operations such as text embedding or reranking on your data.
To learn more, refer to:
- Elastic Inference Service (EIS): A managed service that runs inference without the need to deploy a model or manage infrastructure and resources.
- Elastic Managed LLM connector: This connector enables you to use built-in LLMs vetted for GenAI product features across the platform.
- The inference API: This general-purpose API enables you to perform inference operations using EIS, your own models, or third-party services.
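For example, the sketch below creates a text-embedding inference endpoint backed by a third-party service and then runs an inference operation against it. Connection details and keys are placeholders; EIS or a self-hosted model could back the endpoint instead:

```python
import requests

ES_URL = "https://my-deployment.es.example.com"  # placeholder Elasticsearch endpoint
HEADERS = {"Authorization": "ApiKey <your-api-key>"}  # placeholder credentials

# Create an inference endpoint backed by OpenAI embeddings.
resp = requests.put(
    f"{ES_URL}/_inference/text_embedding/my-embedding-endpoint",
    headers=HEADERS,
    json={
        "service": "openai",
        "service_settings": {
            "api_key": "<your-openai-api-key>",
            "model_id": "text-embedding-3-small",
        },
    },
)
resp.raise_for_status()

# Run a text-embedding operation against the endpoint.
resp = requests.post(
    f"{ES_URL}/_inference/text_embedding/my-embedding-endpoint",
    headers=HEADERS,
    json={"input": "Elasticsearch powers semantic search."},
)
print(resp.json())  # embedding vector(s) for the input text
```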
Natural Language Processing (NLP) enables you to analyze natural language data and make predictions. Elastic offers a range of built-in NLP models such as the Elastic-trained ELSER or Jina models. You can also deploy custom NLP models.
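For instance, ELSER can be deployed as a sparse-embedding endpoint through the same inference API. A minimal sketch, with placeholder connection details:

```python
import requests

ES_URL = "https://my-deployment.es.example.com"  # placeholder Elasticsearch endpoint
HEADERS = {"Authorization": "ApiKey <your-api-key>"}  # placeholder credentials

# Deploy the Elastic-trained ELSER model on ML nodes and expose it as a
# sparse-embedding inference endpoint.
resp = requests.put(
    f"{ES_URL}/_inference/sparse_embedding/my-elser-endpoint",
    headers=HEADERS,
    json={
        "service": "elasticsearch",
        "service_settings": {
            "model_id": ".elser_model_2",
            "num_allocations": 1,
            "num_threads": 1,
        },
    },
)
resp.raise_for_status()
```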
AI-powered search helps you find data based on intent and contextual meaning using vector search technology, which uses machine learning models to capture meaning in content.
Depending on your team's technical expertise and requirements, you can choose from two broad paths for implementing semantic search:
- For a minimal-configuration, managed workflow, use semantic_text (see the sketch after this list).
- For more control over the implementation details, implement dense or sparse vector search manually.
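As a sketch of the minimal-configuration path, the example below maps a semantic_text field, indexes a document, and queries it by meaning. Connection details are placeholders, and the default inference endpoint handles chunking and embedding:

```python
import requests

ES_URL = "https://my-deployment.es.example.com"  # placeholder Elasticsearch endpoint
HEADERS = {"Authorization": "ApiKey <your-api-key>"}  # placeholder credentials

# Map a field as semantic_text; chunking and embedding happen automatically.
requests.put(
    f"{ES_URL}/my-index",
    headers=HEADERS,
    json={"mappings": {"properties": {"content": {"type": "semantic_text"}}}},
).raise_for_status()

# Index a document; embeddings are generated at ingest time.
requests.post(
    f"{ES_URL}/my-index/_doc?refresh=true",
    headers=HEADERS,
    json={"content": "Our return policy allows refunds within 30 days."},
).raise_for_status()

# Query by meaning rather than by exact keywords.
resp = requests.post(
    f"{ES_URL}/my-index/_search",
    headers=HEADERS,
    json={"query": {"semantic": {"field": "content", "query": "can I get my money back?"}}},
)
print(resp.json()["hits"]["hits"])
```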
Hybrid search combines traditional full-text search with AI-powered search for more powerful search experiences that serve a wider range of user needs.
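One common hybrid pattern combines a lexical match query with a semantic query using reciprocal rank fusion (RRF). A sketch, reusing the semantic_text index from the previous example:

```python
import requests

ES_URL = "https://my-deployment.es.example.com"  # placeholder Elasticsearch endpoint
HEADERS = {"Authorization": "ApiKey <your-api-key>"}  # placeholder credentials

# Fuse lexical and semantic result lists with the rrf retriever.
resp = requests.post(
    f"{ES_URL}/my-index/_search",
    headers=HEADERS,
    json={
        "retriever": {
            "rrf": {
                "retrievers": [
                    {"standard": {"query": {"match": {"content": "refund policy"}}}},
                    {"standard": {"query": {"semantic": {"field": "content", "query": "refund policy"}}}},
                ]
            }
        }
    },
)
print(resp.json()["hits"]["hits"])
```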
Semantic re-ranking uses ML models to reorder search results based on their semantic similarity to the query, using models hosted in Elasticsearch or third-party inference endpoints.
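A sketch of semantic re-ranking with the text_similarity_reranker retriever; my-rerank-endpoint is a hypothetical rerank inference endpoint created beforehand through the inference API:

```python
import requests

ES_URL = "https://my-deployment.es.example.com"  # placeholder Elasticsearch endpoint
HEADERS = {"Authorization": "ApiKey <your-api-key>"}  # placeholder credentials

# Retrieve the top lexical hits, then reorder them by semantic similarity
# to the query text using a rerank model.
resp = requests.post(
    f"{ES_URL}/my-index/_search",
    headers=HEADERS,
    json={
        "retriever": {
            "text_similarity_reranker": {
                "retriever": {"standard": {"query": {"match": {"content": "refund policy"}}}},
                "field": "content",
                "inference_id": "my-rerank-endpoint",  # hypothetical endpoint
                "inference_text": "refund policy",
                "rank_window_size": 50,  # how many top hits to rerank
            }
        }
    },
)
print(resp.json()["hits"]["hits"])
```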
Learning To Rank is an advanced feature that uses trained ML models to build custom ranking functions for search. It is best suited for use cases with substantial training data and requirements for highly customized relevance tuning.
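A sketch of applying such a model at query time via rescoring; my-ltr-model is a placeholder for a Learning To Rank model trained and uploaded beforehand (for example, with Eland):

```python
import requests

ES_URL = "https://my-deployment.es.example.com"  # placeholder Elasticsearch endpoint
HEADERS = {"Authorization": "ApiKey <your-api-key>"}  # placeholder credentials

# Run a BM25 query, then rescore the top 100 hits with the LTR model.
resp = requests.post(
    f"{ES_URL}/my-index/_search",
    headers=HEADERS,
    json={
        "query": {"match": {"content": "refund policy"}},
        "rescore": {
            "window_size": 100,
            "learning_to_rank": {
                "model_id": "my-ltr-model",  # placeholder model ID
                "params": {"query_text": "refund policy"},
            },
        },
    },
)
print(resp.json()["hits"]["hits"])
```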
The Elasticsearch solution view (or project type in Serverless) includes certain AI-powered features beyond the core Elasticsearch capabilities available on the Elastic platform.
Agent Builder enables you to create AI agents that can interact with your Elasticsearch data, run queries, and provide intelligent responses. It provides a complete framework for building conversational AI experiences on top of your search infrastructure.
Elastic AI Assistant for Observability and Search helps you understand, analyze, and interact with your Elastic data throughout Kibana. It provides a chat interface where you can ask questions about the Elastic Stack and your data, and provides contextual insights throughout Kibana that explain errors and messages and suggest remediation steps.
Playground enables you to use large language models (LLMs) to understand, explore, and analyze your Elasticsearch data using retrieval augmented generation (RAG) via a chat interface. Playground is also useful for testing and debugging Elasticsearch queries that use the retrievers syntax with the _search endpoint.
The Model Context Protocol (MCP) lets you connect AI agents and assistants to your Elasticsearch data to enable natural language interactions with your indices.
Observability's AI-powered features all require an LLM connector. When you use one of these features, you can select any LLM connector that's configured in your environment. The connector you select for one feature does not affect which connector any other feature uses. For specific configuration instructions, refer to each feature's documentation.
Elastic AI Assistant for Observability and Search helps you understand, analyze, and interact with your Elastic data throughout Kibana. It provides a chat interface where you can ask questions about the Elastic Stack and your data, and provides contextual insights throughout Kibana that explain errors and messages and suggest remediation steps.
Streams is an AI-assisted centralized UI within Kibana that streamlines common tasks like extracting fields, setting data retention, and routing data. Streams leverages AI in the following features:
- Significant Events: Use AI to suggest queries based on your data that find important events in your stream.
- Grok processing: Use AI to generate grok patterns that extract meaningful fields from your data.
- Partitioning: Use AI to suggest logical groupings and child streams based on your data when using wired streams.
- Advanced settings: Use AI to generate a stream description and identify stream features that other AI features, like significant events, use when generating suggestions.
Elastic Security's AI-powered features all require an LLM connector. When you use one of these features, you can select any LLM connector that's configured in your environment. The connector you select for one feature does not affect which connector any other feature uses. For specific configuration instructions, refer to each feature's documentation.
Elastic AI Assistant for Security helps you with tasks such as alert investigation, incident response, and query generation throughout Elastic Security. It provides a chat interface where you can ask questions about the Elastic Stack and your data, and provides contextual insights that explain errors and messages and suggest remediation steps.
This feature requires an LLM connector.
Attack Discovery uses AI to triage your alerts and identify potential threats. Each "discovery" represents a potential attack and describes relationships among alerts to identify related users and hosts, map alerts to the MITRE ATT&CK matrix, and help identify threat actors.
This feature requires an LLM connector.
Automatic Migration uses AI to help you migrate Splunk assets to Elastic Security by translating them into the necessary format and adding them to your Elastic Security environment. It supports the following asset types:
- Splunk rules
- Splunk dashboards
This feature requires an LLM connector.
Automatic Import helps you ingest data from sources that do not have prebuilt Elastic integrations. It uses AI to parse a sample of the data you want to ingest, and creates a new integration specifically for that type of data.
This feature requires an LLM connector.
Automatic troubleshooting uses AI to help you identify and resolve issues that could prevent Elastic Defend from working as intended. It provides actionable insights into the following common problem areas:
- Policy responses: Detect warnings or failures in Elastic Defend’s integration policies.
- Third-party antivirus (AV) software: Identify installed third-party antivirus (AV) products that might conflict with Elastic Defend.
This feature requires an LLM connector.