Semantic search

Applies to: Elastic Stack, Serverless

Note

This page focuses on the semantic search workflows available in Elasticsearch. For detailed information about lower-level vector search implementations, refer to vector search.

Elasticsearch provides various semantic search capabilities using natural language processing (NLP) and vector search.

Learn more about use cases for AI-powered search on the overview page.

You have several options for using NLP models for semantic search in the Elastic Stack:

  • Option 1: Use the semantic_text workflow (recommended)
  • Option 2: Use the inference API workflow
  • Option 3: Deploy models directly in Elasticsearch

This diagram summarizes the relative complexity of each workflow:

[Diagram: Overview of semantic search workflows in Elasticsearch]

The simplest way to use NLP models in the Elastic Stack is through the semantic_text workflow. We recommend using this approach because it abstracts away a lot of manual work. All you need to do is create an inference endpoint and an index mapping to start ingesting, embedding, and querying data. There is no need to define model-related settings and parameters, or to create inference ingest pipelines. Refer to the Create an inference endpoint API documentation for a list of supported services.

For an end-to-end tutorial, refer to Semantic search with semantic_text.
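To illustrate, here is a minimal sketch of the semantic_text workflow. The endpoint ID (my-elser-endpoint), index name (my-index), and field names are hypothetical placeholders, and ELSER is used only as one example of a supported service:

```
# Create an inference endpoint backed by the ELSER service
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  }
}

# Map a semantic_text field that uses the endpoint
PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      }
    }
  }
}

# Index a document; embeddings are generated automatically at ingest time
POST my-index/_doc
{
  "content": "Semantic search understands meaning, not just keywords."
}

# Query the field with a semantic query
GET my-index/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "how does meaning-based search work?"
    }
  }
}
```

Note that no ingest pipeline is defined anywhere in this sketch; the semantic_text field type handles embedding generation for you.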

The inference API workflow is more complex but offers greater control over the inference endpoint configuration. You need to create an inference endpoint, provide various model-related settings and parameters, define an index mapping, and set up an inference ingest pipeline with the appropriate settings.

For an end-to-end tutorial, refer to Semantic search with the inference API.
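As a rough sketch of the extra steps this workflow involves, assuming the same hypothetical ELSER endpoint as above, an explicit sparse_vector field, and a hypothetical pipeline name:

```
# Map the text field and an explicit vector field yourself
PUT my-index
{
  "mappings": {
    "properties": {
      "content": { "type": "text" },
      "content_embedding": { "type": "sparse_vector" }
    }
  }
}

# Create an ingest pipeline that calls the inference endpoint at index time
PUT _ingest/pipeline/my-inference-pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "my-elser-endpoint",
        "input_output": [
          {
            "input_field": "content",
            "output_field": "content_embedding"
          }
        ]
      }
    }
  ]
}

# Index documents through the pipeline
POST my-index/_doc?pipeline=my-inference-pipeline
{
  "content": "Semantic search understands meaning, not just keywords."
}

# Search the vector field, embedding the query text with the same endpoint
GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "content_embedding",
      "inference_id": "my-elser-endpoint",
      "query": "how does meaning-based search work?"
    }
  }
}
```

Compared with semantic_text, you own the field mapping, the pipeline, and the query construction, which is what gives you the additional control.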

You can also deploy NLP models in Elasticsearch manually, without using an inference endpoint. This is the most complex and labor-intensive workflow for performing semantic search in the Elastic Stack. You need to select an NLP model from the list of supported dense and sparse vector models, deploy it using the Eland client, create an index mapping, and set up a suitable ingest pipeline to start ingesting and querying data.

For an end-to-end tutorial, refer to Semantic search with a model deployed in Elasticsearch.
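As a hedged illustration only: assuming a dense text embedding model from Hugging Face (for example sentence-transformers/msmarco-MiniLM-L-12-v3, which produces 384-dimensional vectors) has already been imported and started with Eland's eland_import_hub_model CLI, the remaining mapping, pipeline, and query steps could look roughly like this. All index, pipeline, and field names are hypothetical:

```
# Map a dense_vector field sized for the deployed model's output (assumed 384 dims)
PUT my-index
{
  "mappings": {
    "properties": {
      "content": { "type": "text" },
      "content_embedding.predicted_value": {
        "type": "dense_vector",
        "dims": 384,
        "index": true,
        "similarity": "cosine"
      }
    }
  }
}

# Ingest pipeline that runs the deployed model; map your field to the model's expected input
PUT _ingest/pipeline/my-embedding-pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "sentence-transformers__msmarco-minilm-l-12-v3",
        "target_field": "content_embedding",
        "field_map": { "content": "text_field" }
      }
    }
  ]
}

# Index documents through the pipeline
POST my-index/_doc?pipeline=my-embedding-pipeline
{
  "content": "Semantic search understands meaning, not just keywords."
}

# kNN search that embeds the query text with the same deployed model
GET my-index/_search
{
  "knn": {
    "field": "content_embedding.predicted_value",
    "k": 10,
    "num_candidates": 100,
    "query_vector_builder": {
      "text_embedding": {
        "model_id": "sentence-transformers__msmarco-minilm-l-12-v3",
        "model_text": "how does meaning-based search work?"
      }
    }
  }
}
```

Here you are responsible for every piece: choosing and deploying the model, sizing the vector field to match its output, wiring the pipeline, and building the query, which is why this is the most labor-intensive option.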

Tip

Refer to vector queries and field types for a quick reference overview.