
Set up and configure semantic_text fields

This page provides instructions for setting up and configuring semantic_text fields. Learn how to configure inference endpoints, including default and preconfigured options, ELSER on EIS, custom endpoints, and dedicated endpoints for ingestion and search operations.

You can configure inference endpoints for semantic_text fields in several ways: by using the default endpoint, a preconfigured endpoint, a custom endpoint, or dedicated endpoints for ingestion and search.

Note

If you use a custom inference endpoint through your ML node and not through Elastic Inference Service (EIS), the recommended method is to use dedicated endpoints for ingestion and search.

If you use EIS, you don't have to set up dedicated endpoints.

A default endpoint is the inference endpoint that is used when you create a semantic_text field without specifying an inference_id.

The following example shows a semantic_text field configured to use the default inference endpoint:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text"
      }
    }
  }
}
Important

The default inference endpoint varies by deployment type and version:

  • On Elastic Cloud 9.4+, the inference_id parameter defaults to .jina-embeddings-v5-text-small and runs on EIS. .jina-embeddings-v5-text-small is expected to become the default model for Serverless soon.

  • In version 9.3 and on Serverless, the inference_id parameter defaults to .elser-2-elastic and runs on EIS.

  • In versions 9.0-9.2, the inference_id parameter defaults to .elser-2-elasticsearch and runs on the elasticsearch service.

If you use the default inference endpoint, be aware that the default might change over time to a newer endpoint that uses a different embedding model. Queries that target indices embedded with different models together can produce unexpected ranking results. For details, refer to potential issues when mixing embedding models across indices.

Preconfigured endpoints are inference endpoints that are automatically available in the deployment or project and do not require manual creation. The available preconfigured endpoints vary across deployment types and versions.

To view the list of available preconfigured endpoints for your deployment, go to Inference endpoints in Kibana.
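You can also list endpoints programmatically with the Get inference API. For example, the following request returns every inference endpoint available in the deployment, including the preconfigured ones:

GET _inference/_all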

To use a preconfigured endpoint, set the inference_id parameter to the identifier of the endpoint you want to use:

PUT my-index-000004
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": ".jina-embeddings-v5-text-nano"
      }
    }
  }
}

If you use the preconfigured .elser-2-elastic endpoint, which runs the ELSER model as a service through the Elastic Inference Service (ELSER on EIS), you can set up semantic_text with the following API request:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": ".elser-2-elastic"
      }
    }
  }
}

To use a custom inference endpoint instead of the default or preconfigured endpoints, you must create the endpoint with the Create inference API and specify its inference_id when setting up the semantic_text field type.
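For example, you could create a custom endpoint named my-openai-endpoint with the Create inference API. The service settings below are illustrative (the OpenAI service with a placeholder API key and a hypothetical model choice); substitute your own provider, credentials, and model:

PUT _inference/text_embedding/my-openai-endpoint
{
  "service": "openai",
  "service_settings": {
    "api_key": "<your_api_key>",
    "model_id": "text-embedding-3-small"
  }
}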

PUT my-index-000002
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-openai-endpoint"
      }
    }
  }
}
  1. The inference_id of the inference endpoint used to generate embeddings.

If you use a custom inference endpoint through your ML node and not through Elastic Inference Service, the recommended way to use semantic_text is by having dedicated inference endpoints for ingestion and search.

This ensures that search speed remains unaffected by ingestion workloads, and vice versa. After creating dedicated inference endpoints for both, you can reference them using the inference_id and search_inference_id parameters when setting up the index mapping for an index that uses the semantic_text field.
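For example, you might create two ELSER endpoints on your ML nodes, one tuned for ingest and one for search. The allocation and thread settings shown here are illustrative; adjust them to your workload:

PUT _inference/sparse_embedding/my-elser-endpoint-for-ingest
{
  "service": "elasticsearch",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_allocations": 2,
    "num_threads": 1
  }
}

PUT _inference/sparse_embedding/my-elser-endpoint-for-search
{
  "service": "elasticsearch",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_allocations": 1,
    "num_threads": 1
  }
}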

PUT my-index-000003
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint-for-ingest",
        "search_inference_id": "my-elser-endpoint-for-search"
      }
    }
  }
}

Configuring index_options for sparse vector fields lets you enable token pruning, which controls whether non-significant or overly frequent tokens are omitted to improve query performance.

The following example enables token pruning and sets pruning thresholds for a sparse_vector field:

PUT semantic-embeddings
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "index_options": {
          "sparse_vector": {
            "prune": true,
            "pruning_config": {
              "tokens_freq_ratio_threshold": 10,
              "tokens_weight_threshold": 0.5
            }
          }
        }
      }
    }
  }
}
  1. prune (Optional): Enables pruning. Default is true.
  2. tokens_freq_ratio_threshold (Optional): Prunes tokens whose frequency is more than 10 times the average token frequency in the field. Default is 5.
  3. tokens_weight_threshold (Optional): Prunes tokens whose weight is lower than 0.5. Default is 0.4.

Learn more about sparse_vector index options settings and token pruning.

Configuring index_options for dense vector fields lets you control how dense vectors are indexed for kNN search. You can select the indexing algorithm, such as int8_hnsw, int4_hnsw, or disk_bbq, among other available index options.

The following example shows how to configure index_options for a dense vector field using the int8_hnsw indexing algorithm:

PUT semantic-embeddings
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "index_options": {
          "dense_vector": {
            "type": "int8_hnsw",
            "m": 15,
            "ef_construction": 90
          }
        }
      }
    }
  }
}
  1. type (Optional): Selects the int8_hnsw vector quantization strategy. Learn about default quantization types.
  2. m (Optional): Sets m to 15 to control how many neighbors each node connects to in the HNSW graph. Default is 16.
  3. ef_construction (Optional): Sets ef_construction to 90 to control how many candidate neighbors are considered during graph construction. Default is 100.
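After a semantic_text field is configured with any of the options above, you can search it with the semantic query. A minimal example, assuming the semantic-embeddings index shown earlier and an illustrative query string:

GET semantic-embeddings/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How do I configure inference endpoints?"
    }
  }
}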