---
title: Elastic Managed LLMs
description: Elastic provides built-in LLMs through managed AI connectors. These connectors are accessed and managed through the Elastic Inference Service (EIS), which...
url: https://www.elastic.co/elastic/docs-builder/docs/3028/reference/kibana/connectors-kibana/elastic-managed-llm
products:
  - Kibana
applies_to:
  - Elastic Cloud Serverless: Generally available
  - Elastic Stack: Generally available since 9.0
---

# Elastic Managed LLMs
Elastic provides built-in LLMs through managed AI connectors.

These connectors are accessed and managed through the [Elastic Inference Service (EIS)](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/explore-analyze/elastic-inference/eis), which is the single entry point for using Elastic Managed LLMs.

## Prerequisites

- Requires the `manage_inference` [cluster privilege](https://www.elastic.co/docs/reference/elasticsearch/security-privileges#privileges-list-cluster) (the built-in `inference_admin` role grants this privilege)
- <applies-to>Elastic Cloud Enterprise: Generally available</applies-to> <applies-to>Elastic Cloud on Kubernetes: Generally available</applies-to> <applies-to>Self-managed Elastic deployments: Generally available since 9.3</applies-to> For on-premises installations (Elastic Cloud Enterprise, Elastic Cloud on Kubernetes, or self-managed clusters), Elastic Managed LLMs are only available through [EIS with Cloud Connect](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/explore-analyze/elastic-inference/connect-self-managed-cluster-to-eis), and your Elastic Stack version must be 9.3 or later.
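As a minimal sketch, a custom role that grants only the required privilege can be created with the Elasticsearch security API. The role name `eis_inference_admin` is illustrative; any role that includes the `manage_inference` cluster privilege works:

```console
PUT /_security/role/eis_inference_admin
{
  "cluster": [ "manage_inference" ]
}
```

Alternatively, assign the built-in `inference_admin` role, which already grants this privilege.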


## Available models

Elastic Managed LLMs are available exclusively through the Elastic Inference Service.
You can find the [list of supported models](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/explore-analyze/elastic-inference/eis#supported-models) on the EIS documentation page.
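To see which inference endpoints, including EIS-managed ones, are available in your deployment, you can query the inference API. The endpoint ID in the second request is an example only; use an ID returned by the first request:

```console
GET _inference/_all
```

```console
POST _inference/chat_completion/.rainbow-sprinkles-elastic/_stream
{
  "messages": [
    { "role": "user", "content": "Summarize the last 24 hours of error logs." }
  ]
}
```

The exact endpoint IDs and supported task types depend on your Elastic version and the current EIS offering.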

## Region and hosting

The Elastic Managed LLMs use third-party service providers for inference. Refer to [the Elastic Inference Service page](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/explore-analyze/elastic-inference/eis) for details.

## Data protection

Customer projects or deployments hosted in any cloud service provider or region have access to Elastic Managed LLMs in the AWS US region `us-east-1`.
All data is encrypted in transit, and the LLMs are configured for zero data retention: none of the prompts or outputs are stored by the service provider.
Only request metadata (timestamp, model used, region, and request status) is logged in AWS CloudWatch; no information related to prompts is retained.
Refer to our [AI Data FAQs](https://www.elastic.co/trust/ai-data-faq) to learn more about our data practices for AI-related features.

## Pricing

Elastic Managed LLMs are billed per million input and output tokens. Refer to the Elastic [pricing page](https://www.elastic.co/pricing) that corresponds to your Elastic setup for details.