Enable logsdb for integrations
This page shows you how to enable logsdb index mode for integration data streams, using @custom component templates. If your integrations were installed before you upgraded to Elastic Stack 9.x, you need to manually enable logsdb mode.
Although logsdb significantly reduces storage costs, it can increase ingestion overhead slightly. On clusters that are already near capacity, enabling logsdb on many data streams at once can impact stability. For this reason, the 8.x to 9.x upgrade process does not automatically apply logsdb mode to existing integration data streams.
Logsdb index mode is automatically applied to new integration data streams in Elastic Stack 9.0+. For more details, refer to Availability of logsdb index mode.
The steps on this page work in Elastic Cloud Serverless, but you typically won't need to enable logsdb manually in Serverless.
Before you begin

To work with integration data streams, you need some details from the package manifest. Find your integration in the Elastic integrations repository or query the Elastic Package Registry and make a note of the following:

- Package name: The exact name in the package manifest. For example, the MySQL integration package is named `mysql`.
- Logs dataset names: Make sure the integration has data streams where the `type` is `logs`. For example, the MySQL integration has `mysql.error` and `mysql.slowlog`. Note the name of each logs dataset for use in later steps.

Elastic Package Registry query (curl)

You can use this `curl` command to confirm the integration's logs data streams in the Elastic Package Registry. Replace `mysql/1.28.1` with your integration package name and version:

```bash
curl -sL epr.elastic.co/package/mysql/1.28.1 | jq '.data_streams[] | select(.type == "logs") | {dataset, type}'
```

```json
{
  "dataset": "mysql.error",
  "type": "logs"
}
{
  "dataset": "mysql.slowlog",
  "type": "logs"
}
```
Find logs data streams and check index mode
1. Go to Index Management using the navigation menu or the global search field.
2. On the Data Streams tab, search for the integration name.
3. Check the Index mode column for each logs data stream in the integration (to check from the API instead, see the console sketch after this list).
4. If the index mode shows LogsDB, logsdb is already enabled and no action is needed. If it shows a different mode like Standard or Time series, continue to the next step.
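If you prefer Dev Tools Console over the UI, you can inspect the `index.mode` setting on a data stream's backing indices. This is only a sketch: it assumes the MySQL integration and the default namespace, so replace the example data stream name `logs-mysql.error-default` with your own.

```console
GET logs-mysql.error-default/_settings?filter_path=*.settings.index.mode
```

Backing indices created in logsdb mode show `"mode": "logsdb"` under their index settings; an empty response means the setting isn't set and the index uses standard mode.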
Edit existing component templates
Use Kibana to find and edit `@custom` component templates for the integration's logs datasets:

1. Go to Index Management using the navigation menu or the global search field.
2. On the Component Templates tab, search for `@custom` to check whether templates already exist for the integration's logs datasets.
3. Edit the existing `@custom` templates that correspond to the logs datasets you identified in Before you begin. In the Index settings step, add the logsdb mode (to make the same change through the API, see the sketch after this list):

```json
{
  "index": {
    "mode": "logsdb"
  }
}
```
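If you manage templates through the API instead of Kibana, a minimal sketch of the same edit follows. It assumes an existing `logs-mysql.error@custom` template; because `PUT` replaces the whole component template, retrieve it first and carry over any settings or mappings it already contains.

```console
GET _component_template/logs-mysql.error@custom

PUT _component_template/logs-mysql.error@custom
{
  "template": {
    "settings": {
      "index.mode": "logsdb"
    }
  }
}
```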
Create component templates
If they don't already exist, create `@custom` component templates for the logs datasets you identified in Before you begin.

1. Go to Index Management using the navigation menu or the global search field.
2. On the Component Templates tab, click Create component template and step through the wizard:
   - In the Logistics step, name the template using the pattern `logs-<integration>.<dataset>@custom` (for example, `logs-mysql.error@custom`).
   - In the Index settings step, add the logsdb index mode.
3. Repeat for each logs dataset in the integration.
To create a `@custom` template for a single integration dataset:

- In an Elastic Stack deployment, use the component template API.
- In Elastic Cloud Serverless, use the component template API.

Component template names must use the pattern `logs-<integration>.<dataset>@custom`.

Example:

```console
PUT _component_template/logs-mysql.error@custom
{
  "template": {
    "settings": {
      "index.mode": "logsdb"
    }
  }
}
```

Repeat for each logs dataset in the integration.
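To confirm that the `@custom` template is picked up, you can check the integration's index template composition. As a sketch, assuming the MySQL error logs dataset, whose Fleet-managed index template is typically named `logs-mysql.error`:

```console
GET _index_template/logs-mysql.error?filter_path=index_templates.index_template.composed_of
```

The `composed_of` list should include `logs-mysql.error@custom`.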
Warning: This `curl` command uses `PUT`, which overwrites any existing component templates. Before using this command, confirm that no `@custom` templates exist for your integration (see the sketch after the command). Also consider your cluster's resource usage before enabling logsdb on many data streams at once: on clusters that are already near capacity, this action could impact stability.

To create `@custom` component templates for all logs data streams in an integration at once, run this command in a terminal window:

```bash
curl -sL epr.elastic.co/package/mysql/1.28.1 | jq -r '.data_streams[] | select(.type == "logs") | .dataset' | xargs -I% curl -s -XPUT \
  -H'Authorization: ApiKey <API_KEY>' -H'Content-Type: application/json' \
  '<ES_URL>/_component_template/logs-%@custom' \
  -d'{"template": {"settings": {"index.mode": "logsdb"}}}'
```

Replace `<API_KEY>` with your API key, `<ES_URL>` with your Elasticsearch endpoint, and `mysql/1.28.1` with your integration package name and version.
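One way to confirm that no `@custom` templates already exist is a read-only variant of the same pipeline. This is a sketch using the same `<API_KEY>`, `<ES_URL>`, and `mysql/1.28.1` placeholders: a `404` means the template doesn't exist yet, while a `200` means a `@custom` template is already there and you should edit it instead of overwriting it.

```bash
# Check each logs dataset for an existing @custom component template.
curl -sL epr.elastic.co/package/mysql/1.28.1 | \
  jq -r '.data_streams[] | select(.type == "logs") | .dataset' | \
  xargs -I{} curl -s -o /dev/null -w 'logs-{}@custom -> HTTP %{http_code}\n' \
    -H'Authorization: ApiKey <API_KEY>' \
    '<ES_URL>/_component_template/logs-{}@custom'
```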
Verify logsdb mode
Changes are applied to existing data streams on rollover. Data streams roll over automatically based on your index lifecycle policy, or you can trigger a rollover manually.
After your data streams roll over, repeat the check in step 1 to make sure the index mode is set to logsdb. If not, make sure the data stream has rolled over since you created or updated the corresponding template.
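For example, to trigger a manual rollover and then re-check the index mode from Dev Tools Console (a sketch, again assuming the `logs-mysql.error-default` data stream):

```console
POST logs-mysql.error-default/_rollover

GET logs-mysql.error-default/_settings?filter_path=*.settings.index.mode
```

Only the newly created backing index uses logsdb mode; older backing indices keep their previous index mode.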
You can also enable logsdb for all logs data streams cluster-wide (not just specific integrations). To do so, create or update the logs@custom component template. For details about component templates and template composition, refer to Component templates.
If your cluster is already near capacity, stability issues can occur if you enable logsdb on many data streams at once. Make sure to check your cluster's resource usage before editing logs@custom.
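A minimal sketch of the cluster-wide approach, assuming no logs@custom component template exists yet (if one does, retrieve it first and merge this setting into it rather than overwriting it):

```console
PUT _component_template/logs@custom
{
  "template": {
    "settings": {
      "index.mode": "logsdb"
    }
  }
}
```

As with integration-specific templates, the change takes effect for each logs data stream on its next rollover.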
- Review the documentation for Logs data streams, Templates, and the Default logs index template
- Configure a logs data stream
- Using logsdb index mode with Elastic Security