
Enable logsdb for integrations

This page shows you how to enable logsdb index mode for integration data streams, using @custom component templates. If your integrations were installed before you upgraded to Elastic Stack 9.x, you need to manually enable logsdb mode.

Why isn't logsdb enabled automatically for integrations when upgrading?

Although logsdb significantly reduces storage costs, it can increase ingestion overhead slightly. On clusters that are already near capacity, enabling logsdb on many data streams at once can impact stability. For this reason, the 8.x to 9.x upgrade process does not automatically apply logsdb mode to existing integration data streams.

Logsdb index mode is automatically applied to new integration data streams in Elastic Stack 9.0+. For more details, refer to Availability of logsdb index mode.

The steps on this page work in Elastic Cloud Serverless, but you typically won't need to enable logsdb manually in Serverless.

To work with integration data streams, you need some details from the package manifest. Find your integration in the Elastic integrations repository or query the Elastic Package Registry and make a note of the following:

  • Package name: The exact name in the package manifest. For example, the MySQL integration package is named mysql.

  • Logs dataset names: Make sure the integration has data streams where the type is logs. For example, the MySQL integration has mysql.error and mysql.slowlog. Note the name of each logs dataset for use in later steps.
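If you have jq installed, you can list an integration's logs datasets from the Elastic Package Registry in one step. A sketch using the mysql package as an example; replace mysql/1.28.1 with your package name and version:

```shell
# Query the Elastic Package Registry for the package manifest and
# print the dataset name of each data stream whose type is "logs".
curl -sL epr.elastic.co/package/mysql/1.28.1 | jq -r \
  '.data_streams[] | select(.type == "logs") | .dataset'
```

For the mysql package, the output includes mysql.error and mysql.slowlog.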

  1. Find logs data streams and check index mode

    1. Go to Index Management using the navigation menu or the global search field.

    2. On the Data Streams tab, search for the integration name.

    3. Check the Index mode column for each logs data stream in the integration.

      If the index mode shows LogsDB, logsdb is already enabled and no action is needed. If it shows a different mode like Standard or Time series, continue to the next step.
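      You can also check the index mode from the API by inspecting the settings of a data stream's backing indices. A sketch, assuming a data stream named logs-mysql.error-default; <API_KEY> and <ES_URL> are placeholders for your API key and Elasticsearch endpoint:

```shell
# Show only the index.mode setting for each backing index.
# If logsdb is enabled, each index reports "mode": "logsdb".
curl -s -H 'Authorization: ApiKey <API_KEY>' \
  '<ES_URL>/logs-mysql.error-default/_settings?filter_path=*.settings.index.mode'
```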

  2. Edit existing component templates

    Use Kibana to find and edit @custom component templates for the integration's logs datasets:

    1. Go to Index Management using the navigation menu or the global search field.

    2. On the Component Templates tab, search for @custom to check whether templates already exist for the integration's logs datasets.

    3. Edit the existing @custom templates that correspond to the logs datasets you identified in Before you begin. In the Index settings step, add the logsdb index mode:

      {
        "index": {
          "mode": "logsdb"
        }
      }
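      If you prefer the API, you can retrieve an existing @custom template before editing it, so you can merge the logsdb setting with any settings it already contains. A sketch for one dataset; the template name logs-mysql.error@custom is an example, and <API_KEY> and <ES_URL> are placeholders for your API key and Elasticsearch endpoint:

```shell
# Fetch the current definition of a @custom component template
# so you can merge the logsdb setting with its existing settings.
curl -s -H 'Authorization: ApiKey <API_KEY>' \
  '<ES_URL>/_component_template/logs-mysql.error@custom'
```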
  3. Create component templates

    If they don't already exist, create @custom component templates for the logs datasets you identified in Before you begin.

    1. Go to Index Management using the navigation menu or the global search field.
    2. On the Component Templates tab, click Create component template and step through the wizard:
      • In the Logistics step, name the template using the pattern logs-<integration>.<dataset>@custom (for example, logs-mysql.error@custom).
      • In the Index settings step, add the logsdb index mode.

    Repeat for each logs dataset in the integration.

    To create a @custom template for a single integration dataset, name the component template using the pattern logs-<integration>.<dataset>@custom. For example:

    PUT _component_template/logs-mysql.error@custom
    {
      "template": {
        "settings": {
          "index.mode": "logsdb"
        }
      }
    }

    Repeat for each logs dataset in the integration.

    Warning

    The following curl command uses PUT, which overwrites any existing component template with the same name. Before running it, confirm that no @custom templates exist for your integration.

    Make sure to consider your cluster's resource usage before enabling logsdb on many data streams at once. On clusters that are already near capacity, this action could impact stability.

    To create @custom component templates for all logs data streams in an integration at once, run this command in a terminal window:

    curl -sL epr.elastic.co/package/mysql/1.28.1 | jq -r '.data_streams[] |
    select(.type == "logs") | .dataset' | xargs -I% curl -s -XPUT \
    -H'Authorization: ApiKey <API_KEY>' -H'Content-Type: application/json' \
    '<ES_URL>/_component_template/logs-%@custom' \
    -d'{"template": {"settings": {"index.mode": "logsdb"}}}'

    Replace <API_KEY> with your API key, <ES_URL> with your Elasticsearch endpoint, and mysql/1.28.1 with your integration package name and version.
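    To confirm the templates were created, you can list them with a wildcard. A sketch using the mysql example; replace the placeholders as before:

```shell
# List the names of the @custom component templates for the integration.
curl -s -H 'Authorization: ApiKey <API_KEY>' \
  '<ES_URL>/_component_template/logs-mysql.*@custom?filter_path=component_templates.name'
```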

  4. Verify logsdb mode

    Changes are applied to existing data streams on rollover. Data streams roll over automatically based on your index lifecycle policy, or you can trigger a rollover manually.

    After your data streams roll over, repeat the check in step 1 to make sure the index mode is set to logsdb. If not, make sure the data stream has rolled over since you created or updated the corresponding template.
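    If you don't want to wait for an automatic rollover, you can trigger one per data stream with the rollover API. A sketch, assuming a data stream named logs-mysql.error-default:

```shell
# Force a rollover so the next backing index picks up the new
# component template settings, including index.mode: logsdb.
curl -s -XPOST -H 'Authorization: ApiKey <API_KEY>' \
  '<ES_URL>/logs-mysql.error-default/_rollover'
```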

You can also enable logsdb for all logs data streams cluster-wide (not just specific integrations). To do so, create or update the logs@custom component template. For details about component templates and template composition, refer to Component templates.
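A sketch of creating a cluster-wide logs@custom template via the API; the same PUT overwrite caveat applies, so first confirm whether a logs@custom template already exists:

```shell
# Apply logsdb to all logs data streams on their next rollover.
# PUT replaces any existing logs@custom template, so retrieve and
# merge its contents first if one already exists.
curl -s -XPUT -H 'Authorization: ApiKey <API_KEY>' \
  -H 'Content-Type: application/json' \
  '<ES_URL>/_component_template/logs@custom' \
  -d '{"template": {"settings": {"index.mode": "logsdb"}}}'
```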

Important

If your cluster is already near capacity, stability issues can occur if you enable logsdb on many data streams at once. Make sure to check your cluster's resource usage before editing logs@custom.