Elastic Stack configuration policies

Warning

We have identified an issue with Elasticsearch 8.15.1 and 8.15.2 that prevents security role mappings configured through Stack configuration policies from working correctly. Avoid these versions; if you are affected, upgrade to 8.16 or later to remedy the issue.

Note

This requires a valid Enterprise license or Enterprise trial license. Check the license documentation for more details about managing licenses.

Note

Component templates created in configuration policies cannot currently be referenced from index templates created through the Elasticsearch API or Kibana UI.

Starting from ECK 2.6.1 and Elasticsearch 8.6.1, Elastic Stack configuration policies allow you to configure a range of settings for Elasticsearch, listed under spec.elasticsearch below.

Additionally, starting from ECK 2.11.0, Kibana can also be configured through Elastic Stack configuration policies; the supported settings are listed under spec.kibana below.

A policy can be applied to one or more Elasticsearch clusters or Kibana instances in any namespace managed by the ECK operator. Configuration policy settings applied by the ECK operator are immutable through the Elasticsearch REST API.

With ECK 3.3.0 and later, multiple Elastic Stack configuration policies can target the same Elasticsearch cluster and Kibana instance. When multiple policies target the same resource, the policy with the lowest weight value takes precedence. If multiple policies have the same weight value, the operator reports a conflict.

Note

Scale considerations

There is no hard limit to the maximum number of StackConfigPolicy resources that can target the same Elasticsearch cluster or Kibana instance. However, in our experimentation, we observed that when hundreds of StackConfigPolicy resources target the same Elasticsearch cluster or Kibana instance, the total reconciliation time (including the Elasticsearch cluster or Kibana instance and all StackConfigPolicy resources) can increase significantly, to the scale of minutes. To maintain fast total reconciliation times, we recommend efficiently utilizing the number of StackConfigPolicy resources by consolidating configurations where possible.

Elastic Stack configuration policies can be defined in a StackConfigPolicy resource. Each StackConfigPolicy must have the following field:

  • name is a unique name used to identify the policy.

At least one of spec.elasticsearch or spec.kibana needs to be defined with at least one of its attributes.

  • spec.elasticsearch describes the settings to configure for Elasticsearch. Each of the following fields except clusterSettings is an associative array where keys are arbitrary names and values are definitions:

    • clusterSettings are dynamic settings that can be set on a running cluster, as with the Cluster Update Settings API.
    • snapshotRepositories are snapshot repositories for defining an off-cluster storage location for your snapshots. Check Specifics for snapshot repositories for more information.
    • snapshotLifecyclePolicies are snapshot lifecycle policies, to automatically take snapshots and control how long they are retained.
    • securityRoleMappings are role mappings, to define which roles are assigned to each user by identifying them through rules.
    • ingestPipelines are ingest pipelines, to perform common transformations on your data before indexing.
    • indexLifecyclePolicies are index lifecycle policies, to automatically manage the index lifecycle.
    • indexTemplates.componentTemplates are component templates that are building blocks for constructing index templates that specify index mappings, settings, and aliases.
    • indexTemplates.composableIndexTemplates are index templates to define settings, mappings, and aliases that can be applied automatically to new indices.
    • config are the settings that go into the elasticsearch.yml file.
    • secretMounts are additional user-created Secrets to be mounted into the Elasticsearch Pods.
    • secureSettings is a list of Secrets containing Secure Settings to inject into the keystore(s) of the Elasticsearch cluster(s) to which this policy applies, similar to the Elasticsearch Secure Settings.
  • spec.kibana describes the settings to configure for Kibana.

    • config are the settings that go into the kibana.yml file.
    • secureSettings is a list of Secrets containing Secure Settings to inject into the keystore(s) of the Kibana instance(s) to which this policy applies, similar to the Kibana Secure Settings.

The following fields are optional:

  • weight is an integer that determines the priority of this policy when multiple policies target the same resource (introduced in ECK 3.3.0; check Policy priority and weight for details).
  • namespace is the namespace of the StackConfigPolicy resource, used to identify the Elasticsearch clusters and Kibana instances to which the policy applies. If it equals the operator namespace, the policy applies to all namespaces managed by the operator; otherwise, the policy applies only to its own namespace.
  • resourceSelector is a label selector that identifies, in combination with the namespace(s), the Elasticsearch clusters and Kibana instances to which the policy applies. If no resourceSelector is specified, the policy applies to all Elasticsearch clusters and Kibana instances in the namespace(s).
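The Secrets referenced by secureSettings behave like other ECK secure settings: each key of the Secret becomes an entry in the keystore. A minimal sketch of such a Secret, assuming a GCS snapshot credentials use case (the Secret name, namespace, and credentials content are illustrative; the Secret is expected to live in the same namespace as the policy):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secure-settings        # matches a secretName listed under secureSettings
  namespace: elastic-system       # assumed namespace of the StackConfigPolicy
stringData:
  # each key becomes an entry in the Elasticsearch keystore
  gcs.client.default.credentials_file: |
    { "type": "service_account", "project_id": "my-project" }
```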

Example of applying a policy that configures a snapshot repository, SLM policies, and cluster settings:

apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: test-stack-config-policy
  # namespace: elastic-system or test-namespace
spec:
  weight: 0 # optional: determines priority when multiple policies target the same resource
  resourceSelector:
    matchLabels:
      env: my-label
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "100mb"
    secureSettings:
    - secretName: "my-secure-settings"
    snapshotRepositories:
      test-repo:
        type: gcs
        settings:
          bucket: my-bucket
    snapshotLifecyclePolicies:
      test-slm:
        schedule: "0 1 2 3 4 ?"
        name: "<production-snap-{now/d}>"
        repository: test-repo
        config:
          indices: ["*"]
          ignore_unavailable: true
          include_global_state: false
        retention:
          expire_after: "7d"
          min_count: 1
          max_count: 20
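For the resourceSelector above to match, the target resources must carry the corresponding label. A minimal sketch of an Elasticsearch cluster that this policy would select (the cluster name and version are placeholders):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-cluster     # placeholder name
  labels:
    env: my-label      # matches the policy's resourceSelector
spec:
  version: 8.16.0      # placeholder version
  nodeSets:
  - name: default
    count: 1
```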

Another example of configuring role mappings, ingest pipelines, ILM policies, and index templates:

apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: test-stack-config-policy
spec:
  elasticsearch:
    securityRoleMappings:
      everyone-kibana:
        enabled: true
        metadata:
          _foo: something
          uuid: b9a59ba9-6b92-4be2-bb8d-02bb270cb3a7
        roles:
        - kibana_user
        rules:
          field:
            username: '*'
    ingestPipelines:
      test-pipeline:
        description: "optional description"
        processors:
        - set:
            field: my-keyword-field
            value: foo
      test-2-pipeline:
        description: "optional description"
        processors:
        - set:
            field: my-keyword-field
            value: foo
    indexLifecyclePolicies:
      test-ilm:
        phases:
          delete:
            actions:
              delete: {}
            min_age: 30d
          warm:
            actions:
              forcemerge:
                max_num_segments: 1
            min_age: 10d
    indexTemplates:
      componentTemplates:
        test-component-template:
          template:
            mappings:
              properties:
                '@timestamp':
                  type: date
        test-runtime-component-template-test:
          template:
            mappings:
              runtime:
                day_of_week:
                  type: keyword
      composableIndexTemplates:
        test-template:
          composed_of:
          - test-component-template
          - test-runtime-component-template-test
          index_patterns:
          - test*
          - bar*
          priority: 500
          template:
            aliases:
              mydata: {}
            mappings:
              _source:
                enabled: true
              properties:
                created_at:
                  format: EEE MMM dd HH:mm:ss Z yyyy
                  type: date
                host_name:
                  type: keyword
            settings:
              number_of_shards: 1
          version: 1

Example of configuring Elasticsearch and Kibana using an Elastic Stack configuration policy:

apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: test-stack-config-policy
spec:
  resourceSelector:
    matchLabels:
      env: my-label
  elasticsearch:
    secureSettings:
    - secretName: shared-secret
    securityRoleMappings:
      jwt1-elastic-agent:
        roles: [ "remote_monitoring_collector" ]
        rules:
          all:
            - field: { realm.name: "jwt1" }
            - field: { username: "elastic-agent" }
        enabled: true
    config:
       logger.org.elasticsearch.discovery: DEBUG
       xpack.security.authc.realms.jwt.jwt1:
         order: -98
         token_type: id_token
         client_authentication.type: shared_secret
         allowed_issuer: "https://es.credentials.controller.k8s.elastic.co"
         allowed_audiences: [ "elasticsearch" ]
         allowed_subjects: ["elastic-agent"]
         allowed_signature_algorithms: [RS512]
         pkc_jwkset_path: jwks/jwkset.json
         claims.principal: sub
    secretMounts:
    - secretName: "testMountSecret"
      mountPath: "/usr/share/testmount"
    - secretName: jwks-secret
      mountPath: "/usr/share/elasticsearch/config/jwks"
  kibana:
    config:
      "xpack.canvas.enabled": true
    secureSettings:
    - secretName: kibana-shared-secret

Example showing how multiple StackConfigPolicy resources can target the same Elasticsearch cluster, with weight determining which policy takes precedence:

# Policy with higher priority (lower weight)
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: high-priority-policy
spec:
  weight: 0
  resourceSelector:
    matchLabels:
      cluster: my-cluster
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "200mb"

---
# Policy with lower priority (higher weight)
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: low-priority-policy
spec:
  weight: 100
  resourceSelector:
    matchLabels:
      cluster: my-cluster
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "100mb"

In this example, both policies target the same Elasticsearch cluster through the cluster: my-cluster label. The low-priority-policy (weight: 100) settings are applied first, then the high-priority-policy (weight: 0) settings overwrite them. If both policies had the same weight value, a conflict would occur and no policies would be applied to the cluster until the conflict is resolved. Check Policy priority and weight for more details on how weight determines policy priority and conflict resolution.

In addition to the logs generated by the operator, a configuration policy status is maintained in the StackConfigPolicy resource. This status indicates the phase the policy is in ("Applying", "Ready", "Error") and the number of resources to which the policy could be applied.

kubectl get stackconfigpolicy
NAME                           READY   PHASE   AGE
test-stack-config-policy       1/1     Ready   1m42s
test-err-stack-config-policy   0/1     Error   1m42s

When not all resources are ready, you can get more information about the reason by reading the full status:

kubectl get -n b scp test-err-stack-config-policy -o jsonpath="{.status}" | jq .
{
  "errors": 1,
  "observedGeneration": 3,
  "phase": "Error",
  "readyCount": "1/2",
  "resources": 2,
  "details": {
    "elasticsearch": {
      "b/banana-staging": {
        "currentVersion": 1670342369361604600,
        "error": {
          "message": "Error processing slm state change: java.lang.IllegalArgumentException: Error on validating SLM requests\n\tSuppressed: java.lang.IllegalArgumentException: no such repository [es-snapshots]",
          "version": 1670342482739637500
        },
        "expectedVersion": 1670342482739637500,
        "phase": "Error"
      }
    },
    "kibana": {
      "b/banana-kb-staging": {
        "error": {},
        "phase": "Ready"
      }
    }
  }
}

Important events are also reported through Kubernetes events, such as when you don't have the appropriate license:

17s    Warning   ReconciliationError stackconfigpolicy/config-test   StackConfigPolicy is an enterprise feature. Enterprise features are disabled

In order to avoid a conflict between multiple Elasticsearch clusters writing their snapshots to the same location, ECK automatically:

  • sets the base_path to snapshots/<namespace>-<esName> when it is not provided, for Azure, GCS and S3 repositories
  • appends <namespace>-<esName> to location for a FS repository
  • appends <namespace>-<esName> to path for an HDFS repository
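Since ECK only sets base_path when it is not provided, you can keep full control of the storage location by specifying it explicitly. A sketch for a GCS repository (the bucket and path values are placeholders):

```yaml
elasticsearch:
  snapshotRepositories:
    test-repo:
      type: gcs
      settings:
        bucket: my-bucket
        base_path: snapshots/my-custom-path   # explicit value, left untouched by ECK
```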

The weight field is an integer that determines the priority of a policy when multiple StackConfigPolicy resources target the same Elasticsearch cluster or Kibana instance. Policies are applied in order from highest to lowest weight, so settings from lower-weight policies overwrite settings from higher-weight policies: the policy with the lowest weight has the highest priority.

The weight field is optional and defaults to 0 if not specified.

Important

Conflict resolution

If multiple policies have the same weight value and target the same resource, the operator reports a conflict. When a conflict occurs, no policies are applied to that resource—this includes not only the conflicting policies but also any other policies that target the same resource. The target resource remains unconfigured by any StackConfigPolicy until the conflict is resolved by adjusting the weight values of the conflicting policies.

This allows you to create a hierarchy of policies, for example:

  • Base policies with higher weights (e.g., weight: 100) that provide default configurations
  • Override policies with lower weights (e.g., weight: 0) that provide environment-specific or cluster-specific configurations and overwrite the base policy settings

Example of using weight to create a policy hierarchy:

# Base policy with default settings (lower priority)
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: base-policy
spec:
  weight: 100
  resourceSelector:
    matchLabels:
      env: production
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "50mb"

---
# Override policy with production-specific settings (higher priority)
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: production-override-policy
spec:
  weight: 0
  resourceSelector:
    matchLabels:
      env: production
      tier: critical
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "200mb"

In this example, clusters labeled with both env: production and tier: critical have the production-override-policy (weight: 0) settings applied, which overwrite the base-policy (weight: 100) settings. Other production clusters use only the base-policy (weight: 100) settings.

ECK 2.11.0 introduces the spec.elasticsearch.secretMounts field. It allows you to specify a user-created Secret and a mountPath indicating where that Secret should be mounted in the Elasticsearch Pods managed by the Elastic Stack configuration policy. Use this field to provide additional secrets that the Elasticsearch Pods may need, for example sensitive files required to configure Elasticsearch security realms. The Secret must be created in the same namespace as the Elastic Stack configuration policy; the operator reads it and copies it to the namespace of the Elasticsearch cluster so that it can be mounted by the Elasticsearch Pods. Example of configuring secret mounts in the Elastic Stack configuration policy:

secretMounts:
  - secretName: jwks-secret # name of the Secret created by the user in the Elastic Stack configuration policy namespace
    mountPath: "/usr/share/elasticsearch/config/jwks" # path where the Secret is mounted inside the Elasticsearch Pods
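The jwks-secret referenced above could look like the following sketch; the Secret key becomes the file name under the mount path, so jwkset.json ends up at /usr/share/elasticsearch/config/jwks/jwkset.json (the key set content and namespace are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: jwks-secret           # referenced by secretName in the policy
  namespace: elastic-system   # assumed namespace of the StackConfigPolicy
stringData:
  jwkset.json: |              # file name under the mountPath
    { "keys": [] }
```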

Elastic Stack configuration policies can be used to configure authentication for Elasticsearch clusters. Check Managing authentication for multiple stacks using Elastic Stack configuration policy for examples of the various authentication configurations that can be used.