Elastic Stack configuration policies
Elastic Stack configuration policies in Elastic Cloud on Kubernetes (ECK) provide a centralized, declarative way to manage configuration across multiple Elasticsearch clusters and Kibana instances. By defining reusable StackConfigPolicy resources in Kubernetes, platform administrators can enforce consistent settings, such as cluster configuration, security settings, snapshot policies, ingest pipelines, or index templates, without configuring each cluster individually.
Once applied, the ECK operator continuously reconciles these policies with the targeted Elasticsearch and Kibana resources to ensure that managed settings remain enforced, enabling configuration-as-code practices and simplifying governance, standardization, and large-scale operations across multiple clusters.
This helps keep deployment manifests simpler by moving reusable configuration into StackConfigPolicy resources.
We have identified an issue with Elasticsearch 8.15.1 and 8.15.2 that prevents security role mappings configured through Elastic Stack configuration policies from working correctly. If you are affected, avoid these versions and upgrade to 8.16 or later.
Elastic Stack configuration policies on ECK require a valid Enterprise license or Enterprise trial license. Check the license documentation for more details about managing licenses.
Component templates created in configuration policies cannot currently be referenced from index templates created through the Elasticsearch API or Kibana UI.
A policy can be applied to one or more Elasticsearch clusters or Kibana instances in any namespace managed by the ECK operator. Configuration policy settings applied by the ECK operator are immutable through the Elasticsearch REST API.
With ECK 3.3.0 and later, multiple Elastic Stack configuration policies can target the same Elasticsearch cluster and Kibana instance. When multiple policies target the same resource and define the same setting, the value from the policy with the highest weight takes precedence. If multiple policies have the same weight value, the operator reports a conflict. Refer to Policy priority and weight for more information.
While there is no hard limit on how many StackConfigPolicy resources can target the same Elasticsearch cluster or Kibana instance, targeting a single resource with more than 100 policies can increase total reconciliation time to several minutes. For optimal performance, combine related settings into fewer policies rather than creating many granular ones.
Additionally, the total size of settings configured through StackConfigPolicy resources for a given Elasticsearch cluster or Kibana instance is limited to 1MB due to Kubernetes secret size constraints.
You can define Elastic Stack configuration policies in a StackConfigPolicy resource. The following example shows a minimal policy that configures one Elasticsearch cluster setting.
```yaml
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: production-settings-all-clusters
  namespace: elastic-system
spec:
  resourceSelector:
    matchLabels:
      env: production
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "100mb"
```

Because this policy is created in the operator namespace (`elastic-system`), it applies to all Elasticsearch clusters labeled `env=production` across all namespaces managed by the operator.
For more advanced, feature-specific configurations, refer to Examples.
Each StackConfigPolicy must define the following fields:

- `name`: A unique name used to identify the policy.
- At least one of `elasticsearch` or `kibana`, each defining at least one configuration field.

Note: `spec.elasticsearch` and `spec.kibana` contain the configuration applied to the targeted resources. Each section can include one or more supported configuration fields. For the list of supported settings and their corresponding policy fields, refer to the following sections.
The following fields are optional. They control which Elasticsearch clusters and Kibana instances the policy targets.
- `weight`: An integer that determines the priority of this policy when multiple policies target the same resource. Refer to Policy priority and weight for details.
- `namespace`: The namespace of the `StackConfigPolicy` resource, used to identify the Elasticsearch clusters and Kibana instances to which the policy applies. If it equals the operator namespace, the policy applies to all namespaces managed by the operator. Otherwise, the policy applies only to the namespace where the policy is defined.
- `resourceSelector`: A label selector that identifies, in combination with the namespace(s), the Elasticsearch clusters and Kibana instances to which the policy applies. If `resourceSelector` is not defined, the policy applies to all Elasticsearch clusters and Kibana instances in the namespace(s).
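For illustration, a policy created in a regular (non-operator) namespace applies only to matching resources in that namespace. The names and values below (`team-a`, `env: staging`, the weight) are placeholders:

```yaml
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: team-a-defaults
  namespace: team-a    # not the operator namespace, so the policy is scoped to team-a
spec:
  weight: 10           # optional; only relevant when several policies overlap
  resourceSelector:
    matchLabels:
      env: staging     # without a resourceSelector, all resources in team-a would match
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "50mb"
```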
This section describes the Elasticsearch settings that can be configured through Elastic Stack configuration policies. The syntax used for each field depends on the type of configuration being defined. For configurations backed by an Elasticsearch API, the structure follows the format of the corresponding API request. For an overview of the different syntax types, refer to Syntax types.
The following fields are available under `StackConfigPolicy.spec.elasticsearch`:

| Policy field | Description | Syntax and schema |
|---|---|---|
| `config` | Settings that go into `elasticsearch.yml`. | Settings map, Elasticsearch settings reference |
| `clusterSettings` | Dynamic cluster settings applied through the cluster settings API. | Settings map, Cluster settings API |
| `secureSettings` | Secure settings for the Elasticsearch keystore. | List of secrets to add, Elasticsearch secure settings |
| `secretMounts` | Mount Kubernetes secrets into Elasticsearch Pods. Specifics for secret mounts. | List of secrets to mount |
| `snapshotRepositories` | Configure snapshot repositories for backup and restore. Specifics for snapshot repositories. | Named resources map, Create snapshot repository API |
| `snapshotLifecyclePolicies` | Configure snapshot lifecycle policies to automatically take snapshots and control how long they are retained. | Named resources map, SLM API |
| `ingestPipelines` | Configure ingest pipelines to perform common transformations on your data before indexing. | Named resources map, Ingest pipeline API |
| `indexLifecyclePolicies` | Configure index lifecycle management policies to automatically manage the index lifecycle. | Named resources map, ILM API |
| `indexTemplates.composableIndexTemplates` | Configure index templates to define settings, mappings, and aliases that can be applied automatically to new indices. Specifics for index and component templates. | Named resources map, Index template API |
| `indexTemplates.componentTemplates` | Configure component templates, reusable building blocks to define settings, mappings, and aliases for new indices. Specifics for index and component templates. | Named resources map, Component template API |
| `securityRoleMappings` | Configure role mappings to associate roles with users based on rules. | Named resources map, Role mapping API |
The `secretMounts` field allows you to specify a user-created secret and a `mountPath` indicating where that secret should be mounted in the Elasticsearch Pods managed by the Elastic Stack configuration policy. This can be used to provide additional secrets to the Elasticsearch Pods, for example sensitive files required to configure Elasticsearch authentication realms.
The referenced secret must be created by the user in the same namespace as the Elastic Stack configuration policy. The operator reads this secret and copies it over to the namespace of the Elasticsearch cluster so that it can be mounted by the Elasticsearch Pods.
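For illustration, a Secret such as the `jwks-secret` referenced below could be created alongside the policy; the namespace and file content here are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: jwks-secret
  namespace: elastic-system    # same namespace as the StackConfigPolicy (placeholder)
stringData:
  jwkset.json: |               # placeholder content; use your real JWK set
    {"keys": []}
```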
The following is an example of configuring secret mounts in the Elastic Stack configuration policy:

```yaml
spec:
  elasticsearch:
    secretMounts:
    - secretName: jwks-secret
      mountPath: "/usr/share/elasticsearch/config/jwks"
```

- `secretName`: The name of the secret created by the user in the Elastic Stack configuration policy namespace.
- `mountPath`: The path where the secret must be mounted inside the Elasticsearch Pods.
To avoid a conflict between multiple Elasticsearch clusters writing their snapshots to the same location, ECK automatically does the following:
- Azure, GCS, and S3 repositories: sets `base_path` to `snapshots/<namespace>-<esName>` when it is not provided
- FS repositories: appends `<namespace>-<esName>` to `location`
- HDFS repositories: appends `<namespace>-<esName>` to `path`
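As a sketch, given a hypothetical cluster `prod-es` in namespace `team-a`, a GCS repository defined without a `base_path`:

```yaml
spec:
  elasticsearch:
    snapshotRepositories:
      backups:
        type: gcs
        settings:
          bucket: my-bucket
          # ECK registers this repository as if base_path had been set to:
          #   base_path: snapshots/team-a-prod-es
```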
`composableIndexTemplates` and `componentTemplates` must be defined under the `indexTemplates` field:

```yaml
spec:
  elasticsearch:
    indexTemplates:
      composableIndexTemplates:
        my-index-template:
          # ...
      componentTemplates:
        my-component-template:
          # ...
```
The following settings can be configured for Kibana under `StackConfigPolicy.spec.kibana`:

| Policy field | Description | Syntax and schema |
|---|---|---|
| `config` | Settings that go into `kibana.yml`. | Settings map, Kibana settings reference |
| `secureSettings` | Secure settings for the Kibana keystore. | List of secrets to add to the keystore, Kibana secure settings |
The following examples show common StackConfigPolicy patterns you can copy and adapt to your deployments.
Multiple StackConfigPolicy resources can target the same Elasticsearch cluster or Kibana instance, with weight determining which policy takes precedence when applying settings. Refer to Policy priority and weight for more information.
An Elastic Stack configuration policy can be used to configure authentication for Elasticsearch clusters. Refer to Manage authentication for multiple clusters for some examples of the various authentication configurations that can be used.
Example of a policy that configures a snapshot repository, SLM policies, and cluster settings:

```yaml
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: test-stack-config-policy
  # namespace: elastic-system or test-namespace
spec:
  weight: 0
  resourceSelector:
    matchLabels:
      env: my-label
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "100mb"
    secureSettings:
    - secretName: "my-secure-settings"
    snapshotRepositories:
      test-repo:
        type: gcs
        settings:
          bucket: my-bucket
    snapshotLifecyclePolicies:
      test-slm:
        schedule: "0 1 2 3 4 ?"
        name: "<production-snap-{now/d}>"
        repository: test-repo
        config:
          indices: ["*"]
          ignore_unavailable: true
          include_global_state: false
        retention:
          expire_after: "7d"
          min_count: 1
          max_count: 20
```

- `weight` is optional: it determines priority when multiple policies target the same resource.
Another example of configuring role mappings, ingest pipelines, ILM, and index templates:
```yaml
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: test-stack-config-policy
spec:
  elasticsearch:
    securityRoleMappings:
      everyone-kibana:
        enabled: true
        metadata:
          _foo: something
          uuid: b9a59ba9-6b92-4be2-bb8d-02bb270cb3a7
        roles:
        - kibana_user
        rules:
          field:
            username: '*'
    ingestPipelines:
      test-pipeline:
        description: "optional description"
        processors:
        - set:
            field: my-keyword-field
            value: foo
      test-2-pipeline:
        description: "optional description"
        processors:
        - set:
            field: my-keyword-field
            value: foo
    indexLifecyclePolicies:
      test-ilm:
        phases:
          delete:
            actions:
              delete: {}
            min_age: 30d
          warm:
            actions:
              forcemerge:
                max_num_segments: 1
            min_age: 10d
    indexTemplates:
      componentTemplates:
        test-component-template:
          template:
            mappings:
              properties:
                '@timestamp':
                  type: date
        test-runtime-component-template-test:
          template:
            mappings:
              runtime:
                day_of_week:
                  type: keyword
      composableIndexTemplates:
        test-template:
          composed_of:
          - test-component-template
          - test-runtime-component-template-test
          index_patterns:
          - test*
          - bar*
          priority: 500
          template:
            aliases:
              mydata: {}
            mappings:
              _source:
                enabled: true
              properties:
                created_at:
                  format: EEE MMM dd HH:mm:ss Z yyyy
                  type: date
                host_name:
                  type: keyword
            settings:
              number_of_shards: 1
          version: 1
```
Example of configuring Elasticsearch and Kibana using an Elastic Stack configuration policy, with a mixture of `config`, `secureSettings`, and `secretMounts`:

```yaml
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: test-stack-config-policy
spec:
  resourceSelector:
    matchLabels:
      env: my-label
  elasticsearch:
    secureSettings:
    - secretName: shared-secret
    securityRoleMappings:
      jwt1-elastic-agent:
        roles: [ "remote_monitoring_collector" ]
        rules:
          all:
          - field: { realm.name: "jwt1" }
          - field: { username: "elastic-agent" }
        enabled: true
    config:
      logger.org.elasticsearch.discovery: DEBUG
      xpack.security.authc.realms.jwt.jwt1:
        order: -98
        token_type: id_token
        client_authentication.type: shared_secret
        allowed_issuer: "https://es.credentials.controller.k8s.elastic.co"
        allowed_audiences: [ "elasticsearch" ]
        allowed_subjects: ["elastic-agent"]
        allowed_signature_algorithms: [RS512]
        pkc_jwkset_path: jwks/jwkset.json
        claims.principal: sub
    secretMounts:
    - secretName: "testMountSecret"
      mountPath: "/usr/share/testmount"
    - secretName: jwks-secret
      mountPath: "/usr/share/elasticsearch/config/jwks"
  kibana:
    config:
      "xpack.canvas.enabled": true
    secureSettings:
    - secretName: kibana-shared-secret
```
In addition to the logs generated by the operator, a configuration policy status is maintained in the StackConfigPolicy resource. This status indicates the phase the policy is in ("Applying", "Ready", or "Error") and the number of resources to which the policy could be applied.
```sh
kubectl get stackconfigpolicy
```

```
NAME                           READY   PHASE   AGE
test-stack-config-policy       1/1     Ready   1m42s
test-err-stack-config-policy   0/1     Error   1m42s
```
When not all resources are ready, you can get more information about the reason by reading the full status:
```sh
kubectl get -n b scp test-err-stack-config-policy -o jsonpath="{.status}" | jq .
```

```json
{
  "errors": 1,
  "observedGeneration": 3,
  "phase": "Error",
  "readyCount": "1/2",
  "resources": 2,
  "details": {
    "elasticsearch": {
      "b/banana-staging": {
        "currentVersion": 1670342369361604600,
        "error": {
          "message": "Error processing slm state change: java.lang.IllegalArgumentException: Error on validating SLM requests\n\tSuppressed: java.lang.IllegalArgumentException: no such repository [es-snapshots]",
          "version": 1670342482739637500
        },
        "expectedVersion": 1670342482739637500,
        "phase": "Error"
      }
    },
    "kibana": {
      "b/banana-kb-staging": {
        "error": {},
        "phase": "Ready"
      }
    }
  }
}
```
Important events are also reported through Kubernetes events, such as when you don't have the appropriate license:
```
17s   Warning   ReconciliationError   stackconfigpolicy/config-test   StackConfigPolicy is an enterprise feature. Enterprise features are disabled
```
The weight field is an integer that determines the priority of a policy when multiple StackConfigPolicy resources target the same Elasticsearch cluster or Kibana instance. Policies targeting the same resource are evaluated in order of their weight values, from lowest to highest, and settings from policies with higher weight values overwrite settings from policies with lower weight values. The weight field is optional and defaults to 0 if not specified.
If multiple policies have the same weight value and target the same resource, the operator reports a conflict. When a conflict occurs, no policies are applied to that resource: not only the conflicting policies but also any other policies that target the same resource are skipped. The target resource remains unconfigured by any StackConfigPolicy until the conflict is resolved by adjusting the weight values of the conflicting policies.
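For example, the two hypothetical policies below both use the default weight of 0 and select the same clusters, so the operator reports a conflict and applies neither:

```yaml
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: policy-a
spec:
  resourceSelector:
    matchLabels:
      env: production
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "50mb"
---
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: policy-b
spec:
  resourceSelector:
    matchLabels:
      env: production    # same targets, same (default) weight: conflict
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "200mb"
```

Setting an explicit, distinct `weight` on one of the policies resolves the conflict.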
This allows you to create a hierarchy of policies, for example:
- Base policies with lower weights (for example, `weight: 0`) that provide default configurations
- Override policies with higher weights (for example, `weight: 100`) that provide environment-specific or cluster-specific configurations and overwrite the base policy settings
Example of using weight to create a policy hierarchy:
```yaml
# Base policy with default settings (lower priority)
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: base-policy
spec:
  weight: 0
  resourceSelector:
    matchLabels:
      env: production
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "50mb"
---
# Override policy with production-specific settings (higher priority)
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: production-override-policy
spec:
  weight: 100
  resourceSelector:
    matchLabels:
      env: production
      tier: critical
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: "200mb"
```

- Lower weight = lower priority
- Higher weight = higher priority
In this example, clusters labeled with both env: production and tier: critical have the production-override-policy (weight: 100) settings applied, which overwrite the base-policy (weight: 0) settings. Other production clusters use only the base-policy (weight: 0) settings.
Configuration policy fields use one of the following syntax types, depending on the kind of setting being configured.
| Syntax type | Description |
|---|---|
| Settings map | A map where keys correspond directly to Elasticsearch or Kibana configuration setting names. The structure matches the settings accepted by the corresponding API or configuration file, expressed in YAML instead of JSON. Used in config and clusterSettings fields. |
| Named resources map | A map where each key is a user-defined logical name and the value contains the resource definition. The key represents the resource identifier used in the corresponding Elasticsearch API request, and the value contains the request payload, expressed in YAML instead of JSON. Used in fields such as snapshotRepositories, snapshotLifecyclePolicies, ingestPipelines, indexLifecyclePolicies, indexTemplates, and securityRoleMappings. |
| List of resources | A list of objects where each item defines a resource entry. Each object follows the schema expected by the corresponding configuration mechanism. Used in secureSettings and secretMounts fields. |
Settings map

```yaml
spec:
  elasticsearch:
    clusterSettings:
      indices.recovery.max_bytes_per_sec: 50mb
```

The key corresponds to the name of a valid Elasticsearch cluster setting.
Named resources map

```yaml
spec:
  elasticsearch:
    snapshotRepositories:
      my-repo:
        type: fs
        settings:
          location: /snapshots
```

The key is a user-defined logical name. The value must match the payload accepted by the corresponding Elasticsearch API.
```yaml
spec:
  elasticsearch:
    indexTemplates:
      componentTemplates:
        test-component-template:
          template:
            mappings:
              properties:
                '@timestamp':
                  type: date
        test-runtime-component-template-test:
          template:
            mappings:
              runtime:
                day_of_week:
                  type: keyword
      composableIndexTemplates:
        test-template:
          composed_of:
          - test-component-template
          - test-runtime-component-template-test
          index_patterns:
          - test*
          - bar*
          priority: 500
          template:
            aliases:
              mydata: {}
            mappings:
              _source:
                enabled: true
              properties:
                created_at:
                  format: EEE MMM dd HH:mm:ss Z yyyy
                  type: date
                host_name:
                  type: keyword
            settings:
              number_of_shards: 1
          version: 1
```

Each top-level key represents a user-defined resource name.
List of resources

```yaml
spec:
  elasticsearch:
    secretMounts:
    - secretName: my-secret
      mountPath: /etc/secrets
    - secretName: my-certificate
      mountPath: /usr/share/elasticsearch/config/my-certificate
```

Each list item defines a secret mount entry and references an existing Kubernetes Secret by its `secretName`.