High Availability and load balancing in Kibana
This page provides guidance on scaling Kibana by distributing traffic across multiple instances, accessing multiple load-balanced deployments, and configuring high availability with multiple Elasticsearch nodes.
For scaling considerations related to background tasks and the alerting framework, refer to Kibana task manager: performance and scaling, and Kibana alerting: performance and scaling.
The configurations provided in this section are required only for self-managed deployments. Orchestration systems automatically apply the necessary settings when multiple Kibana instances belong to the same deployment.
To run multiple Kibana instances connected to the same Elasticsearch cluster, you need to adjust the configuration. See the Kibana configuration reference for details on each setting.
When adding multiple Kibana instances to the same deployment in Elastic Cloud Hosted, Elastic Cloud Enterprise, or Elastic Cloud on Kubernetes, the orchestrator applies the necessary configuration, requiring no manual setup.
These settings must be unique across each Kibana instance:

- `server.uuid` (if not provided, this is autogenerated)
- `server.name`
- `path.data`
- `pid.file`
- `server.port`
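As an illustration, a per-instance configuration file might set these values as follows. The names, paths, and port below are hypothetical; choose values appropriate to your environment:

```yaml
# config/instance1.yml — values unique to this instance (hypothetical)
server.name: kibana-instance-1
server.port: 5601
path.data: /var/lib/kibana/instance1
pid.file: /var/run/kibana/instance1.pid
# server.uuid is autogenerated if omitted
```

A second instance would use its own name, port, data path, and PID file.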
When using a file appender, the target file must also be unique:
```yaml
logging:
  appenders:
    default:
      type: file
      fileName: /unique/path/per/instance
```
These settings must be the same for all Kibana instances belonging to the same cluster or deployment:
- `xpack.security.encryptionKey` (decrypting session information)
- `xpack.security.authc.*` (authentication configuration)
- `xpack.security.session.*` (session configuration)
- `xpack.reporting.encryptionKey` (decrypting reports)
- `xpack.encryptedSavedObjects.encryptionKey` (decrypting saved objects)
- `xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys` (saved objects encryption key rotation, if any)
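For example, every instance in the deployment could share keys like the following. The key values here are placeholders, not real keys; each must be a string of at least 32 characters, and the `bin/kibana-encryption-keys generate` tool can produce suitable values:

```yaml
# Identical across all instances in the same deployment (placeholder values)
xpack.security.encryptionKey: "placeholder-change-me-32-chars-min-aaaa"
xpack.reporting.encryptionKey: "placeholder-change-me-32-chars-min-bbbb"
xpack.encryptedSavedObjects.encryptionKey: "placeholder-change-me-32-chars-min-cccc"
```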
Warning: If the authentication configuration does not match, sessions from unrecognized providers in each Kibana instance will be deleted during that instance’s regular session cleanup. Similarly, inconsistencies in session configuration can also lead to undesired session logouts. This also applies to any Kibana instances that are backed by the same Elasticsearch instance and share the same kibana.index, even if they are not behind the same load balancer.
Separate configuration files can be used from the command line with the `-c` flag:

```sh
bin/kibana -c config/instance1.yml
bin/kibana -c config/instance2.yml
```
To access multiple load-balanced Kibana deployments from the same browser, explicitly set `xpack.security.cookieName` to the same value across all Kibana instances within the same cluster, and use different values for other clusters. This prevents cookie conflicts between Kibana instances, ensuring seamless high availability and keeping the session active if an instance fails.
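A minimal sketch of this setting, assuming a deployment informally called "cluster A" (the cookie name itself is an arbitrary choice):

```yaml
# Same value on every Kibana instance of cluster A;
# use a different name (e.g. sid-cluster-b) for other clusters.
xpack.security.cookieName: sid-cluster-a
```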
In this context, a Kibana cluster or deployment refers to multiple Kibana instances connected to the same Elasticsearch cluster.
Kibana can be configured to connect to multiple Elasticsearch nodes in the same cluster. If a node becomes unavailable, Kibana will transparently connect to an available node and continue operating. Requests to available hosts are routed in a round-robin fashion (except for Dev Tools, which connects only to the first available node).
In kibana.yml:

```yaml
elasticsearch.hosts:
  - http://elasticsearch1:9200
  - http://elasticsearch2:9200
```
Related configurations include `elasticsearch.sniffInterval`, `elasticsearch.sniffOnStart`, and `elasticsearch.sniffOnConnectionFault`. These can be used to automatically update the list of hosts as a cluster is resized. Parameters can be found in the Kibana configuration reference.
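A sketch of how these sniffing options might be combined in kibana.yml; the interval value below is an illustrative choice, not a recommended default:

```yaml
# Discover cluster nodes at startup, refresh the host list
# periodically, and re-sniff when a connection fails.
elasticsearch.sniffOnStart: true
elasticsearch.sniffInterval: 60000
elasticsearch.sniffOnConnectionFault: true
```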
This configuration can be useful when there is no load balancer or reverse proxy in front of Elasticsearch. If a load balancer is in place to distribute traffic among Elasticsearch instances, Kibana should be configured to connect to it instead.
In orchestrated deployments, Kibana is automatically configured to connect to Elasticsearch through load-balanced services, such as platform proxies in ECE or ECH, or Kubernetes services in the case of ECK.