Run Kibana in production

Applies to: ECE, ECK, Elastic Cloud Hosted, Self-Managed

How you deploy Kibana largely depends on your use case. If you are the only user, you can run Kibana on your local machine and configure it to point to whatever Elasticsearch instance you want to interact with. Conversely, if you have a large number of heavy Kibana users, you might need to load balance across multiple Kibana instances that are all connected to the same Elasticsearch cluster or deployment.
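For the single-user case described above, pointing Kibana at an Elasticsearch instance is a matter of a few settings in `kibana.yml`. The sketch below is a minimal example; the host URL and port are placeholders for your own environment, and a production setup would also configure authentication and TLS.

```yaml
# kibana.yml — minimal single-user setup (host URL is a placeholder)
server.host: "localhost"
server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]
```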

Historically, Kibana’s scalability was primarily influenced by the number of concurrent users and the complexity of dashboards and visualizations. However, with the introduction of capabilities such as Kibana Alerting and the Detection Rules engine, which are critical components of the Observability and Security solutions, the scalability factors have evolved significantly.

Now, Kibana’s resource requirements extend beyond user activity. The system must also handle workloads generated by automated processes, such as scheduled alerts, background detection rules, and other periodic tasks. These operations are managed by Kibana Task Manager, which is responsible for scheduling, executing, and coordinating all background tasks.

Additionally, the task manager enables distributed coordination across multiple Kibana instances, allowing Kibana to function as a logical cluster in certain aspects.
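Task Manager behavior can be tuned per Kibana instance in `kibana.yml`. The sketch below shows two commonly adjusted settings; the values shown are believed to be the defaults at the time of writing, and coordination across instances happens automatically through a shared Elasticsearch index, so no extra clustering configuration is needed.

```yaml
# kibana.yml — Task Manager tuning (values shown are assumed defaults; adjust cautiously)
xpack.task_manager.max_workers: 10      # maximum concurrent tasks per Kibana instance
xpack.task_manager.poll_interval: 3000  # milliseconds between polls for claimable tasks
```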

Important
  • Kibana does not support rolling upgrades, and deploying mixed versions of Kibana can result in data loss or upgrade failures. Please shut down all instances of Kibana before performing an upgrade, and ensure all running Kibana instances have matching versions.
  • While Kibana isn’t resource intensive, we still recommend running Kibana separately from your Elasticsearch data or master nodes.

This section provides guidance on key configurations and optimizations for running Kibana in production environments. You’ll learn how to scale, secure, and optimize Kibana for high availability and performance, as well as how to manage background tasks and other features effectively.

Topics covered include:

  • High availability and traffic distribution: For self-managed deployments, learn how to load balance traffic across multiple Kibana instances, how to balance traffic to different deployments, and how to distribute Kibana traffic across multiple Elasticsearch instances.

  • Configure Kibana memory usage: Configure Kibana’s memory limit in self-managed deployments.

  • Manage Kibana background tasks: Learn how Kibana runs background tasks like alerting and reporting, and get guidance on scaling and throughput tuning for reliable task execution. Applicable to all deployment types.

  • Optimize Kibana alerting performance: Learn how Kibana runs alerting rules and actions using background tasks, and how to scale alerting by tuning task throughput and circuit breakers. Applicable to all deployment types.

  • Kibana reporting production setup: Learn how Kibana generates reports using a headless version of Chromium, and how to configure your environment securely for production, including sandboxing and OS compatibility.
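As a concrete example of the memory configuration mentioned above, the Node.js heap size for a self-managed Kibana can be capped through the `config/node.options` file. The value below is an illustrative example, not a recommendation; size it for your own workload.

```
# config/node.options — cap the Node.js heap (value in MB; 2048 is an example)
--max-old-space-size=2048
```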

Note

Not all recommendations in this section apply to every deployment type. Be sure to check the section headers or applicability notes to confirm whether a given configuration is relevant to your environment.

In addition to the guidance provided in this section, other areas of the documentation cover important aspects for running Kibana in production, such as: