Logging
Applies to: Elastic Cloud Enterprise (ECE), Elastic Cloud on Kubernetes (ECK), Elastic Cloud Hosted, and self-managed deployments
You can configure several types of logs in Elastic Stack that can help you to gain insight into Elastic Stack operations, diagnose issues, and track certain types of events.
The following logging features are available:
For Elasticsearch:

- Application and component logging: Logs messages related to running Elasticsearch. You can configure the log level for Elasticsearch and, in self-managed clusters, configure the underlying Log4j settings to customize logging behavior.
- Deprecation logging: Records a message to the Elasticsearch log directory when you use deprecated Elasticsearch functionality. You can use the deprecation logs to update your application before upgrading Elasticsearch to a new major version.
- Audit logging: Logs security-related events on your deployment.
- Slow query and index logging: Helps you find and debug slow queries and slow indexing operations.

For Kibana:

- Application and component logging: Logs messages related to running Kibana. You can configure the log level for Kibana and, in self-managed, ECE, or ECK deployments, configure advanced settings to customize logging behavior.
- Audit logging: Logs security-related events on your deployment.
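For example, in a self-managed Elasticsearch cluster you can adjust the log level for a specific component dynamically through the cluster settings API. This is a sketch; the logger name `org.elasticsearch.discovery` is just one example of a component you might raise to `DEBUG`:

```console
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.discovery": "DEBUG"
  }
}
```

To restore the default level, set the same logger setting to `null`.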
The way that you access your logs differs depending on your deployment method.
Access your logs using one of the following options:
- All orchestrated deployments: Stack monitoring
- Elastic Cloud Hosted: Preconfigured logs and metrics
- Elastic Cloud Enterprise: Platform monitoring
If you run Kibana as a service, the default location of the logs varies based on your platform and installation method:
On Docker, log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`.
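As a sketch, assuming your Kibana container is named `kibana` (substitute your own container name or ID):

```shell
# Show the last 50 log lines and follow new output
docker logs --tail 50 -f kibana
```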
For macOS and Linux `.tar.gz` installations, Kibana writes logs to `$KIBANA_HOME/logs`. Files in `$KIBANA_HOME` risk deletion during an upgrade. In production, you should configure a different location for your logs.
For Windows `.zip` installations, Kibana writes logs to `%KIBANA_HOME%\logs`. Files in `%KIBANA_HOME%` risk deletion during an upgrade. In production, you should configure a different location for your logs.
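One way to move Kibana logs out of the installation directory in a self-managed deployment is to configure a file appender in `kibana.yml`. This is a sketch; the path `/var/log/kibana/kibana.log` is an example and must be writable by the user running Kibana:

```yaml
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    # Keep the default console appender and also write to the file
    appenders: [default, file]
    level: info
```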
If you run Kibana from the command line, Kibana prints logs to the standard output (`stdout`).
You can also consume logs using stack monitoring.
If you run Elasticsearch as a service, the default location of the logs varies based on your platform and installation method:
On Docker, log messages go to the console and are handled by the configured Docker logging driver. To access logs, run `docker logs`.
For macOS and Linux `.tar.gz` installations, Elasticsearch writes logs to `$ES_HOME/logs`. Files in `$ES_HOME` risk deletion during an upgrade. In production, we strongly recommend that you set `path.logs` to a location outside of `$ES_HOME`. See Path settings.
For Windows `.zip` installations, Elasticsearch writes logs to `%ES_HOME%\logs`. Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly recommend that you set `path.logs` to a location outside of `%ES_HOME%`. See Path settings.
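For example, in `elasticsearch.yml` on a self-managed node (the path shown is illustrative; pick any directory outside the installation directory that the Elasticsearch user can write to):

```yaml
# Keep logs outside the installation directory so upgrades don't remove them
path:
  logs: /var/log/elasticsearch
```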
If you run Elasticsearch from the command line, Elasticsearch prints logs to the standard output (`stdout`).
You can also consume logs using stack monitoring.
You can also collect and index the following types of logs from other components in your deployments:
- `apm*.log*`
- `fleet-server-json.log-*`
- `elastic-agent-json.log-*`
The `*` indicates that we also index the archived files of each type of log.
In Elastic Cloud Hosted and Elastic Cloud Enterprise, these types of logs are automatically ingested when stack monitoring is enabled.