Get started with ECS Logging Java
If you are using the Elastic APM Java agent, the easiest way to transform your logs into ECS-compatible JSON format is through the log_ecs_reformatting configuration option. By setting this single option, the Java agent automatically imports the correct ECS-logging library and configures your logging framework to use it either instead of (OVERRIDE/REPLACE) or in addition to (SHADE) your current configuration. No other changes are required! Make sure to check out the other Logging configuration options to unlock the full potential of this option.
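For example, if you configure the agent through a properties file, enabling this could look like the following sketch (override is one of the supported modes mentioned above; see the agent's Logging configuration reference for the full list of values and configuration sources):

```properties
# elasticapm.properties — let the APM Java agent reformat logs as ECS JSON
log_ecs_reformatting=override
```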
Otherwise, follow the steps below to manually apply ECS-formatting through your logging framework configuration. The following logging frameworks are supported:
- Logback (default for Spring Boot)
- Log4j2
- Log4j
- java.util.logging (JUL)
- JBoss Log Manager
The minimum required Logback version is 1.1.
Download the latest version of Elastic logging:
Add a dependency to your application:
<dependency>
    <groupId>co.elastic.logging</groupId>
    <artifactId>logback-ecs-encoder</artifactId>
    <version>${ecs-logging-java.version}</version>
</dependency>
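If you use Gradle instead of Maven, the equivalent declaration uses the same coordinates (the version placeholder below stands for the latest release, as above):

```groovy
dependencies {
    implementation 'co.elastic.logging:logback-ecs-encoder:<latest-version>'
}
```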
If you are not using a dependency management tool such as Maven, you have to manually add both the logback-ecs-encoder and ecs-logging-core jars to the classpath, for example to the $CATALINA_HOME/lib directory. Other than that, there are no required dependencies.
The minimum required Log4j2 version is 2.6.
Download the latest version of Elastic logging:
Add a dependency to your application:
<dependency>
    <groupId>co.elastic.logging</groupId>
    <artifactId>log4j2-ecs-layout</artifactId>
    <version>${ecs-logging-java.version}</version>
</dependency>
If you are not using a dependency management tool such as Maven, you have to manually add both the log4j2-ecs-layout and ecs-logging-core jars to the classpath, for example to the $CATALINA_HOME/lib directory. Other than that, there are no required dependencies.
The minimum required Log4j version is 1.2.4.
Download the latest version of Elastic logging:
Add a dependency to your application:
<dependency>
    <groupId>co.elastic.logging</groupId>
    <artifactId>log4j-ecs-layout</artifactId>
    <version>${ecs-logging-java.version}</version>
</dependency>
If you are not using a dependency management tool such as Maven, you have to manually add both the log4j-ecs-layout and ecs-logging-core jars to the classpath, for example to the $CATALINA_HOME/lib directory. Other than that, there are no required dependencies.
A formatter for JUL (java.util.logging) that produces ECS-compatible records. Useful for applications that use JUL as their primary logging framework, such as Apache Tomcat.
Download the latest version of Elastic logging:
Add a dependency to your application:
<dependency>
    <groupId>co.elastic.logging</groupId>
    <artifactId>jul-ecs-formatter</artifactId>
    <version>${ecs-logging-java.version}</version>
</dependency>
If you are not using a dependency management tool such as Maven, you have to manually add both the jul-ecs-formatter and ecs-logging-core jars to the classpath, for example to the $CATALINA_HOME/lib directory. Other than that, there are no required dependencies.
A formatter for JBoss Log Manager that produces ECS-compatible records. Useful for applications that use JBoss Log Manager as their primary logging framework, such as WildFly.
Download the latest version of Elastic logging:
Add a dependency to your application:
<dependency>
    <groupId>co.elastic.logging</groupId>
    <artifactId>jboss-logmanager-ecs-formatter</artifactId>
    <version>${ecs-logging-java.version}</version>
</dependency>
If you are not using a dependency management tool such as Maven, you have to manually add both the jboss-logmanager-ecs-formatter and ecs-logging-core jars to the classpath. Other than that, there are no required dependencies.
Spring Boot applications
In src/main/resources/logback-spring.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <include resource="org/springframework/boot/logging/logback/file-appender.xml"/>
    <include resource="co/elastic/logging/logback/boot/ecs-console-appender.xml"/>
    <include resource="co/elastic/logging/logback/boot/ecs-file-appender.xml"/>
    <root level="INFO">
        <appender-ref ref="ECS_JSON_CONSOLE"/>
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ECS_JSON_FILE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
You also need to add the following properties to your application.properties:
spring.application.name=my-application
# for Spring Boot 2.2.x+
logging.file.name=/path/to/my-application.log
# for older Spring Boot versions
logging.file=/path/to/my-application.log
Other applications
All you have to do is use co.elastic.logging.logback.EcsEncoder instead of the default pattern encoder in logback.xml:
<encoder class="co.elastic.logging.logback.EcsEncoder">
    <serviceName>my-application</serviceName>
    <serviceVersion>my-application-version</serviceVersion>
    <serviceEnvironment>my-application-environment</serviceEnvironment>
    <serviceNodeName>my-application-cluster-node</serviceNodeName>
</encoder>
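In context, the encoder is set on an appender. A minimal complete logback.xml might look like the following sketch (using the standard Logback ConsoleAppender; only serviceName is shown, the other parameters are optional):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="co.elastic.logging.logback.EcsEncoder">
            <serviceName>my-application</serviceName>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
    </root>
</configuration>
```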
Encoder Parameters
| Parameter name | Type | Default | Description |
|---|---|---|---|
| serviceName | String | | Sets the service.name field so you can filter your logs by a particular service name |
| serviceVersion | String | | Sets the service.version field so you can filter your logs by a particular service version |
| serviceEnvironment | String | | Sets the service.environment field so you can filter your logs by a particular service environment |
| serviceNodeName | String | | Sets the service.node.name field so you can filter your logs by a particular node of your clustered service |
| eventDataset | String | ${serviceName} | Sets the event.dataset field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
| includeMarkers | boolean | false | Log Markers as tags |
| stackTraceAsArray | boolean | false | Serializes the error.stack_trace as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex Filebeat configuration. |
| includeOrigin | boolean | false | If true, adds the log.origin.file.name, log.origin.file.line and log.origin.function fields. Note that you also have to set <includeCallerData>true</includeCallerData> on your appenders if you are using asynchronous ones. |
To include any custom field in the output, use the following syntax:
<additionalField>
    <key>key1</key>
    <value>value1</value>
</additionalField>
<additionalField>
    <key>key2</key>
    <value>value2</value>
</additionalField>
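With the encoder in place, each log event is written as a single-line JSON document. The output resembles the following sketch (the exact field set and the ecs.version value depend on the library version, and any additionalField entries are appended as top-level keys):

```json
{"@timestamp":"2024-01-01T12:00:00.000Z","log.level":"INFO","message":"Application started","ecs.version":"1.2.0","service.name":"my-application","event.dataset":"my-application","process.thread.name":"main","log.logger":"org.example.App"}
```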
Instead of the usual <PatternLayout/>, use <EcsLayout serviceName="my-app"/>. For example:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="DEBUG">
    <Appenders>
        <Console name="LogToConsole" target="SYSTEM_OUT">
            <EcsLayout serviceName="my-app" serviceVersion="my-app-version" serviceEnvironment="my-app-environment" serviceNodeName="my-app-cluster-node"/>
        </Console>
        <File name="LogToFile" fileName="logs/app.log">
            <EcsLayout serviceName="my-app" serviceVersion="my-app-version" serviceEnvironment="my-app-environment" serviceNodeName="my-app-cluster-node"/>
        </File>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="LogToFile"/>
            <AppenderRef ref="LogToConsole"/>
        </Root>
    </Loggers>
</Configuration>
Layout Parameters
| Parameter name | Type | Default | Description |
|---|---|---|---|
| serviceName | String | | Sets the service.name field so you can filter your logs by a particular service name |
| serviceVersion | String | | Sets the service.version field so you can filter your logs by a particular service version |
| serviceEnvironment | String | | Sets the service.environment field so you can filter your logs by a particular service environment |
| serviceNodeName | String | | Sets the service.node.name field so you can filter your logs by a particular node of your clustered service |
| eventDataset | String | ${serviceName} | Sets the event.dataset field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
| includeMarkers | boolean | false | Log Markers as tags |
| stackTraceAsArray | boolean | false | Serializes the error.stack_trace as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex Filebeat configuration. |
| includeOrigin | boolean | false | If true, adds the log.origin.file.name, log.origin.file.line and log.origin.function fields. Note that you also have to set includeLocation="true" on your loggers and appenders if you are using asynchronous ones. |
To include any custom field in the output, use the following syntax:
<EcsLayout>
    <KeyValuePair key="key1" value="constant value"/>
    <KeyValuePair key="key2" value="$${ctx:key}"/>
</EcsLayout>
Custom fields are included in the order they are declared. The values support lookups, which means the KeyValuePair setting can also be used to dynamically set predefined fields:
<EcsLayout serviceName="myService">
    <KeyValuePair key="service.version" value="$${spring:project.version}"/>
    <KeyValuePair key="service.node.name" value="${env:HOSTNAME}"/>
</EcsLayout>
The log4j2 EcsLayout
does not allocate any memory (unless the log event contains an Exception
) to reduce GC pressure. This is achieved by manually serializing JSON so that no intermediate JSON or map representation of a log event is needed.
Instead of the usual layout class "org.apache.log4j.PatternLayout", use "co.elastic.logging.log4j.EcsLayout". For example:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="LogToConsole" class="org.apache.log4j.ConsoleAppender">
        <param name="Target" value="System.out"/>
        <layout class="co.elastic.logging.log4j.EcsLayout">
            <param name="serviceName" value="my-app"/>
            <param name="serviceNodeName" value="my-app-cluster-node"/>
        </layout>
    </appender>
    <appender name="LogToFile" class="org.apache.log4j.RollingFileAppender">
        <param name="File" value="logs/app.log"/>
        <layout class="co.elastic.logging.log4j.EcsLayout">
            <param name="serviceName" value="my-app"/>
            <param name="serviceNodeName" value="my-app-cluster-node"/>
        </layout>
    </appender>
    <root>
        <priority value="INFO"/>
        <appender-ref ref="LogToFile"/>
        <appender-ref ref="LogToConsole"/>
    </root>
</log4j:configuration>
Layout Parameters
| Parameter name | Type | Default | Description |
|---|---|---|---|
| serviceName | String | | Sets the service.name field so you can filter your logs by a particular service name |
| serviceVersion | String | | Sets the service.version field so you can filter your logs by a particular service version |
| serviceEnvironment | String | | Sets the service.environment field so you can filter your logs by a particular service environment |
| serviceNodeName | String | | Sets the service.node.name field so you can filter your logs by a particular node of your clustered service |
| eventDataset | String | ${serviceName} | Sets the event.dataset field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
| stackTraceAsArray | boolean | false | Serializes the error.stack_trace as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex Filebeat configuration. |
| includeOrigin | boolean | false | If true, adds the log.origin.file.name, log.origin.file.line and log.origin.function fields. Note that you also have to set <param name="LocationInfo" value="true"/> if you are using AsyncAppender. |
To include any custom field in the output, use the following syntax:
<layout class="co.elastic.logging.log4j.EcsLayout">
    <param name="additionalField" value="key1=value1"/>
    <param name="additionalField" value="key2=value2"/>
</layout>
Custom fields are included in the order they are declared.
Specify co.elastic.logging.jul.EcsFormatter as the formatter for the required log handler.
For example, in $CATALINA_HOME/conf/logging.properties:
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = co.elastic.logging.jul.EcsFormatter
co.elastic.logging.jul.EcsFormatter.serviceName=my-app
co.elastic.logging.jul.EcsFormatter.serviceVersion=my-app-version
co.elastic.logging.jul.EcsFormatter.serviceEnvironment=my-app-environment
co.elastic.logging.jul.EcsFormatter.serviceNodeName=my-app-cluster-node
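To illustrate what such a formatter does, here is a minimal, stdlib-only sketch of a JUL Formatter that emits an ECS-style JSON line. This is not Elastic's EcsFormatter — it skips proper JSON escaping and most ECS fields, and the ecs.version value is only a placeholder; use the real jul-ecs-formatter in production:

```java
import java.util.logging.Formatter;
import java.util.logging.Level;
import java.util.logging.LogRecord;

// Minimal ECS-style JUL formatter sketch: one JSON object per line.
// NOTE: illustration only -- no JSON escaping, only a handful of ECS fields.
public class EcsStyleFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        // formatMessage resolves ResourceBundle keys and {0}-style parameters
        String message = formatMessage(record);
        return "{\"@timestamp\":\"" + record.getInstant()
                + "\",\"log.level\":\"" + record.getLevel().getName()
                + "\",\"message\":\"" + message
                + "\",\"ecs.version\":\"1.2.0\"}\n";
    }

    public static void main(String[] args) {
        System.out.print(new EcsStyleFormatter()
                .format(new LogRecord(Level.INFO, "hello ECS")));
    }
}
```

Installing it on a handler works exactly like the properties above: point the handler's formatter property at the class name.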
Layout Parameters
| Parameter name | Type | Default | Description |
|---|---|---|---|
| serviceName | String | | Sets the service.name field so you can filter your logs by a particular service name |
| serviceVersion | String | | Sets the service.version field so you can filter your logs by a particular service version |
| serviceEnvironment | String | | Sets the service.environment field so you can filter your logs by a particular service environment |
| serviceNodeName | String | | Sets the service.node.name field so you can filter your logs by a particular node of your clustered service |
| eventDataset | String | ${serviceName} | Sets the event.dataset field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
| stackTraceAsArray | boolean | false | Serializes the error.stack_trace as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex Filebeat configuration. |
| includeOrigin | boolean | false | If true, adds the log.origin.file.name fields. Note that JUL does not store the line number, so log.origin.file.line will always have the value 1. |
| additionalFields | String | | Adds additional static fields to all log events. The fields are specified as comma-separated key-value pairs. Example: co.elastic.logging.jul.EcsFormatter.additionalFields=key1=value1,key2=value2. |
Specify co.elastic.logging.jboss.logmanager.EcsFormatter as the formatter for the required log handler.
For example, with WildFly, create a jboss-logmanager-ecs-formatter module:
$WILDFLY_HOME/bin/jboss-cli.sh -c 'module add --name=co.elastic.logging.jboss-logmanager-ecs-formatter --resources=jboss-logmanager-ecs-formatter-${ecs-logging-java.version}.jar:/tmp/ecs-logging-core-${ecs-logging-java.version}.jar --dependencies=org.jboss.logmanager'
Add the formatter to a handler in the logging subsystem:
$WILDFLY_HOME/bin/jboss-cli.sh -c '/subsystem=logging/custom-formatter=ECS:add(module=co.elastic.logging.jboss-logmanager-ecs-formatter,
class=co.elastic.logging.jboss.logmanager.EcsFormatter, properties={serviceName=my-app,serviceVersion=my-app-version,serviceEnvironment=my-app-environment,serviceNodeName=my-app-cluster-node}),\
/subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter,value=ECS)'
Layout Parameters
| Parameter name | Type | Default | Description |
|---|---|---|---|
| serviceName | String | | Sets the service.name field so you can filter your logs by a particular service name |
| serviceVersion | String | | Sets the service.version field so you can filter your logs by a particular service version |
| serviceEnvironment | String | | Sets the service.environment field so you can filter your logs by a particular service environment |
| serviceNodeName | String | | Sets the service.node.name field so you can filter your logs by a particular node of your clustered service |
| eventDataset | String | ${serviceName} | Sets the event.dataset field used by the machine learning job of the Logs app to look for anomalies in the log rate. |
| stackTraceAsArray | boolean | false | Serializes the error.stack_trace as a JSON array where each element is on a new line to improve readability. Note that this requires a slightly more complex Filebeat configuration. |
| includeOrigin | boolean | false | If true, adds the log.origin.file.name fields. |
| additionalFields | String | | Adds additional static fields to all log events. The fields are specified as comma-separated key-value pairs. Example: additionalFields=key1=value1,key2=value2. |
If you’re using the Elastic APM Java agent, log correlation is enabled by default starting in version 1.30.0. In previous versions, log correlation is off by default but can be enabled by setting the enable_log_correlation config to true.
- Follow the Filebeat quick start
- Add the following configuration to your filebeat.yaml file.
For Filebeat 7.16+
filebeat.inputs:
- type: filestream # 1
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true # 2
        add_error_key: true # 3
        expand_keys: true # 4
processors: # 5
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
1. Use the filestream input to read lines from active log files.
2. Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
3. Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
4. Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.
5. Processors enhance your data. See processors to learn more.
For Filebeat < 7.16
filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Kubernetes guide.
- Enable hints-based autodiscover (uncomment the corresponding section in filebeat-kubernetes.yaml).
- Add these annotations to your pods that log using ECS loggers. This will make sure the logs are parsed appropriately.
annotations:
  co.elastic.logs/json.overwrite_keys: true # 1
  co.elastic.logs/json.add_error_key: true # 2
  co.elastic.logs/json.expand_keys: true # 3
1. Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
2. Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
3. Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Docker guide.
- Enable hints-based autodiscover.
- Add these labels to your containers that log using ECS loggers. This will make sure the logs are parsed appropriately.
labels:
  co.elastic.logs/json.overwrite_keys: true # 1
  co.elastic.logs/json.add_error_key: true # 2
  co.elastic.logs/json.expand_keys: true # 3
1. Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
2. Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
3. Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.
For more information, see the Filebeat reference.
Filebeat can normally only decode JSON if there is one JSON object per line. When stackTraceAsArray is enabled, each stack trace element is written on a new line, which improves readability. But by combining the multiline settings with a decode_json_fields processor, Filebeat can also handle multi-line JSON:
filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  multiline.pattern: '^{'
  multiline.negate: true
  multiline.match: after
processors:
  - decode_json_fields:
      fields: message
      target: ""
      overwrite_keys: true
  # flattens the array to a single string
  - script:
      when:
        has_fields: ['error.stack_trace']
      lang: javascript
      id: my_filter
      source: >
        function process(event) {
            event.Put("error.stack_trace", event.Get("error.stack_trace").join("\n"));
        }