ECS Logging with Winston
This Node.js package provides a formatter for the winston logger, compatible with Elastic Common Schema (ECS) logging. In combination with the Filebeat shipper, you can monitor all your logs in one place in the Elastic Stack. winston 3.x versions >=3.3.3 are supported.
$ npm install @elastic/ecs-winston-format
const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');
const logger = winston.createLogger({
  format: ecsFormat(/* options */), 1
  transports: [
    new winston.transports.Console()
  ]
});
logger.info('hi');
logger.error('oops there is a problem', { err: new Error('boom') });
- Pass the ECS formatter to winston here.
The best way to collect the logs once they are ECS-formatted is with Filebeat:
- Follow the Filebeat quick start.
- Add the following configuration to your filebeat.yaml file.
For Filebeat 7.16+
filebeat.inputs:
- type: filestream 1
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true 2
        add_error_key: true 3
        expand_keys: true 4

processors: 5
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
- Use the filestream input to read lines from active log files.
- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
- Processors enhance your data. See processors to learn more.
For Filebeat < 7.16
filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
For Kubernetes
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Kubernetes guide.
- Enable hints-based autodiscover (uncomment the corresponding section in filebeat-kubernetes.yaml).
- Add these annotations to your pods that log using ECS loggers. This will make sure the logs are parsed appropriately.
annotations:
  co.elastic.logs/json.overwrite_keys: true 1
  co.elastic.logs/json.add_error_key: true 2
  co.elastic.logs/json.expand_keys: true 3
- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
For Docker
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Docker guide.
- Enable hints-based autodiscover.
- Add these labels to your containers that log using ECS loggers. This will make sure the logs are parsed appropriately.
labels:
  co.elastic.logs/json.overwrite_keys: true 1
  co.elastic.logs/json.add_error_key: true 2
  co.elastic.logs/json.expand_keys: true 3
- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
For more information, see the Filebeat reference.
You might like to try out our tutorial using Node.js ECS logging with winston: Ingest logs from a Node.js web application using Filebeat.
const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');
const logger = winston.createLogger({
  level: 'info',
  format: ecsFormat(/* options */), 1
  transports: [
    new winston.transports.Console()
  ]
});
logger.info('hi');
logger.error('oops there is a problem', { foo: 'bar' });
- See available options below.
Running this script (available here) will produce log output similar to the following:
% node examples/basic.js
{"@timestamp":"2023-10-14T02:14:17.302Z","log.level":"info","message":"hi","ecs.version":"8.10.0"}
{"@timestamp":"2023-10-14T02:14:17.304Z","log.level":"error","message":"oops there is a problem","ecs.version":"8.10.0","foo":"bar"}
The formatter handles serialization to JSON, so you don't need to add the json formatter. In addition, a timestamp is automatically generated by the formatter, so you don't need to add the timestamp formatter.
By default, the formatter will convert an err meta field that is an Error instance to ECS Error fields. For example:
const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');
const logger = winston.createLogger({
  format: ecsFormat(),
  transports: [
    new winston.transports.Console()
  ]
});
const myErr = new Error('boom');
logger.info('oops', { err: myErr });
will yield (pretty-printed for readability):
% node examples/error.js | jq .
{
  "@timestamp": "2021-01-26T17:25:07.983Z",
  "log.level": "info",
  "message": "oops",
  "error": {
    "type": "Error",
    "message": "boom",
    "stack_trace": "Error: boom\n at Object.<anonymous> (..."
  },
  "ecs.version": "8.10.0"
}
Special handling of the err meta field can be disabled via the convertErr: false option:
...
const logger = winston.createLogger({
  format: ecsFormat({ convertErr: false }),
...
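For reference, a complete, runnable version of the elided snippet above might look like the following sketch. The output is not shown here; with convertErr disabled, the err value is serialized like any other meta field rather than being mapped to ECS error fields.

// Sketch: same setup as above, but with conversion of the `err` meta field disabled.
const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');

const logger = winston.createLogger({
  format: ecsFormat({ convertErr: false }),
  transports: [
    new winston.transports.Console()
  ]
});

// The resulting record will not contain ECS error.type/error.message/error.stack_trace fields.
logger.info('oops', { err: new Error('boom') });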
With the convertReqRes: true option, the formatter will automatically convert Node.js core request and response objects when passed as the req and res meta fields, respectively.
const http = require('http');
const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');
const logger = winston.createLogger({
  level: 'info',
  format: ecsFormat({ convertReqRes: true }), 1
  transports: [
    new winston.transports.Console()
  ]
});

const server = http.createServer(handler);
server.listen(3000, () => {
  logger.info('listening at http://localhost:3000')
});

function handler (req, res) {
  res.setHeader('Foo', 'Bar');
  res.end('ok');
  logger.info('handled request', { req, res }); 2
}
- Use the convertReqRes option.
- Log req and/or res meta fields.
This will produce logs with request and response info using ECS HTTP fields. For example:
% node examples/http.js | jq . 1
... # run 'curl http://localhost:3000/'
{
  "@timestamp": "2023-10-14T02:15:54.768Z",
  "log.level": "info",
  "message": "handled request",
  "http": {
    "version": "1.1",
    "request": {
      "method": "GET",
      "headers": {
        "host": "localhost:3000",
        "user-agent": "curl/8.1.2",
        "accept": "*/*"
      }
    },
    "response": {
      "status_code": 200,
      "headers": {
        "foo": "Bar"
      }
    }
  },
  "url": {
    "path": "/",
    "full": "http://localhost:3000/"
  },
  "client": {
    "address": "::ffff:127.0.0.1",
    "ip": "::ffff:127.0.0.1",
    "port": 49538
  },
  "user_agent": {
    "original": "curl/8.1.2"
  },
  "ecs.version": "8.10.0"
}
- using jq for pretty printing
This ECS log formatter integrates with Elastic APM. If your Node app is using the Node.js Elastic APM Agent, then a number of fields are added to log records to correlate between APM services or traces and logging data:
- Log statements (e.g. logger.info(...)) called when there is a current tracing span will include tracing fields: trace.id, transaction.id, span.id.
- A number of service identifier fields determined by or configured on the APM agent allow cross-linking between services and logs in Kibana: service.name, service.version, service.environment, service.node.name.
- event.dataset enables log rate anomaly detection in the Elastic Observability app.
For example, running examples/http-with-elastic-apm.js and curl -i localhost:3000/ results in a log record with the following:
% node examples/http-with-elastic-apm.js | jq .
...
"service.name": "http-with-elastic-apm",
"service.version": "1.4.0",
"service.environment": "development",
"event.dataset": "http-with-elastic-apm"
"trace.id": "7fd75f0f33ff49aba85d060b46dcad7e",
"transaction.id": "6c97c7c1b468fa05"
}
These IDs match trace data reported by the APM agent.
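A hedged sketch of what such an APM-enabled setup can look like is shown below. This is not the exact contents of examples/http-with-elastic-apm.js, and the explicit serviceName is an assumption (the agent can also take the name from package.json).

// Sketch: start the Elastic APM agent before other requires so it can instrument http.
require('elastic-apm-node').start({
  serviceName: 'http-with-elastic-apm' // assumption: set explicitly for this sketch
});

const http = require('http');
const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');

const logger = winston.createLogger({
  format: ecsFormat(), // APM integration is enabled by default
  transports: [
    new winston.transports.Console()
  ]
});

http.createServer((req, res) => {
  res.end('ok');
  // Emitted inside the APM transaction for this request, so the record picks up
  // trace.id, transaction.id, and the service.* / event.dataset fields.
  logger.info('handled request');
}).listen(3000, () => {
  logger.info('listening at http://localhost:3000');
});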
Integration with Elastic APM can be explicitly disabled via the apmIntegration: false option, for example:
const logger = winston.createLogger({
  format: ecsFormat({ apmIntegration: false }),
  // ...
})
The ecs-logging spec suggests that the first three fields in log records should be @timestamp, log.level, and message. As of version 1.5.0, this formatter does not follow this suggestion. It would be possible, but would require creating a new object in ecsFields for each log record. Given that the ordering of ecs-logging fields is for human readability and does not affect interoperability, the decision was made to prefer performance.
ecsFormat([options])
options (object): The following options are supported:
- convertErr (boolean): Whether to convert a logged err field to ECS error fields. Default: true.
- convertReqRes (boolean): Whether to convert logged req and res HTTP request and response objects to ECS HTTP, User agent, and URL fields. Default: false.
- apmIntegration (boolean): Whether to enable APM agent integration. Default: true.
- serviceName (string): A "service.name" value. If specified this overrides any value from an active APM agent.
- serviceVersion (string): A "service.version" value. If specified this overrides any value from an active APM agent.
- serviceEnvironment (string): A "service.environment" value. If specified this overrides any value from an active APM agent.
- serviceNodeName (string): A "service.node.name" value. If specified this overrides any value from an active APM agent.
- eventDataset (string): An "event.dataset" value. If specified this overrides the default of using ${serviceName}.
Create a formatter for winston that emits in ECS Logging format. This is a single format that handles both ecsFields([options]) and ecsStringify([options]). The following two are equivalent:
const { ecsFormat, ecsFields, ecsStringify } = require('@elastic/ecs-winston-format');
const winston = require('winston');
const logger = winston.createLogger({
  format: ecsFormat(/* options */),
  // ...
});

const logger = winston.createLogger({
  format: winston.format.combine(
    ecsFields(/* options */),
    ecsStringify()
  ),
  // ...
});
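If you are not running the APM agent, or want to override what it reports, the serviceName, serviceVersion, serviceEnvironment, serviceNodeName, and eventDataset options listed above can be set explicitly. A sketch with placeholder values:

// Sketch: set service metadata directly on the formatter. The values are
// placeholders; when set, they override anything an active APM agent provides.
const winston = require('winston');
const { ecsFormat } = require('@elastic/ecs-winston-format');

const logger = winston.createLogger({
  format: ecsFormat({
    serviceName: 'my-service',        // -> service.name
    serviceVersion: '1.2.3',          // -> service.version
    serviceEnvironment: 'production', // -> service.environment
    serviceNodeName: 'my-service-1',  // -> service.node.name
    eventDataset: 'my-service'        // -> event.dataset
  }),
  transports: [
    new winston.transports.Console()
  ]
});

logger.info('hi');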
ecsFields([options])
options (object): The following options are supported:
- convertErr (boolean): Whether to convert a logged err field to ECS error fields. Default: true.
- convertReqRes (boolean): Whether to convert logged req and res HTTP request and response objects to ECS HTTP, User agent, and URL fields. Default: false.
- apmIntegration (boolean): Whether to enable APM agent integration. Default: true.
- serviceName (string): A "service.name" value. If specified this overrides any value from an active APM agent.
- serviceVersion (string): A "service.version" value. If specified this overrides any value from an active APM agent.
- serviceEnvironment (string): A "service.environment" value. If specified this overrides any value from an active APM agent.
- serviceNodeName (string): A "service.node.name" value. If specified this overrides any value from an active APM agent.
- eventDataset (string): An "event.dataset" value. If specified this overrides the default of using ${serviceName}.
Create a formatter for winston that converts fields on the log record info object to ECS Logging format.
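This allows ecsFields to be combined with a stringifier other than ecsStringify. For example, the following sketch pairs it with logform's built-in json() format; note that, unlike ecsStringify, winston.format.json() also serializes winston's own level field and converts bigints to strings (see the differences listed below).

// Sketch: ECS field mapping with a different JSON serializer.
const winston = require('winston');
const { ecsFields } = require('@elastic/ecs-winston-format');

const logger = winston.createLogger({
  format: winston.format.combine(
    ecsFields(),          // convert log record fields to ECS names
    winston.format.json() // serialize to JSON (also includes the non-ECS `level` field)
  ),
  transports: [
    new winston.transports.Console()
  ]
});

logger.info('hello');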
ecsStringify([options])
Create a formatter for winston that stringifies/serializes the log record to JSON.
This is similar to logform.json(). They both use the safe-stable-stringify package to produce the JSON. Some differences:
- This stringifier skips serializing the level field, because it is not an ECS field.
- Winston provides a replacer that converts bigints to strings. The argument for doing so is that a JavaScript JSON parser loses precision when parsing a bigint. The argument against is that a BigInt changes type to a string rather than a number. For now this stringifier does not convert BigInts to strings.