Get started with ECS Logging Go (Zap)
Add the package to your go.mod file:

```go
require go.elastic.co/ecszap master
```
Set up a default logger. For example:

```go
encoderConfig := ecszap.NewDefaultEncoderConfig()
core := ecszap.NewCore(encoderConfig, os.Stdout, zap.DebugLevel)
logger := zap.New(core, zap.AddCaller())
```
You can customize your ECS logger. For example:

```go
encoderConfig := ecszap.EncoderConfig{
	EncodeName:     customNameEncoder,
	EncodeLevel:    zapcore.CapitalLevelEncoder,
	EncodeDuration: zapcore.MillisDurationEncoder,
	EncodeCaller:   ecszap.FullCallerEncoder,
}
core := ecszap.NewCore(encoderConfig, os.Stdout, zap.DebugLevel)
logger := zap.New(core, zap.AddCaller())

// Add fields and a logger name
logger = logger.With(zap.String("custom", "foo"))
logger = logger.Named("mylogger")

// Use strongly typed Field values
logger.Info("some logging info",
	zap.Int("count", 17),
	zap.Error(errors.New("boom")))
```
The example above produces the following log output:

```json
{
  "log.level": "info",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 265
  },
  "message": "some logging info",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "count": 17,
  "error": {
    "message": "boom"
  }
}
```
Errors that carry a stack trace, such as errors wrapped with github.com/pkg/errors (imported here as `pkgerrors`), are logged with the trace included:

```go
err := errors.New("boom")
logger.Error("some error", zap.Error(pkgerrors.Wrap(err, "crash")))
```
The example above produces the following log output:

```json
{
  "log.level": "error",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 290
  },
  "message": "some error",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "error": {
    "message": "crash: boom",
    "stack_trace": "\nexample.example\n\t/Users/xyz/example/example.go:50\nruntime.example\n\t/Users/xyz/.gvm/versions/go1.13.8.darwin.amd64/src/runtime/proc.go:203\nruntime.goexit\n\t/Users/xyz/.gvm/versions/go1.13.8.darwin.amd64/src/runtime/asm_amd64.s:1357"
  }
}
```
The logger can also be converted to zap's sugared logger:

```go
sugar := logger.Sugar()
sugar.Infow("some logging info",
	"foo", "bar",
	"count", 17,
)
```
The example above produces the following log output:

```json
{
  "log.level": "info",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 311
  },
  "message": "some logging info",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "foo": "bar",
  "count": 17
}
```
You can also wrap your own `zapcore.Core` with ECS functionality:

```go
encoderConfig := ecszap.NewDefaultEncoderConfig()
encoder := zapcore.NewJSONEncoder(encoderConfig.ToZapCoreEncoderConfig())
syslogCore := newSyslogCore(encoder, level) // create your own core
core := ecszap.WrapCore(syslogCore)
logger := zap.New(core, zap.AddCaller())
```
Depending on your needs, there are different ways to create the logger. For example:

```go
encoderConfig := ecszap.ECSCompatibleEncoderConfig(zap.NewDevelopmentEncoderConfig())
encoder := zapcore.NewJSONEncoder(encoderConfig)
core := zapcore.NewCore(encoder, os.Stdout, zap.DebugLevel)
logger := zap.New(ecszap.WrapCore(core), zap.AddCaller())
```

Or:

```go
config := zap.NewProductionConfig()
config.EncoderConfig = ecszap.ECSCompatibleEncoderConfig(config.EncoderConfig)
logger, err := config.Build(ecszap.WrapCoreOption(), zap.AddCaller())
```
- Follow the Filebeat quick start.
- Add the following configuration to your `filebeat.yaml` file.
For Filebeat 7.16+:

```yaml
filebeat.inputs:
- type: filestream # 1
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true # 2
        add_error_key: true # 3
        expand_keys: true # 4

processors: # 5
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```

1. Use the filestream input to read lines from active log files.
2. Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
3. Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
4. Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
5. Processors enhance your data. See the processors documentation to learn more.
For Filebeat < 7.16:

```yaml
filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Kubernetes guide.
- Enable hints-based autodiscover (uncomment the corresponding section in `filebeat-kubernetes.yaml`).
- Add these annotations to your pods that log using ECS loggers. This will make sure the logs are parsed appropriately.

```yaml
annotations:
  co.elastic.logs/json.overwrite_keys: true # 1
  co.elastic.logs/json.add_error_key: true # 2
  co.elastic.logs/json.expand_keys: true # 3
```

1. Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
2. Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
3. Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Docker guide.
- Enable hints-based autodiscover.
- Add these labels to your containers that log using ECS loggers. This will make sure the logs are parsed appropriately.

```yaml
labels:
  co.elastic.logs/json.overwrite_keys: true # 1
  co.elastic.logs/json.add_error_key: true # 2
  co.elastic.logs/json.expand_keys: true # 3
```

1. Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
2. Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
3. Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
For more information, see the Filebeat reference.