Upstream OpenTelemetry Collectors and language SDKs

Note

This is one of several approaches you can use to integrate Elastic with OpenTelemetry. To compare approaches and choose the best approach for your use case, refer to OpenTelemetry.

The Elastic Stack natively supports the OpenTelemetry protocol (OTLP). This means trace data and metrics collected from your applications and infrastructure can be sent directly to the Elastic Stack.

Connect your OpenTelemetry Collector instances to Elastic Observability or Elastic Observability Serverless using the OTLP exporter:

receivers: 1
  # ...
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors: 2
  # ...
  memory_limiter:
    check_interval: 1s
    limit_mib: 2000
  batch:

exporters:
  debug:
    verbosity: detailed 3
  otlp: 4
    # Elastic APM server https endpoint without the "https://" prefix
    endpoint: "${env:ELASTIC_APM_SERVER_ENDPOINT}" <5> 57
    headers:
      # Elastic APM Server secret token
      Authorization: "Bearer ${env:ELASTIC_APM_SECRET_TOKEN}" <6> 67

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [..., memory_limiter, batch]
      exporters: [debug, otlp]
    metrics:
      receivers: [otlp]
      processors: [..., memory_limiter, batch]
      exporters: [debug, otlp]
    logs: 8
      receivers: [otlp]
      processors: [..., memory_limiter, batch]
      exporters: [debug, otlp]
  1. The receivers that accept data, such as the OTLP receiver for data emitted by APM agents, or the host metrics receiver.
  2. We recommend using the batch processor and the memory limiter processor. For more information, see recommended processors.
  3. The debug exporter is helpful for troubleshooting, and supports configurable verbosity levels: basic (default), normal, and detailed.
  4. Elastic Observability endpoint configuration. APM Server supports a ProtoBuf payload via both the OTLP protocol over gRPC transport (OTLP/gRPC) and the OTLP protocol over HTTP transport (OTLP/HTTP). To learn more about these exporters, see the OpenTelemetry Collector documentation: OTLP/HTTP Exporter or OTLP/gRPC exporter. When adding an endpoint to an existing configuration, an optional name component can be added, like otlp/elastic, to distinguish endpoints, as described in the OpenTelemetry Collector Configuration Basics. An OTLP/HTTP sketch follows this callout list.
  5. Hostname and port of the APM Server endpoint. For example, elastic-apm-server:8200.
  6. Credential for Elastic APM secret token authorization (Authorization: "Bearer a_secret_token") or API key authorization (Authorization: "ApiKey an_api_key").
  7. Environment-specific configuration parameters can be conveniently passed in as environment variables documented here (e.g. ELASTIC_APM_SERVER_ENDPOINT and ELASTIC_APM_SECRET_TOKEN).
  8. [preview] To send OpenTelemetry logs to Elastic Stack version 8.0+, declare a logs pipeline.
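
If you prefer the OTLP/HTTP transport, a minimal sketch of the equivalent exporter configuration might look like the following. Unlike the otlp (gRPC) exporter, the otlphttp exporter expects a full URL that includes the scheme; the environment variables are the same ones assumed above.

exporters:
  otlphttp:
    # otlphttp requires a full URL, including the "https://" prefix
    endpoint: "https://${env:ELASTIC_APM_SERVER_ENDPOINT}"
    headers:
      # Elastic APM Server secret token
      Authorization: "Bearer ${env:ELASTIC_APM_SECRET_TOKEN}"

To use it, reference otlphttp instead of otlp in the exporters list of each pipeline.

For Elastic Observability Serverless, the setup is similar but authenticates with an API key instead of a secret token: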
receivers: 1
  # ...
  otlp:

processors: 2
  # ...
  memory_limiter:
    check_interval: 1s
    limit_mib: 2000
  batch:

exporters:
  logging:
    loglevel: warn 3
  otlp/elastic: 4
    # Elastic https endpoint without the "https://" prefix
    endpoint: "${ELASTIC_APM_SERVER_ENDPOINT}" <5> 57
    headers:
      # Elastic API key
      Authorization: "ApiKey ${ELASTIC_APM_API_KEY}" <6> 67

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [..., memory_limiter, batch]
      exporters: [logging, otlp/elastic]
    metrics:
      receivers: [otlp]
      processors: [..., memory_limiter, batch]
      exporters: [logging, otlp/elastic]
    logs: 8
      receivers: [otlp]
      processors: [..., memory_limiter, batch]
      exporters: [logging, otlp/elastic]
  1. The receivers that accept data, such as the OTLP receiver for data emitted by APM agents, or the host metrics receiver.
  2. We recommend using the batch processor and the memory limiter processor. For more information, see recommended processors.
  3. The logging exporter is helpful for troubleshooting and supports various logging levels, like debug, info, warn, and error.
  4. Elastic Observability Serverless endpoint configuration. Elastic supports a ProtoBuf payload via both the OTLP protocol over gRPC transport (OTLP/gRPC) and the OTLP protocol over HTTP transport (OTLP/HTTP). To learn more about these exporters, see the OpenTelemetry Collector documentation: OTLP/HTTP Exporter or OTLP/gRPC exporter.
  5. Hostname and port of the Elastic endpoint. For example, elastic-apm-server:8200.
  6. Credential for Elastic APM API key authorization (Authorization: "ApiKey an_api_key").
  7. Environment-specific configuration parameters can be conveniently passed in as environment variables documented here (e.g. ELASTIC_APM_SERVER_ENDPOINT and ELASTIC_APM_API_KEY).
  8. [preview] To send OpenTelemetry logs to your project, declare a logs pipeline.
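
On recent OpenTelemetry Collector versions, the logging exporter shown above is deprecated in favor of the debug exporter used in the earlier example. A minimal sketch of the replacement:

exporters:
  # Replaces the deprecated logging exporter; verbosity can be
  # basic (the default), normal, or detailed
  debug:
    verbosity: normal

If you make this change, also reference debug instead of logging in the exporters list of each pipeline.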

You’re now ready to export traces and metrics from your services and applications.

Tip

When using the OpenTelemetry Collector, you should always prefer sending data via the OTLP exporter. Using other methods, like the elasticsearch exporter, will bypass all of the validation and data processing that Elastic performs. In addition, your data will not be viewable in your Observability project if you use the elasticsearch exporter.

Note

This document outlines how to send data directly from an upstream OpenTelemetry SDK to Elastic, which is appropriate when getting started. However, in many cases you should use the OpenTelemetry SDK to send data to an OpenTelemetry Collector that processes and exports data to Elastic. Read more about when and how to use a collector in the OpenTelemetry documentation.

To export traces and metrics to Elastic, instrument your services and applications with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app with the OpenTelemetry agent for Java. See the OpenTelemetry Instrumentation guides to download the OpenTelemetry agent or SDK for your language.

Define environment variables to configure the OpenTelemetry agent or SDK and enable communication with Elastic APM. For example, if you are instrumenting a Java app, define the following environment variables:

export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production
export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer an_apm_secret_token"
export OTEL_METRICS_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp" 1
java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
     -classpath lib/*:classes/ \
     com.mycompany.checkout.CheckoutServiceServer
  1. [preview] The OpenTelemetry logs intake via APM Server is currently in technical preview.
OTEL_RESOURCE_ATTRIBUTES
Fields that describe the service and the environment that the service runs in. See resource attributes for more information.
OTEL_EXPORTER_OTLP_ENDPOINT
APM Server URL. The host and port that APM Server listens for events on.
OTEL_EXPORTER_OTLP_HEADERS

Authorization header that includes the Elastic APM Secret token or API key: "Authorization=Bearer an_apm_secret_token" or "Authorization=ApiKey an_api_key".

For information on how to format an API key, see API keys.

Note the required space between Bearer and an_apm_secret_token, and between ApiKey and an_api_key.

Note

If you are using a version of the Python OpenTelemetry agent before 1.27.0, the content of the header must be URL-encoded. You can use the Python standard library’s urllib.parse.quote function to encode the content of the header.

OTEL_METRICS_EXPORTER
Metrics exporter to use. See exporter selection for more information.
OTEL_LOGS_EXPORTER

Logs exporter to use. See exporter selection for more information.

export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production
export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey an_apm_api_key"
export OTEL_METRICS_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp" 1
java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
     -classpath lib/*:classes/ \
     com.mycompany.checkout.CheckoutServiceServer
  1. [preview] The OpenTelemetry logs intake via Elastic is currently in technical preview.
OTEL_RESOURCE_ATTRIBUTES
Fields that describe the service and the environment that the service runs in. See resource attributes for more information.
OTEL_EXPORTER_OTLP_ENDPOINT
Elastic URL. The host and port that Elastic listens for APM events on.
OTEL_EXPORTER_OTLP_HEADERS

Authorization header that includes the Elastic APM API key: "Authorization=ApiKey an_api_key". Note the required space between ApiKey and an_api_key.

For information on how to format an API key, refer to Secure communication with APM agents.

Note

If you are using a version of the Python OpenTelemetry agent before 1.27.0, the content of the header must be URL-encoded. You can use the Python standard library’s urllib.parse.quote function to encode the content of the header.

OTEL_METRICS_EXPORTER
Metrics exporter to use. See exporter selection for more information.
OTEL_LOGS_EXPORTER

Logs exporter to use. See exporter selection for more information.

You are now ready to collect traces and metrics, verify that the data is arriving, and visualize your metrics.

APM Server supports both the OTLP/gRPC and OTLP/HTTP protocols on the same port as Elastic APM agent requests. For ease of setup, we recommend using OTLP/HTTP when proxying or load balancing requests to Elastic.

If you use the OTLP/gRPC protocol, requests to Elastic must use either HTTP/2 over TLS or HTTP/2 Cleartext (H2C). In either case, OTLP/gRPC requests include the header "Content-Type: application/grpc".

When using a layer 7 (L7) proxy like AWS ALB, requests must be proxied in a way that ensures requests to Elastic follow the rules outlined above. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you’d select the gRPC protocol when the "Content-Type: application/grpc" header exists on a request.
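
For illustration, here is a rough CloudFormation-style sketch of such a rule; the listener, target group, and VPC references are hypothetical, so verify the property names and values against the AWS documentation before use.

# Sketch: forward OTLP/gRPC requests to a gRPC target group
GrpcListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener      # hypothetical HTTPS listener (ALB requires HTTPS for gRPC)
    Priority: 1
    Conditions:
      - Field: http-header
        HttpHeaderConfig:
          HttpHeaderName: Content-Type
          Values:
            - application/grpc
    Actions:
      - Type: forward
        TargetGroupArn: !Ref GrpcTargetGroup

GrpcTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Protocol: HTTPS
    ProtocolVersion: GRPC                # route to targets using gRPC
    Port: 8200
    VpcId: !Ref Vpc                      # hypothetical VPC
    TargetType: ip
    HealthCheckProtocol: HTTPS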

Many L7 load balancers handle HTTP and gRPC traffic separately and rely on explicitly defined routes and service configurations to correctly proxy requests. Since APM Server serves both protocols on the same port, it may not be compatible with some L7 load balancers. For example, to work around this issue in Ingress NGINX Controller for Kubernetes, either:

  • Use the otlp exporter in the OTel collector and set the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC" on the K8s Ingress object proxying to APM Server (a sketch follows this list).
  • Use the otlphttp exporter in the OTel collector and set the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTP" (or "HTTPS" if APM Server expects TLS) on the K8s Ingress object proxying to APM Server.
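
For example, a minimal sketch of the first option, assuming a hypothetical apm-server Service listening on port 8200 and a hypothetical host name. Note that gRPC through Ingress NGINX typically also requires TLS to be configured on the Ingress.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apm-server  # hypothetical Ingress name
  annotations:
    # Proxy requests to the backend over gRPC so OTLP/gRPC traffic is preserved
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: apm.example.com  # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apm-server  # hypothetical Service name
                port:
                  number: 8200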

For more information on how to configure an AWS ALB to support gRPC, see this AWS blog post: Application Load Balancer Support for End-to-End HTTP/2 and gRPC.

For more information on how APM Server serves gRPC requests, see Muxing gRPC and HTTP/1.1.