Quickstart: Ingest custom metrics with EDOT
Applies to: Serverless Observability, Elastic Stack, EDOT Collector
Use this quickstart to send custom metrics to Elastic using the Elastic Distribution of OpenTelemetry (EDOT) Collector.
You'll install a lightweight EDOT Collector, configure a minimal OpenTelemetry Protocol (OTLP) metrics pipeline, and verify the data in Elastic Observability.
Prerequisites
- An Elastic deployment (Serverless, Elastic Cloud Hosted, or self-managed)
- An Observability project or a Kibana instance
- Permissions to create API keys
- A system to run the EDOT Collector (Docker, host, or VM)
- Optional: An application that emits OpenTelemetry metrics
Create an Elastic API key
In your Elastic Observability deployment:
- Go to Management > Stack Management > API keys.
- Create a new API key and copy the value.
- Note your deployment's OTLP ingest endpoint.
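The Collector's `elasticsearch` exporter typically expects the API key in its encoded form. If you only have the separate `id` and `api_key` values that key creation returns, the encoded form is the Base64 of `id:api_key`. A standard-library sketch with placeholder values:

```python
import base64

# Placeholder values; substitute the id and api_key fields returned when
# the key was created (Kibana shows these once, at creation time).
key_id = "abc123"
api_key = "s3cr3t"

# The encoded form is Base64 of "id:api_key"; it is what gets sent as
# an "Authorization: ApiKey <encoded>" header.
encoded = base64.b64encode(f"{key_id}:{api_key}".encode()).decode()
print(f"Authorization: ApiKey {encoded}")
```

If Kibana already gave you a single Base64-encoded value, use it directly and skip this step.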
Run the EDOT Collector with a minimal metrics pipeline
Update the `collector-config.yaml` file with the following Collector configuration to receive OTLP metrics and export them to Elastic:

```yaml
receivers:
  otlp:
    protocols:
      http:
      grpc:

processors:
  batch: {}

exporters:
  elasticsearch:
    endpoints: ["<OTLP_ENDPOINT>"]
    api_key: "<YOUR_API_KEY>"
    mapping:
      mode: otel
    metrics_dynamic_index:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
```

Run the configuration, for example with Docker:
```shell
docker run --rm \
  -v $(pwd)/collector-config.yaml:/etc/otel/config.yaml \
  -p 4317:4317 -p 4318:4318 \
  docker.elastic.co/observability/otel-collector:latest
```
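Before wiring up an application, you can smoke-test the pipeline by posting a single metric to the Collector's OTLP/HTTP endpoint. The following Python sketch builds a minimal OTLP JSON payload; the `send` helper and the `smoke-test` service name are illustrative, and the URL assumes the port mapping from the Docker command above:

```python
import json
import time
import urllib.request

# A minimal OTLP/HTTP JSON payload carrying one gauge data point.
# The nesting follows the OTLP JSON encoding:
# resourceMetrics -> scopeMetrics -> metrics -> gauge -> dataPoints.
payload = {
    "resourceMetrics": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "smoke-test"}},
        ]},
        "scopeMetrics": [{
            "metrics": [{
                "name": "custom.temperature",
                "gauge": {"dataPoints": [{
                    "asDouble": 26.7,
                    "timeUnixNano": str(time.time_ns()),
                }]},
            }],
        }],
    }],
}

def send(payload: dict) -> int:
    """POST the payload to the local Collector and return the HTTP status."""
    req = urllib.request.Request(
        "http://localhost:4318/v1/metrics",  # OTLP/HTTP port mapped by Docker above
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Calling `send(payload)` should return 200 once the Collector is up; a connection error usually means nothing is listening on port 4318.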
Optional: Port conflict handling
If you encounter a port conflict error like:

```text
bind: address already in use
```

add the following to the `service` section:

```yaml
service:
  telemetry:
    metrics:
      address: localhost:8889 # Use a different port if 8888 is already in use
```
You can also verify that the Collector is listening on the correct ports:

```shell
lsof -i :4318 -i :4317
```
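If `lsof` isn't available on your system, a short Python sketch using only the standard library can probe the same ports (the `port_open` helper is illustrative):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the default OTLP gRPC (4317) and HTTP (4318) ports.
for port in (4317, 4318):
    print(port, "open" if port_open("localhost", port) else "closed")
```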
Send a custom metric
In this Python example, you use an application that emits OTLP metrics:
```python
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

# Export metrics to the local Collector over OTLP/HTTP.
exporter = OTLPMetricExporter(endpoint="http://localhost:4318/v1/metrics")
reader = PeriodicExportingMetricReader(exporter)
provider = MeterProvider(metric_readers=[reader])
metrics.set_meter_provider(provider)

# Register an observable gauge that reports a fixed temperature reading.
meter = metrics.get_meter("custom-app")
temperature = meter.create_observable_gauge(
    "custom.temperature",
    callbacks=[lambda options: [metrics.Observation(26.7)]],
)

input("Sending metrics periodically... press Enter to stop")
```
Verify metrics in Elastic Observability
In Kibana:
- Go to Infrastructure > Metrics Explorer.
- Search for `custom.temperature`.
- Visualize or aggregate the metric data.
You've successfully set up a minimal OTLP metrics pipeline with the EDOT Collector. Your custom metrics are flowing into Elastic Observability and can be visualized in Kibana.
Now you can:
- Use Metrics Explorer to create custom visualizations and dashboards
- Set up alerts based on your custom metrics
- Aggregate and analyze metric trends over time
You can expand your metrics collection setup in several ways:
- Add more receivers to collect additional metrics
- Configure the same Collector to send logs and traces alongside metrics
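For example, logs and traces pipelines can reuse the same OTLP receiver and Elasticsearch exporter. A sketch extending the `service` section of the configuration above:

```yaml
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
```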
To learn more, refer to the Elastic Distribution of the OpenTelemetry Collector documentation.