---
title: 429 errors when using the Elastic Cloud Managed OTLP Endpoint
description: Resolve HTTP 429 `Too Many Requests` errors when sending data through the Elastic Cloud Managed OTLP (mOTLP) endpoint in Elastic Cloud Serverless or Elastic Cloud Hosted (ECH).
url: https://www.elastic.co/elastic/docs-builder/docs/3028/troubleshoot/ingest/opentelemetry/429-errors-motlp
products:
  - Elastic Cloud Hosted
  - Elastic Cloud Serverless
  - Elastic Distribution of OpenTelemetry Collector
  - Elastic Observability
applies_to:
  - Serverless Observability projects: Generally available
  - Elastic Stack: Generally available
  - Elastic Distribution of OpenTelemetry Collector: Generally available
---

# 429 errors when using the Elastic Cloud Managed OTLP Endpoint

When sending telemetry data through the Elastic Cloud Managed OTLP Endpoint (mOTLP), you might encounter HTTP `429 Too Many Requests` errors. These indicate that your ingest rate has temporarily exceeded the rate or burst limits configured for your Elastic Cloud project.
This issue can occur in both Elastic Cloud Serverless and Elastic Cloud Hosted (ECH) environments.

## Symptoms

You might notice log messages similar to the following in your EDOT Collector output or SDK logs:
```json
{
  "code": 8,
  "message": "error exporting items, request to <ingest endpoint> responded with HTTP Status Code 429"
}
```

You might also see warnings or increases in backpressure metrics, such as queue length or failed send count, in your Collector's internal telemetry.
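To surface these metrics, you can enable the Collector's internal telemetry in its configuration. The following is a minimal sketch; the exact settings and the default Prometheus endpoint (port `8888`) can vary by Collector version:

```yaml
service:
  telemetry:
    metrics:
      # "detailed" exposes per-exporter queue and send-failure metrics
      # on the Collector's self-monitoring endpoint (:8888 by default)
      level: detailed
```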

## Causes

A 429 status means that the rate of requests sent to the Managed OTLP endpoint has exceeded allowed thresholds. This can happen for several reasons:
- Your telemetry pipeline is sending data faster than the allowed ingest rate.
- Bursts of telemetry data exceed the short-term burst limit, even if your sustained rate is within limits.
  Exact limits vary by deployment type, subscription, and current configuration.
  Refer to the [Rate limiting section](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/opentelemetry/motlp/#rate-limiting) in the mOTLP reference documentation for details.
- In Elastic Cloud Hosted, the Elasticsearch capacity for your deployment might be underscaled for the current ingest rate.
- In Elastic Cloud Serverless, rate limiting should not result from Elasticsearch capacity, since the platform automatically scales ingest capacity. If you suspect a scaling issue, [contact Elastic Support](https://www.elastic.co/elastic/docs-builder/docs/3028/troubleshoot/ingest/opentelemetry/contact-support).
- Multiple Collectors or SDKs are sending data concurrently without load balancing or backoff mechanisms.


## Resolution

To resolve 429 errors, identify whether the bottleneck is caused by ingest limits or Elasticsearch capacity.

### Scale your deployment or request higher limits

If you’ve confirmed that your ingest configuration is stable but still encounter 429 errors:
- Elastic Cloud Serverless: [Contact Elastic Support](https://www.elastic.co/elastic/docs-builder/docs/3028/troubleshoot/ingest/opentelemetry/contact-support) to request an increase in ingest limits.
- Elastic Cloud Hosted (ECH): Increase your Elasticsearch capacity by scaling or resizing your deployment:
  - [Scaling considerations](https://www.elastic.co/elastic/docs-builder/docs/3028/deploy-manage/production-guidance/scaling-considerations)
  - [Resize deployment](https://www.elastic.co/elastic/docs-builder/docs/3028/deploy-manage/deploy/cloud-enterprise/resize-deployment)
  - [Autoscaling in ECE and ECH](https://www.elastic.co/elastic/docs-builder/docs/3028/deploy-manage/autoscaling/autoscaling-in-ece-and-ech)

After scaling, monitor your ingest metrics to verify that the rate of accepted requests increases and 429 responses stop appearing.

### Reduce ingest rate or enable backpressure

Smooth out the telemetry export rate by enabling batching, and handle throttled requests with retry mechanisms, in your EDOT Collector or SDK configuration. For example:
```yaml
processors:
  batch:
    send_batch_size: 1000   # flush once 1000 items have accumulated
    timeout: 5s             # or after 5 seconds, whichever comes first

exporters:
  otlp:
    retry_on_failure:
      enabled: true
      initial_interval: 1s      # wait 1s before the first retry
      max_interval: 30s         # cap the exponential backoff interval at 30s
      max_elapsed_time: 300s    # give up on a batch after 5 minutes of retries
```

These settings help smooth out spikes and automatically retry failed exports after rate-limit responses.

### Enable retry logic and queueing

To minimize data loss during temporary throttling, configure your exporter to use a sending queue and retry logic. For example:
```yaml
exporters:
  otlp:
    sending_queue:
      enabled: true
      num_consumers: 10   # parallel workers draining the queue
      queue_size: 1000    # batches buffered before new data is dropped
    retry_on_failure:
      enabled: true
```

This ensures the Collector buffers data locally while waiting for the ingest endpoint to recover from throttling. For more information on export failures and queue configuration, refer to [Export failures when sending telemetry data](https://www.elastic.co/elastic/docs-builder/docs/3028/troubleshoot/ingest/opentelemetry/edot-collector/trace-export-errors).

## Best practices

To prevent 429 errors and maintain reliable telemetry data flow, implement these best practices:
- Monitor internal Collector metrics (such as `otelcol_exporter_send_failed_spans` and `otelcol_exporter_queue_size`) to detect backpressure early.
- Distribute telemetry load evenly across multiple Collectors instead of sending all data through a single instance.
- When possible, enable batching and compression to reduce payload size.
- Keep retry and backoff intervals conservative to avoid overwhelming the endpoint after a temporary throttle.
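As an illustration of the batching and compression recommendations above, the following sketch enables gzip compression on the OTLP exporter; the `endpoint` value is a placeholder for your Managed OTLP endpoint:

```yaml
processors:
  batch:
    send_batch_size: 1000
    timeout: 5s

exporters:
  otlp:
    endpoint: <your-motlp-endpoint>   # placeholder, use your mOTLP endpoint
    compression: gzip                 # compress payloads to reduce bandwidth
```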


## Resources

- [Elastic Cloud Managed OTLP Endpoint reference](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3028/reference/opentelemetry/motlp)
- [Quickstart: Send OTLP data to Elastic Serverless or Elastic Cloud Hosted](https://www.elastic.co/elastic/docs-builder/docs/3028/solutions/observability/get-started/quickstart-elastic-cloud-otel-endpoint)