Load balancers
Elastic Cloud Enterprise architecture is designed to be used in conjunction with at least one load balancer. A load balancer is not included with Elastic Cloud Enterprise, so you need to provide one yourself and place it in front of the Elastic Cloud Enterprise proxies.
Use the following recommendations when configuring your load balancer:
- High availability: The exact number of load balancers depends on the utilization rate for your clusters. In a highly available installation, use at least two load balancers for each availability zone in your installation.
- Inbound ports: Make sure inbound traffic is open on the ports used by Elasticsearch, Kibana, and the transport client.
- X-found-cluster: The ECE proxy uses the `X-found-cluster` header to route traffic to the correct cluster via the cluster UUID (Universally Unique Identifier). If the load balancer rewrites a URL, make sure the `X-found-cluster` HTTP header gets added. For example: `X-found-cluster: d59109b8d542c5c4845679e597810796`.
- Deployment traffic and admin traffic: Create separate load balancers for deployment traffic (Elasticsearch and Kibana traffic) and admin traffic (Cloud UI console and Admin API). This separation allows you to migrate to a large installation topology without reconfiguring or creating an additional load balancer.
- Load balancing algorithm: Select a load balancing algorithm that balances traffic evenly across all proxies. Proxies are constantly updated with internal routing information on how to direct requests to clusters on allocators hosting their nodes across zones, and they prefer to route requests to cluster nodes in their own availability zone. If in doubt, consult your load balancer provider.
- Network: Use a network that is fast enough, in terms of latency and throughput, to be considered local for the Elasticsearch clustering requirement. Avoid "prefer local" settings at the load balancer level: zone preference offers no major advantage there (as opposed to at the proxy), and it can lead to hot spotting on specific proxies.
- TCP timeout: Use the default (or required) TCP timeout value from the cloud provider. Do not set a custom timeout on the load balancer.
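To illustrate the routing header described above, the following sketch builds a request carrying `X-found-cluster` with Python's standard library. The proxy address is a placeholder and the UUID is the example value from this page; substitute your own.

```python
import urllib.request

# Hypothetical proxy endpoint; the UUID is the example value from the docs.
PROXY_URL = "https://proxy.example.com:9243/"
CLUSTER_UUID = "d59109b8d542c5c4845679e597810796"

# The ECE proxy routes on the X-found-cluster header, so a load balancer
# that rewrites URLs must make sure this header is present on the request.
req = urllib.request.Request(PROXY_URL)
req.add_header("X-found-cluster", CLUSTER_UUID)

# urllib.request.urlopen(req) would send it; here we only inspect the header.
print(req.get_header("X-found-cluster"))  # d59109b8d542c5c4845679e597810796
```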
The following table describes the supported load balancer modes for each port range. TLS termination occurs at the ECE proxy for all ports.
| Ports | Traffic type | Supported modes | Client IP mechanism | Notes |
|---|---|---|---|---|
| 9200/9243 | Elasticsearch HTTP | HTTP (L7) or TCP (L4) | X-Forwarded-For (L7) or Proxy Protocol v2 (L4) | |
| 9300/9343 | Transport client | TCP (L4) | Proxy Protocol v2 | Enable proxy protocol on the LB |
| 9400 | CCS/CCR (TLS auth) | TCP (L4) | — | Do not enable proxy protocol |
| 9443 | CCS/CCR (API key auth) | HTTP (L7) | X-Forwarded-For | Must send HTTP/1.1 traffic |
| 12400/12443 | Admin console | HTTP (L7) | X-Forwarded-For | |
The ECE proxy must be able to determine the real client IP address for IP filtering and request logging. The mechanism depends on the load balancer mode:
- `X-Forwarded-For` header (HTTP/L7 mode): Configure the load balancer to strip inbound `X-Forwarded-For` headers and replace them with the client source IP. This prevents clients from spoofing their IP addresses. Elastic Cloud Enterprise uses `X-Forwarded-For` for logging client IP addresses and, if you have implemented IP filtering, for traffic management.
- Proxy Protocol v2 (TCP/L4 mode): The load balancer prepends client connection metadata that the ECE proxy reads directly. Enable Proxy Protocol v2 on both the load balancer and the ECE proxy configuration.
- Direct source IP preservation: If the load balancer forwards connections transparently without modifying the source IP, no additional configuration is needed.
If you use TCP mode for ports 9200/9243, make sure one of these mechanisms is in place. Without real client IP information, IP filtering cannot function correctly and proxy logs will only show the load balancer's IP address.
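To make the TCP/L4 mechanism concrete, here is a minimal sketch (not production code) that extracts the client source IP from a Proxy Protocol v2 header for TCP over IPv4, following the binary layout defined in the PROXY protocol specification. The sample addresses and ports are invented for illustration.

```python
import socket
import struct

# 12-byte signature that starts every Proxy Protocol v2 header.
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def parse_ppv2_source_ip(data: bytes) -> str:
    """Extract the client source IP from a Proxy Protocol v2 TCP/IPv4 header."""
    if not data.startswith(PP2_SIGNATURE):
        raise ValueError("not a Proxy Protocol v2 header")
    ver_cmd, family, length = struct.unpack("!BBH", data[12:16])
    if ver_cmd != 0x21:   # version 2, PROXY command
        raise ValueError("unsupported version/command")
    if family != 0x11:    # TCP over IPv4; this sketch handles nothing else
        raise ValueError("only TCP/IPv4 handled in this sketch")
    src_addr, dst_addr, src_port, dst_port = struct.unpack(
        "!4s4sHH", data[16:16 + length][:12])
    return socket.inet_ntoa(src_addr)

# Build a sample header as a load balancer would prepend for a client at
# 203.0.113.7:40000 connecting to a proxy at 10.0.0.5:9243.
sample = (PP2_SIGNATURE
          + struct.pack("!BBH", 0x21, 0x11, 12)
          + socket.inet_aton("203.0.113.7")
          + socket.inet_aton("10.0.0.5")
          + struct.pack("!HH", 40000, 9243))
print(parse_ppv2_source_ip(sample))  # 203.0.113.7
```

This is why proxy protocol must be enabled on both ends: the proxy expects these extra bytes before the TLS handshake, and a mismatch on either side breaks the connection.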
You can use `/__elb_health__` on your proxy hosts and check for a 200 response, which indicates healthy:
http://<proxy-address>:9200/__elb_health__
or
https://<proxy-address>:9243/__elb_health__
This returns a healthy response as:
{"ok":true,"status":200}
For Elastic Cloud Enterprise 2.1 and later, the health check endpoint has changed. You can use `/_health` on proxy hosts, which returns either a 200 OK response to indicate healthy or a 502 Bad Gateway response to indicate unhealthy. A healthy response also means that the internal routing tables in the proxy are valid and initialized, but not necessarily up to date.
http://<PROXY_ADDRESS>:9200/_health
or
https://<PROXY_ADDRESS>:9243/_health
This returns a healthy response as:
{"ok":true,"status":200}