
Connect an ECK-managed cluster to an external cluster or deployment


These steps describe how to configure a remote cluster connection from an Elasticsearch cluster managed by Elastic Cloud on Kubernetes (ECK) to an external Elasticsearch cluster, where external refers to any cluster not managed by the same ECK operator. The remote cluster can be self-managed, part of an Elastic Cloud Hosted (ECH) or Elastic Cloud Enterprise (ECE) deployment, or managed by a different ECK operator.

After the connection is established, you’ll be able to run cross-cluster search (CCS) queries from Elasticsearch or set up cross-cluster replication (CCR).

Note about terminology

In the case of remote clusters, the Elasticsearch cluster or deployment initiating the connection and requests is often referred to as the local cluster, while the Elasticsearch cluster or deployment receiving the requests is referred to as the remote cluster.

In this scenario, most of the configuration must be performed manually, as Elastic Cloud on Kubernetes cannot orchestrate the setup across both clusters. For fully automated configuration between ECK-managed clusters, refer to Connect to Elasticsearch clusters in the same ECK environment.

For other remote cluster scenarios with ECK, refer to Remote clusters on ECK.

Note

This guide uses API key authentication as the security model, which is the recommended option and replaces the deprecated TLS certificate–based model.

If you need to configure TLS certificate authentication for this scenario, refer to the steps in Connect from an external cluster and create the remote in the opposite direction. The mutual-TLS trust setup steps are similar.

Follow these steps to configure the API key security model for remote clusters. If you run into any issues, refer to Troubleshooting.

Follow the steps corresponding to the deployment type of your remote cluster:

If the remote cluster is part of an Elastic Cloud Hosted deployment, the remote cluster server is enabled by default and it uses a publicly trusted certificate provided by the platform proxies. Therefore, you can skip this step.

If the remote cluster is part of an Elastic Cloud Enterprise deployment, the remote cluster server is enabled by default, and secured with TLS certificates.

Depending on the type of certificate used by the ECE proxies or load-balancing layer, the local cluster requires the associated certificate authority (CA) to establish trust:

  • If your ECE proxies use publicly trusted certificates, no additional CA is required.

  • If your ECE proxies use certificates signed by a private CA, retrieve the root CA from the ECE Cloud UI:

    1. In the remote ECE environment, go to Platform > Settings > TLS certificates.

    2. Under Proxy, select Show certificate chain.

    3. Click Copy root certificate and paste it into a new file. The root certificate is the last certificate shown in the chain.

    4. Save the file as .crt, and keep it available for the trust configuration on the local cluster.
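    To confirm that the saved file contains the root CA rather than an intermediate certificate, you can inspect it with openssl; for a root certificate the subject and issuer are identical. A sketch, assuming the file was saved under the hypothetical name ece_root_ca.crt:

    ```shell
    # Print the subject and issuer of the saved certificate.
    # For a root CA, both lines should show the same distinguished name.
    openssl x509 -in ece_root_ca.crt -noout -subject -issuer
    ```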

If the remote cluster is managed by a different ECK operator, it must be prepared to accept incoming connections.

  1. Enable the remote cluster server

    By default, the remote cluster server interface is deactivated on ECK-managed clusters. To use the API key–based security model for cross-cluster connections, you must first enable it on the remote Elasticsearch cluster by setting spec.remoteClusterServer.enabled: true:

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: <cluster-name>
      namespace: <namespace>
    spec:
      version: 9.2.2
      remoteClusterServer:
        enabled: true
      nodeSets:
        - name: default
          count: 3
          ...
          ...
    		
    Note

    Enabling the remote cluster server triggers a restart of the Elasticsearch cluster.

  2. Expose the remote cluster server interface

    When the remote cluster server is enabled, ECK automatically creates a Kubernetes service named <cluster-name>-es-remote-cluster that exposes the server internally on port 9443.

    To allow clusters running outside your Kubernetes environment to connect to this Elasticsearch cluster, you must expose this service externally. The way to expose this service depends on your ECK version.

    You can customize how the remote cluster service is exposed by overriding its service specification directly under spec.remoteClusterServer.service in the Elasticsearch resource. By default, this service listens on port 9443.

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: <cluster-name>
      namespace: <namespace>
    spec:
      version: 9.2.2
      remoteClusterServer:
        enabled: true
        service:
          spec:
            type: LoadBalancer
      nodeSets:
        - name: default
          count: 3
          ...
          ...
    		
    1. On cloud providers that support external load balancers, setting the type to LoadBalancer provisions a load balancer for your service. Alternatively, expose the service <cluster-name>-es-remote-cluster through one of the Kubernetes Ingress controllers that support TCP services.

    In ECK 3.2 and earlier, you can't customize the service that ECK generates for the remote cluster interface, but you can create your own LoadBalancer service, Ingress object, or use another method available in your environment.

    For example, for a cluster named quickstart, the following command creates a separate LoadBalancer service named quickstart-es-remote-cluster-lb, pointing to the ECK-managed service quickstart-es-remote-cluster:

    kubectl expose service quickstart-es-remote-cluster \
      --name=quickstart-es-remote-cluster-lb \
      --type=LoadBalancer \
      --port=9443 --target-port=9443
    		
    Warning

    If you change the service’s port, set targetPort explicitly to 9443, which is the default remote cluster server listening port. Otherwise, Kubernetes uses the same value for both fields, resulting in failed connections.
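    Once the load balancer is provisioned, you need its external address for the local cluster configuration. A sketch of how you might retrieve it, assuming the service name quickstart-es-remote-cluster-lb from the example above:

    ```shell
    # Print the external address assigned to the LoadBalancer service.
    # Depending on the cloud provider, the address is exposed as .ip or .hostname.
    kubectl get service quickstart-es-remote-cluster-lb \
      -o jsonpath='{.status.loadBalancer.ingress[0]}'
    ```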

  3. Retrieve the certificate authority (CA)

    The certificate authority (CA) used by ECK to issue certificates for the remote cluster server interface is stored in the ca.crt key of the secret named <cluster-name>-es-transport-certs-public.

    If the external connections reach the Elasticsearch Pods on port 9443 without any intermediate TLS termination, you need to retrieve this CA because it is required in the local cluster configuration to establish trust.

    If TLS is terminated by any intermediate component and the certificate presented is not the ECK-managed one, use the CA associated with that component, or omit the CA entirely if it uses a publicly trusted certificate.

    To save the transport CA certificate of a cluster named quickstart into a local file, run the following command:

    kubectl get secret quickstart-es-transport-certs-public \
    -o go-template='{{index .data "ca.crt" | base64decode}}' > eck_transport_ca.crt
    		
    Important

    ECK-managed CA certificates are automatically rotated after one year by default, but you can configure a different validity period. When the CA certificate is rotated, ensure that this CA is updated in all environments where it's used to preserve trust.
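    The validity and rotation windows are controlled by the operator's ca-cert-validity and ca-cert-rotate-before configuration options; the values below are the defaults, shown as a sketch (refer to the operator configuration documentation for how to set them in your installation):

    ```yaml
    # ECK operator configuration (example values; these are the defaults)
    ca-cert-validity: 8760h      # CA certificates are valid for 1 year
    ca-cert-rotate-before: 24h   # rotate 24 hours before expiry
    ```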

  1. Enable and secure the remote cluster server

    1. Enable the remote cluster server on every node of the remote cluster. In elasticsearch.yml:

      1. Set remote_cluster_server.enabled to true.
      2. Configure the bind and publish address for remote cluster server traffic, for example using remote_cluster.host. Without this setting, remote cluster traffic may bind only to the loopback interface, and remote clusters running on other machines can't connect.
      3. Optionally, configure the remote server port using remote_cluster.port (defaults to 9443).
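      Put together, the settings above might look like this in elasticsearch.yml (the bind address 0.0.0.0 is an assumption; use an address reachable from the local cluster):

      ```yaml
      remote_cluster_server.enabled: true
      # Bind and publish address for remote cluster server traffic.
      remote_cluster.host: 0.0.0.0
      # Optional: the port defaults to 9443.
      remote_cluster.port: 9443
      ```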
    2. Generate a certificate authority (CA) and a server certificate/key pair. On one of the nodes of the remote cluster, from the directory where Elasticsearch has been installed:

      1. Create a CA, if you don't have a CA already:

        ./bin/elasticsearch-certutil ca --pem --out=cross-cluster-ca.zip --pass CA_PASSWORD
        		

        Replace CA_PASSWORD with the password you want to use for the CA. You can remove the --pass option and its argument if you are not deploying to a production environment.

      2. Unzip the generated cross-cluster-ca.zip file. This compressed file contains the following content:

        /ca
        |_ ca.crt
        |_ ca.key
        		
      3. Generate a certificate and private key pair for the nodes in the remote cluster:

        ./bin/elasticsearch-certutil cert --out=cross-cluster.p12 --pass=CERT_PASSWORD --ca-cert=ca/ca.crt --ca-key=ca/ca.key --ca-pass=CA_PASSWORD --dns=<CLUSTER_FQDN> --ip=192.0.2.1
        		
        • Replace CA_PASSWORD with the CA password from the previous step.
        • Replace CERT_PASSWORD with the password you want to use for the generated private key.
        • Use the --dns option to specify the relevant DNS name for the certificate. You can specify it multiple times for multiple DNS names.
        • Use the --ip option to specify the relevant IP address for the certificate. You can specify it multiple times for multiple IP addresses.
      4. If the remote cluster has multiple nodes, you can do one of the following:

        • Create a single wildcard certificate for all nodes.
        • Create separate certificates for each node either manually or in batch with the silent mode.
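        For the batch (silent mode) option, elasticsearch-certutil can read an instances file describing each node. A sketch, with hypothetical node names and addresses:

        ```yaml
        # instances.yml (hypothetical node names and addresses)
        instances:
          - name: node-1
            dns: ["node-1.example.com"]
            ip: ["192.0.2.10"]
          - name: node-2
            dns: ["node-2.example.com"]
            ip: ["192.0.2.11"]
        ```

        ```shell
        ./bin/elasticsearch-certutil cert --silent --in instances.yml \
          --out cross-cluster-certs.zip --pass CERT_PASSWORD \
          --ca-cert=ca/ca.crt --ca-key=ca/ca.key --ca-pass CA_PASSWORD
        ```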
    3. On every node of the remote cluster, do the following:

      1. Copy the cross-cluster.p12 file from the earlier step to the config directory. If you didn't create a wildcard certificate, make sure you copy the correct node-specific p12 file.

      2. Add the following configuration to elasticsearch.yml:

        xpack.security.remote_cluster_server.ssl.enabled: true
        xpack.security.remote_cluster_server.ssl.keystore.path: cross-cluster.p12
        		
      3. Add the SSL keystore password to the Elasticsearch keystore:

        ./bin/elasticsearch-keystore add xpack.security.remote_cluster_server.ssl.keystore.secure_password
        		

        When prompted, enter the CERT_PASSWORD from the earlier step.

    4. Restart the remote cluster.

  2. Retrieve the certificate authority (CA)

    If the remote cluster server is exposed with a certificate signed by a private certificate authority (CA), save the corresponding ca.crt file. It is required when configuring trust on the local cluster.

  1. On the remote cluster, use the Elasticsearch API or Kibana to create a cross-cluster API key. Configure it to include access to the indices you want to use for cross-cluster search or cross-cluster replication.
  2. Copy the encoded key (encoded in the response) to a safe location. It is required for the local cluster configuration.
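  For reference, a cross-cluster API key can be created with the create cross-cluster API key API. A sketch granting search access to a hypothetical logs-* index pattern:

  ```
  POST /_security/cross_cluster_api_key
  {
    "name": "remote-search-key",
    "access": {
      "search": [
        {
          "names": ["logs-*"]
        }
      ]
    }
  }
  ```

  The encoded field in the response is the value to save for the local cluster configuration.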

The API key created previously is needed by the local cluster to authenticate with the corresponding set of permissions to the remote deployment or cluster. To enable this, add the API key to the local cluster's keystore.

The steps to follow depend on whether the certificate authority (CA) presented by the remote cluster server, proxy, or load-balancing infrastructure is publicly trusted or private.

Note

If the remote cluster is part of an Elastic Cloud Hosted deployment, follow the "The CA is public" path. Elastic Cloud Hosted proxies use publicly trusted certificates, so no CA configuration is required.
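On an ECK-managed local cluster, keystore entries are provided through Kubernetes secrets referenced under spec.secureSettings. A sketch, assuming a hypothetical remote cluster alias my-remote and the encoded API key from the previous step:

```shell
# Store the encoded cross-cluster API key in a Kubernetes secret.
# The key name must follow the pattern cluster.remote.<alias>.credentials.
kubectl create secret generic remote-cluster-credentials \
  --from-literal=cluster.remote.my-remote.credentials=<ENCODED_API_KEY>
```

```yaml
# Reference the secret from the local Elasticsearch resource.
spec:
  secureSettings:
    - secretName: remote-cluster-credentials
```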

On the local cluster, add the remote cluster using Kibana or the Elasticsearch API.

About connection modes

This guide uses the proxy connection mode, which is the only practical option when connecting to Elastic Cloud Hosted, Elastic Cloud Enterprise, or Elastic Cloud on Kubernetes clusters from outside their Kubernetes environment.

If the remote cluster is self-managed (or another ECK cluster within the same Kubernetes network) and the local cluster can reach the remote nodes’ publish addresses directly, you can use sniff mode instead. Refer to connection modes documentation for details on each mode and their connectivity requirements.

  1. Go to the Remote Clusters management page in the navigation menu or use the global search field.

  2. Select Add a remote cluster.

  3. In Select connection type, choose the API keys authentication mechanism and click Next.

  4. Set the Remote cluster name: This name must match the <remote-cluster-name> you configured when adding the API key in the local cluster's keystore.

  5. In Connection mode, select Manually enter proxy address and server name to enable the proxy mode and fill in the following fields:

    • Proxy address: Identify the endpoint of the remote cluster, including the hostname, FQDN, or IP address, and the port:

      Obtain the endpoint from the Security page of the ECH deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters section, and replace its port with 9443, which is the port used by the remote cluster server interface.

      Obtain the endpoint from the Security page of the ECE deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters, and replace its port with 9443, which is the port used by the remote cluster server interface.

      Use the FQDN or IP address of the LoadBalancer service, or similar resource, you created to expose the remote cluster server interface on port 9443.

      If your environment presents the ECK-managed certificates during the TLS handshake, configure the server name field as <cluster-name>-es-remote-cluster.<namespace>.svc. Otherwise, the local cluster cannot establish the connection due to SSL trust errors.

      The endpoint depends on your network architecture and the selected connection mode (sniff or proxy). It can be one or more Elasticsearch nodes, or a TCP (layer 4) load balancer or reverse proxy in front of the cluster, as long as the local cluster can reach them over port 9443.

      If you are configuring sniff mode, set the seeds parameter instead of the proxy address. Refer to the connection modes documentation for details and connectivity requirements of each mode.

      Starting with Kibana 9.2, this field also supports IPv6 addresses. When using an IPv6 address, enclose it in square brackets followed by the port number. For example: [2001:db8::1]:9443.

    • Server name (optional): Specify a value if the TLS certificate presented by the remote cluster is signed for a different name than the remote address.

  6. Click Next.

  7. In Confirm setup, click Add remote cluster (you have already established trust in a previous step).

To add a remote cluster, use the cluster update settings API. Configure the following fields:

  • Remote cluster alias: The cluster alias must match the <remote-cluster-name> you configured when adding the API key in the local cluster's keystore.

  • mode: Use proxy mode in almost all cases. sniff mode is only applicable when the remote cluster is self-managed and the local cluster can reach the nodes’ publish addresses directly.

  • proxy_address: Identify the endpoint of the remote cluster, including the hostname, FQDN, or IP address, and the port. Both IPv4 and IPv6 addresses are supported.

    Obtain the endpoint from the Security page of the ECH deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters section, and replace its port with 9443, which is the port used by the remote cluster server interface.

    Obtain the endpoint from the Security page of the ECE deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters, and replace its port with 9443, which is the port used by the remote cluster server interface.

    Use the FQDN or IP address of the LoadBalancer service, or similar resource, you created to expose the remote cluster server interface on port 9443.

    If your environment presents the ECK-managed certificates during the TLS handshake, configure the server name field as <cluster-name>-es-remote-cluster.<namespace>.svc. Otherwise, the local cluster cannot establish the connection due to SSL trust errors.

    The endpoint depends on your network architecture and the selected connection mode (sniff or proxy). It can be one or more Elasticsearch nodes, or a TCP (layer 4) load balancer or reverse proxy in front of the cluster, as long as the local cluster can reach them over port 9443.

    If you are configuring sniff mode, set the seeds parameter instead of the proxy address. Refer to the connection modes documentation for details and connectivity requirements of each mode.

    When using an IPv6 address, enclose it in square brackets followed by the port number. For example: [2001:db8::1]:9443.

  • server_name: Specify a value if the certificate presented by the remote cluster is signed for a different name than the proxy_address.

This is an example of the API call to add or update a remote cluster:

PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "alias-for-my-remote-cluster": {
          "mode":"proxy",
          "proxy_address": "<REMOTE_CLUSTER_ADDRESS>:9443",
          "server_name": "<REMOTE_CLUSTER_SERVER_NAME>"
        }
      }
    }
  }
}
		
  1. Align the alias with the remote cluster name used when adding the API key as a secure setting.

For a full list of available client connection settings, refer to the remote cluster settings reference.

From the local cluster, check the status of the connection to the remote cluster. If you encounter issues, refer to the Troubleshooting guide.

GET _remote/info
		

In the response, verify that connected is true:

{
  "<remote-alias>": {
    "connected": true,
    "mode": "proxy",
    "proxy_address": "<REMOTE_CLUSTER_ADDRESS>:9443",
    "server_name": "<REMOTE_CLUSTER_SERVER_NAME>",
    "num_proxy_sockets_connected": 18,
    "max_proxy_socket_connections": 18,
    "initial_connect_timeout": "30s",
    "skip_unavailable": true,
    "cluster_credentials": "::es_redacted::"
  }
}
		

If you're using the API key–based security model for cross-cluster replication or cross-cluster search, you can define user roles with remote indices privileges on the local cluster to further restrict the permissions granted by the API key. For more details, refer to Configure roles and users.
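For example, a restricted role could be defined on the local cluster with the create role API; the cluster alias and index pattern below are hypothetical:

```
PUT /_security/role/remote-logs-reader
{
  "remote_indices": [
    {
      "clusters": ["alias-for-my-remote-cluster"],
      "names": ["logs-*"],
      "privileges": ["read", "read_cross_cluster", "view_index_metadata"]
    }
  ]
}
```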