Configure Elastic Agent Add-On on Amazon EKS

Amazon EKS (Elastic Kubernetes Service) is a managed service that allows you to use Kubernetes on AWS without installing and operating your own Kubernetes infrastructure.

Follow these steps to configure the Elastic Agent Add-On on Amazon EKS.

You can create a managed node group using either eksctl or the AWS Management Console and AWS CLI.

To see all available add-ons, check the AWS Add-ons documentation. You can also view the most current list of available add-ons using eksctl, the AWS Management Console, or the AWS CLI.

  1. Choose the desired version of Elastic Agent.
  2. Get the URL and enrollment token from the cluster that Elastic Agent needs to enroll with.
  3. From the Elastic Agent EKS add-on, go to Configuration values to enter the relevant URL and token values from your cluster.

    agent:
      fleet:
        enabled: true
        url: <insert url from onboarding>
        token: <insert enrollment token from onboarding>
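
    For example, with hypothetical placeholder values (the actual Fleet Server URL and enrollment token come from your onboarding flow), the configuration values might look like this:

    agent:
      fleet:
        enabled: true
        # Hypothetical Fleet Server URL; use the value shown during onboarding
        url: https://fleet.example.com:443
        # Hypothetical enrollment token; use the value generated for your agent policy
        token: VGhpc0lzQVBsYWNlaG9sZGVyVG9rZW4=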

Make sure the configuration override option is selected in case there are conflicts.

Review the data and add the Elastic Agent EKS add-on to your cluster.

Once created, you can see that the Elastic Agent add-on is Active in the AWS EKS console. Elastic Agent runs as a DaemonSet, so it is present on each node.

When the Elastic Agent is Active, it appears in Fleet and its configuration is downloaded.

  1. Launch Kibana:

    1. Log in to your Elastic Cloud account.
    2. Navigate to the Kibana endpoint in your deployment.

    For a self-managed deployment, point your browser instead to http://localhost:5601, replacing localhost with the name of the Kibana host.

  2. To check if your Elastic Agent is enrolled in Fleet, go to Management → Fleet → Agents.

Elastic Agents listed on the Fleet page in Kibana

On managed Kubernetes solutions such as EKS, Elastic Agent does not have access to several data sources. The unavailable data is listed below:

  1. Metrics from Kubernetes control plane components are not available. Consequently, metrics are not available for the kube-scheduler and kube-controller-manager components, and the respective dashboards are not populated with data.

  2. Audit logs are likewise available only on Kubernetes control plane (master) nodes, and therefore cannot be collected by Elastic Agent.

  3. The orchestrator.cluster.name and orchestrator.cluster.url fields are not populated. The orchestrator.cluster.name field is used as the cluster selector for the default Kubernetes dashboards shipped with the Kubernetes integration.

    As a workaround, you can use the add_fields processor to add the orchestrator.cluster.name and orchestrator.cluster.url fields to each component of the Kubernetes integration:

    - add_fields:
        target: orchestrator.cluster
        fields:
          name: clusterName
          url: clusterURL
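
    For example, assuming a hypothetical cluster named my-eks-cluster with the API server endpoint shown below, the filled-in processor entry could look like this (replace both values with those of your own cluster):

    - add_fields:
        target: orchestrator.cluster
        fields:
          # Hypothetical cluster name and API server endpoint; substitute your own values
          name: my-eks-cluster
          url: https://ABCD1234EXAMPLE.gr7.us-east-1.eks.amazonaws.com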