---
title: Monitor Amazon Web Services (AWS) with Beats
description: In this tutorial, you’ll learn how to monitor your AWS infrastructure using Elastic Observability: Logs and Infrastructure metrics. You’ll learn how to:...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/observability/cloud/monitor-amazon-web-services-aws-with-beats
products:
  - Elastic Observability
applies_to:
  - Elastic Cloud Serverless: Generally available
  - Elastic Stack: Generally available
---

# Monitor Amazon Web Services (AWS) with Beats
In this tutorial, you’ll learn how to monitor your AWS infrastructure using Elastic Observability: logs and infrastructure metrics.

## What you’ll learn

You’ll learn how to:
- Create and configure an S3 bucket
- Create and configure an SQS queue
- Install and configure Filebeat and Metricbeat to collect logs and infrastructure metrics
- Collect logs from S3
- Collect metrics from Amazon CloudWatch


## Before you begin

Create an [Elastic Cloud Hosted](https://cloud.elastic.co/registration?page=docs&placement=docs-body) deployment or [Elastic Observability Serverless](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/elastic-cloud/create-serverless-project) project. Both include an Elasticsearch cluster for storing and searching your data and Kibana for visualizing and managing your data.
This tutorial assumes that your logs and infrastructure data are already shipped to CloudWatch. We are going to show you how to stream that data from CloudWatch to Elasticsearch. If you don’t yet ship your AWS logs and infrastructure data to CloudWatch, Amazon provides extensive documentation on this topic:
- Collect your logs and infrastructure data from specific [AWS services](https://www.youtube.com/watch?v=vAnIhIwE5hY)
- Export your logs [to an S3 bucket](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html)


## Step 1: Create an S3 Bucket

To centralize your logs in Elasticsearch, you need to have an S3 bucket. Filebeat, the agent you’ll use to collect logs, has an input for S3.
In the [AWS S3 console](https://s3.console.aws.amazon.com/s3), click on **Create bucket**. Give the bucket a **name** and specify the **region** in which you want it deployed.
![S3 bucket creation](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-creating-a-s3-bucket.png)


## Step 2: Create an SQS Queue

You should now have an S3 bucket into which you can export your logs, but you also need an SQS queue. Polling all log files from each S3 bucket would introduce significant lag, so instead we will use Amazon Simple Queue Service (SQS) to receive an Amazon S3 notification whenever a new S3 object is created. The [Filebeat S3 input](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/filebeat/filebeat-input-aws-s3) checks SQS for new messages about objects created in S3 and uses the information in these messages to retrieve the logs from the S3 bucket. With this setup, periodic polling of each S3 bucket is not needed; the Filebeat S3 input provides near real-time data collection from S3 buckets with both speed and reliability.
Create an SQS queue and configure your S3 bucket to send a message to the queue whenever new logs are present in the bucket. Go to the [SQS console](https://eu-central-1.console.aws.amazon.com/sqs/).
<note>
  Make sure that the queue is created in the same region as the S3 bucket.
</note>

![Queue Creation](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-creating-a-queue.png)

Create a standard SQS queue, then edit its access policy using a JSON object to define an advanced access policy:
<note>
  Replace `<sqs-arn>` with the ARN of the SQS queue, `<s3-bucket-arn>` with the ARN of the S3 bucket you just created, and `<source-account>` with your AWS account ID.
</note>

```json
{
  "Version": "2012-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "SQS:SendMessage",
      "Resource": "<sqs-arn>",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "<source-account>"
        },
        "ArnLike": {
          "aws:SourceArn": "<s3-bucket-arn>"
        }
      }
    }
  ]
}
```
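Editing the placeholders by hand makes it easy to break the JSON. Before pasting the policy into the console, you can sanity-check your edited copy with any JSON parser; here is a quick sketch (the file name `policy.json` and the abbreviated policy body are just examples):

```shell
# Save your edited policy to a file, then confirm it still parses as valid JSON.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Id": "example-ID",
  "Statement": []
}
EOF
python3 -m json.tool policy.json > /dev/null && echo "valid JSON"
```

A missed comma or stray bracket is reported immediately, instead of surfacing later as a confusing console error.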


## Step 3: Event Notification

Now that your queue is created, go to the properties of the S3 bucket you created and click **Create event notification**.
Specify that you want to send a notification on every object creation event.
![Event Notification Setting](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-configure-event-notification.png)

Set the destination as the SQS queue you just created.
![Event Notification Setting](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-configure-notification-output.png)
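If you script this step instead of using the console (for example with `aws s3api put-bucket-notification-configuration`), the notification settings above correspond, in essence, to a configuration like the following attached to the bucket; the queue ARN is a placeholder:

```json
{
  "QueueConfigurations": [
    {
      "QueueArn": "<sqs-arn>",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```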


## Step 4: Install and configure Filebeat

To monitor AWS using the Elastic Stack, you need two main components: an Elastic deployment to store and analyze the data and an agent to collect and ship the data.

### Install Filebeat

Download and install Filebeat.
<tab-set>
  <tab-item title="DEB">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.3.2-amd64.deb
    sudo dpkg -i filebeat-9.3.2-amd64.deb
    ```
  </tab-item>

  <tab-item title="RPM">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.3.2-x86_64.rpm
    sudo rpm -vi filebeat-9.3.2-x86_64.rpm
    ```
  </tab-item>

  <tab-item title="MacOS">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.3.2-darwin-x86_64.tar.gz
    tar xzvf filebeat-9.3.2-darwin-x86_64.tar.gz
    ```
  </tab-item>

  <tab-item title="Linux">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.3.2-linux-x86_64.tar.gz
    tar xzvf filebeat-9.3.2-linux-x86_64.tar.gz
    ```
  </tab-item>

  <tab-item title="Windows">
    1. Download the [Filebeat Windows zip file](https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.3.2-windows-x86_64.zip).
    2. Extract the contents of the zip file into `C:\Program Files`.
    3. Rename the `filebeat-[version]-windows-x86_64` directory to `Filebeat`.
    4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select *Run As Administrator*).
    5. From the PowerShell prompt, run the following commands to install Filebeat as a Windows service:

    ```shell
    PS > cd 'C:\Program Files\Filebeat'
    PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
    ```

    <note>
      If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
    </note>
  </tab-item>
</tab-set>


### Set up assets

<applies-switch>
  <applies-item title="stack: ga" applies-to="Elastic Stack: Generally available">
    Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. Run the following command to load these assets. It can take a few minutes.
    ```bash
    ./filebeat setup -e -E 'cloud.id=YOUR_DEPLOYMENT_CLOUD_ID' -E 'cloud.auth=elastic:YOUR_SUPER_SECRET_PASS' 
    ```
  </applies-item>

  <applies-item title="serverless: ga" applies-to="Elastic Cloud Serverless: Generally available">
    Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. Run the following command to load these assets. It can take a few minutes.
    ```bash
    ./filebeat setup -e -E 'output.elasticsearch.hosts=["https://hostname:port"]' -E 'output.elasticsearch.api_key=YOUR_API_KEY' 
    ```
  </applies-item>
</applies-switch>

<important>
  Setting up Filebeat is an admin-level task that requires extra privileges. As a best practice, [use an administrator role to set up](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/filebeat/privileges-to-setup-beats) and a more restrictive role for event publishing (which you will do next).
</important>


### Configure Filebeat output

Next, you are going to configure Filebeat output to Elasticsearch.
1. Use the Filebeat keystore to store [secure settings](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/filebeat/keystore). Store your connection details in the keystore.
   <applies-switch>
   <applies-item title="stack: ga" applies-to="Elastic Stack: Generally available">
   ```bash
   ./filebeat keystore create
   echo -n "<Your Deployment Cloud ID>" | ./filebeat keystore add CLOUD_ID --stdin
   ```
   </applies-item>

   <applies-item title="serverless: ga" applies-to="Elastic Cloud Serverless: Generally available">
   ```bash
   ./filebeat keystore create
   echo -n "<Your Elasticsearch endpoint URL>" | ./filebeat keystore add ES_HOST --stdin
   ```
   </applies-item>
   </applies-switch>
2. To store logs in Elasticsearch with minimal permissions, create an API key to send data from Filebeat to Elasticsearch. Log into Kibana (you can do so from the Cloud Console without typing in any credentials) and find `Dev Tools` in the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/find-and-organize/find-apps-and-objects). Send the following request:
   ```console
   POST /_security/api_key
   {
     "name": "filebeat-monitor-aws",
     "role_descriptors": {
       "filebeat_writer": {
         "cluster": [
           "monitor",
           "read_ilm",
           "cluster:admin/ingest/pipeline/get",
           "cluster:admin/ingest/pipeline/put"
         ],
         "index": [
           {
             "names": ["filebeat-*"],
             "privileges": ["view_index_metadata", "create_doc"]
           }
         ]
       }
     }
   }
   ```
   The two ingest pipeline privileges are required so that Filebeat can check for and load the module’s ingest pipelines.
3. The response contains an `api_key` and an `id` field, which can be stored in the Filebeat keystore in the following format: `id:api_key`.
   ```bash
   echo -n "IhrJJHMB4JmIUAPLuM35:1GbfxhkMT8COBB4JWY3pvQ" | ./filebeat keystore add ES_API_KEY --stdin
   ```
   <note>
   Make sure you specify the `-n` parameter; otherwise, you will have painful debugging sessions due to adding a newline at the end of your API key.
   </note>
4. To see if both settings have been stored, run the following command:
   ```bash
   ./filebeat keystore list
   ```
5. To configure Filebeat to output to Elasticsearch, edit the `filebeat.yml` configuration file. Add the following lines to the end of the file.
   <applies-switch>
   <applies-item title="stack: ga" applies-to="Elastic Stack: Generally available">
   ```yaml
   cloud.id: ${CLOUD_ID}
   output.elasticsearch:
     api_key: ${ES_API_KEY}
   ```
   </applies-item>

   <applies-item title="serverless: ga" applies-to="Elastic Cloud Serverless: Generally available">
   ```yaml
   output.elasticsearch:
     hosts: ["${ES_HOST}"]
     api_key: ${ES_API_KEY}
   ```
   </applies-item>
   </applies-switch>
6. Finally, test if the configuration is working. If it is not working, verify that you used the right credentials and, if necessary, add them again.
   ```bash
   ./filebeat test output
   ```
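To see why the `-n` flag in the keystore commands above matters, compare byte counts with and without it (a quick sketch you can run in any shell):

```shell
# echo appends a trailing newline; echo -n (like printf '%s') does not.
# A stray newline at the end of the stored API key causes authentication failures.
printf '%s' "id:api_key" | wc -c    # 10 bytes
echo "id:api_key" | wc -c           # 11 bytes: the extra byte is the newline
```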


## Step 5: Configure the AWS module

Now that the output is working, you can set up the [Filebeat AWS](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/filebeat/filebeat-module-aws) module, which automatically creates the AWS input. This module checks SQS for new messages about objects created in the S3 bucket and uses the information in these messages to retrieve logs from S3 buckets. With this setup, periodic polling of each S3 bucket is not needed.
There are several filesets available: `cloudtrail`, `vpcflow`, `ec2`, `cloudwatch`, `elb`, and `s3access`. In this tutorial, we are going to show you a few examples using the `ec2` and `s3access` filesets.
The `ec2` fileset processes EC2 logs that are stored in CloudWatch and exported to an S3 bucket. The `s3access` fileset collects S3 server access logs, which provide detailed records of the requests made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.
Let’s enable the AWS module in Filebeat.
```bash
./filebeat modules enable aws
```

Edit the `modules.d/aws.yml` file with the following configurations.
```yaml
- module: aws
  cloudtrail:
    enabled: false
  cloudwatch:
    enabled: false
  ec2:
    enabled: true 
    var.credential_profile_name: fb-aws 
    var.queue_url: https://sqs.eu-central-1.amazonaws.com/836370109380/howtoguide-tutorial 
  elb:
    enabled: false
  s3access:
    enabled: false
  vpcflow:
    enabled: false
```
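The configuration above references an AWS credentials profile (`fb-aws`) that Filebeat reads from the standard AWS shared credentials file, typically `~/.aws/credentials`. A minimal sketch of that profile, with placeholder keys:

```ini
[fb-aws]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```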

Make sure that the AWS user used to collect the logs from S3 has at least the following permissions attached to it:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
              "s3:GetObject",
              "sqs:ReceiveMessage",
              "sqs:ChangeMessageVisibility",
              "sqs:DeleteMessage"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```

You can now upload your logs to the S3 bucket. If you are using CloudWatch, make sure to edit the policy of your bucket as shown in [step 3 of the AWS user guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html). This will help you avoid permissions issues.
Start Filebeat to collect the logs.
```bash
./filebeat -e
```

Here’s what we’ve achieved so far:
![Current Architecture](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-one-bucket-archi.png)

Now, let’s configure the `s3access` fileset. The goal here is to be able to monitor how people access the bucket we created. To do this, we’ll create another bucket and another queue. The new architecture will look like this:
![Architecture with Access Logging Enabled](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-two-buckets-archi.png)

Create a new S3 bucket and SQS queue. Ensure that the event notifications on the new bucket are enabled, and that it’s sending notifications to the new queue.
Now go back to the first bucket, and go to **Properties** > **Server access logging**. Specify that you want to ship the access logs to the bucket you most recently created.
![Enabling Server Access Logging](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-Server-Access-Logging.png)

Copy the URL of the queue you created, then edit the `modules.d/aws.yml` file with the following configurations.
```yaml
- module: aws
  cloudtrail:
    enabled: false
  cloudwatch:
    enabled: false
  ec2:
    enabled: true 
    var.credential_profile_name: fb-aws 
    var.queue_url: https://sqs.eu-central-1.amazonaws.com/836370109380/howtoguide-tutorial 
  elb:
    enabled: false
  s3access:
    enabled: true 
    var.credential_profile_name: fb-aws 
    var.queue_url: https://sqs.eu-central-1.amazonaws.com/836370109380/access-log 
  vpcflow:
    enabled: false
```

Once you have edited the config file, you need to restart Filebeat: press CTRL + C in the terminal to stop it, then run the following command again:
```bash
./filebeat -e
```


## Step 6: Visualize Logs

Now that the logs are being shipped to Elasticsearch, we can visualize them in Kibana. To see the raw logs, find **Discover** in the main menu or use the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/find-and-organize/find-apps-and-objects).
The filesets we used in the previous steps also come with pre-built dashboards that you can use to visualize the data. In Kibana, find **Dashboards** in the main menu or use the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/find-and-organize/find-apps-and-objects). Search for S3 and select the **[Filebeat AWS] S3 Server Access Log Overview** dashboard:
![S3 Server Access Log Overview](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-S3-Server-Access-Logs.png)

This gives you an overview of how your S3 buckets are being accessed.

## Step 7: Collect Infrastructure metrics

To monitor your AWS infrastructure, first make sure your infrastructure data is being shipped to CloudWatch. To ship the data to Elasticsearch, we are going to use the AWS module from Metricbeat. This module periodically fetches monitoring metrics for AWS services from AWS CloudWatch using the **GetMetricData** API.
<important>
  This module generates extra AWS charges for CloudWatch API requests. See [AWS API requests](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/metricbeat/metricbeat-module-aws#aws-api-requests) for more details.
</important>


## Step 8: Install and configure Metricbeat

In a new terminal window, run the following commands.

### Install Metricbeat

Download and install Metricbeat.
<tab-set>
  <tab-item title="DEB">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.3.2-amd64.deb
    sudo dpkg -i metricbeat-9.3.2-amd64.deb
    ```
  </tab-item>

  <tab-item title="RPM">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.3.2-x86_64.rpm
    sudo rpm -vi metricbeat-9.3.2-x86_64.rpm
    ```
  </tab-item>

  <tab-item title="MacOS">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.3.2-darwin-x86_64.tar.gz
    tar xzvf metricbeat-9.3.2-darwin-x86_64.tar.gz
    ```
  </tab-item>

  <tab-item title="Linux">
    ```shell
    curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.3.2-linux-x86_64.tar.gz
    tar xzvf metricbeat-9.3.2-linux-x86_64.tar.gz
    ```
  </tab-item>

  <tab-item title="Windows">
    1. Download the [Metricbeat Windows zip file](https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-9.3.2-windows-x86_64.zip).
    2. Extract the contents of the zip file into `C:\Program Files`.
    3. Rename the `metricbeat-[version]-windows-x86_64` directory to `Metricbeat`.
    4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select *Run As Administrator*).
    5. From the PowerShell prompt, run the following commands to install Metricbeat as a Windows service:

    ```shell
    PS > cd 'C:\Program Files\Metricbeat'
    PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1
    ```

    <note>
      If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-metricbeat.ps1`.
    </note>
  </tab-item>
</tab-set>


### Set up assets

<applies-switch>
  <applies-item title="stack: ga" applies-to="Elastic Stack: Generally available">
    Metricbeat comes with predefined assets for parsing, indexing, and visualizing your data. Run the following command to load these assets. It can take a few minutes.
    ```bash
    ./metricbeat setup -e -E 'cloud.id=YOUR_DEPLOYMENT_CLOUD_ID' -E 'cloud.auth=elastic:YOUR_SUPER_SECRET_PASS' 
    ```
  </applies-item>

  <applies-item title="serverless: ga" applies-to="Elastic Cloud Serverless: Generally available">
    Metricbeat comes with predefined assets for parsing, indexing, and visualizing your data. Run the following command to load these assets. It can take a few minutes.
    ```bash
    ./metricbeat setup -e -E 'output.elasticsearch.hosts=["https://hostname:port"]' -E 'output.elasticsearch.api_key=YOUR_API_KEY' 
    ```
  </applies-item>
</applies-switch>

<important>
  Setting up Metricbeat is an admin-level task that requires extra privileges. As a best practice, [use an administrator role to set up](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/metricbeat/privileges-to-setup-beats), and a more restrictive role for event publishing (which you will do next).
</important>


### Configure Metricbeat output

Next, you are going to configure Metricbeat output to Elasticsearch.
1. Use the Metricbeat keystore to store [secure settings](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/metricbeat/keystore). Store your connection details in the keystore.
   <applies-switch>
   <applies-item title="stack: ga" applies-to="Elastic Stack: Generally available">
   ```bash
   ./metricbeat keystore create
   echo -n "<Your Deployment Cloud ID>" | ./metricbeat keystore add CLOUD_ID --stdin
   ```
   </applies-item>

   <applies-item title="serverless: ga" applies-to="Elastic Cloud Serverless: Generally available">
   ```bash
   ./metricbeat keystore create
   echo -n "<Your Elasticsearch endpoint URL>" | ./metricbeat keystore add ES_HOST --stdin
   ```
   </applies-item>
   </applies-switch>
2. To store metrics in Elasticsearch with minimal permissions, create an API key to send data from Metricbeat to Elasticsearch. Log into Kibana (you can do so from the Cloud Console without typing in any credentials) and find `Dev Tools` in the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/find-and-organize/find-apps-and-objects). From the **Console**, send the following request:
   ```console
   POST /_security/api_key
   {
     "name": "metricbeat-monitor",
     "role_descriptors": {
       "metricbeat_writer": {
         "cluster": ["monitor", "read_ilm"],
         "index": [
           {
             "names": ["metricbeat-*"],
             "privileges": ["view_index_metadata", "create_doc"]
           }
         ]
       }
     }
   }
   ```
3. The response contains an `api_key` and an `id` field, which can be stored in the Metricbeat keystore in the following format: `id:api_key`.
   ```bash
   echo -n "IhrJJHMB4JmIUAPLuM35:1GbfxhkMT8COBB4JWY3pvQ" | ./metricbeat keystore add ES_API_KEY --stdin
   ```
   <note>
   Make sure you specify the `-n` parameter; otherwise, you will have painful debugging sessions due to adding a newline at the end of your API key.
   </note>
4. To see if both settings have been stored, run the following command:
   ```bash
   ./metricbeat keystore list
   ```
5. To configure Metricbeat to output to Elasticsearch, edit the `metricbeat.yml` configuration file. Add the following lines to the end of the file.
   <applies-switch>
   <applies-item title="stack: ga" applies-to="Elastic Stack: Generally available">
   ```yaml
   cloud.id: ${CLOUD_ID}
   output.elasticsearch:
     api_key: ${ES_API_KEY}
   ```
   </applies-item>

   <applies-item title="serverless: ga" applies-to="Elastic Cloud Serverless: Generally available">
   ```yaml
   output.elasticsearch:
     hosts: ["${ES_HOST}"]
     api_key: ${ES_API_KEY}
   ```
   </applies-item>
   </applies-switch>
6. Finally, test if the configuration is working. If it is not working, verify that you used the right credentials and, if necessary, add them again.
   ```bash
   ./metricbeat test output
   ```

Now that the output is working, you are going to set up the AWS module.

## Step 9: Configure the AWS module

To collect metrics from your AWS infrastructure, we’ll use the [Metricbeat AWS](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/beats/metricbeat/metricbeat-module-aws) module. This module contains many metricsets: `billing`, `cloudwatch`, `dynamodb`, `ebs`, `ec2`, `elb`, `lambda`, and more, each of which streams and processes a different set of monitoring data. In this tutorial, we’re going to show you a few examples using the `ec2` and `billing` metricsets.
1. Let’s enable the AWS module in Metricbeat.
   ```bash
   ./metricbeat modules enable aws
   ```
2. Edit the `modules.d/aws.yml` file with the following configurations.
   ```yaml
   - module: aws 
     period: 24h 
     metricsets:
       - billing 
     credential_profile_name: mb-aws 
     cost_explorer_config:
       group_by_dimension_keys:
         - "AZ"
         - "INSTANCE_TYPE"
         - "SERVICE"
       group_by_tag_keys:
         - "aws:createdBy"
   - module: aws 
     period: 300s 
     metricsets:
       - ec2 
     credential_profile_name: mb-aws 
   ```
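Both module sections above reference an AWS credentials profile (`mb-aws`) that Metricbeat reads from the standard AWS shared credentials file, typically `~/.aws/credentials`. A minimal sketch of that profile, with placeholder keys:

```ini
[mb-aws]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```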

Make sure that the AWS user used to collect the metrics from CloudWatch has at least the following permissions attached to it:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeRegions",
                "cloudwatch:GetMetricData",
                "cloudwatch:ListMetrics",
                "sts:GetCallerIdentity",
                "iam:ListAccountAliases",
                "tag:getResources",
                "ce:GetCostAndUsage"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```

You can now start Metricbeat:
```bash
./metricbeat -e
```


## Step 10: Visualize metrics

Now that the metrics are being streamed to Elasticsearch, we can visualize them in Kibana. To open **Infrastructure inventory**, find **Infrastructure** in the main menu or use the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/find-and-organize/find-apps-and-objects). Make sure to show the **AWS** source and the **EC2 Instances**:
![Your EC2 Infrastructure](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-EC2-instances.png)

The metricsets we used in the previous steps also come with pre-built dashboards that you can use to visualize the data. In Kibana, find **Dashboards** in the main menu or use the [global search field](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/find-and-organize/find-apps-and-objects). Search for EC2 and select the **[Metricbeat AWS] EC2 Overview** dashboard:
![EC2 Overview](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-ec2-dashboard.png)

If you want to track your billing on AWS, you can also check the **[Metricbeat AWS] Billing Overview** dashboard:
![Billing Overview](https://www.elastic.co/elastic/docs-builder/docs/3016/solutions/images/observability-aws-billing.png)