
Amazon Security Lake

<div class="condensed-table">
| | |
| --- | --- |
| Version | 2.3.0 |
| Compatible Kibana version(s) | 8.16.2 or higher |
| Supported Serverless project types | Security, Observability |
| Subscription level | Basic |
| Level of support | Elastic |

</div>

This Amazon Security Lake integration helps you analyze security data, so you can get a more complete understanding of your security posture across the entire organization. With Security Lake, you can also improve the protection of your workloads, applications, and data.

Security Lake automates the collection of security-related log and event data from integrated AWS services and third-party services. It also helps you manage the lifecycle of data with customizable retention and replication settings. Security Lake converts ingested data into Apache Parquet format and a standard open-source schema called the Open Cybersecurity Schema Framework (OCSF). With OCSF support, Security Lake normalizes and combines security data from AWS and a broad range of enterprise security data sources.
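To get a concrete sense of what lands in the lake, the sketch below (not part of the integration) downloads a single Security Lake object and prints its Parquet schema. The bucket name and object key are hypothetical placeholders, and the OCSF column names in the comment are typical examples rather than a guaranteed set.

```python
# Minimal sketch: inspect a Security Lake object directly with boto3 and
# pyarrow. Bucket name and object key are hypothetical placeholders.
import io

import boto3
import pyarrow.parquet as pq

s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="aws-security-data-lake-us-east-1-example",      # hypothetical
    Key="aws/CLOUD_TRAIL_MGMT/example-object.gz.parquet",   # hypothetical
)
table = pq.read_table(io.BytesIO(obj["Body"].read()))

# Columns follow OCSF, e.g. class_uid, category_uid, activity_id, time.
print(table.schema)
```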

The Amazon Security Lake integration can be used in two different modes to collect data:

  • AWS S3 polling mode: Amazon Security Lake writes data to S3, and Elastic Agent polls the S3 bucket by listing its contents and reading new files.
  • AWS S3 SQS mode: Amazon Security Lake writes data to S3, S3 sends a notification of a new object to SQS, the Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple agents can be used in this mode.
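The sketch below illustrates, using boto3 directly rather than the agent, roughly what the S3 SQS mode automates: receive a notification, read the referenced S3 object, then delete the message. The queue URL is a hypothetical placeholder, and the exact notification envelope Security Lake produces may differ from the plain S3 event format assumed here.

```python
# Rough illustration of the S3 SQS flow (Elastic Agent automates this;
# it is not code you need to run). Identifiers are hypothetical.
import json

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/security-lake-queue"  # hypothetical

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    body = json.loads(msg["Body"])
    # Assumes a plain S3 event notification; the real envelope may be wrapped.
    for record in body.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        data = obj["Body"].read()  # Parquet bytes; the agent parses these
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```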

This module follows version 1.1.0 of the OCSF schema.
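As a rough illustration of the OCSF shape, the hand-written record below uses a few common OCSF 1.1.0 fields for an Authentication-class event; the field selection and values are illustrative assumptions, not a real Security Lake record.

```python
# Illustrative OCSF-shaped event (hand-written, not from a real lake).
ocsf_event = {
    "metadata": {
        "version": "1.1.0",
        "product": {"name": "CloudTrail", "vendor_name": "AWS"},
    },
    "category_uid": 3,      # Identity & Access Management
    "class_uid": 3002,      # Authentication
    "activity_id": 1,       # Logon
    "severity_id": 1,       # Informational
    "time": 1700000000000,  # event time, epoch milliseconds
    "actor": {"user": {"name": "example-user"}},  # hypothetical
}
```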

The Amazon Security Lake integration collects logs from both AWS services and third-party services in an event data stream.

  • Due to the nature and structure of the OCSF schema, this integration limits how deeply the mappings run. Important objects such as Actor, User, and Product have more fleshed-out mappings, while other objects are flattened after the first two to three levels of nesting to keep the mappings maintainable and within field mapping limits. This will evolve as needed.
  • Elastic Agent must be installed. It streams data from Amazon Security Lake and ships it to Elastic, where the events are processed via the integration's ingest pipelines.
To set up Amazon Security Lake and create a subscriber:

  1. To enable and start Amazon Security Lake, follow the steps in the getting started guide: https://docs.aws.amazon.com/security-lake/latest/userguide/getting-started.html.

  2. After creating the data lake, follow the steps below to create data subscribers to consume data.

    • Open the Security Lake console.
    • By using the AWS Region selector in the upper-right corner of the page, select the Region where you want to create the subscriber.
    • In the navigation pane, choose Subscribers.
    • On the Subscribers page, choose Create subscriber.
    • For Subscriber details, enter Subscriber name and an optional Description.
    • For Log and event sources, choose which sources the subscriber is authorized to consume.
    • For Data access method, choose S3 to set up data access for the subscriber.
    • For Subscriber credentials, provide the subscriber’s AWS account ID and external ID.
    • For Notification details, select SQS queue.
    • Choose Create.
  3. The steps above create the subscriber and provide the details needed to configure the Amazon Security Lake integration: the IAM role ARN, the external ID, and the SQS queue URL. A scripted equivalent of these console steps is sketched below.
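For automation, the console steps above can be approximated with the AWS SDK. The sketch below uses boto3's `securitylake` client; the subscriber name, account ID, external ID, and source list are hypothetical placeholders, and the parameter shapes should be checked against the CreateSubscriber API reference before use.

```python
# Hedged sketch: create a Security Lake subscriber with S3 access and an
# SQS notification, mirroring the console steps. All identifiers are
# hypothetical placeholders.
import boto3

lake = boto3.client("securitylake", region_name="us-east-1")

resp = lake.create_subscriber(
    subscriberName="elastic-agent",                # hypothetical
    subscriberDescription="Elastic integration subscriber",
    accessTypes=["S3"],
    sources=[{"awsLogSource": {"sourceName": "CLOUD_TRAIL_MGMT", "sourceVersion": "2.0"}}],
    subscriberIdentity={
        "principal": "123456789012",               # subscriber AWS account ID (hypothetical)
        "externalId": "elastic-external-id",       # hypothetical
    },
)
subscriber = resp["subscriber"]

# Attach an SQS queue so new objects are announced to the subscriber.
lake.create_subscriber_notification(
    subscriberId=subscriber["subscriberId"],
    configuration={"sqsNotificationConfiguration": {}},
)

print(subscriber["roleArn"])  # role ARN to use when configuring the integration
```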

Next, add the integration in Kibana:

  1. In Kibana, go to Management > Integrations.

  2. In "Search for integrations" search bar, type Amazon Security Lake. image::images/amazon_security_lake/search.png[Search]

  3. Click on the "Amazon Security Lake" integration from the search results.

  4. Click on the Add Amazon Security Lake Integration button to add the integration.

     image::images/amazon_security_lake/home_page.png[Home Page]

  5. By default, the "Collect logs via S3 Bucket" toggle is off, and logs are collected via AWS SQS. With the toggle off, provide the following details:

    • queue URL

      image::images/amazon_security_lake/queue_url.png[Queue URL]
    • role ARN
    • external ID

      image::images/amazon_security_lake/role_arn_and_external_id.png[Role ARN and External ID]
  6. If you want to collect logs via AWS S3 instead, turn the toggle on and provide the following details (a sketch for verifying these values follows this list):

    • bucket ARN or access point ARN
    • role ARN
    • external ID
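Before saving the integration, you can optionally verify that the role ARN, external ID, and bucket fit together. A minimal sketch, assuming hypothetical ARNs and IDs:

```python
# Optional sanity check: assume the subscriber role with the external ID
# and list a few objects. All identifiers are hypothetical placeholders.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/AmazonSecurityLake-subscriber",  # hypothetical
    RoleSessionName="elastic-verify",
    ExternalId="elastic-external-id",  # must match the subscriber's external ID
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
resp = s3.list_objects_v2(Bucket="aws-security-data-lake-example", MaxKeys=5)  # hypothetical bucket
for item in resp.get("Contents", []):
    print(item["Key"])
```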

NOTE:

  • There are other input combination options available; please check here.
  • Metrics are not part of the Amazon Security Lake integration.
  • Events are included in the Amazon Security Lake integration.
  • Service checks are not incorporated into the Amazon Security Lake integration.
  • To troubleshoot, ensure that the IAM role in your AWS account has the correct permissions; a sketch of such a policy follows below.
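As a starting point for that check, the sketch below shows the kind of S3 and SQS permissions the role typically needs; the statement list and resource ARNs are assumptions for illustration, not an authoritative policy from AWS or Elastic.

```python
# Hedged sketch of a minimal permissions policy for the subscriber role.
# Actions and resource ARNs are illustrative assumptions.
import json

subscriber_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::aws-security-data-lake-example",    # hypothetical
                "arn:aws:s3:::aws-security-data-lake-example/*",  # hypothetical
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:security-lake-queue",  # hypothetical
        },
    ],
}

print(json.dumps(subscriber_policy, indent=2))
```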

This is the Event dataset.