
Cloudflare Logpush

<div class="condensed-table">
| | |
| --- | --- |
| Version | 1.31.0 |
| Compatible Kibana version(s) | 8.16.2 or higher |
| Supported Serverless project types | Security, Observability |
| Subscription level | Basic |
| Level of support | Elastic |

</div>

The Cloudflare Logpush integration allows you to monitor Access Request, Audit, CASB, Device Posture, DNS, DNS Firewall, Firewall Event, Gateway DNS, Gateway HTTP, Gateway Network, HTTP Request, Magic IDS, NEL Report, Network Analytics, Sinkhole HTTP, Spectrum Event, Zero Trust Network Session, and Workers Trace Events logs. Cloudflare is a content delivery network and DDoS mitigation company. It provides a global network designed to make everything you connect to the Internet secure, private, fast, and reliable; secure your websites, APIs, and Internet applications; protect corporate networks, employees, and devices; and run code that you deploy on the network edge.

The Cloudflare Logpush integration can be used in three different modes to collect data:

  • HTTP Endpoint mode - Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent.
  • AWS S3 polling mode - Cloudflare writes data to S3 and Elastic Agent polls the S3 bucket by listing its contents and reading new files.
  • AWS S3 SQS mode - Cloudflare writes data to S3, S3 pushes a new object notification to SQS, Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple Agents can be used in this mode.

For example, you could use the data from this integration to know which websites have the highest traffic, which areas have the highest network traffic, or observe mitigation statistics.

The Cloudflare Logpush integration collects logs for the following types of events (see Cloudflare's Logpush documentation for an example schema of each):

  • Access Request
  • Audit
  • CASB findings
  • Device Posture Results
  • Gateway DNS
  • Gateway HTTP
  • Gateway Network
  • Zero Trust Network Session
  • DNS
  • DNS Firewall
  • Firewall Event
  • HTTP Request
  • Magic IDS
  • NEL Report
  • Network Analytics
  • Sinkhole HTTP
  • Spectrum Event
  • Workers Trace Events

You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.

This module has been tested against the Cloudflare API v4.

Note

It is recommended to use AWS SQS for Cloudflare Logpush.

  • Configure Cloudflare Logpush to Amazon S3 so that Cloudflare's data is sent to an AWS S3 bucket (see the sketch after the prefix table below).

  • The default values of the "Bucket List Prefix" are listed below. However, users can set the parameter "Bucket List Prefix" according to their requirements.

    | Data Stream Name | Bucket List Prefix |
    | --- | --- |
    | Access Request | access_request |
    | Audit Logs | audit_logs |
    | CASB findings | casb |
    | Device Posture Results | device_posture |
    | DNS | dns |
    | DNS Firewall | dns_firewall |
    | Firewall Event | firewall_event |
    | Gateway DNS | gateway_dns |
    | Gateway HTTP | gateway_http |
    | Gateway Network | gateway_network |
    | HTTP Request | http_request |
    | Magic IDS | magic_ids |
    | NEL Report | nel_report |
    | Network Analytics | network_analytics_logs |
    | Zero Trust Network Session | network_session |
    | Sinkhole HTTP | sinkhole_http |
    | Spectrum Event | spectrum_event |
    | Workers Trace Events | workers_trace |
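
As a minimal sketch of the Cloudflare side, a zone-scoped Logpush job that writes the HTTP Request dataset to S3 under the default http_request prefix could be created roughly as follows. The bucket name, region, field list, and credentials are placeholders, and Cloudflare additionally requires an ownership-challenge step for a new destination; see the Cloudflare Logpush documentation for the authoritative procedure.

```
# Sketch only: creates a Logpush job that writes the http_requests dataset
# to s3://<your-logpush-bucket>/http_request (placeholders in angle brackets).
curl --location --request POST 'https://api.cloudflare.com/client/v4/zones/<ZONE ID>/logpush/jobs' \
--header 'X-Auth-Key: <X-AUTH-KEY>' \
--header 'X-Auth-Email: <X-AUTH-EMAIL>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "http-requests-to-s3",
    "dataset": "http_requests",
    "destination_conf": "s3://<your-logpush-bucket>/http_request?region=<aws-region>",
    "logpull_options": "fields=RayID,EdgeStartTimestamp&timestamps=rfc3339",
    "enabled": true
}'
```
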
  1. If Logpush forwarding to an AWS S3 Bucket hasn't been configured yet, first set up an AWS S3 Bucket as mentioned in the above documentation.

  2. Follow the steps below for each Logpush data stream that has been enabled (an AWS CLI sketch of both steps follows this list):

    1. Create an SQS queue

      • To set up an SQS queue, follow "Step 1: Create an Amazon SQS queue" in the Amazon documentation.
      • While creating the SQS queue, provide the same bucket ARN that was generated when creating the AWS S3 bucket.
    2. Set up event notification from the S3 bucket using the instructions in the Amazon documentation, with the following settings:

      • Event type: All object create events (s3:ObjectCreated:*)
      • Destination: SQS Queue
      • Prefix (filter): enter the prefix for this Logpush data stream, e.g. audit_logs/
      • Select the SQS queue that has been created for this data stream
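
A minimal AWS CLI sketch of those two steps, assuming the audit_logs data stream and hypothetical bucket and queue names; the queue's access policy must also allow S3 to send messages to it, per the Filebeat S3 input documentation referenced below:

```
# 1. Create the SQS queue for this data stream (hypothetical name).
aws sqs create-queue --queue-name cloudflare-audit-logs-queue

# 2. Route new-object notifications for the audit_logs/ prefix to that queue.
aws s3api put-bucket-notification-configuration \
  --bucket <your-logpush-bucket> \
  --notification-configuration '{
    "QueueConfigurations": [
      {
        "QueueArn": "arn:aws:sqs:<region>:<account-id>:cloudflare-audit-logs-queue",
        "Events": ["s3:ObjectCreated:*"],
        "Filter": {
          "Key": { "FilterRules": [ { "Name": "prefix", "Value": "audit_logs/" } ] }
        }
      }
    ]
  }'
```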

NOTE:

  • A separate SQS queue and S3 bucket notification is required for each enabled data stream.
  • Permissions for the above AWS S3 bucket and SQS queues should be configured according to the Filebeat S3 input documentation.
  • Credentials for the above AWS S3 and SQS input types should be configured as described in the AWS credentials documentation.
  • Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case.

NOTE:

  • When creating the API token, make sure it has Admin permissions. This is needed to list buckets and view bucket configuration.

When configuring the integration to read from S3-Compatible Buckets such as Cloudflare R2, the following steps are required:

  • Enable the toggle Collect logs via S3 Bucket.
  • Make sure that the Bucket Name is set.
  • Although you have to create an API token, that token should not be used for authentication with the S3 API; you only have to set the Access Key ID and Secret Access Key.
  • Set the endpoint URL, which can be found in Bucket Details. The endpoint should be a full URI that is used as the API endpoint of the service. For Cloudflare R2 buckets, the URI is typically of the form https://<accountid>.r2.cloudflarestorage.com.
  • Bucket Prefix is optional for each data stream.

NOTE:

  • The AWS region is not a requirement when configuring the R2 Bucket, as the region for any R2 Bucket is reported as auto from the API perspective. However, the error failed to get AWS region for bucket: operation error S3: GetBucketLocation may appear when starting the integration. GetBucketLocation is the first request made to the API when the integration starts, so any configuration, credentials, or permissions error will surface there. Focus on the API response error to identify the original issue.
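
As an optional sanity check (a sketch, not part of the official setup), you can point the AWS CLI at the R2 endpoint with the same Access Key ID and Secret Access Key before enabling the integration; the account ID, bucket name, and prefix below are placeholders:

```
# Export the credentials from the Cloudflare R2 API token (placeholders).
export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>

# R2 expects the region to be set to "auto".
aws s3api list-objects-v2 \
  --endpoint-url https://<accountid>.r2.cloudflarestorage.com \
  --region auto \
  --bucket <r2-bucket-name> \
  --prefix audit_logs/ \
  --max-keys 5
```
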
To collect data via a Google Cloud Storage (GCS) bucket, the following steps are required:

  • Configure the Data Forwarder to ingest data into a GCS bucket.
  • Configure the GCS bucket names and credentials along with the required configs under the "Collect Cloudflare Logpush logs via Google Cloud Storage" section.
  • Make sure the service account and authentication being used have the proper level of access to the GCS bucket (see Manage Service Account Keys). A gcloud sketch of granting that access follows below.
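
A minimal gcloud sketch of granting read access and producing a key, assuming a hypothetical service account, project, and bucket name:

```
# Let the service account read objects from the Logpush bucket (placeholders).
gcloud storage buckets add-iam-policy-binding gs://<cloudflare-logpush-bucket> \
  --member="serviceAccount:logpush-reader@<project-id>.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Create a JSON key; the integration accepts the key file or its contents.
gcloud iam service-accounts keys create logpush-reader-key.json \
  --iam-account="logpush-reader@<project-id>.iam.gserviceaccount.com"
```
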

NOTE:

  • The GCS input currently does not support fetching of buckets using bucket prefixes, so the bucket names have to be configured manually for each data stream.
  • The GCS input currently only accepts a service account JSON key or a service account JSON file for authentication.
  • The GCS input currently only supports JSON data.
To collect data via the HTTP Endpoint, the following steps are required:

  • Enable the HTTP destination for Cloudflare Logpush; see Cloudflare's documentation on enabling an HTTP destination.
  • Add the same custom header, along with its value, on both sides for additional security.
  • For example, while creating a job along with a header and value for a particular dataset:
```
curl --location --request POST 'https://api.cloudflare.com/client/v4/zones/<ZONE ID>/logpush/jobs' \
--header 'X-Auth-Key: <X-AUTH-KEY>' \
--header 'X-Auth-Email: <X-AUTH-EMAIL>' \
--header 'Authorization: <BASIC AUTHORIZATION>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name":"<public domain>",
    "destination_conf": "https://<public domain>:<public port>/<dataset path>?header_Content-Type=application/json&header_<secret_header>=<secret_value>",
    "dataset": "audit",
    "logpull_options": "fields=RayID,EdgeStartTimestamp&timestamps=rfc3339"
}'
```

NOTE:

  • The destination_conf parameter inside the request data should set the Content-Type header to application/json. This is the content type that the HTTP endpoint expects for incoming events.
  • Default port for the HTTP Endpoint is 9560.
  • When using the same port for more than one dataset, be sure to specify different dataset paths.
  • To enable request ACKing, add a wait_for_completion_timeout request query with the timeout for an ACK, as illustrated below. See the HTTP Endpoint documentation for details.
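
For example, assuming the default port and an audit dataset path, a destination_conf that also requests ACKs with a 30-second timeout (an illustrative value) might look like this:

```
"destination_conf": "https://<public domain>:9560/audit?header_Content-Type=application/json&header_<secret_header>=<secret_value>&wait_for_completion_timeout=30s"
```
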
To enable the integration in Kibana:

  1. In Kibana, go to Management > Integrations.
  2. In the integrations search bar, type Cloudflare Logpush.
  3. Click the Cloudflare Logpush integration from the search results.
  4. Click the Add Cloudflare Logpush button to add the integration.
  5. Enable the integration with the HTTP Endpoint, AWS S3 input, or GCS input.
  6. Under the AWS S3 input, there are two types of inputs: using an AWS S3 bucket or using SQS.
  7. Configure Cloudflare to send logs to the Elastic Agent via the HTTP Endpoint, or via an R2, AWS S3, or GCS bucket, following the specific guides above.

This is the access_request dataset.

This is the audit dataset.

This is the casb dataset.

This is the device_posture dataset.

This is the dns dataset.

This is the dns_firewall dataset.

This is the firewall_event dataset.

This is the gateway_dns dataset.

This is the gateway_http dataset.

This is the gateway_network dataset.

This is the http_request dataset.

This is the magic_ids dataset.

This is the nel_report dataset.

This is the network_analytics dataset.

This is the network_session dataset.

This is the sinkhole_http dataset.

This is the spectrum_event dataset.

This is the workers_trace dataset.