
S3 output plugin

For other versions, see the Versioned plugin docs.

For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

This plugin batches and uploads Logstash events to Amazon Simple Storage Service (Amazon S3).

Important

The S3 output plugin only supports AWS S3. Other S3 compatible storage solutions are not supported.

The S3 output creates temporary files in the OS temporary directory. You can specify where to save them using the temporary_directory option.

Important

For configurations containing multiple s3 outputs with the restore option enabled, each output should define its own temporary_directory.
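For example, a sketch of two s3 outputs with restore enabled, each using its own temporary directory (bucket names and paths are placeholders):

output {
  s3 {
    bucket => "bucket_a"                            # placeholder bucket
    restore => true
    temporary_directory => "/tmp/logstash/bucket_a" # unique per output
  }
  s3 {
    bucket => "bucket_b"                            # placeholder bucket
    restore => true
    temporary_directory => "/tmp/logstash/bucket_b" # unique per output
  }
}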

Requirements

  • Amazon S3 bucket and S3 access permissions (typically access_key_id and secret_access_key)
  • S3 PutObject permission
S3 output files have the following format:

`ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt`

  • ls.s3 indicates the Logstash S3 plugin.
  • 312bc026-2f5d-49bc-ae9f-5940cf4ad9a6 is a random UUID generated per file.
  • 2013-04-18T10.00 is the timestamp of the file, at the granularity set by time_file.
  • tag_hello indicates the event’s tag.
  • part0 is the part number. If you set size_file, the plugin generates additional parts whenever file.size > size_file.

When a file is full, it is pushed to the bucket and then deleted from the temporary directory. If a file is empty, it is simply deleted; empty files are never pushed.

This plugin recovers and uploads temporary log files after a crash or abnormal termination when restore is set to true.

This is an example of a Logstash config:

output {
  s3 {
    access_key_id => "crazy_key"             (optional)
    secret_access_key => "monkey_access_key" (optional)
    region => "eu-west-1"                    (optional, default = "us-east-1")
    bucket => "your_bucket"                  (required)
    size_file => 2048                        (optional) - Bytes
    time_file => 5                           (optional) - Minutes
    codec => "plain"                         (optional)
    canned_acl => "private"                  (optional. Options are "private", "public-read", "public-read-write", "authenticated-read", "aws-exec-read", "bucket-owner-read", "bucket-owner-full-control", "log-delivery-write". Defaults to "private")
  }
}

This plugin supports the following configuration options plus the Common options described later.

Setting Input type Required
access_key_id string No
additional_settings hash No
aws_credentials_file string No
bucket string Yes
canned_acl string, one of ["private", "public-read", "public-read-write", "authenticated-read", "aws-exec-read", "bucket-owner-read", "bucket-owner-full-control", "log-delivery-write"] No
encoding string, one of ["none", "gzip"] No
endpoint string No
prefix string No
proxy_uri string No
region string No
restore boolean No
retry_count number No
retry_delay number No
role_arn string No
role_session_name string No
rotation_strategy string, one of ["size_and_time", "size", "time"] No
secret_access_key string No
server_side_encryption boolean No
server_side_encryption_algorithm string, one of ["AES256", "aws:kms"] No
session_token string No
signature_version string, one of ["v2", "v4"] No
size_file number No
ssekms_key_id string No
storage_class string, one of ["STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE"] No
temporary_directory string No
time_file number No
upload_multipart_threshold number No
upload_queue_size number No
upload_workers_count number No
use_aws_bundled_ca boolean No
validate_credentials_on_root_bucket boolean No

Also see Common options for a list of options supported by all output plugins.

access_key_id

  • Value type is string
  • There is no default value for this setting.

This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order (see the example after the list):

  1. Static configuration, using access_key_id and secret_access_key params in the Logstash plugin config
  2. External credentials file specified by aws_credentials_file
  3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
  5. IAM Instance Profile (available when running inside EC2)
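For example, when credentials come from environment variables or an IAM instance profile, the static keys can be omitted entirely (a minimal sketch; the bucket name is a placeholder):

output {
  s3 {
    bucket => "your_bucket"   # credentials resolved by the AWS SDK chain above
    region => "us-east-1"
  }
}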
additional_settings

  • Value type is hash
  • Default value is {}

Key-value pairs of settings and corresponding values used to parametrize the connection to S3. See full list in the AWS SDK documentation. Example:

output {
  s3 {
    access_key_id => "1234",
    secret_access_key => "secret",
    region => "eu-west-1",
    bucket => "logstash-test",
    additional_settings => {
      "force_path_style" => true,
      "follow_redirects" => false
    }
  }
}
aws_credentials_file

  • Value type is string
  • There is no default value for this setting.

Path to YAML file containing a hash of AWS credentials. This file will only be loaded if access_key_id and secret_access_key aren’t set. The contents of the file should look like this:

:access_key_id: "12345"
:secret_access_key: "54321"
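A sketch of pointing the plugin at that file (the path below is hypothetical):

output {
  s3 {
    bucket => "your_bucket"
    aws_credentials_file => "/etc/logstash/aws_credentials.yml" # hypothetical path
  }
}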
bucket

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

S3 bucket

canned_acl

  • Value can be any of: private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control, log-delivery-write
  • Default value is "private"

The S3 canned ACL to use when putting the file. Defaults to "private".

encoding

  • Value can be any of: none, gzip
  • Default value is "none"

Specify the content encoding. Supports gzip. Defaults to "none".
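For example, to gzip-compress files before upload (the bucket name is a placeholder):

output {
  s3 {
    bucket => "your_bucket"
    encoding => "gzip"
  }
}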

endpoint

  • Value type is string
  • There is no default value for this setting.

The endpoint to connect to. By default it is constructed using the value of region. This is useful when connecting to S3 compatible services, but beware that these aren’t guaranteed to work correctly with the AWS SDK. The endpoint should be an HTTP or HTTPS URL, e.g. https://example.com

prefix

  • Value type is string
  • Default value is ""

Specify a prefix for the uploaded filenames to simulate directories on S3. The prefix does not require a leading slash. This option supports Logstash interpolation; for example, files can be prefixed with the event date using prefix => "%{+YYYY}/%{+MM}/%{+dd}", as shown in the example below.

Important

Take care when you are using interpolated strings in prefixes. This has the potential to create large numbers of unique prefixes, causing large numbers of in-progress uploads. This scenario may result in performance and stability issues, which can be further exacerbated when you use a rotation_strategy that delays uploads.
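A minimal sketch of a date-based prefix (the bucket name is a placeholder):

output {
  s3 {
    bucket => "your_bucket"
    prefix => "%{+YYYY}/%{+MM}/%{+dd}" # one prefix per event date
  }
}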

proxy_uri

  • Value type is string
  • There is no default value for this setting.

URI of the proxy server, if required.

region

  • Value type is string
  • Default value is "us-east-1"

The AWS Region

restore

  • Value type is boolean
  • Default value is true

Used to enable recovery after crash/abnormal termination. Temporary log files will be recovered and uploaded.

retry_count

  • Value type is number
  • Default value is Infinity

Limits the number of retries when an S3 upload fails.

retry_delay

  • Value type is number
  • Default value is 1

Delay (in seconds) to wait between consecutive retries on upload failures.

role_arn

  • Value type is string
  • There is no default value for this setting.

The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the AssumeRole API documentation for more information.
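A sketch of assuming a role for cross-account access (the ARN below is a placeholder):

output {
  s3 {
    bucket => "your_bucket"
    role_arn => "arn:aws:iam::123456789012:role/logstash-s3-writer" # placeholder ARN
    role_session_name => "logstash"
  }
}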

role_session_name

  • Value type is string
  • Default value is "logstash"

Session name to use when assuming an IAM role.

rotation_strategy

  • Value can be any of: size_and_time, size, time
  • Default value is "size_and_time"

Controls when to close the file and push it to S3.

If you set this value to size, it uses the value set in size_file. If you set this value to time, it uses the value set in time_file. If you set this value to size_and_time, it uses the values from size_file and time_file, and splits the file when either one matches.

The default strategy checks both size and time. The first value to match triggers file rotation.
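For example, a sketch of size-only rotation at 10 MB (the bucket name is a placeholder):

output {
  s3 {
    bucket => "your_bucket"
    rotation_strategy => "size"
    size_file => 10485760 # 10 MB in bytes
  }
}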

secret_access_key

  • Value type is string
  • There is no default value for this setting.

The AWS Secret Access Key

server_side_encryption

  • Value type is boolean
  • Default value is false

Specifies whether or not to use S3’s server side encryption. Defaults to no encryption.

server_side_encryption_algorithm

  • Value can be any of: AES256, aws:kms
  • Default value is "AES256"

Specifies what type of encryption to use when SSE is enabled.

session_token

  • Value type is string
  • There is no default value for this setting.

The AWS session token for temporary credentials.

signature_version

  • Value can be any of: v2, v4
  • There is no default value for this setting.

The version of the S3 signature hash to use. Normally this uses the internal client default, but it can be explicitly specified here.

size_file

  • Value type is number
  • Default value is 5242880

Set the file size in bytes. When the number of bytes exceeds the size_file value, a new file is created. If you use tags, Logstash generates a specific size file for every tag.

ssekms_key_id

  • Value type is string
  • There is no default value for this setting.

The KMS key to use when server_side_encryption_algorithm => "aws:kms" is specified. If aws:kms is selected but no key is given, the default KMS key is used. For more information, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
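A sketch of enabling SSE-KMS (the key ARN below is a placeholder):

output {
  s3 {
    bucket => "your_bucket"
    server_side_encryption => true
    server_side_encryption_algorithm => "aws:kms"
    ssekms_key_id => "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab" # placeholder key ARN
  }
}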

storage_class

  • Value can be any of: STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR, SNOW, EXPRESS_ONEZONE
  • Default value is "STANDARD"

Specifies which S3 storage class to use when uploading the file. More information about the different storage classes can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html. Defaults to STANDARD.

temporary_directory

  • Value type is string
  • Default value is "/tmp/logstash"

Set the directory where Logstash stores temporary files before sending them to S3. Defaults to the OS temporary directory; on Linux this is /tmp/logstash.

time_file

  • Value type is number
  • Default value is 15

Set the time, in MINUTES, to close the current sub_time_section of bucket. If rotation_strategy is set to time or size_and_time, then time_file cannot be set to 0. Otherwise, the plugin raises a configuration error.

upload_multipart_threshold

  • Value type is number
  • Default value is 15728640

Files larger than this number are uploaded using the S3 multipart APIs.

upload_queue_size

  • Value type is number
  • Default value is 4

Number of items to keep in the local queue before uploading them.

upload_workers_count

  • Value type is number
  • Default value is 4

Specify how many workers to use to upload the files to S3.
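A sketch of tuning the upload pipeline (values are illustrative, not recommendations):

output {
  s3 {
    bucket => "your_bucket"
    upload_workers_count => 8               # parallel upload workers
    upload_queue_size => 8                  # files buffered locally before upload
    upload_multipart_threshold => 52428800  # use multipart APIs above 50 MB
  }
}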

use_aws_bundled_ca

  • Value type is boolean
  • Default value is false

Use the bundled CA certificates that ship with the AWS SDK to verify SSL peer certificates. For cases where the default certificates are unavailable, e.g. on Windows, you can set this to true.

validate_credentials_on_root_bucket

  • Value type is boolean
  • Default value is true

The common use case is to define permissions on the root bucket and give Logstash full access to write logs. In some circumstances, you need more granular permissions on a subfolder; this option allows you to disable the permissions check at startup.

Common options

These configuration options are supported by all output plugins:

Setting Input type Required
codec codec No
enable_metric boolean No
id string No
codec

  • Value type is codec
  • Default value is "line"

The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output without needing a separate filter in your Logstash pipeline.
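For example, to write events as newline-delimited JSON (the bucket name is a placeholder):

output {
  s3 {
    bucket => "your_bucket"
    codec => "json_lines"
  }
}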

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have two s3 outputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

output {
  s3 {
    id => "my_plugin_id"
  }
}
Note

Variable substitution in the id field only supports environment variables and does not support the use of values from the secret store.