Kafka output settings
Specify these settings to send data over a secure connection to Kafka. In the Fleet Output settings, make sure that the Kafka output type is selected.
If you plan to use Logstash to modify Elastic Agent output data before it’s sent to Kafka, refer to the guidance later on this page.
Kafka version |
The Kafka protocol version that Elastic Agent will request when connecting. Defaults to 1.0.0. Kafka versions from 0.8.2.0 to 2.6.0 are currently supported; however, the latest Kafka version (3.x.x) is expected to be compatible when version 2.6.0 is selected. When using Kafka 4.0 and newer, the version must be set to at least 2.1.0. |
Hosts |
The addresses your Elastic Agents will use to connect to one or more Kafka brokers. Use the format host:port, without a protocol such as http://. Click Add row to specify additional addresses. Examples: localhost:9092, mykafkahost:9092. Refer to the Fleet Server documentation for default ports and other configuration details. |
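To make the host format concrete, here is a minimal sketch of the Kafka output section that might appear in a rendered agent policy. The key names follow the Beats Kafka output, on which the Elastic Agent Kafka output is based, and the broker addresses are placeholders; treat the exact rendering as an assumption.

outputs:
  default:
    type: kafka
    version: "2.1.0"          # the protocol version selected under Kafka version
    hosts:                    # host:port with no protocol prefix
      - "mykafkahost1:9092"   # placeholder broker addresses
      - "mykafkahost2:9092"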
Select the mechanism that Elastic Agent uses to authenticate with Kafka.
None |
No authentication is used between Elastic Agent and Kafka. This is the default option. In production, it’s recommended to select an authentication method. Plaintext: Set this option for traffic between Elastic Agent and Kafka to be sent as plaintext, without any transport layer security. This is the default option when no authentication is set. Encryption: Set this option for traffic between Elastic Agent and Kafka to use transport layer security. When Encryption is selected, the Server SSL certificate authorities and Verification mode options become available. |
Username / Password |
Connect to Kafka with a username and password. Provide your username and password, and select a SASL (Simple Authentication and Security Layer) mechanism for your login credentials. When SCRAM is enabled, Elastic Agent uses the SCRAM mechanism to authenticate the user credential. SCRAM is based on the IETF RFC 5802 standard, which describes a challenge-response mechanism for authenticating users. * Plain - SCRAM is not used to authenticate * SCRAM-SHA-256 - uses the SHA-256 hashing function * SCRAM-SHA-512 - uses the SHA-512 hashing function To prevent unauthorized access, your Kafka password is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the password as plain text in the agent policy definition. Secret storage requires Fleet Server version 8.12 or higher. Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See Preconfiguration settings in the Kibana Guide to learn more. |
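As an illustration, username/password authentication with a SCRAM mechanism might render to settings like the following sketch, nested under the Kafka output definition shown earlier. The credentials are placeholders and the key names follow the Beats Kafka output, so treat them as assumptions; in most cases you configure these values in the Fleet UI rather than by hand.

# Under the kafka output definition in the agent policy:
username: "kafka-user"          # placeholder credentials
password: "kafka-password"
sasl:
  mechanism: "SCRAM-SHA-256"    # or PLAIN / SCRAM-SHA-512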
SSL |
Authenticate using the Secure Sockets Layer (SSL) protocol. Provide the following details for your SSL certificate: Client SSL certificate: The certificate generated for the client. Copy and paste in the full contents of the certificate. This is the certificate that all the agents will use to connect to Kafka. In cases where each client has a unique certificate, the local path to that certificate can be placed here. The agents will pick up the certificate in that location when establishing a connection to Kafka. Client SSL certificate key: The private key generated for the client. This must be a PKCS #8 key. Copy and paste in the full contents of the certificate key. This is the certificate key that all the agents will use to connect to Kafka. In cases where each client has a unique certificate key, the local path to that certificate key can be placed here. The agents will pick up the certificate key in that location when establishing a connection to Kafka. To prevent unauthorized access, the certificate key is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the key as plain text in the agent policy definition. Secret storage requires Fleet Server version 8.12 or higher. Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See Preconfiguration settings in the Kibana Guide to learn more. |
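As a rough sketch, SSL client authentication might render to settings like the following, here using local file paths rather than pasted certificate contents. The paths are placeholders and the key names follow the Beats Kafka output; treat them as assumptions.

# Under the kafka output definition in the agent policy:
ssl:
  certificate: "/etc/pki/kafka-client/client.crt"   # placeholder path; pasted PEM contents also work
  key: "/etc/pki/kafka-client/client.key"           # placeholder path to the PKCS #8 private key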
Server SSL certificate authorities |
The CA certificate to use to connect to Kafka. This is the CA used to generate the certificate and key for Kafka. Copy and paste in the full contents of the CA certificate. This setting is optional. It is not available when the None or Plaintext authentication option is selected. Click Add row to specify additional certificate authorities. |
Verification mode |
Controls the verification of server certificates. Valid values are: Full: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. None: Performs no verification of the server’s certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after cautious consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use in production environments is strongly discouraged. Strict: Verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server’s hostname (or IP address) matches the names identified within the certificate. If the Subject Alternative Name is empty, it returns an error. Certificate: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification. The default value is Full. This setting is not available when the None or Plaintext authentication option is selected. |
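Taken together, the two server-side settings above might render to a sketch like the following, again assuming Beats-style key names and placeholder paths.

# Under the kafka output definition in the agent policy:
ssl:
  certificate_authorities:
    - "/etc/pki/kafka-ca/ca.crt"   # placeholder path to the CA that signed the Kafka broker certificates
  verification_mode: "full"        # one of: full, none, strict, certificate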
The number of partitions created is set automatically by the Kafka broker based on the list of topics. Records are then published to partitions either randomly, in round-robin order, or according to a calculated hash.
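For example, a round-robin partitioning strategy might look like the following sketch; the key names mirror the Beats Kafka output and are an assumption here, since the strategy is normally chosen in the Fleet UI.

# Under the kafka output definition in the agent policy:
partition:
  round_robin:
    group_events: 1   # publish this many events to one partition before moving to the next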
Use this option to set the Kafka topic for each Elastic Agent event.
Default topic |
Set a default topic to use for events sent by Elastic Agent to the Kafka output. You can set a static topic, for example elastic-agent, or you can choose to set a topic dynamically based on an Elastic Common Schema (ECS) field. Available fields include: * data_stream.type * data_stream.dataset * data_stream.namespace * @timestamp * event-dataset You can also set a custom field. This is useful if you’re using the add_fields processor as part of your Elastic Agent input. Otherwise, setting a custom field is not recommended. |
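For instance, a topic set dynamically from the dataset might render to a format string similar to the sketch below; the field reference syntax follows the Beats Kafka output and the exact rendering is an assumption.

# Under the kafka output definition in the agent policy:
topic: "%{[data_stream.dataset]}"   # each event is published to a topic named after its dataset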
A header is a key-value pair, and multiple headers can be included with the same key. Only string values are supported. These headers will be included in each produced Kafka message.
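A sketch of how configured headers might appear, assuming Beats-style key names; the header key and value shown are placeholders.

# Under the kafka output definition in the agent policy:
headers:
  - key: "environment"    # placeholder header key
    value: "production"   # placeholder header value; only string values are supported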
You can enable compression to reduce the volume of Kafka output.
Configure timeout and buffer size values for the Kafka brokers.
Key |
An optional formatted string specifying the Kafka event key. If configured, the event key can be extracted from the event using a format string. See the Kafka documentation for the implications of a particular choice of key; by default, the key is chosen by the Kafka cluster. |
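As an illustration, the event key could be set with a format string like the sketch below; the referenced field is a placeholder and the syntax follows the Beats Kafka output.

# Under the kafka output definition in the agent policy:
key: "%{[agent.id]}"   # with hash partitioning, events that share a key land in the same partition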
Proxy |
Select a proxy URL for Elastic Agent to connect to Kafka. To learn about proxy configuration, refer to Using a proxy server with Elastic Agent and Fleet. |
Advanced YAML configuration |
YAML settings that will be added to the Kafka output section of each policy that uses this output. Make sure you specify valid YAML. The UI does not currently provide validation. See Advanced YAML configuration for descriptions of the available settings. |
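For example, you might paste settings such as the following into the Advanced YAML configuration box. The option names come from the Beats Kafka output and should be treated as assumptions, and the values shown are purely illustrative.

client_id: "elastic-agent"     # client identifier reported to the Kafka brokers
broker_timeout: "30s"          # how long to wait for broker responses
max_message_bytes: 1000000     # maximum permitted message size in bytes; larger events are dropped
required_acks: 1               # 0 = no acknowledgement, 1 = leader only, -1 = all replicas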
Make this output the default for agent integrations |
When this setting is on, Elastic Agents use this output to send data if no other output is set in the agent policy. |
Make this output the default for agent monitoring |
When this setting is on, Elastic Agents use this output to send agent monitoring data if no other output is set in the agent policy. |
If you are considering using Logstash to ship the data from Kafka to Elasticsearch, be aware that the structure of the documents sent from Elastic Agent to Kafka must not be modified by Logstash. We suggest disabling ecs_compatibility on both the kafka input and the json codec to make sure the input doesn’t edit the fields and their contents. The data streams set up by the integrations expect to receive events with the same structure and field names as if they were sent directly from an Elastic Agent.
Refer to the Logstash output for Elastic Agent documentation for more details.
input {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
...
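For completeness, here is a minimal sketch of a matching Logstash elasticsearch output that forwards the documents unmodified into the data streams set up by the integrations. The host and API key values are placeholders; refer to the Logstash output for Elastic Agent documentation for the authoritative configuration.

output {
  elasticsearch {
    hosts => ["https://my-elasticsearch-host:9200"]   # placeholder Elasticsearch address
    api_key => "my_api_key_id:my_api_key_secret"      # placeholder credentials
    data_stream => "true"                             # write into data streams rather than a fixed index
  }
}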