Tutorial 1: Install a self-managed Elastic Stack

This tutorial demonstrates how to install and configure version 9.3.2 of the Elastic Stack in a self-managed environment. Following these steps sets up a three-node Elasticsearch cluster, with Kibana, Fleet Server, and Elastic Agent each on a separate host. The Elastic Agent is configured with the System integration, enabling it to gather local system logs and metrics and deliver them to the Elasticsearch cluster. Finally, the tutorial shows how to view the system data in Kibana.

Note

This installation flow relies on the Elasticsearch automatic security setup, which secures Elasticsearch by default during the initial installation.

If you plan to use custom certificates (for example, corporate-provided or publicly trusted certificates), or if you need to configure HTTPS for browser-to-Kibana communication, you can follow this tutorial in combination with Tutorial 2: Customize certificates for a self-managed Elastic Stack at the appropriate stage of the installation.

For more details, refer to Security overview.

It should take between one and two hours to complete these steps.

To get started, you need the following:

  • A set of virtual or physical hosts on which to install each stack component.
  • On each host, a user account with sudo privileges, and curl installed.

The examples in this guide use RPM Package Manager (RPM) packages to install the Elastic Stack 9.3.2 components on hosts running Red Hat Enterprise Linux or a compatible distribution such as Rocky Linux.

For the full list of supported operating systems and platforms, refer to the Elastic Support Matrix.

The packages needed by this tutorial are:

  • elasticsearch-9.3.2-x86_64.rpm
  • kibana-9.3.2-x86_64.rpm
  • elastic-agent-9.3.2-linux-x86_64.tar.gz

Note

For Elastic Agent and Fleet Server (both of which use the elastic-agent-9.3.2-linux-x86_64.tar.gz package), we recommend using TAR/ZIP packages over RPM/DEB system packages, since only the former support upgrading using Fleet.

Special considerations such as firewalls and proxy servers are not covered here.

For the basic ports and protocols required for the installation to work, refer to the following overview section.

Before starting, take a moment to familiarize yourself with the Elastic Stack components.

Overview of the Elastic Stack components

To learn more about the Elastic Stack and how each of these components is related, refer to An overview of the Elastic Stack.

This tutorial results in a secure-by-default environment, but not every connection uses the same certificate model. Before you begin, it helps to understand the security layout produced by these steps:

  • Elasticsearch uses the automatic security setup during the initial installation flow. This process generates certificates and enables TLS for both the transport and HTTP layers.
  • Kibana connects to Elasticsearch using the enrollment flow from the initial Elasticsearch setup.
  • HTTPS for browser-to-Kibana communication is not configured in this tutorial, although it is strongly recommended for production environments. Kibana HTTPS is covered in Tutorial 2: Customize certificates for a self-managed Elastic Stack.
  • Fleet Server is installed using the Quick Start flow, which uses a self-signed certificate for its HTTPS endpoint.
  • Elastic Agent enrolls using that Quick Start flow, which requires the install command to include the --insecure flag.

If you plan to use certificates signed by your organization's certificate authority or by a public CA, complete this tutorial until Kibana is installed (Step 7), and then continue with Tutorial 2: Customize certificates for a self-managed Elastic Stack before installing Fleet Server and Elastic Agent.

To begin, use RPM to install Elasticsearch on the first host. This initial Elasticsearch instance bootstraps a new cluster. You can find details about all of the following steps in the document Install Elasticsearch with RPM.

Note

For installation steps for other supported methods, refer to Install Elasticsearch.

  1. Log in to the host where you'd like to set up your first Elasticsearch node.

  2. Create a working directory for the installation package:

    mkdir elastic-install-files
    		
  3. Change into the new directory:

    cd elastic-install-files
    		
  4. Download the Elasticsearch RPM and checksum file from the Elastic Artifact Registry.

    curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.3.2-x86_64.rpm
    curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.3.2-x86_64.rpm.sha512
    		
  5. (Optional) Confirm the validity of the downloaded package by checking the SHA of the downloaded RPM against the published checksum:

    sha512sum -c elasticsearch-9.3.2-x86_64.rpm.sha512
    		

    The command should return:

    elasticsearch-9.3.2-x86_64.rpm: OK
    		
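If you're curious how the checksum verification works, here's a minimal local demo. The file names are made up for illustration; the same mechanism applies to the real RPM and its .sha512 file:

```shell
# Create a throwaway file standing in for a downloaded package
# (demo.rpm is hypothetical, not a real Elastic package):
printf 'example package contents\n' > demo.rpm

# Record its SHA-512 checksum, as the published .sha512 file does:
sha512sum demo.rpm > demo.rpm.sha512

# Verify the file against the recorded checksum:
sha512sum -c demo.rpm.sha512
# prints: demo.rpm: OK
```

If the file had been corrupted or tampered with between the two commands, `sha512sum -c` would instead report a FAILED status and exit non-zero.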
  6. (Optional) Import the Elasticsearch GPG key used to verify the RPM package signature:

    sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    		
  7. Run the Elasticsearch install command:

    sudo rpm --install elasticsearch-9.3.2-x86_64.rpm
    		

    The Elasticsearch install process enables certain security features by default, including the following:

    • Authentication and authorization, including the built-in elastic superuser account.
    • TLS certificates and keys for the transport and HTTP layers, stored in /etc/elasticsearch/certs and configured automatically for use by Elasticsearch.
    • The transport interface bound to the loopback interface (localhost), which prevents other nodes from joining the cluster, while the HTTP interface listens on all network interfaces (http.host: 0.0.0.0).
  8. Copy the terminal output from the install command to a local file. In particular, you need the password for the built-in elastic superuser account. The output also contains the commands to enable Elasticsearch to run as a service, which you use in the next step.

  9. Run the following two commands to enable Elasticsearch to run as a service using systemd. This enables Elasticsearch to start automatically when the host system reboots. For more details, refer to Running Elasticsearch with systemd.

    sudo systemctl daemon-reload
    sudo systemctl enable elasticsearch.service
    		

Before moving ahead to configure additional Elasticsearch nodes, you need to update the Elasticsearch configuration on this first node so that other hosts can connect to it. You do this by updating the settings in the elasticsearch.yml file. For more details about Elasticsearch configuration and the most common settings, refer to Configure Elasticsearch and Important settings configuration.

  1. Obtain your host IP address (for example, by running ifconfig). You need this value later.
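If ifconfig isn't available on your host, an alternative (assuming the standard Linux hostname utility) is:

```shell
# Print the host's primary IP address; `hostname -I` lists all
# configured addresses, and awk keeps only the first one.
hostname -I | awk '{print $1}'
```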

  2. Open the Elasticsearch configuration file in a text editor, such as vim:

    sudo vim /etc/elasticsearch/elasticsearch.yml
    		
  3. In a multi-node Elasticsearch cluster, all of the Elasticsearch instances must have the same cluster name.

    In the configuration file, uncomment the line #cluster.name: my-application and give the Elasticsearch cluster any name that you'd like:

    cluster.name: elasticsearch-demo
    		
  4. (Optional) Set a node name for this instance. If you don't set one, Elasticsearch uses its host name by default.

    In the configuration file, uncomment the line #node.name: node-1 and give the Elasticsearch instance any name that you'd like:

    node.name: instance-1
    		
  5. Configure networking settings.

    1. Uncomment the line #transport.host: 0.0.0.0 to accept connections on all available network interfaces.

      By default, Elasticsearch listens for transport traffic on localhost, which prevents other Elasticsearch instances from joining the cluster. To allow communication between nodes, you need to bind the transport interface to a non-loopback address.

      transport.host: 0.0.0.0
      		
      1. If you want Elasticsearch to listen only on a specific interface, set this to the host IP address instead.
    2. Make sure http.host is configured.

      Elasticsearch should already be configured to listen on all network interfaces for HTTP traffic as part of the automatic setup.

      Verify that this setting is present in your configuration file. If it is not, add it:

      http.host: 0.0.0.0
      		
      1. If you want Elasticsearch to listen only on a specific interface, set this to the host IP address instead.
      Tip

      As an alternative to setting transport.host and http.host separately, you can use network.host to configure both interfaces at once. For details, refer to the Elasticsearch networking settings documentation.
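As a sketch of that alternative, the two settings above could be collapsed into a single line in elasticsearch.yml (0.0.0.0 binds all interfaces, as elsewhere in this tutorial):

```yaml
# Equivalent to setting transport.host and http.host individually:
network.host: 0.0.0.0
```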

  6. Save your changes and close the editor.

    Important

    After you configure Elasticsearch to use non-loopback addresses, it enforces bootstrap checks. If Elasticsearch does not start successfully in the next step, review the Important system configuration documentation.

  1. Now, it's time to start the Elasticsearch service on the first node:

    sudo systemctl start elasticsearch.service
    		

    If you need to, you can stop the service by running sudo systemctl stop elasticsearch.service.

    Tip

    If Elasticsearch does not start successfully, check the Elasticsearch log file at /var/log/elasticsearch/<cluster-name>.log to learn more. For example, if your cluster name is elasticsearch-demo, the log file is /var/log/elasticsearch/elasticsearch-demo.log.

  2. Make sure that Elasticsearch is running properly.

    sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
    		

    In the command, replace $ELASTIC_PASSWORD with the elastic superuser password that you copied from the install command output.

    If all is well, the command returns a response like this:

    {
      "name" : "Cp9oae6",
      "cluster_name" : "elasticsearch-demo",
      "cluster_uuid" : "AT69_C_DTp-1qgIJlatQqA",
      "version" : {
        "number" : "{version_qualified}",
        "build_type" : "{build_type}",
        "build_hash" : "f27399d",
        "build_flavor" : "default",
        "build_date" : "2016-03-30T09:51:41.449Z",
        "build_snapshot" : false,
        "lucene_version" : "{lucene_version}",
        "minimum_wire_compatibility_version" : "1.2.3",
        "minimum_index_compatibility_version" : "1.2.3"
      },
      "tagline" : "You Know, for Search"
    }
    		
  3. Finally, check the status of Elasticsearch:

    sudo systemctl status elasticsearch
    		

    As with the previous curl command, the output should confirm that Elasticsearch started successfully. Type q to exit from the status command results.

To set up a second Elasticsearch node, you start by installing the Elasticsearch RPM package, but then follow a different configuration flow so that the node joins the existing cluster instead of creating a new one. You can find additional details in Reconfigure a node to join an existing cluster.

  1. Log in to the host where you'd like to set up your second Elasticsearch instance.

  2. Create a working directory for the installation package:

    mkdir elastic-install-files
    		
  3. Change into the new directory:

    cd elastic-install-files
    		
  4. Download the Elasticsearch RPM and checksum file:

    curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.3.2-x86_64.rpm
    curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.3.2-x86_64.rpm.sha512
    		
  5. Check the SHA of the downloaded RPM:

    sha512sum -c elasticsearch-9.3.2-x86_64.rpm.sha512
    		
  6. (Optional) Import the Elasticsearch GPG key used to verify the RPM package signature:

    sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    		
  7. Run the Elasticsearch install command:

    sudo rpm --install elasticsearch-9.3.2-x86_64.rpm
    		

    Unlike the setup for the first Elasticsearch node, in this case you don't need to copy the output of the install command. By default, the installation prepares the node as a single-node cluster, but in a later step the elasticsearch-reconfigure-node tool updates that configuration so the node can join your existing cluster.

  8. Enable Elasticsearch to run as a service:

    sudo systemctl daemon-reload
    sudo systemctl enable elasticsearch.service
    		
    Important

    Don't start the Elasticsearch service yet. Complete the remaining configuration steps first.

  9. To enable the new Elasticsearch node to connect to the cluster, create an enrollment token from any node that is already part of the cluster.

    Return to your terminal shell on the first Elasticsearch node and generate a node enrollment token:

    sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
    		
  10. Copy the generated enrollment token from the command output.

    Tip

    An enrollment token has a lifespan of 30 minutes. If the elasticsearch-reconfigure-node command returns an Invalid enrollment token error, generate a new token.

    Be sure not to confuse an Elasticsearch enrollment token (for enrolling Elasticsearch nodes in an existing cluster) with a Kibana enrollment token (to enroll your Kibana instance with Elasticsearch, as described in the next section). These two tokens are not interchangeable.

  11. In the terminal shell for your second Elasticsearch node, pass the enrollment token as a parameter to the elasticsearch-reconfigure-node tool:

    sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token>
    		
    1. Replace <enrollment-token> with the token that you copied in the previous step.
    Note

    If elasticsearch-reconfigure-node fails and indicates that the node has already been started or initialized, refer to Cases when security auto-configuration is skipped for a list of possible causes.

    This can happen, for example, if Elasticsearch was started previously, which creates the data directory and prevents the auto-configuration process from running again.

  12. Answer the Do you want to continue with the reconfiguration process prompt with yes (y). The new Elasticsearch node is reconfigured.

  13. Obtain your host IP address (for example, by running ifconfig). You need this value later.

  14. Open the new Elasticsearch instance configuration file in a text editor:

    sudo vim /etc/elasticsearch/elasticsearch.yml
    		

    Because you ran the elasticsearch-reconfigure-node tool, certain settings have already been updated. For example:

    • The transport.host: 0.0.0.0 and http.host: 0.0.0.0 settings are already uncommented.
    • The discovery.seed_hosts setting contains the host IP address of the first Elasticsearch node. As you add each new Elasticsearch node to the cluster, the discovery.seed_hosts setting grows into an array of the IP addresses and port numbers of each Elasticsearch node that was previously added to the cluster.
  15. In the configuration file, uncomment the line #cluster.name: my-application and set it to match the name you specified on the first Elasticsearch node:

    cluster.name: elasticsearch-demo
    		
  16. (Optional) Set a node name for this instance. If you don't set one, Elasticsearch uses its host name by default.

    In the configuration file, uncomment the line #node.name: node-1 and give the Elasticsearch instance any name that you'd like:

    node.name: instance-2
    		
  17. (Optional) Review networking settings.

    After running elasticsearch-reconfigure-node, Elasticsearch is already configured to use non-loopback addresses for transport and HTTP traffic, so no changes are usually required. You can verify this in your configuration file:

    transport.host: 0.0.0.0
    http.host: 0.0.0.0
    		
    Note

    If you make changes to the networking settings, ensure that the networking configuration is consistent across all nodes. For example, use the same approach to binding (specific IP addresses or 0.0.0.0) and the same settings (transport.host, http.host, or network.host) across all nodes. For details, refer to the Elasticsearch networking settings documentation.

  18. Save your changes and close the editor.

  19. Start Elasticsearch on the second node:

    sudo systemctl start elasticsearch.service
    		
    Tip

    If Elasticsearch does not start successfully, check the Elasticsearch log file at /var/log/elasticsearch/<cluster-name>.log to learn more. For example, if your cluster name is elasticsearch-demo, the log file is /var/log/elasticsearch/elasticsearch-demo.log.

  20. (Optional) To monitor the second Elasticsearch node as it starts up and joins the cluster, open a new terminal session on the second node and tail the Elasticsearch log file:

    sudo tail -f /var/log/elasticsearch/elasticsearch-demo.log
    		
    1. If needed, replace elasticsearch-demo with your cluster name.

    Notice in the log file some helpful diagnostics, such as:

    • Security is enabled
    • Profiling is enabled
    • using discovery type [multi-node]
    • initialized
    • starting...

    After a minute or so, the log should show a message like:

    [<hostname2>] master node changed {previous [], current [<hostname1>...]}
    		

    where hostname1 is your first Elasticsearch instance node, and hostname2 is your second Elasticsearch instance node.

    The message indicates that the second Elasticsearch node has successfully contacted the initial Elasticsearch node and joined the cluster.

  21. As a final check, verify that the new node is reachable and responding, and that it appears in the cluster. In the following commands, replace $ELASTIC_PASSWORD with the same elastic superuser password that you used on the first Elasticsearch node.

    To confirm that Elasticsearch is running properly on the new node, run:

    sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
    		
    1. For a more complete check, replace localhost with the IP address of the new node to verify that it is reachable over the network.

    Response example:

    {
      "name" : "Cp9oae6",
      "cluster_name" : "elasticsearch-demo",
      "cluster_uuid" : "AT69_C_DTp-1qgIJlatQqA",
      "version" : {
        "number" : "{version_qualified}",
        "build_type" : "{build_type}",
        "build_hash" : "f27399d",
        "build_flavor" : "default",
        "build_date" : "2016-03-30T09:51:41.449Z",
        "build_snapshot" : false,
        "lucene_version" : "{lucene_version}",
        "minimum_wire_compatibility_version" : "1.2.3",
        "minimum_index_compatibility_version" : "1.2.3"
      },
      "tagline" : "You Know, for Search"
    }
    		

    To confirm that the node has joined the cluster, run the following command on any Elasticsearch node:

    sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200/_cat/nodes?v
    		
    1. You can replace localhost with the IP address of any of the nodes.

    The output should include the new node together with the existing node or nodes in the cluster, for example:

    203.0.113.25 46 97 18 0.21 0.23 0.10 cdfhilmrstw - instance-2
    203.0.113.21 31 96  1 0.04 0.03 0.01 cdfhilmrstw * instance-1
    		

To set up additional Elasticsearch nodes, repeat the process from Step 4: Set up a second Elasticsearch node for each new node that you add to the cluster.

As a recommended best practice, create a new enrollment token for each new node that you add.

Once you have added all your Elasticsearch nodes to the cluster, you need to consolidate the elasticsearch.yml configuration on all nodes so that they can restart and rejoin the cluster cleanly in the future.

  1. On each Elasticsearch node, open /etc/elasticsearch/elasticsearch.yml in a text editor.

  2. Comment out or remove the cluster.initial_master_nodes setting, if it is still present. This setting is only needed while bootstrapping a new cluster.

  3. Update discovery.seed_hosts so it includes the IP address and transport port of each master-eligible Elasticsearch node in the cluster.

    On the first node in the cluster, you need to add the discovery.seed_hosts setting manually. For example, if your cluster has three nodes:

    discovery.seed_hosts:
      - 203.0.113.84:9300
      - 203.0.113.132:9300
      - 203.0.113.156:9300
    		
    Note

    If you are not configuring node roles, then all your Elasticsearch nodes should appear in the discovery.seed_hosts list of all the nodes.

  4. Save your changes on each node.

  5. Optionally, restart the Elasticsearch service on each node to validate the updated configuration.

If you do not perform these steps, one or more nodes can fail the discovery configuration bootstrap check when restarted.

For more information, refer to Update the config files and Discovery and cluster formation.
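After consolidation, the elasticsearch.yml on each node might look like the following sketch, which reuses the example names and addresses from this tutorial (your node names and IP addresses will differ):

```yaml
cluster.name: elasticsearch-demo
node.name: instance-1
transport.host: 0.0.0.0
http.host: 0.0.0.0
discovery.seed_hosts:
  - 203.0.113.84:9300
  - 203.0.113.132:9300
  - 203.0.113.156:9300
```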

As with Elasticsearch, you can use RPM to install Kibana on another host. You can find details about all of the following steps in the document Install Kibana with RPM.

Note

For installation steps using other supported methods, refer to Install Kibana.

  1. Log in to the host where you'd like to install Kibana and create a working directory for the installation package:

    mkdir kibana-install-files
    		
  2. Change into the new directory:

    cd kibana-install-files
    		
  3. Download the Kibana RPM and checksum file from the Elastic Artifact Registry.

    curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-9.3.2-x86_64.rpm
    curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-9.3.2-x86_64.rpm.sha512
    		
  4. Confirm the validity of the downloaded package by checking the SHA of the downloaded RPM against the published checksum:

    sha512sum -c kibana-9.3.2-x86_64.rpm.sha512
    		

    The command should return:

    kibana-9.3.2-x86_64.rpm: OK
    		
  5. (Optional) Import the Elasticsearch GPG key used to verify the RPM package signature:

    sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    		
    1. The GPG key used to sign Kibana and Elasticsearch RPM packages is the same.
  6. Run the Kibana install command:

    sudo rpm --install kibana-9.3.2-x86_64.rpm
    		
  7. Run the following two commands to enable Kibana to run as a service using systemd, so that Kibana starts automatically when the host system reboots.

    sudo systemctl daemon-reload
    sudo systemctl enable kibana.service
    		

Before starting the Kibana service, update kibana.yml with the following settings:

  • The network binding address so that Kibana listens on its host IP address.
  • A saved objects encryption key required for features such as Fleet.

For more details about Kibana configuration, refer to the Kibana configuration documentation.

  1. Obtain the host IP address for your Kibana host (for example, by running ifconfig) and make note of it.

  2. Generate a saved objects encryption key on the Kibana host:

    sudo /usr/share/kibana/bin/kibana-encryption-keys generate
    		

    The command output includes several encryption-related settings. For this tutorial, copy only the value of xpack.encryptedSavedObjects.encryptionKey, which is required for Fleet features. You can ignore the other generated keys for now.

  3. Open the Kibana configuration file for editing:

    sudo vim /etc/kibana/kibana.yml
    		
  4. Uncomment the line #server.host: localhost and replace the default address with the host IP address that you copied. For example:

    server.host: 203.0.113.28
    		
    1. If you want Kibana to listen on all available network interfaces, you can use 0.0.0.0 instead.
  5. Add the xpack.encryptedSavedObjects.encryptionKey setting with the value returned by the kibana-encryption-keys generate command:

    xpack.encryptedSavedObjects.encryptionKey: "min-32-byte-long-strong-encryption-key"
    		
    1. Replace the value with the actual key.
    Important

    In production environments, consider storing this setting in the Kibana keystore instead of kibana.yml. For guidance, refer to Kibana secure settings.

    Rotate encryption keys only as part of a planned process. This helps ensure existing encrypted saved objects remain readable. For guidance on rotation, refer to Encryption key rotation.

  6. Save your changes and close the editor.
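At this point, the settings you changed in kibana.yml look something like the following sketch (use your own host IP address and the key generated on your Kibana host, not these example values):

```yaml
# Bind Kibana to the host's IP address:
server.host: 203.0.113.28

# Saved objects encryption key required by Fleet features:
xpack.encryptedSavedObjects.encryptionKey: "min-32-byte-long-strong-encryption-key"
```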

Kibana is now ready to start and enroll with the Elasticsearch cluster.

In this section, you start Kibana for the first time and complete enrollment with your Elasticsearch cluster. This initial startup provides the verification code and enrollment prompt, and it finalizes Kibana setup by automatically applying the required connection settings.

  1. Start the Kibana service:

    sudo systemctl start kibana.service
    		

    If you need to, you can stop the service by running sudo systemctl stop kibana.service.

  2. Run the status command to get details about the Kibana service.

    sudo systemctl status kibana
    		
  3. In the status command output, a URL is shown with:

    • a host address to access Kibana
    • a six-digit verification code

    For example:

    Kibana has not been configured.
    Go to http://203.0.113.28:5601/?code=<code> to get started.
    		
    1. If the URL shows 0.0.0.0, use the host IP address when connecting to Kibana in the next step.

    Make note of the verification code.

  4. Open a web browser to the external IP address of the Kibana host machine, for example: http://<kibana-host-address>:5601.

    It can take a minute or two for Kibana to start up, so refresh the page if you don't see a prompt right away.

  5. When Kibana starts, you're prompted for an enrollment token. You must generate this token in Elasticsearch:

    1. Return to the terminal session on the first Elasticsearch node.

    2. Run the elasticsearch-create-enrollment-token command with the -s kibana option to generate a Kibana enrollment token:

      sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
      		
    3. Copy the generated enrollment token from the command output and paste it into the enrollment prompt in the browser.

  6. Click Configure Elastic.

  7. If you're prompted to provide a verification code, copy and paste in the six-digit code that was returned by the status command. Then, wait for the setup to complete.

  8. When the Welcome to Elastic page appears, sign in with the elastic superuser account and the password that was generated when you installed the first Elasticsearch node.

  9. Click Log in.

Kibana is now fully set up and communicating with your Elasticsearch cluster.

Important: Stop here if you plan to use your own TLS/SSL certificates

This tutorial already uses the Elasticsearch automatic security setup, which configures security for Elasticsearch by default, including TLS for both the transport and HTTP layers.

If you plan to use certificates signed by your organization's certificate authority or by a public CA instead, stop here after installing Kibana and continue with Tutorial 2: Customize certificates for a self-managed Elastic Stack. That tutorial is the right place to replace or adjust the default certificate configuration before installing Fleet Server and Elastic Agent.

Following that path avoids installing Fleet Server and Elastic Agent with the certificate setup from this tutorial and then needing to reinstall the components after changing the security configuration.

Note also that the automatic setup used here does not configure HTTPS for browser access to Kibana, which is highly recommended for production environments.

Now that Kibana is up and running, you can install Fleet Server. Fleet Server connects Elastic Agent instances to Fleet and serves as a control plane for updating agent policies and collecting agent status information.

Note

This tutorial uses the Quick Start installation flow, which generates a self-signed certificate for the Fleet Server by default. For more details about Quick Start and Advanced setup options, refer to Deploy on-premises and self-managed Fleet Server.

If you want to use custom SSL/TLS certificates, follow the Tutorial 2: Customize certificates for a self-managed Elastic Stack instead of continuing with these steps.

Before proceeding, confirm the following prerequisites:

  • If you're not using the built-in elastic superuser, ensure your Kibana user has All privileges for Fleet and Integrations.
  • Elastic Agent hosts have direct network connectivity to both the Fleet Server and the Elasticsearch cluster.
  • The Kibana host can connect to https://epr.elastic.co on port 443 to download integration packages.
  1. Log in to the host where you'd like to set up Fleet Server.

  2. Create a working directory for the installation package:

    mkdir fleet-install-files
    		
  3. Change into the new directory:

    cd fleet-install-files
    		
  4. Obtain the host IP address for your Fleet Server host (for example, by running ifconfig). You need this value later.

  5. Back in your web browser, open the Kibana menu and go to Management -> Fleet. Fleet opens with a message that you need to add a Fleet Server.

  6. Click Add Fleet Server. The Add a Fleet Server flyout opens.

  7. In the flyout, select the Quick Start tab.

  8. Specify a name for your Fleet Server host, for example Fleet Server.

  9. Specify the host URL that Elastic Agents need to use to reach the Fleet Server, for example: https://203.0.113.203:8220. This is the Fleet Server host IP address that you copied earlier.

    Be sure to include the port number. Port 8220 is the default used by Fleet Server in an on-premises environment. Refer to Default port assignments in the on-premises Fleet Server install documentation for a list of port assignments.

  10. Click Generate Fleet Server policy. A policy is created that contains all of the configuration settings for the Fleet Server instance.

  11. On the Install Fleet Server to a centralized host step, this example selects the Linux tab. Be sure to select the tab that matches both your operating system and architecture (for example, aarch64 or x64). TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading Fleet Server using Fleet.

  12. Copy the generated commands and then run them one-by-one in the terminal on your Fleet Server host. These commands do the following:

    • Download the Fleet Server package from the Elastic Artifact Registry.
    • Unpack the package archive.
    • Change into the directory containing the install binaries.
    • Install Fleet Server.

    If you'd like to learn about the install command options, refer to elastic-agent install in the Elastic Agent command reference.

  13. At the prompt, enter Y to install Elastic Agent and run it as a service. Wait for the installation to complete.

  14. In the Kibana Add a Fleet Server flyout, wait for confirmation that Fleet Server has connected.

  15. For now, ignore the Continue enrolling Elastic Agent option and close the flyout.

Fleet Server is now fully set up.

Next, install Elastic Agent on another host and use the System integration to monitor system logs and metrics.

Note

You can install only one Elastic Agent per host.

  1. Log in to the host where you'd like to set up Elastic Agent.

  2. Create a working directory for the installation package:

    mkdir agent-install-files
    		
  3. Change into the new directory:

    cd agent-install-files
    		
  4. Open Kibana and go to Management -> Fleet.

  5. On the Agents tab, you should see your new Fleet Server policy running with a healthy status.

  6. Open the Settings tab and review the Fleet Server hosts and Outputs URLs. Ensure the URLs and IP addresses are valid for reaching Fleet Server and the Elasticsearch cluster, and that they use the HTTPS protocol.

  7. Reopen the Agents tab and select Add agent. The Add agent flyout opens.

  8. In the flyout, choose a policy name, for example Demo Agent Policy.

  9. Leave Collect system logs and metrics enabled. This adds the System integration to the Elastic Agent policy.

  10. Click Create policy.

  11. For the Enroll in Fleet? step, leave Enroll in Fleet selected.

  12. On the Install Elastic Agent on your host step, for this example we select the Linux tab. Be sure to select the tab that matches both your operating system and architecture (for example, aarch64 or x64).

    As with Fleet Server, note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading Elastic Agent using Fleet.

  13. Copy the generated commands to a text editor. Do not run them yet, because you need to modify one of the commands in the next step.

  14. In the sudo ./elastic-agent install command, make two changes:

    • For the --url parameter, check that the port number is set to 8220 (used for on-premises Fleet Server).
    • Append the --insecure flag at the end.

    The --insecure flag is required in this tutorial to allow connections to a Fleet Server endpoint that uses a self-signed certificate. For related guidance, refer to Install Fleet-managed Elastic Agents.

    Tip

    If you want to set up secure communications using custom SSL certificates, refer to Tutorial 2: Customize certificates for a self-managed Elastic Stack.

    The result should look like the following:

    sudo ./elastic-agent install --url=https://203.0.113.203:8220 --enrollment-token=VWCobFhKd0JuUnppVYQxX0VKV5E6UmU3BGk0ck9RM2HzbWEmcS4Bc1YUUM== --insecure
    		
  15. Run the commands one by one in the terminal on your Elastic Agent host. The commands do the following:

    • Download the Elastic Agent package from the Elastic Artifact Registry.
    • Unpack the package archive.
    • Change into the directory containing the install binaries.
    • Install Elastic Agent.
  16. At the prompt, enter Y to install Elastic Agent and run it as a service. Wait for the installation to complete:

    Elastic Agent has been successfully installed.
    		
  17. In the Kibana Add agent flyout, wait for confirmation that Elastic Agent has connected.

  18. Close the flyout.

Your new Elastic Agent is now installed and enrolled with Fleet Server.
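The port check from step 14 can also be scripted as a quick sanity test before you run the generated install command. The sketch below reuses this tutorial's example address; substitute the Fleet Server URL from your own generated command.

```shell
# Sanity-check that the enrollment --url targets port 8220, the port used for
# an on-premises Fleet Server. The address is this tutorial's example value.
FLEET_URL="https://203.0.113.203:8220"
PORT="${FLEET_URL##*:}"   # strip everything up to and including the last ':'
if [ "${PORT}" = "8220" ]; then
  echo "OK: ${FLEET_URL} uses the on-premises Fleet Server port"
else
  echo "Check the --url port: expected 8220, got ${PORT}" >&2
fi
```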

Now that all components are installed, you can view your data in multiple ways. Elastic Observability provides solution views for exploring host activity, and each integration can also provide dedicated dashboards and visualizations. In this tutorial, you'll first check the host view in Observability, then open example logs and metrics dashboards from the System integration.

The System integration assets (including dashboards) are installed automatically when you add the System integration to the Elastic Agent policy.

View your host data in Observability:

  1. Open the Kibana menu and go to Observability -> Infrastructure -> Hosts.
  2. Confirm that your host appears and is reporting data.

View your system log data:

  1. Open the Kibana menu and go to Analytics -> Dashboard.
  2. In the query field, search for Logs System.
  3. Select the [Logs System] Syslog dashboard link. The Kibana Dashboard opens with visualizations of Syslog events, hostnames and processes, and more.

View your system metrics data:

  1. Open the Kibana menu and return to Analytics -> Dashboard.
  2. In the query field, search for Metrics System.
  3. Select the [Metrics System] Host overview link. The Kibana Dashboard opens with visualizations of host metrics including CPU usage, memory usage, running processes, and others.
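If you prefer to confirm from the command line that documents are arriving, the System integration writes to data streams such as logs-system.syslog-default and metrics-system.cpu-default (these names assume the integration's default namespace). The sketch below only prints a document-count query you could run against the cluster; the CA certificate path is an assumption based on the default Elasticsearch package install location, so adjust it for your environment.

```shell
# Prints (does not run) a document-count query against one of the System
# integration data streams. The data stream name, CA path, and localhost URL
# are assumptions: adjust them for your cluster, host, and namespace.
ES_URL="https://localhost:9200"
DATA_STREAM="metrics-system.cpu-default"

echo "curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \\"
echo "  ${ES_URL}/${DATA_STREAM}/_count"
```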

Sample Kibana dashboard

You've successfully set up a three-node Elasticsearch cluster, with Kibana, Fleet Server, and Elastic Agent.

Now that you've successfully configured an on-premises Elastic Stack, you can learn how to customize the certificate configuration for the Elastic Stack in a production environment using trusted CA-signed certificates. Refer to Tutorial 2: Customize certificates for a self-managed Elastic Stack to learn more.

You can also start using your newly set up Elastic Stack right away:

  • Do you have data ready to ingest? Learn how to bring your data to Elastic.
  • Use Elastic Observability to unify your logs, infrastructure metrics, uptime, and application performance data.
  • Want to protect your endpoints from security threats? Try Elastic Security. Adding endpoint protection is just another integration that you add to the agent policy!