Install Elasticsearch from archive on Linux or macOS

Self Managed

Elasticsearch is available as a .tar.gz archive for Linux and macOS.

This package contains both free and subscription features. Start a 30-day trial to try out all of the features.

The latest stable version of Elasticsearch can be found on the Download Elasticsearch page. Other versions can be found on the Past Releases page.

Note

Elasticsearch includes a bundled version of OpenJDK from the JDK maintainers (GPLv2+CE). To use your own version of Java, see the JVM version requirements.
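
For example, to use your own JDK instead of the bundled one, you can point the ES_JAVA_HOME environment variable at it before starting Elasticsearch (the path below is a placeholder):

export ES_JAVA_HOME=/usr/lib/jvm/your-jdk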

Tip

Elastic recommends that you run the commands in this guide using a normal user account, and avoid running the commands as root.

Before you install Elasticsearch, do the following:

  • Review the supported operating systems and prepare virtual or physical hosts where you can install Elasticsearch.

    Elasticsearch is tested on the listed platforms, but it is possible that it will work on other platforms too.

  • Configure your operating system using the Important system configuration guidelines (a brief example follows this list).

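For example, on many Linux hosts this preparation includes raising the kernel mmap count limit and checking the open file descriptor limit for the user that will run Elasticsearch. A minimal sketch using the commonly documented values (adjust for your own hosts):

# Raise the mmap count limit that Elasticsearch needs for its data stores
sudo sysctl -w vm.max_map_count=262144
# Check the open file descriptor limit; 65535 or higher is recommended
ulimit -n
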
Download and install the archive for Linux or macOS.

The Linux archive for Elasticsearch 9.0.0 can be downloaded and installed as follows:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-9.0.0-linux-x86_64.tar.gz.sha512
tar -xzf elasticsearch-9.0.0-linux-x86_64.tar.gz
cd elasticsearch-9.0.0/
  1. Compares the SHA of the downloaded .tar.gz archive and the published checksum, which should output elasticsearch-<version>-linux-x86_64.tar.gz: OK.
  2. This directory is known as $ES_HOME.

The macOS archive for Elasticsearch 9.0.0 can be downloaded and installed as follows:

curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-darwin-x86_64.tar.gz
curl https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-darwin-x86_64.tar.gz.sha512 | shasum -a 512 -c -
tar -xzf elasticsearch-9.0.0-darwin-x86_64.tar.gz
cd elasticsearch-9.0.0/
  1. Compares the SHA of the downloaded .tar.gz archive and the published checksum, which should output elasticsearch-<version>-darwin-x86_64.tar.gz: OK.
  2. This directory is known as $ES_HOME.

macOS Gatekeeper warnings

Apple’s rollout of stricter notarization requirements affected the notarization of the 9.0.0 Elasticsearch artifacts. If macOS interrupts Elasticsearch with a warning dialog the first time you run it, you need to take action to allow it to run.

To prevent Gatekeeper checks on the Elasticsearch files, run the following command on the downloaded .tar.gz archive or on the directory to which it was extracted:

xattr -d -r com.apple.quarantine <archive-or-directory>

Alternatively, you can add a security override by following the instructions in the If you want to open an app that hasn’t been notarized or is from an unidentified developer section of Safely open apps on your Mac.

Tip

This section is only required if you have previously changed action.auto_create_index from its default value.

Some features automatically create indices within Elasticsearch. By default, Elasticsearch is configured to allow automatic index creation, and no additional steps are required. However, if you have disabled automatic index creation in Elasticsearch, you must configure action.auto_create_index in elasticsearch.yml to allow features to create the following indices:

action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

If you are using Logstash or Beats then you will most likely require additional index names in your action.auto_create_index setting, and the exact value will depend on your local configuration. If you are unsure of the correct value for your environment, you may consider setting the value to * which will allow automatic creation of all indices.
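
For example, a hedged sketch that also allows common Logstash and Beats index patterns; the exact patterns depend on your own configuration:

action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,logstash-*,filebeat-*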

When Elasticsearch starts for the first time, the security auto-configuration process binds the HTTP layer to 0.0.0.0, but only binds the transport layer to localhost. This intended behavior ensures that you can start a single-node cluster with security enabled by default without any additional configuration.

Before enrolling a new node, additional actions such as binding to an address other than localhost or satisfying bootstrap checks are typically necessary in production clusters. During that time, an auto-generated enrollment token could expire, which is why enrollment tokens aren’t generated automatically.

Only nodes on the same host can join the cluster without additional configuration. If you want nodes from another host to join your cluster, you need to make your instance reachable.

For more information about the cluster formation process, refer to Discovery and cluster formation.

Update the Elasticsearch configuration on this first node so that other hosts are able to connect to it by editing the settings in elasticsearch.yml:

  1. Open elasticsearch.yml in a text editor.

  2. In a multi-node Elasticsearch cluster, all of the Elasticsearch instances need to have the same cluster name.

    In the configuration file, uncomment the line #cluster.name: my-application and give the Elasticsearch instance any name that you’d like:

    cluster.name: elasticsearch-demo
    
  3. By default, Elasticsearch runs on localhost. For Elasticsearch instances on other nodes to be able to join the cluster, you need to set up Elasticsearch to run on a routable, external IP address.

    Uncomment the line #network.host: 192.168.0.1 and replace the default address with 0.0.0.0. The 0.0.0.0 setting enables Elasticsearch to listen for connections on all available network interfaces. In a production environment, you might want to use a different value, such as a static IP address or a reference to a network interface of the host.

    network.host: 0.0.0.0
    
  4. Elasticsearch also needs to listen for transport connections from other, external hosts.

    Uncomment the line #transport.host: 0.0.0.0. The 0.0.0.0 setting enables Elasticsearch to listen for connections on all available network interfaces. In a production environment you might want to use a different value, such as a static IP address or a reference to a network interface of the host.

    transport.host: 0.0.0.0
    
    Tip

    You can find details about the network.host and transport.host settings in the Elasticsearch networking settings reference.

  5. Save your changes and close the editor.
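
Taken together, the relevant lines in elasticsearch.yml on this first node would look similar to the following, using the example values from the steps above:

cluster.name: elasticsearch-demo
network.host: 0.0.0.0
transport.host: 0.0.0.0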

To enroll new nodes in your cluster, create an enrollment token with the elasticsearch-create-enrollment-token tool on any existing node in your cluster. You can then start a new node with the --enrollment-token parameter so that it joins an existing cluster.

Tip

Before you enroll your new node, make sure that it is able to access the first node in your cluster. You can test this by running a curl command to the first node.
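
For example, a minimal reachability check from the new node, where <first-node-address> is a placeholder for the first node's IP address or hostname and -k skips certificate verification for this quick test only:

curl -k -u elastic:$ELASTIC_PASSWORD https://<first-node-address>:9200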

If you can't access the first node, then modify your network configuration before proceeding.

  1. Using a text editor, update the cluster.name in elasticsearch.yml to match the other nodes in your cluster.

    Note

    If this value isn't updated and you attempt to join an existing cluster, then the connection will fail with the following error:

    handshake failed: remote cluster name [cluster-to-join] does not match local cluster name [current-cluster-name]
    
  2. In a separate terminal from where Elasticsearch is running, navigate to the directory where you installed Elasticsearch and run the elasticsearch-create-enrollment-token tool to generate an enrollment token for your new nodes.

    bin/elasticsearch-create-enrollment-token -s node
    

    Copy the enrollment token, which you’ll use to enroll new nodes with your Elasticsearch cluster.

    An enrollment token has a lifespan of 30 minutes. You should create a new enrollment token for each new node that you add.

  3. From the installation directory of your new node, start Elasticsearch and pass the enrollment token with the --enrollment-token parameter.

    bin/elasticsearch --enrollment-token <enrollment-token>
    

    Elasticsearch automatically generates certificates and keys in the following directory:

    config/certs
    

You can repeat these steps for each additional Elasticsearch node that you would like to add to the cluster.

For more information about discovery and shard allocation, refer to Discovery and cluster formation and Cluster-level shard allocation and routing settings.

You have several options for starting Elasticsearch:

If you're starting a node that will be enrolled in an existing cluster, refer to Enroll the node in an existing cluster.

Run the following command to start Elasticsearch from the command line:

./bin/elasticsearch

By default, Elasticsearch prints its logs to the console (stdout) and to the <cluster name>.log file within the logs directory. Elasticsearch logs some information while it is starting, but after it has finished initializing it will continue to run in the foreground and won’t log anything further until something happens that is worth recording. While Elasticsearch is running you can interact with it through its HTTP interface which is on port 9200 by default.

To stop Elasticsearch, press Ctrl-C.

Note

All scripts packaged with Elasticsearch require a version of Bash that supports arrays and assume that Bash is available at /bin/bash. As such, Bash should be available at this path either directly or via a symbolic link.

When you start Elasticsearch for the first time, it automatically performs the following security setup:

  • Generates TLS certificates for the transport and HTTP layers
  • Applies TLS configuration settings to elasticsearch.yml
  • Sets a password for the elastic superuser
  • Creates an enrollment token to securely connect Kibana to Elasticsearch

You can then start Kibana and enter the enrollment token, which is valid for 30 minutes. This token automatically applies the security settings from your Elasticsearch cluster, authenticates to Elasticsearch with the built-in kibana service account, and writes the security configuration to kibana.yml.
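
If the Kibana enrollment token expires before you use it, you can generate a new one from the Elasticsearch installation directory with the bundled tool:

bin/elasticsearch-create-enrollment-token -s kibana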

Note

There are some cases where security can’t be configured automatically because the node startup process detects that the node is already part of a cluster, or that security is already configured or explicitly disabled.

The password for the elastic user and the enrollment token for Kibana are output to your terminal.

We recommend storing the elastic password as an environment variable in your shell. For example:

export ELASTIC_PASSWORD="your_password"

If you have password-protected the Elasticsearch keystore, you will be prompted to enter the keystore’s password. See Secure settings for more details.

To learn how to reset this password, refer to Set passwords for native and built-in users in self-managed clusters.
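
As a quick sketch, the bundled elasticsearch-reset-password tool can generate a new password for the elastic user when run from the installation directory:

bin/elasticsearch-reset-password -u elastic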

Elasticsearch loads its configuration from the following location by default:

$ES_HOME/config/elasticsearch.yml

The format of this config file is explained in Configure Elasticsearch.

Any settings that can be specified in the config file can also be specified on the command line, using the -E syntax as follows:

./bin/elasticsearch -d -Ecluster.name=my_cluster -Enode.name=node_1

Note

Values that contain spaces must be surrounded with quotes. For instance -Epath.logs="/var/log/my logs".

Tip

Typically, any cluster-wide settings (like cluster.name) should be added to the elasticsearch.yml config file, while any node-specific settings such as node.name could be specified on the command line.

To run Elasticsearch as a daemon, specify -d on the command line, and record the process ID in a file using the -p option:

./bin/elasticsearch -d -p pid

If you have password-protected the Elasticsearch keystore, you will be prompted to enter the keystore’s password. See Secure settings for more details.

Log messages can be found in the $ES_HOME/logs/ directory.
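
For example, if you used the cluster name from the earlier configuration example, you can follow the log from within $ES_HOME with a command like this:

tail -f logs/elasticsearch-demo.log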

To shut down Elasticsearch, kill the process ID recorded in the pid file:

pkill -F pid

Note

The Elasticsearch .tar.gz package does not include the systemd module. To manage Elasticsearch as a service, use the Debian or RPM package instead.

You can test that your Elasticsearch node is running by sending an HTTPS request to port 9200 on localhost:

curl --cacert $ES_HOME/config/certs/http_ca.crt \
-u elastic:$ELASTIC_PASSWORD https://localhost:9200
  1. --cacert: Path to the generated http_ca.crt certificate for the HTTP layer.
  2. Replace $ELASTIC_PASSWORD with the elastic superuser password. Ensure that you use https in your call, or the request will fail.

The call returns a response like this:

{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "9.0.0-SNAPSHOT",
    "build_type" : "tar",
    "build_hash" : "f27399d",
    "build_flavor" : "default",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "10.0.0",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}

If you are deploying a multi-node cluster, then the enrollment process adds all existing nodes to each newly enrolled node's discovery.seed_hosts setting. However, you need to go back and edit the configuration of every node in the cluster so that each node can restart and rejoin the cluster as expected.

Note

Because the initial node in the cluster is bootstrapped as a single-node cluster, it won't have discovery.seed_hosts configured. This setting is mandatory for multi-node clusters and must be added manually to the first node.

Perform the following steps on each node in the cluster:

  1. Open elasticsearch.yml in a text editor.
  2. Comment out or remove the cluster.initial_master_nodes setting, if present.
  3. Update the discovery.seed_hosts value so it contains the IP address and port of each of the master-eligible Elasticsearch nodes in the cluster (see the sketch after this list). On the first node in the cluster, you need to add the discovery.seed_hosts setting manually.
  4. Optionally, restart the Elasticsearch service to validate your configuration changes.
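
As a sketch, the resulting entry in elasticsearch.yml might look like the following, where the addresses are placeholders for your own master-eligible nodes and 9300 is the default transport port:

discovery.seed_hosts: ["192.168.0.1:9300", "192.168.0.2:9300", "192.168.0.3:9300"]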

If you don't perform these steps, then one or more nodes will fail the discovery configuration bootstrap check when they are restarted.

For more information, refer to Discovery and cluster formation.

When you start Elasticsearch for the first time, TLS is configured automatically for the HTTP layer. A CA certificate is generated and stored on disk at:

$ES_HOME/config/certs/http_ca.crt

The hex-encoded SHA-256 fingerprint of this certificate is also output to the terminal. Any clients that connect to Elasticsearch, such as the Elasticsearch Clients, Beats, standalone Elastic Agents, and Logstash, must validate that they trust the certificate that Elasticsearch uses for HTTPS. Fleet Server and Fleet-managed Elastic Agents are automatically configured to trust the CA certificate. Other clients can establish trust by using either the fingerprint of the CA certificate or the CA certificate itself.

If the auto-configuration process already completed, you can still obtain the fingerprint of the security certificate. You can also copy the CA certificate to your machine and configure your client to use it.

Copy the fingerprint value that’s output to your terminal when Elasticsearch starts, and configure your client to use this fingerprint to establish trust when it connects to Elasticsearch.

If the auto-configuration process already completed, you can still obtain the fingerprint of the security certificate by running the following command. The path is to the auto-generated CA certificate for the HTTP layer.

openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt

The command returns the security certificate, including the fingerprint. The issuer should be Elasticsearch security auto-configuration HTTP CA.

issuer= /CN=Elasticsearch security auto-configuration HTTP CA
SHA256 Fingerprint=<fingerprint>

If your library doesn’t support a method of validating the fingerprint, the auto-generated CA certificate is created in the following directory on each Elasticsearch node:

$ES_HOME/config/certs/http_ca.crt

Copy the http_ca.crt file to your machine and configure your client to use this certificate to establish trust when it connects to Elasticsearch.
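
For example, once http_ca.crt has been copied to the client machine, a curl request can use it to verify the connection; the host below is a placeholder for your Elasticsearch node's address:

curl --cacert ./http_ca.crt -u elastic:$ELASTIC_PASSWORD https://<elasticsearch-host>:9200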

The archive distributions are entirely self-contained. All files and directories are, by default, contained within $ES_HOME — the directory created when unpacking the archive.

This is convenient because you don’t have to create any directories to start using Elasticsearch, and uninstalling Elasticsearch is as easy as removing the $ES_HOME directory. However, you should change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on.
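
For example, a minimal sketch with hypothetical paths; choosing locations outside $ES_HOME means that removing or upgrading the installation directory does not delete your data or logs:

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

The location of the config directory itself is controlled with the ES_PATH_CONF environment variable rather than a setting in elasticsearch.yml, as shown in the table below.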

Type    | Description                                                                                         | Default location                           | Setting
home    | Elasticsearch home directory or $ES_HOME                                                           | Directory created by unpacking the archive |
bin     | Binary scripts including elasticsearch to start a node and elasticsearch-plugin to install plugins | $ES_HOME/bin                               |
conf    | Configuration files, including elasticsearch.yml                                                   | $ES_HOME/config                            | ES_PATH_CONF
conf    | Generated TLS keys and certificates for the transport and HTTP layer                               | $ES_HOME/config/certs                      |
data    | The location of the data files of each index / shard allocated on the node                         | $ES_HOME/data                              | path.data
logs    | Log files location                                                                                  | $ES_HOME/logs                              | path.logs
plugins | Plugin files location. Each plugin will be contained in a subdirectory                             | $ES_HOME/plugins                           |
repo    | Shared file system repository locations. Can hold multiple locations. A file system repository can be placed into any subdirectory of any directory specified here | Not configured | path.repo

You now have a test Elasticsearch environment set up. Before you start serious development or go into production with Elasticsearch, you must do some additional setup:

You can also do the following: