---
title: Start a multi-node cluster with Docker Compose
description: Use Docker Compose to start a three-node Elasticsearch cluster with Kibana. Docker Compose lets you start multiple containers with a single command. Install...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose
products:
  - Elastic Stack
  - Elasticsearch
  - Kibana
applies_to:
  - Self-managed Elastic deployments: Generally available
---

# Start a multi-node cluster with Docker Compose
Use Docker Compose to start a three-node Elasticsearch cluster with Kibana. Docker Compose lets you start multiple containers with a single command.

## Hardened Docker images

For additional security, you can use the hardened [Wolfi](https://wolfi.dev/) image. Wolfi images require Docker version 20.10.10 or higher.
To use the Wolfi image, append `-wolfi` to the image name in the Docker command.
For example:
<tab-set>
  <tab-item title="Latest">
    ```sh
    docker pull docker.elastic.co/elasticsearch/elasticsearch-wolfi:9.3.2
    ```
  </tab-item>

  <tab-item title="Specific version">
    ```sh
    docker pull docker.elastic.co/elasticsearch/elasticsearch-wolfi:<SPECIFIC.VERSION.NUMBER>
    ```
    To download and install a specific version of the Elastic Stack, replace `<SPECIFIC.VERSION.NUMBER>` with the version number you want, for example `9.0.0`.
  </tab-item>
</tab-set>
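
If you want the Docker Compose cluster itself to run on the hardened image, you can edit the image reference in `docker-compose.yml`. The following is a minimal sketch of that change; the service names and the exact `image:` lines in the downloaded file may differ between releases, so adapt it to what the file actually contains:

```yml
# docker-compose.yml (excerpt) -- illustrative sketch only.
services:
  es01:
    # Standard image:
    # image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    # Hardened Wolfi image (append -wolfi to the image name):
    image: docker.elastic.co/elasticsearch/elasticsearch-wolfi:${STACK_VERSION}
```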


## Configure and start the cluster

1. Install Docker Compose. Visit the [Docker Compose docs](https://docs.docker.com/compose/install/) to install Docker Compose for your environment.
   If you’re using Docker Desktop, Docker Compose is installed automatically. Make sure to allocate at least 4GB of memory to Docker Desktop. You can adjust memory usage in Docker Desktop by going to **Settings > Resources**.
2. Create or navigate to an empty directory for the project.
3. Download and save the following files in the project directory:
   - [`.env`](https://github.com/elastic/elasticsearch/blob/main/docs/reference/setup/install/docker/.env)
   - [`docker-compose.yml`](https://github.com/elastic/elasticsearch/blob/main/docs/reference/setup/install/docker/docker-compose.yml)
4. In the `.env` file, specify a password for the `ELASTIC_PASSWORD` and `KIBANA_PASSWORD` variables.
   The passwords must be at least 6 characters long and contain only alphanumeric characters. They can't contain special characters, such as `!` or `@`, because the bash script included in the `docker-compose.yml` file only works with alphanumeric characters. Example:
   ```txt
   # Password for the 'elastic' user (at least 6 characters)
   ELASTIC_PASSWORD=changeme

   # Password for the 'kibana_system' user (at least 6 characters)
   KIBANA_PASSWORD=changeme
   ...
   ```
5. Edit the `.env` file to set the `STACK_VERSION`:
   <tab-set>
   <tab-item title="Latest">
   Set the stack version to the current Elastic Stack version.
   ```txt
   ...
   # Version of Elastic products
   STACK_VERSION=9.3.2
   ...
   ```
   </tab-item>

   <tab-item title="Specific version">
   Replace `<SPECIFIC.VERSION.NUMBER>` with the Elasticsearch version number you want, for example `9.0.0`.
   ```txt
   ...
   # Version of Elastic products
   STACK_VERSION=<SPECIFIC.VERSION.NUMBER>
   ...
   ```
   </tab-item>
   </tab-set>
6. By default, the Docker Compose configuration exposes port `9200` on all network interfaces.
   To avoid exposing port `9200` to external hosts, set `ES_PORT` to `127.0.0.1:9200` in the `.env` file. This ensures Elasticsearch is only accessible from the host machine.
   ```txt
   ...
   # Port to expose Elasticsearch HTTP API to the host
   #ES_PORT=9200
   ES_PORT=127.0.0.1:9200
   ...
   ```
7. To start the cluster, run the following command from the project directory.
   ```sh
   docker-compose up -d
   ```
8. After the cluster has started, open [http://localhost:5601](http://localhost:5601) in a web browser to access Kibana.
9. Log in to Kibana as the `elastic` user using the `ELASTIC_PASSWORD` you set earlier.
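
You can also verify that Elasticsearch itself is reachable over HTTPS. The sketch below assumes the container name (`es01`) and the certificate path used by the upstream `docker-compose.yml`; adjust both if your copy of the file differs:

```shell
# Copy the CA certificate generated during cluster setup out of a node
# container (container name and path are assumptions based on the
# upstream compose file).
docker cp es01:/usr/share/elasticsearch/config/certs/ca/ca.crt /tmp/.

# Query the Elasticsearch HTTP API; enter the ELASTIC_PASSWORD you set
# in .env when prompted.
curl --cacert /tmp/ca.crt -u elastic https://localhost:9200
```

A successful response is a JSON document that includes the cluster name and version information.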


## Stop and remove the cluster

To stop the cluster, run `docker-compose down`. The data in the Docker volumes is preserved and loaded when you restart the cluster with `docker-compose up`.
```sh
docker-compose down
```

To delete the network, containers, and volumes when you stop the cluster, specify the `-v` option:
```sh
docker-compose down -v
```
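
To confirm whether the data volumes still exist after stopping the cluster, you can list them. By default, Docker Compose prefixes volume names with the name of the project directory:

```shell
# List Docker volumes; cluster volumes appear with the project
# directory name as a prefix. After `docker-compose down -v`,
# they should no longer be listed.
docker volume ls
```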


## Next steps

You now have a test Elasticsearch environment set up. Before you start serious development or move to production, review the [requirements and recommendations](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/self-managed/install-elasticsearch-docker-prod) for running Elasticsearch in Docker in production.