Start a multi-node cluster with Docker Compose
Use Docker Compose to start a three-node Elasticsearch cluster with Kibana. Docker Compose lets you start multiple containers with a single command.
You can also use the hardened Wolfi image for additional security. Using Wolfi images requires Docker version 20.10.10 or higher.
To use the Wolfi image, append -wolfi to the image name in the Docker command. For example:
docker pull docker.elastic.co/elasticsearch/elasticsearch-wolfi:9.0.0
Install Docker Compose. See the Docker Compose docs for installation instructions for your environment.
If you’re using Docker Desktop, Docker Compose is installed automatically. Make sure to allocate at least 4GB of memory to Docker Desktop. You can adjust memory usage in Docker Desktop by going to Settings > Resources.
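As a quick sanity check, you can confirm from the shell that Docker and a Compose implementation are available; this is a sketch, and the fallback to the standalone docker-compose binary is an assumption for older installs:

```shell
# Sketch: confirm Docker and a Compose implementation are on PATH.
if command -v docker >/dev/null 2>&1; then
  # Prefer the Compose plugin; fall back to the standalone binary.
  docker compose version 2>/dev/null || docker-compose version 2>/dev/null \
    || echo "Docker is installed, but Compose was not found"
else
  echo "Docker is not installed or not on PATH"
fi
```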
Create or navigate to an empty directory for the project.
Download and save the following files in the project directory:
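The two steps above can be sketched in the shell; the directory name elastic-cluster is an arbitrary example, not a name this guide requires:

```shell
# Sketch: create an empty project directory for the Compose files.
# "elastic-cluster" is an arbitrary example name.
mkdir -p elastic-cluster
cd elastic-cluster
# Download docker-compose.yml and .env from this guide into this
# directory before continuing.
```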
In the .env file, set passwords for the ELASTIC_PASSWORD and KIBANA_PASSWORD variables. The passwords must be alphanumeric and can't contain special characters, such as ! or @. The bash script included in the docker-compose.yml file only works with alphanumeric characters. Example:

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme
...
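Because the bundled bash script only handles alphanumeric passwords, a quick shell check can catch an invalid value before you write it to .env. This is a minimal sketch; the six-character minimum mirrors the comments in the example above:

```shell
# Sketch: reject passwords with special characters before putting them in .env.
PASSWORD='changeme'   # candidate value for ELASTIC_PASSWORD or KIBANA_PASSWORD
if printf '%s' "$PASSWORD" | grep -Eq '^[A-Za-z0-9]{6,}$'; then
  echo "password ok"
else
  echo "password must be at least 6 alphanumeric characters" >&2
fi
```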
In the .env file, set STACK_VERSION to the current Elastic Stack version.

...
# Version of Elastic products
STACK_VERSION=9.0.0
...
By default, the Docker Compose configuration exposes port 9200 on all network interfaces. To avoid exposing port 9200 to external hosts, set ES_PORT to 127.0.0.1:9200 in the .env file. This ensures Elasticsearch is only accessible from the host machine.

...
# Port to expose Elasticsearch HTTP API to the host
#ES_PORT=9200
ES_PORT=127.0.0.1:9200
...
To start the cluster, run the following command from the project directory.
docker-compose up -d
After the cluster has started, open http://localhost:5601 in a web browser to access Kibana.
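If you prefer the command line, a curl probe against Kibana's port (5601, from this guide) can confirm it is answering before you open the browser. This is a sketch; Kibana can take a minute or two after the containers start before it returns a 200:

```shell
# Sketch: print the HTTP status code Kibana returns on port 5601.
# Prints 000 (and a note) if nothing is listening yet.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601 \
  || echo "Kibana is not reachable yet"
```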
Log in to Kibana as the elastic user with the ELASTIC_PASSWORD you set earlier.
To stop the cluster, run:

docker-compose down

The data in the Docker volumes is preserved and loaded when you restart the cluster with docker-compose up.
To delete the network, containers, and volumes when you stop the cluster, specify the -v option:
docker-compose down -v
You now have a test Elasticsearch environment set up. Before you start serious development or go into production, review the requirements and recommendations for running Elasticsearch in Docker in production.