Upgrade Elasticsearch
This guide outlines the detailed steps for performing an upgrade of a self-managed Elasticsearch cluster from an earlier to a later version.
The Elastic Cloud platform handles cluster management and upgrades automatically, so you don't need to follow the steps in this guide on cloud deployments such as Elastic Cloud on Kubernetes, Elastic Cloud Hosted, Elastic Cloud Enterprise, or Elastic Cloud Serverless.
If you want to use these automated platform features, consider deploying your Elasticsearch cluster using one of the available cloud deployment methods. Refer to migrating your data for guidance on moving an existing self-managed cluster to an Elastic-managed deployment.
Before you start the rolling upgrade procedure, plan your upgrade and take the upgrade preparation steps.
Upgrading from a release candidate build such as 9.0.0-rc1 is not supported. Use pre-releases only for testing in a temporary environment.
Elasticsearch does not support downgrading to an earlier version. A mixed-version cluster is only valid during a rolling upgrade. After at least one node runs the target version, the cluster may perform updates that you can't roll back, so you must run the same upgrade through to the last node.
Do not start an upgrade unless you can finish the rolling upgrade for every node. If you have to stop mid-upgrade, you cannot return the existing cluster to the earlier version. Instead, you must build a new, empty cluster on the earlier version and restore your data from a snapshot.
Cluster upgrades can be performed as:
(Recommended) A rolling restart
This option allows you to upgrade your cluster one node at a time without interrupting service. Running multiple versions of Elasticsearch in the same cluster beyond the duration of an upgrade is not supported, as shards cannot be replicated from upgraded nodes to nodes running the older version. Running more than two versions of Elasticsearch in the same cluster is not supported.
A full restart
This option requires that you upgrade every node in a coordinated way by taking your cluster offline: all nodes are stopped, upgraded, and started together. With no high availability during the downtime, data loss is possible if the upgrade process is not managed carefully.
The following guide describes rolling restarts as the main upgrade path, which is the default method for production environments. You can use the same workflow for full restart upgrades, except you upgrade all nodes simultaneously.
When performing a rolling upgrade, upgrade one node at a time in the order of the node role groupings below. If a node is assigned multiple node roles, it belongs in the first applicable group in the list. This ensures that:
- Built-in plugins, such as ILM and transforms, can continue to process data without errors.
- All nodes can join the cluster during the upgrade. Upgraded nodes can join a cluster with an earlier version master, but earlier version nodes cannot always join a cluster with an upgraded master.
The recommended upgrade order for Elasticsearch nodes is the following:
1. Upgrade the `data` nodes first, tier-by-tier, in the following order:
   - The `data_frozen` tier.
   - The `data_cold` tier.
   - The `data_warm` tier.
   - The `data_hot` tier.
   - Any other `data` nodes, such as the `data_content` tier, which are not in a data tier.
2. Upgrade all remaining nodes that are neither master-eligible nor data nodes. The order within this grouping does not matter. This includes nodes with the following roles:
   - The `ml` machine learning role.
   - The `ingest` role.
   - Any dedicated coordinating nodes.
   - The `transform` role.
   - The `remote_cluster_client` role.
3. Upgrade the `master` and `voting_only` master-eligible nodes last.
You can get the list of nodes in a specific node role using the get node information API. For example, use the following request for data_frozen:
GET /_nodes/data_frozen:true/_none
If you accidentally upgrade a node ahead of its designated grouping, you might encounter various errors; expect them to continue until the rolling upgrade is completed for all nodes.
Within data node sub-groupings, you might encounter various shard allocation errors which you can expect to continue until more nodes within the sub-grouping are upgraded. For example, these can display as follows:
cannot allocate replica shard to a node with version [x.x.x] since this is older than the primary version [y.y.y]
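If you need to dig into why a specific shard remains unassigned during the mixed-version period, the cluster allocation explain API can help. Called with no request body, as in this minimal example, it reports on the first unassigned shard it finds and lists the reason each node was ruled out:

```
GET _cluster/allocation/explain
```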
To upgrade a cluster, complete these steps for every node:
- (Optional) Disable shard allocation

  When you shut down a data node, the allocation process waits for `index.unassigned.node_left.delayed_timeout` (by default, one minute) before starting to replicate the shards on that node to other nodes in the cluster, which can involve a lot of I/O. Because the node is shortly going to be restarted, this I/O is unnecessary. You can avoid racing the clock by disabling allocation of replicas before shutting down data nodes:

  ```
  PUT _cluster/settings
  {
    "persistent": {
      "cluster.routing.allocation.enable": "primaries"
    }
  }
  ```
- (Optional) Stop non-essential indexing and perform a flush

  While you can continue indexing during the upgrade, shard recovery is much faster if you temporarily stop non-essential indexing and perform a flush.

  ```
  POST /_flush
  ```
- (Optional) Temporarily stop the tasks associated with active machine learning jobs and datafeeds

  It is possible to leave your machine learning jobs running during the upgrade, but it puts increased load on the cluster. When you shut down a machine learning node, its jobs automatically move to another node and restore the model states.

  Note: Any machine learning indices created before 8.x must be reindexed before upgrading, which you can initiate from the Upgrade Assistant in 8.19.

  You have two options:

  - Temporarily halt the tasks associated with your machine learning jobs and datafeeds and prevent new jobs from opening by using the set upgrade mode API:

    ```
    POST _ml/set_upgrade_mode?enabled=true
    ```

    When you disable upgrade mode, the jobs resume using the last model state that was automatically saved. This option avoids the overhead of managing active jobs during the upgrade and is faster than explicitly stopping datafeeds and closing jobs.

  - Stop all datafeeds and close all jobs. This option saves the model state at the time of closure. When you reopen the jobs after the upgrade, they use the exact same model. However, saving the latest model state takes longer than using upgrade mode, especially if you have a lot of jobs or jobs with large model states.
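  If you choose the second option, the stop datafeeds and close anomaly detection jobs APIs both accept the `_all` wildcard, so a sketch like the following (shown only as an example; target specific datafeeds and jobs if you prefer) stops and closes everything in one go:

  ```
  POST _ml/datafeeds/_all/_stop
  POST _ml/anomaly_detectors/_all/_close
  ```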
- Shut down a single node

  How you shut down the node depends on how Elasticsearch is currently run. For example, if you are using `systemd` or SysV `init`, run the commands below.

  If you are running Elasticsearch with `systemd`:

  ```
  sudo systemctl stop elasticsearch.service
  ```

  If you are running Elasticsearch with SysV `init`:

  ```
  sudo -i service elasticsearch stop
  ```
- Upgrade the version of the node you shut down

  To upgrade using a Debian or RPM package:

  - Use `rpm` or `dpkg` to install the new package. All files are installed in the appropriate location for the operating system and Elasticsearch config files are not overwritten.

  To upgrade using a zip or compressed tarball:

  - Extract the zip or tarball to a new directory. This is critical if you are not using external `config` and `data` directories.
  - Set the `ES_PATH_CONF` environment variable to specify the location of your external `config` directory and `jvm.options` file. If you are not using an external `config` directory, copy your old configuration over to the new installation.
  - Set `path.data` in `config/elasticsearch.yml` to point to your external data directory. If you are not using an external `data` directory, copy your old data directory over to the new installation.

    Important: If you use monitoring features, re-use the data directory when you upgrade Elasticsearch. Monitoring identifies unique Elasticsearch nodes by using the persistent UUID, which is stored in the data directory.

  - Set `path.logs` in `config/elasticsearch.yml` to point to the location where you want to store your logs. If you do not specify this setting, logs are stored in the directory you extracted the archive to.

  Tip: When you extract the zip or tarball packages, the `elasticsearch-{{bare_version}}` directory contains the Elasticsearch `config`, `data`, and `logs` directories. We recommend moving these directories out of the Elasticsearch directory so that there is no chance of deleting them when you upgrade Elasticsearch. To specify the new locations, use the `ES_PATH_CONF` environment variable and the `path.data` and `path.logs` settings. For more information, refer to Important Elasticsearch configuration. The Debian and RPM packages place these directories in the appropriate place for each operating system. In production, we recommend using the deb or rpm package.
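  For example, on a node installed from a package, the upgrade might look like one of the following. The package file names are placeholders; substitute the file for your target version and architecture:

  ```
  # Debian-based systems (placeholder file name)
  sudo dpkg -i elasticsearch-<version>-amd64.deb

  # RPM-based systems (placeholder file name)
  sudo rpm --upgrade elasticsearch-<version>-x86_64.rpm
  ```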
- Merge the config overrides of the shut down node

  Apply any required Elasticsearch configuration changes to align your configuration with the version you are upgrading to. The most common settings to review include:

  - Leave `cluster.initial_master_nodes` unset inside your `elasticsearch.yml` when performing a rolling upgrade. Each upgraded node is joining an existing cluster, so there is no need for cluster bootstrapping. You must configure either `discovery.seed_hosts` or `discovery.seed_providers` on every node.
  - Elasticsearch ships its recommended JVM settings inside `jvm.options`, which can change across versions. Ensure any overrides are copied into the updated version's `jvm.options.d` files to avoid drift.
  - Elasticsearch ships its recommended logging settings inside `log4j2.properties`, which can change across versions. Ensure any overrides are copied into the updated version's files to avoid drift.
  - Avoid drift in OS-level system settings between nodes. That is most likely when you add or rebuild a node and the OS is not a copy of the previous one. For common settings and how to apply them, refer to System settings configuration methods.
- Upgrade any plugins on the shut down node

  Use the `elasticsearch-plugin` script to install the upgraded version of each installed Elasticsearch plugin. All plugins must be upgraded when you upgrade a node.
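  For example, a plugin such as analysis-icu (used here purely as an illustration, and with a path that assumes a package installation) can be brought to the node's version by removing the old copy and installing it again from the upgraded distribution:

  ```
  # Remove the plugin version that matched the old node version
  sudo /usr/share/elasticsearch/bin/elasticsearch-plugin remove analysis-icu

  # Install the plugin version that matches the upgraded node
  sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-icu
  ```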
- Start the upgraded node

  Start the newly-upgraded node and confirm that it joins the cluster by checking the log file or by submitting a `_cat/nodes` request:

  ```
  GET _cat/nodes
  ```
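  For example, if the node is managed with `systemd` or SysV `init`, it can be started with the counterpart of the shutdown commands used earlier:

  ```
  # systemd
  sudo systemctl start elasticsearch.service

  # SysV init
  sudo -i service elasticsearch start
  ```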
- Re-enable shard allocation

  For data nodes, once the node has joined the cluster, remove the `cluster.routing.allocation.enable` setting to enable shard allocation and start using the node:

  ```
  PUT _cluster/settings
  {
    "persistent": {
      "cluster.routing.allocation.enable": null
    }
  }
  ```
- Wait for the node to recover

  Before upgrading the next node, wait for the cluster to finish shard allocation and report `status: green`. You can check progress by submitting a cluster health API request:

  ```
  GET _cluster/health
  ```

  Shards that were not flushed might take longer to recover. You can monitor the recovery status of individual shards using the cat recovery API:

  ```
  GET _cat/recovery?v=true&expand_wildcards=all&active_only=true
  ```

  If you stopped indexing, it is safe to resume indexing as soon as recovery completes.
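  If you are scripting the upgrade, the cluster health API also accepts a `wait_for_status` parameter, so a request along these lines (the timeout value is only an example) blocks until the cluster reports green or the timeout expires:

  ```
  GET _cluster/health?wait_for_status=green&timeout=60s
  ```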
- Restart machine learning jobs

  If you temporarily halted the tasks associated with your machine learning jobs, use the set upgrade mode API to return them to active states:

  ```
  POST _ml/set_upgrade_mode?enabled=false
  ```

  If you closed all machine learning jobs before the upgrade, open the jobs and start the datafeeds from Kibana or with the open jobs and start datafeed APIs.
If you plan to upgrade nodes in quick succession, you might choose to leave indexing stopped and machine learning jobs and feeds paused throughout the entire upgrade process. You will still want to re-enable shard allocation after each node's restart.
To monitor which nodes have been upgraded, use the CAT nodes API:
GET _cat/nodes?v=true&h=name,ip,version,uptime
During a rolling upgrade, the cluster continues to operate normally. New functionality is either inactive or operates in a backward-compatible mode until the last old-version node leaves the cluster. New functionality becomes operational when all nodes in the cluster are running the new version.
Usually, the old-version nodes only leave the cluster when you shut them down to upgrade them. In this case, the last old-version node leaves the cluster when there are no more nodes to upgrade. However, cluster fault detection might cause an old-version node to leave the cluster, temporarily or permanently (until you intervene), before you purposely shut it down.
If all the remaining old-version nodes unexpectedly leave the cluster during an upgrade, the cluster will consider itself to be fully-upgraded, automatically activate new functionality, and leave its backward-compatible mode. Once that has happened, there is no way to return the cluster to a state that is compatible with the old-version nodes. Nodes running the earlier version will not be able to join this fully-upgraded cluster. To bring these nodes back into the cluster, upgrade them. Elasticsearch maintains the data in the data paths of the older nodes and will recover the cluster to health using this data after the nodes are fully upgraded.
If you stop half or more of the master-eligible nodes all at once during the upgrade, the cluster will become unavailable because the remaining nodes cannot form a quorum of the voting configuration. You must restart all the stopped master-eligible nodes to allow the cluster to re-form. If the re-formed cluster comprises only upgraded nodes, then the cluster will consider itself to be fully-upgraded, automatically activate new functionality, and leave its backward-compatible mode. In this case, upgrade all other nodes running the old version to enable them to join the re-formed cluster. Upgrade the master-eligible nodes last to make it less likely that this occurs.
In a testing or development environment with only one or two master-eligible nodes, you cannot avoid stopping half or more of the master-eligible nodes, so the cluster will always become unavailable at some point during the upgrade. When you restart the master-eligible nodes after this unavailability, the cluster will re-form with a single upgraded node, which is therefore fully-upgraded and will reject older nodes' attempts to re-join the cluster. Upgrade the master-eligible nodes last to avoid these rejections.
If you upgrade an Elasticsearch cluster that uses deprecated cluster or index settings that are not used in the target version, they are archived. You should remove any archived settings after upgrading. For more information, refer to Archived settings.
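For example, archived cluster settings can typically be cleared with a wildcard request like the following; check the Archived settings documentation for your target version before running it, and note that archived index settings must be removed separately on each affected index:

```
PUT _cluster/settings
{
  "persistent": {
    "archived.*": null
  }
}
```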
Once you've successfully upgraded Elasticsearch, continue upgrading the remaining Elastic Stack components: