---
title: Migrate Fleet-managed Elastic Agents from one cluster to another
description: There are situations where you may need to move your installed Elastic Agents from being managed in one cluster to being managed in another cluster. For...
url: https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/migrate-elastic-agent
products:
  - Elastic Agent
  - Fleet
applies_to:
  - Elastic Stack: Generally available
---

# Migrate Fleet-managed Elastic Agents from one cluster to another
There are situations where you may need to move your installed Elastic Agents from being managed in one cluster to being managed in another cluster.
For a seamless migration, we advise that you create an identical agent policy in the new cluster, configured in the same manner as in the original cluster. There are a few ways to do this.
This guide takes you through the steps to migrate your Elastic Agents by snapshotting a source cluster and restoring it on a target cluster. These instructions assume that you have an Elastic Cloud deployment, but they can be applied to on-premise clusters as well.

## Take a snapshot of the source cluster

Refer to the [Snapshot and restore](https://www.elastic.co/elastic/docs-builder/docs/3028/deploy-manage/tools/snapshot-and-restore) documentation for full details. In short, to create a new snapshot in an Elastic Cloud deployment:
1. In Kibana, open the main menu, then click **Manage this deployment**.
2. In the deployment menu, select **Snapshots**.
3. Click **Take snapshot now**.
   ![Deployments Snapshots page](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-take-snapshot.png)
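If you prefer working with the API, a snapshot can also be triggered with the Elasticsearch create snapshot API. The sketch below only prints the request for review; the host and snapshot name are placeholders, and `found-snapshots` is the repository name typically used on Elastic Cloud deployments (verify the repository name in your environment):

```shell
# Sketch: trigger the snapshot with the Elasticsearch create snapshot API.
# ES_HOST and the snapshot name are placeholders; "found-snapshots" is the
# repository typically managed by Elastic Cloud (confirm in your deployment).
ES_HOST="my-deployment.es.us-central1.gcp.cloud.es.io:9243"
SNAPSHOT_NAME="pre-migration-snapshot"
REQUEST="curl --request PUT -u username:password --url https://${ES_HOST}/_snapshot/found-snapshots/${SNAPSHOT_NAME}"
echo "$REQUEST"   # review the request, then run it against your deployment
```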


## Create a new target cluster from the snapshot

You can create a new cluster based on the snapshot taken in the previous step, and then migrate your Elastic Agents and Fleet to the new cluster. For best results, it’s recommended that the new target cluster be at the same version as the cluster that the agents are migrating from.
1. Open the Elastic Cloud console and select **Create deployment**.
2. Select **Restore snapshot data**.
3. In the **Restore from** field, select your source deployment.
4. Choose your deployment settings; ideally, select the same Elastic Stack version as the source cluster.
5. Click **Create deployment**.
   ![Create a deployment page](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-new-deployment.png)


## Update settings in the target cluster

When the target cluster is available, you'll need to adjust a few settings. Take some time to examine the Fleet setup in the new cluster.
1. Open the Kibana menu and select **Fleet**.
2. On the **Agents** tab, your agents should be visible; however, they'll appear as `Offline`. This is because these agents have not yet enrolled in the new target cluster and are still enrolled in the original source cluster.
   ![Agents tab in Fleet showing offline agents](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-agents-offline.png)
3. Open the Fleet **Settings** tab.
4. Examine the Fleet configurations captured there. These settings are copied from the snapshot of the source cluster and may not be valid in the target cluster, so they need to be modified accordingly.
   In the following example, both the **Fleet server hosts** and the **Outputs** settings are copied over from the source cluster:
   ![Settings tab in Fleet showing source deployment host and output settings](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-host-output-settings.png)
   The next steps explain how to obtain the relevant Fleet Server host and Elasticsearch output details applicable to the new target cluster in Elastic Cloud.


### Modify the Elasticsearch output

1. In the new target cluster on Elastic Cloud, open the Fleet **Settings** tab. In the **Outputs** section, you will find an internal output named `Elastic Cloud internal output`. The host address is in the form:
   `https://<cluster-id-target>.containerhost:9244`
   Record this `<cluster-id-target>` from the target cluster. In the example shown, the ID is `fcccb85b651e452aa28703a59aea9b00`.
2. Also in the **Outputs** section, notice that the default Elasticsearch output (that was copied over from the source cluster) is also in the form:
   `https://<cluster-id-source>.<cloud endpoint address>:443`.
   Modify the Elasticsearch output so that the cluster ID is the same as that for `Elastic Cloud internal output`. In this example we also rename the output to `New Elasticsearch`.
   ![Outputs section showing the new Elasticsearch host setting](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-elasticsearch-output.png)
   In this example, the `New Elasticsearch` output and the `Elastic Cloud internal output` now have the same cluster ID, namely `fcccb85b651e452aa28703a59aea9b00`.

You have now created an Elasticsearch output that agents can use to write data to the new, target cluster. For on-premise environments not using Elastic Cloud, you should similarly be able to use the host address of the new cluster.
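The rewrite above amounts to swapping the cluster ID into the output host. As a quick sanity check, the new host can be assembled from its parts; the cluster ID is the example value from this guide, while the cloud endpoint address is a hypothetical region domain:

```shell
# Assemble the new Elasticsearch output host from the target cluster ID.
# The cluster ID is the example from this guide; the endpoint address is a
# hypothetical region domain and will differ for your deployment.
TARGET_CLUSTER_ID="fcccb85b651e452aa28703a59aea9b00"
CLOUD_ENDPOINT="us-central1.gcp.cloud.es.io"
NEW_OUTPUT_HOST="https://${TARGET_CLUSTER_ID}.${CLOUD_ENDPOINT}:443"
echo "$NEW_OUTPUT_HOST"
```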

### Modify the Fleet Server host

Like the Elasticsearch host, the Fleet Server host has also changed with the new target cluster. If you're deploying Fleet Server on premise, the host address has probably not changed and this setting does not need to be modified. We still recommend verifying that the agents can reach the on-premise Fleet Server host (which they should be able to, as they were able to connect to it prior to the migration).
The Elastic Cloud Fleet Server host has a similar format to the Elasticsearch output:
`https://<deployment-id>.fleet.<domain>.io`
To configure the correct Elastic Cloud Fleet Server host you will need to find the target cluster’s full `deployment-id`, and use it to replace the original `deployment-id` that was copied over from the source cluster.
The easiest way to find the `deployment-id` is from the deployment URL:
1. From the Kibana menu select **Manage this deployment**.
2. Copy the deployment ID from the URL in your browser’s address bar.
   ![Deployment management page](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-deployment-id.png)
   In this example, the new deployment ID is `eed4ae8e2b604fae8f8d515479a16b7b`.
   Using that value for `deployment-id`, the new Fleet Server host URL is:
   `https://eed4ae8e2b604fae8f8d515479a16b7b.fleet.us-central1.gcp.cloud.es.io:443`
3. In the target cluster, under **Fleet server hosts**, replace the original host URL with the new value.
   ![Fleet server hosts showing the new host URL](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-fleet-server-host.png)
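To double-check the value before saving, the Fleet Server host URL can be assembled from the deployment ID and the region domain; both values below are the example values used in this guide:

```shell
# Assemble the target Fleet Server host URL from the deployment ID.
# Both values are the example values used in this guide; yours will differ.
DEPLOYMENT_ID="eed4ae8e2b604fae8f8d515479a16b7b"
CLOUD_DOMAIN="us-central1.gcp.cloud.es.io"
FLEET_SERVER_HOST="https://${DEPLOYMENT_ID}.fleet.${CLOUD_DOMAIN}:443"
echo "$FLEET_SERVER_HOST"
```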


### Reset the Elastic Cloud policy

On your target cluster, certain settings from the original Elastic Cloud Elastic Agent policy may still be retained and need to be updated to reference the new cluster. For example, in the APM policy installed to the Elastic Cloud Elastic Agent policy, the original and now outdated APM URL is preserved. This can be fixed by running the `reset_preconfigured_agent_policies` API request. When you reset the policy, all APM integration settings are reset, including the secret key and any tail-based sampling configuration.
To reset the Elastic Cloud Elastic Agent policy:
1. Choose one of the API requests below and submit it through a terminal window.
   - If you’re using Kibana version 8.11 or higher, run:

     ```shell
     curl --request POST \
       --url https://{KIBANA_HOST:PORT}/internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \
       -u username:password \
       --header 'Content-Type: application/json' \
       --header 'kbn-xsrf: as' \
       --header 'elastic-api-version: 1'
     ```

   - If you’re using a Kibana version below 8.11, run:

     ```shell
     curl --request POST \
       --url https://{KIBANA_HOST:PORT}/internal/fleet/reset_preconfigured_agent_policies/policy-elastic-agent-on-cloud \
       -u username:password \
       --header 'Content-Type: application/json' \
       --header 'kbn-xsrf: as'
     ```

   After running the command, your Elastic Cloud agent policy settings should all be updated appropriately.

<note>
  After running the command, a warning message may appear in Fleet indicating that Fleet Server is not healthy. Additionally, the Elastic Agent associated with the Elastic Cloud agent policy may disappear from the list of agents. To remedy this, restart Integrations Server:
  1. From the Kibana menu, select **Manage this deployment**.
  2. In the deployment menu, select **Integrations Server**.
  3. On the **Integrations Server** page, select **Force Restart**.
  After the restart, Integrations Server will enroll a new Elastic Agent for the Elastic Cloud agent policy and Fleet Server should return to a healthy state.
</note>


### Confirm your policy settings

Now that the Fleet settings are correctly set up, verify that each Elastic Agent policy also points to the correct entities.
1. In the target cluster, go to **Fleet → Agent policies**.
2. Select a policy to verify.
3. Open the **Settings** tab.
4. Ensure that **Fleet Server**, **Output for integrations**, and **Output for agent monitoring** are all set to the newly created entities.
   ![An agent policy's settings showing the newly created entities](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-policy-settings.png)

<note>
  If you modified the Fleet Server host and the output in place, these references will have been updated accordingly. However, if you created new entities, ensure that the correct ones are referenced here.
</note>


## Agent policies in the new target cluster

By creating the new target cluster from a snapshot, all of your policies should have been created along with all of the agents. These agents appear as offline because they are not yet checking in with the new target cluster and are still communicating with the source cluster.
The agents can now be re-enrolled into these policies and migrated over to the new, target cluster.

## Migrate Elastic Agents to the new target cluster

<note>
  Agents to be migrated cannot be tamper-protected or running as a Fleet Server.
</note>

To ensure that all required API keys are correctly created, the agents in your current cluster need to be re-enrolled into the new target cluster.
This is best performed one policy at a time. For a given policy, you need to capture the enrollment token and the URL for the agent to connect to. You can find these by running the in-product steps to add a new agent.
1. On the target cluster, open **Fleet** and select **Add agent**.
2. Select your newly created policy.
3. In the section **Install Elastic Agent on your host**, find the sample install command. This contains the details you’ll need to enroll the agents, namely the enrollment token and the Fleet Server URL.
4. Copy the portion of the install command containing these values. That is, `--url=<fleet server url> --enrollment-token=<token for the new policy>`.
   ![Install command from the Add Agent UI](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-install-command.png)
5. Choose an approach:
   <tab-set>
   <tab-item title="Fleet UI">
   <applies-to>Elastic Stack: Generally available since 9.2</applies-to> Migrate remote agents directly from the Fleet UI:
   1. In the source cluster, select the agents you want to migrate. Click the three dots next to the agents, and select **Migrate agents**.
   2. In the migration dialog, provide the URI and enrollment token you obtained from the target cluster.
   3. (Optional) When migrating a single agent, you can set the `replace_token` field to preserve the agent's original ID from the source cluster. This helps with event matching, but will cause the migration to fail if the target cluster already has an agent with the same ID.
   </tab-item>

   <tab-item title="Command line">
   Run the `enroll` command on each individual host:
   1. On the host machines where the current agents are installed, enroll the agents again using the URL and enrollment token you obtained from the target cluster:
   ```shell
   sudo elastic-agent enroll --url=<fleet server url> --enrollment-token=<token for the new policy>
   ```
   The command output should resemble this:
   ![](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-install-command-output.png)

   1. The agent on each host will now check into the new Fleet Server and appear in the new target cluster. In the source cluster, the agents will go offline as they won’t be sending any check-ins.
   ![](https://www.elastic.co/elastic/docs-builder/docs/3028/reference/fleet/images/migrate-agent-newly-enrolled-agents.png)

   1. Repeat this procedure for each Elastic Agent policy.
   </tab-item>
   </tab-set>
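
When a policy covers many hosts, the command-line enrollment can be scripted. The sketch below is hypothetical: the host names are placeholders, the token is whatever you copied from the Add agent UI, and the loop only prints each SSH command so you can review it; remove the leading `echo` to execute:

```shell
# Hypothetical sketch: re-enroll each host for one policy over SSH.
# FLEET_URL reuses this guide's example deployment ID; the token and the
# host names are placeholders. The loop only prints each command; remove
# the leading "echo" to actually run it.
FLEET_URL="https://eed4ae8e2b604fae8f8d515479a16b7b.fleet.us-central1.gcp.cloud.es.io:443"
ENROLLMENT_TOKEN="example-token"
for host in agent-host-01 agent-host-02; do
  echo ssh "$host" "sudo elastic-agent enroll --url=$FLEET_URL --enrollment-token=$ENROLLMENT_TOKEN"
done
```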