﻿---
title: Migrate Elastic Cloud Hosted data to Elastic Cloud Serverless with Logstash
description: Logstash is a data collection engine that uses a large ecosystem of plugins to collect, process, and forward data from a variety of sources to a variety...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/manage-data/migrate/migrate-with-logstash
products:
  - Elastic Cloud Hosted
  - Elasticsearch
  - Logstash
applies_to:
  - Elastic Cloud Serverless: Generally available
  - Elastic Cloud Hosted: Generally available
---

# Migrate Elastic Cloud Hosted data to Elastic Cloud Serverless with Logstash
[Logstash](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/logstash) is a data collection engine that uses a large ecosystem of [plugins](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/lsr) to collect, process, and forward data from a variety of sources to a variety of destinations. Here we focus on using the [Elasticsearch input](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/lsr/plugins-inputs-elasticsearch) plugin to read from your Elastic Cloud Hosted deployment, and the [Elasticsearch output](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/lsr/plugins-outputs-elasticsearch) plugin to write to your Elastic Cloud Serverless project.
Familiarity with Elastic Cloud Hosted, Elasticsearch, and Logstash is helpful, but not required.
<admonition title="Basic migration">
  This guide focuses on a basic data migration scenario for moving static data from an Elastic Cloud Hosted deployment to an Elastic Cloud Serverless project. The Elasticsearch input plugin offers [additional configuration options](#additional-config) that can support more advanced use cases and migrations. More information about those options is available near the end of this topic.
</admonition>


## Prerequisites

- Elastic Cloud Hosted deployment with data to migrate
- [Elastic Cloud Serverless](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/elastic-cloud/serverless) project configured and running
- Logstash [installed](https://www.elastic.co/downloads/logstash) on your local machine or server
- API keys in Logstash format for authentication with both deployments

<important>
  Kibana assets must be migrated separately using the Kibana [export/import APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-saved-objects) or recreated manually.
  Templates, data stream definitions, and ILM policies must be in place _before_ you start data migration. Visual components, such as dashboards and visualizations, can be migrated after you have migrated the data.
</important>


## Process overview

- [Configure Logstash](#configure-ls)
- [Run Logstash](#run-ls)
- [Verify data migration](#verify-migration)


### Step 1: Configure Logstash

Create a new Logstash [pipeline configuration file](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/logstash/creating-logstash-pipeline) (_migration.conf_) using the [Elasticsearch input](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/lsr/plugins-inputs-elasticsearch) and the [Elasticsearch output](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/lsr/plugins-outputs-elasticsearch):
- The **input** reads from your Elastic Cloud Hosted deployment.
- The **output** writes to your Elastic Cloud Serverless project.


#### Input: Read from your Elastic Cloud Hosted deployment

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"   # Cloud ID of the source Elastic Cloud Hosted deployment
    api_key  => "<HOSTED_API_KEY>"               # API key for the source deployment, in the Logstash id:api_key format
    index    => "index_pattern*"                 # Index name or pattern to read from
    docinfo  => true                             # Includes metadata about each document, such as its original index name or doc ID. This metadata can be used to preserve index information on the destination cluster.
  }
}
```

<tip>
  To migrate multiple indices at the same time, use a wildcard in the index name. For example, `index => "logs-*"` migrates all indices starting with `logs-`.
</tip>


#### Output: Write to your Elastic Cloud Serverless project

```
output {
  elasticsearch {
    hosts   => [ "https://<SERVERLESS_HOST_URL>:443" ]          # Endpoint of the destination Elastic Cloud Serverless project
    api_key => "<SERVERLESS_API_KEY>"                           # API key for the destination project, in the Logstash id:api_key format
    index   => "%{[@metadata][input][elasticsearch][_index]}"   # Writes each document to an index with the same name as its source index
  }

  stdout { codec => rubydebug { metadata => true } }            # Prints each event to the console so you can monitor progress; remove for large migrations
}
```

<tip>
  When you create an [API key for Logstash](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/logstash/connecting-to-serverless#api-key), be sure to select **Logstash** from the **API key** format dropdown. This option formats the API key in the correct `id:api_key` format required by Logstash.
</tip>


### Step 2: Run Logstash

Start Logstash:
```
bin/logstash -f migration.conf
```


### Step 3: Verify data migration

After running Logstash, verify that the data has been migrated successfully:
1. Log in to your Elastic Cloud Serverless project.
2. Navigate to **Index Management** and select the relevant index.
3. Confirm that the migrated data is visible.
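
You can also compare document counts between the source and destination. For example, run the following request in your Serverless project's Dev Tools console (`logs-example` is a hypothetical index name; substitute one of your migrated indices):

```
GET logs-example/_count
```

The returned count should match the result of the same request run against the source Elastic Cloud Hosted deployment.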


## Additional configuration options

The Elasticsearch input includes more [configuration options](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/lsr/plugins-inputs-elasticsearch#plugins-inputs-elasticsearch-options)
that offer greater flexibility and can handle more advanced migrations.
Some options that can be particularly relevant for a migration use case are:
- `size` - Controls how many documents are retrieved per scroll. Larger values increase throughput, but use more memory.
- `slices` - Enables parallel reads from the source index.
- `scroll` - Adjusts how long Elasticsearch keeps the scroll context alive.
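
For example, an input tuned for a larger migration might look like the following sketch. The values shown are illustrative starting points, not recommendations:

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"
    api_key  => "<HOSTED_API_KEY>"
    index    => "logs-*"
    docinfo  => true
    size     => 2000      # Retrieve 2000 documents per scroll page
    slices   => 4         # Split the read into 4 parallel slices
    scroll   => "5m"      # Keep each scroll context alive for 5 minutes
  }
}
```

Larger `size` and `slices` values speed up the read at the cost of more memory and load on the source cluster, so test against a non-critical index first.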


### Field tracking options

<applies-to>Elastic Cloud Serverless: Preview</applies-to> <applies-to>Elastic Stack: Preview</applies-to>

The Elasticsearch input plugin supports cursor-like pagination functionality, unlocking more advanced migration features, including the ability to resume migration tasks after a Logstash restart, and support for ongoing data migration over time. Tracking field options are:
- `tracking_field` - Plugin records the value of a field for the last document retrieved in a run.
- `tracking_field_seed` - Sets the starting value for `tracking_field` if no `last_run_metadata_path` is set.
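
A minimal sketch of a resumable input, assuming the source documents carry an `event.ingested` timestamp (the field name and metadata path are illustrative):

```
input {
  elasticsearch {
    cloud_id => "<HOSTED_DEPLOYMENT_CLOUD_ID>"
    api_key  => "<HOSTED_API_KEY>"
    index    => "logs-*"
    docinfo  => true
    tracking_field         => "[event][ingested]"        # Record the last value seen for this field
    tracking_field_seed    => "1970-01-01T00:00:00Z"     # Starting value for the first run
    last_run_metadata_path => "/tmp/logstash_last_run"   # Illustrative path where the cursor is persisted
    schedule               => "*/5 * * * *"              # Re-run every 5 minutes for ongoing migration
  }
}
```

In practice, you also reference the tracked value in a custom query so that each run retrieves only new documents; the plugin documentation linked below shows the exact syntax.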

Check out the Elasticsearch input plugin documentation for more details and code samples: [Tracking a field's value across runs](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/lsr/plugins-inputs-elasticsearch#plugins-inputs-elasticsearch-cursor).