---
title: Increase the disk capacity of data nodes
description: Disk capacity pressures may cause index failures, unassigned shards, and cluster instability. Elasticsearch uses disk-based shard allocation watermarks...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/increase-capacity-data-node
products:
  - Elasticsearch
applies_to:
  - Elastic Stack: Generally available
---

# Increase the disk capacity of data nodes
Disk capacity pressures may cause index failures, unassigned shards, and cluster instability.
Elasticsearch uses [disk-based shard allocation watermarks](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings#disk-based-shard-allocation) to manage disk space on nodes, which can block allocation or indexing when nodes run low on disk space. Refer to [Watermark errors](https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/fix-watermark-errors) for additional details on how to address this situation.
To increase the disk capacity of the data nodes in your cluster, complete these steps:
1. [Estimate how much disk capacity you need](#estimate-required-capacity).
2. [Increase the disk capacity](#increase-disk-capacity-of-data-nodes).


## Estimate the amount of required disk capacity

The following steps explain how to retrieve the current disk watermark configuration of the cluster and how to check the current disk usage on the nodes.
1. Retrieve the relevant disk thresholds that indicate how much space should be available. The relevant thresholds are the [high watermark](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings#cluster-routing-watermark-high) for all tiers except the frozen tier, and the [frozen flood stage watermark](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates a disk shortage in the hot tier, so only the high watermark is retrieved:
   ```console
   GET _cluster/settings?include_defaults&filter_path=*.cluster.routing.allocation.disk.watermark.high*
   ```
   The response looks like this:
   ```json
   {
     "defaults": {
       "cluster": {
         "routing": {
           "allocation": {
             "disk": {
               "watermark": {
                 "high": "90%",
                 "high.max_headroom": "150GB"
               }
             }
           }
         }
       }
     }
   }
   ```
   The above means that, to resolve the disk shortage, disk usage must drop below 90%, or more than 150GB of disk space must remain available. Read more about how this threshold works in the [high watermark documentation](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings#cluster-routing-watermark-high).
2. Find the current disk usage, which in turn indicates how much extra space is required. For simplicity, this example has one node, but you can apply the same steps to every node that exceeds the relevant threshold.
   ```console
   GET _cat/allocation?v&s=disk.avail&h=node,disk.percent,disk.avail,disk.total,disk.used,disk.indices,shards
   ```
   The response looks like this:
   ```txt
   node                disk.percent disk.avail disk.total disk.used disk.indices shards
   instance-0000000000           91     4.6gb       35gb    31.1gb       29.9gb    111
   ```

In this scenario, the high watermark configuration indicates that the disk usage needs to drop below 90%, while the current disk usage is 91%.
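The relationship between the percentage watermark, the `max_headroom` cap, and the disk size needed to clear the threshold can be made concrete with a short sketch (illustrative Python using the 90% / 150GB defaults shown above; this is not an official Elasticsearch formula):

```python
GB = 1024**3

def breaches_high_watermark(total_bytes, free_bytes,
                            high_pct=0.90, max_headroom_bytes=150 * GB):
    """True if a node is over the high watermark.

    The watermark requires free space of at least (1 - high_pct) * total,
    but never more than max_headroom_bytes.
    """
    required_free = min((1 - high_pct) * total_bytes, max_headroom_bytes)
    return free_bytes < required_free

def min_disk_size(used_bytes, high_pct=0.90, max_headroom_bytes=150 * GB):
    """Smallest total disk size that keeps used_bytes under the watermark.

    For small disks the percentage dominates (used / high_pct); for very
    large disks the flat headroom dominates (used + max_headroom_bytes).
    """
    return min(used_bytes / high_pct, used_bytes + max_headroom_bytes)

# A 1TB disk with 50GB free breaches the 90% watermark (it needs 100GB
# free), while a 20TB disk with 200GB free does not, because the 150GB
# max_headroom caps the required free space.
```

This mirrors the rule above: disk usage must drop below 90%, or more than 150GB must remain free, whichever is the smaller requirement.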

## Increase the disk capacity of your data nodes

Here are the most common ways to increase disk capacity:
- You can expand the disk space of the existing nodes. This is typically achieved by replacing your nodes with ones that have a higher disk capacity.
- You can add additional data nodes to the data tier that is short of disk space, increasing the overall capacity of that tier and potentially improving performance by distributing data and workload across more resources.

To resize your deployment, follow the recommendations that apply to your deployment type:
<applies-switch>
  <applies-item title="{ ess:, ece: }" applies-to="Elastic Cloud Hosted: Generally available, Elastic Cloud Enterprise: Generally available">
    <warning applies-to="Elastic Cloud Enterprise: Generally available">
      In ECE, resizing is limited by your [allocator capacity](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity).
    </warning>
    To resize your deployment and increase its capacity by expanding a data tier or adding a new one, use the following options:

    **Option 1: Configure Autoscaling**
    1. Log in to the [Elastic Cloud console](https://cloud.elastic.co?page=docs&placement=docs-body) or ECE Cloud UI.
    2. On the home page, find your deployment and select **Manage**.
    3. Go to **Actions** > **Edit deployment** and check that autoscaling is enabled. Adjust the **Enable Autoscaling for** dropdown menu as needed and select **Save**.
    4. If autoscaling is successful, the cluster returns to a `healthy` status.
       If the cluster is still out of disk, check if autoscaling has reached its set limits and [update your autoscaling settings](/elastic/docs-builder/docs/3016/deploy-manage/autoscaling/autoscaling-in-ece-and-ech#ec-autoscaling-update).
    **Option 2: Configure deployment size and tiers**

    You can increase the deployment capacity by editing the deployment and adjusting the size of the existing data tiers or adding new ones.
    1. In Kibana, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to **Manage this deployment**.
    2. From the right hand side, click to expand the **Manage** dropdown button and select **Edit deployment** from the list of options.
    3. On the **Edit** page, increase capacity for the data tier you identified earlier by either adding a new tier with **+ Add capacity** or adjusting the size of an existing one. Choose the desired size and availability zones for that tier.
    4. Navigate to the bottom of the page and click the **Save** button.
    **Option 3: Change the hardware profiles/deployment templates**

    You can change the [hardware profile](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile) for Elastic Cloud Hosted deployments or the [deployment template](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/cloud-enterprise/deployment-templates) of an Elastic Cloud Enterprise cluster to one with a higher disk-to-memory ratio.

    **Option 4: <applies-to>Elastic Cloud Enterprise: Generally available</applies-to> Override disk quota**

    Elastic Cloud Enterprise administrators can temporarily override the disk quota of Elasticsearch nodes in real time, as explained in [Resource overrides](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/cloud-enterprise/resource-overrides). We strongly recommend making this change only under the guidance of Elastic Support, and only as a temporary measure or for troubleshooting purposes.
  </applies-item>

  <applies-item title="{ self: }" applies-to="Self-managed Elastic deployments: Generally available">
    To increase the data node capacity in your cluster, you can [add more nodes](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes) to the cluster and assign the index’s target tier [node role](/elastic/docs-builder/docs/3016/manage-data/lifecycle/data-tiers#configure-data-tiers-on-premise) to the new nodes, or increase the disk capacity of existing nodes. Disk expansion procedures depend on your operating system and storage infrastructure and are outside the scope of Elastic support. In practice, this is often achieved by [removing a node from the cluster](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes) and reinstalling it with a larger disk.
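    For example, if the index's target tier is the hot tier, the new node's `elasticsearch.yml` could assign the matching roles. A minimal sketch (the cluster and node names are hypothetical):

    ```yaml
    # elasticsearch.yml on the new node (hypothetical names)
    cluster.name: my-cluster
    node.name: data-hot-3
    node.roles: ["data_hot", "data_content"]
    ```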
  </applies-item>

  <applies-item title="{ eck: }" applies-to="Elastic Cloud on Kubernetes: Generally available">
    To increase the capacity of the data nodes in your Elastic Cloud on Kubernetes cluster, you can either add more data nodes to the desired tier, or increase the storage size of existing nodes.

    **Option 1: Add more data nodes**
    1. Update the `count` field in your data node `nodeSets` to add more nodes:
       ```yaml
       apiVersion: elasticsearch.k8s.elastic.co/v1
       kind: Elasticsearch
       metadata:
         name: quickstart
       spec:
         version: 9.3.2
         nodeSets:
         - name: data-nodes
           count: 5 
           config:
             node.roles: ["data"]
           volumeClaimTemplates:
           - metadata:
               name: elasticsearch-data
             spec:
               accessModes:
               - ReadWriteOnce
               resources:
                 requests:
                   storage: 100Gi
       ```
    2. Apply the changes:
       ```sh
       kubectl apply -f your-elasticsearch-manifest.yaml
       ```
       ECK automatically creates the new nodes with a `data` [node role](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles) and Elasticsearch will relocate shards to balance the load.
       You can monitor the progress using:
       ```console
       GET _cat/shards?v=true
       ```
    **Option 2: Increase storage size of existing nodes**
    1. If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`:
       ```yaml
       apiVersion: elasticsearch.k8s.elastic.co/v1
       kind: Elasticsearch
       metadata:
         name: quickstart
       spec:
         version: 9.3.2
         nodeSets:
         - name: data-nodes
           count: 3
           config:
             node.roles: ["data"]
           volumeClaimTemplates:
           - metadata:
               name: elasticsearch-data
             spec:
               accessModes:
               - ReadWriteOnce
               resources:
                 requests:
                   storage: 200Gi 
       ```
    2. Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem will be resized online without restarting Elasticsearch. Otherwise, you might need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem.
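    Volume expansion only works if the `StorageClass` backing the claims allows it. A minimal sketch (the class name and provisioner are hypothetical examples; use the ones from your cluster):

    ```yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: expandable-ssd        # hypothetical name
    provisioner: ebs.csi.aws.com  # example CSI driver
    allowVolumeExpansion: true    # required for resizing existing PVCs
    ```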
    For more information, refer to [Update your deployments](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/cloud-on-k8s/update-deployments) and [Volume claim templates > Updating the volume claim settings](/elastic/docs-builder/docs/3016/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates#k8s-volume-claim-templates-update).
  </applies-item>
</applies-switch>

When you add another data node, the cluster doesn't recover immediately; it might take some time for shards to relocate to the new node.
You can check the progress with the following API call:
```console
GET _cat/shards?v=true
```

If the response shows shards in the `RELOCATING` state, they are still moving. Wait until all shards reach the `STARTED` state.
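If you script this check, the plain-text `_cat/shards` output can be parsed directly. A small illustrative Python helper (it assumes the default column layout produced by the `v` header flag and is not part of any Elasticsearch client):

```python
def all_shards_started(cat_shards_output: str) -> bool:
    """Return True when no shard in the `_cat/shards?v=true` text output
    is still RELOCATING or INITIALIZING.

    Locates the `state` column by name from the header row, so it works
    regardless of which other columns the response includes.
    """
    lines = [ln for ln in cat_shards_output.strip().splitlines() if ln.strip()]
    header = lines[0].split()
    state_idx = header.index("state")
    states = [row.split()[state_idx] for row in lines[1:]]
    return all(s not in ("RELOCATING", "INITIALIZING") for s in states)
```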