---
title: Troubleshoot JVM heap dumps
description: Elasticsearch is configured by default to take heap dumps on out-of-memory exceptions to the default data directory. The default data directory is /usr/share/elasticsearch/data...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/deployments/cloud-on-k8s/jvm-heap-dumps
products:
  - Elastic Cloud on Kubernetes
applies_to:
  - Elastic Cloud on Kubernetes: Generally available
---

# Troubleshoot JVM heap dumps
## Ensure sufficient storage

Elasticsearch is [configured by default](/elastic/docs-builder/docs/3016/deploy-manage/deploy/self-managed/important-settings-configuration#heap-dump-path) to write heap dumps to the default data directory when an out-of-memory exception occurs. In the official Docker images that ECK uses, the default data directory is `/usr/share/elasticsearch/data`. If you are running Elasticsearch with a heap as large as the remaining space on the data volume, a heap dump can fill the volume and leave Elasticsearch unable to start. To avoid this scenario, you have two options:
1. Choose a different path by setting `-XX:HeapDumpPath=` through the `ES_JAVA_OPTS` environment variable, pointing to a location where a volume with sufficient storage space is mounted (see the sketch after this list).
2. [Resize the data volume](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates) to a sufficiently large size, if your volume provisioner supports volume expansion.
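
For the first option, here is a minimal sketch of an Elasticsearch resource that redirects heap dumps to a dedicated volume through `ES_JAVA_OPTS`. The cluster name `quickstart`, the claim name `heap-dumps`, the mount path `/dumps`, and the sizes are illustrative assumptions; adjust them to your environment:
```sh
# Illustrative sketch: the names "quickstart", "heap-dumps", and "/dumps"
# are assumptions, not required values.
kubectl apply -f - <<'EOF'
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.16.1  # example version
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          # Write heap dumps to the dedicated volume instead of the data volume
          - name: ES_JAVA_OPTS
            value: "-XX:HeapDumpPath=/dumps"
          volumeMounts:
          - name: heap-dumps
            mountPath: /dumps
    volumeClaimTemplates:
    # Additional claim for heap dumps, mounted under /dumps
    - metadata:
        name: heap-dumps
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
EOF
```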


## Capturing JVM heap dumps

To take a heap dump before the JVM process runs out of memory, you can run the JDK's `jmap` tool directly in the Elasticsearch container, replacing `$POD_NAME` with the name of your Elasticsearch Pod:
```sh
kubectl exec $POD_NAME -- su elasticsearch -g root -c \
  '/usr/share/elasticsearch/jdk/bin/jmap -dump:format=b,file=data/heap.hprof $(pgrep -n java)'
```

If the Elasticsearch container is running with a random user ID, as it does on OpenShift for example, there is no need to switch users with `su`:
```sh
kubectl exec $POD_NAME -- bash -c \
  '/usr/share/elasticsearch/jdk/bin/jmap -dump:format=b,file=data/heap.hprof $(pgrep -n java)'
```
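
Either way, you can verify that the dump was written and check its size before copying it out of the container. This check assumes the dump landed in the default data directory, as in the commands above:
```sh
# Confirm the heap dump exists and check how large it is
kubectl exec $POD_NAME -- ls -lh /usr/share/elasticsearch/data/heap.hprof
```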


## Extracting heap dumps from the Elasticsearch container

To retrieve heap dumps, whether taken automatically by the Elasticsearch JVM or manually as described in the previous section, use the `kubectl cp` command:
```sh
# Copy the heap dump out of the container
kubectl cp $POD_NAME:/usr/share/elasticsearch/data/heap.hprof ./heap.hprof

# Remove the heap dump from the running container to free up space
kubectl exec $POD_NAME -- rm /usr/share/elasticsearch/data/heap.hprof
```
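
Heap dumps can be several gigabytes, so it can be worth compressing the file inside the container before copying it. A sketch, assuming `gzip` is available in the image:
```sh
# Compress the dump in place to shrink the transfer (assumes gzip is present)
kubectl exec $POD_NAME -- gzip /usr/share/elasticsearch/data/heap.hprof

# Copy the compressed dump, then remove it to free up space
kubectl cp $POD_NAME:/usr/share/elasticsearch/data/heap.hprof.gz ./heap.hprof.gz
kubectl exec $POD_NAME -- rm /usr/share/elasticsearch/data/heap.hprof.gz
```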