---
title: High JVM memory pressure
description: High JVM memory usage can degrade cluster performance and trigger circuit breaker errors. To prevent this, we recommend taking steps to reduce memory...
url: https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/high-jvm-memory-pressure
products:
  - Elasticsearch
applies_to:
  - Elastic Stack: Generally available
---

# High JVM memory pressure
High JVM memory usage can degrade cluster performance and trigger [circuit breaker errors](https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/circuit-breaker-errors). To prevent this, we recommend taking steps to reduce memory pressure if a node’s JVM memory usage consistently exceeds 85%.

## Diagnose high JVM memory pressure


### Check JVM memory pressure

Elasticsearch's JVM [uses the G1GC garbage collector](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/deploy/self-managed/bootstrap-checks). Over time, this produces a sawtooth pattern in the JVM heap usage metric, as shown in [Understanding JVM heap memory](https://www.elastic.co/blog/jvm-essentials-for-elasticsearch), so the reported heap percentage fluctuates because it is an instantaneous measurement. Instead, focus monitoring on JVM memory pressure, a rolling average of old-generation garbage collection that better represents the node's ongoing JVM responsiveness.
Use the [nodes stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats) to calculate the current JVM memory pressure for each node.
```console
GET _nodes/stats?filter_path=nodes.*.name,nodes.*.jvm.mem.pools.old
```

From the previous output, you can calculate the memory pressure as the ratio of `used_in_bytes` to `max_in_bytes`. For example, you can store this output in `nodes_stats.json` and then use the [third-party tool jq](https://jqlang.github.io/jq/) to process it:
```bash
cat nodes_stats.json | jq -rc '.nodes[]|.name as $n|.jvm.mem.pools.old|{name:$n, memory_pressure:(100*.used_in_bytes/.max_in_bytes|round) }'
```
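If jq is not available, the same calculation can be done with a short Python sketch. The function name and the trimmed sample response below are illustrative, not part of the Elasticsearch API:

```python
def jvm_memory_pressure(nodes_stats: dict) -> dict:
    """Map node name -> old-generation pool memory pressure, in percent."""
    pressure = {}
    for node in nodes_stats["nodes"].values():
        old = node["jvm"]["mem"]["pools"]["old"]
        pressure[node["name"]] = round(100 * old["used_in_bytes"] / old["max_in_bytes"])
    return pressure

# Trimmed, made-up response body for illustration only:
sample = {
    "nodes": {
        "aBc123": {
            "name": "node-1",
            "jvm": {"mem": {"pools": {"old": {"used_in_bytes": 900, "max_in_bytes": 1000}}}},
        }
    }
}
print(jvm_memory_pressure(sample))  # {'node-1': 90}
```

A result at or above 85 for any node is the signal, per the guidance above, to start reducing memory pressure.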

Elastic Cloud Hosted and Elastic Cloud Enterprise also include a JVM memory pressure indicator for each node in your cluster on the deployment's overview page. These indicators turn red when JVM memory pressure reaches 75%. [Learn more about memory pressure monitoring](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/ec-memory-pressure).

### Check garbage collection logs

As memory usage increases, garbage collection becomes more frequent and takes longer. You can track the frequency and duration of garbage collection events in [`elasticsearch.log`](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed). For example, the following event states that Elasticsearch spent more than 50% (21 seconds) of the last 40 seconds performing garbage collection.
```txt
[timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]
```

Garbage collection activity can also appear in the output of the [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads), under the `OTHER_CPU` category, as described in [troubleshooting high CPU usage](/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/high-cpu-usage#check-hot-threads).
For optimal JVM performance, garbage collection (GC) should meet these criteria:

| GC type  | Completion time | Frequency            |
|----------|-----------------|----------------------|
| Young GC | <50ms           | ~once per 10 seconds |
| Old GC   | <1s             | ≤once per 10 minutes |
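As a rough way to measure GC overhead against these criteria, you can parse the `JvmGcMonitorService` overhead lines shown above. This is a hedged sketch, not an official tool: the function name is mine, and it assumes the bracketed durations use `s` or `ms` units as in the sample log line.

```python
import re

# Matches lines like:
# [...][INFO ][o.e.m.j.JvmGcMonitorService] [node] [gc][123] overhead, spent [21s] collecting in the last [40s]
GC_OVERHEAD = re.compile(
    r"\[gc\]\[\d+\] overhead, spent \[(\d+(?:\.\d+)?)(m?s)\] "
    r"collecting in the last \[(\d+(?:\.\d+)?)(m?s)\]"
)

def gc_overhead_ratio(line: str):
    """Return the fraction of the interval spent in GC, or None for other log lines."""
    m = GC_OVERHEAD.search(line)
    if not m:
        return None
    spent, spent_unit, total, total_unit = m.groups()

    def to_ms(value: str, unit: str) -> float:
        return float(value) * (1 if unit == "ms" else 1000)

    return to_ms(spent, spent_unit) / to_ms(total, total_unit)

line = ("[2024-01-01T00:00:00][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] "
        "[gc][4242] overhead, spent [21s] collecting in the last [40s]")
print(gc_overhead_ratio(line))  # 0.525
```

A sustained ratio near or above 0.5, as in the logged example, indicates the node is spending more time collecting garbage than doing useful work.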


### Capture a JVM heap dump

To determine the exact reason for the high JVM memory pressure, capture and review a heap dump of the JVM while its memory usage is high.
If you have an [Elastic subscription](https://www.elastic.co/pricing), you can [request Elastic's assistance](/elastic/docs-builder/docs/3016/troubleshoot#contact-us) to review this output. When reaching out, follow these guidelines:
- Grant written permission for Elastic to review your uploaded heap dumps within the support case.
- Share this file only after receiving any necessary business approvals, as it might contain private information. Files are handled according to [Elastic's privacy statement](https://www.elastic.co/legal/privacy-statement).
- Share heap dumps through our secure [Support Portal](https://support.elastic.co/). If your files are too large to upload, you can request a secure URL in the support case.
- Share the [garbage collector logs](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/jvm-settings#gc-logging) covering the same time period.


## Monitor JVM memory pressure

<admonition title="Simplify monitoring with AutoOps">
  AutoOps is a [monitoring](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor) tool that simplifies cluster management through performance recommendations, resource utilization visibility, and real-time issue detection with resolution paths. Learn more about [AutoOps](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/autoops).
</admonition>

To track JVM memory pressure over time, enable monitoring using one of the following options, depending on your deployment type:
<applies-switch>
  <applies-item title="{ ess:, ece: }" applies-to="Elastic Cloud Hosted: Generally available, Elastic Cloud Enterprise: Generally available">
    - (Recommended) Enable [AutoOps](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/autoops).
    - Enable [logs and metrics](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring). When logs and metrics are enabled, monitoring information is visible on Kibana's [Stack Monitoring](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data) page. You can also enable the [JVM memory threshold alert](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts) to be notified about potential issues through email.
    - From your deployment menu, view the [**Performance**](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/access-performance-metrics-on-elastic-cloud) page's [memory pressure troubleshooting charts](https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/monitoring/high-memory-pressure).
  </applies-item>

  <applies-item title="{ self:, eck: }" applies-to="Elastic Cloud on Kubernetes: Generally available, Self-managed Elastic deployments: Generally available">
    - (Recommended) Enable [AutoOps](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/autoops).
    - Enable [Elasticsearch monitoring](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/stack-monitoring). When logs and metrics are enabled, monitoring information is visible on Kibana's [Stack Monitoring](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data) page. You can also enable the [JVM memory threshold alert](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts) to be notified about potential issues through email.
  </applies-item>
</applies-switch>


## Reduce JVM memory pressure

This section contains some common suggestions for reducing JVM memory pressure.

### Common setup issues

This section highlights common setup issues that can cause JVM memory pressure to remain elevated, even in the absence of obvious load, or to respond non-linearly during performance issues.

#### Disable swapping

<applies-to>
  - Self-managed Elastic deployments: Generally available
</applies-to>

Elasticsearch's JVM manages its own memory and can suffer severe performance degradation when the operating system swaps it out. We recommend [disabling swap](/elastic/docs-builder/docs/3016/deploy-manage/deploy/self-managed/setup-configuration-memory#bootstrap-memory_lock).
Elastic recommends completely disabling swap at the operating-system level: anything set at the Elasticsearch level is best effort, and swapping can severely impact Elasticsearch performance. To check if any nodes are currently swapping, poll the [nodes stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats):
```console
GET _nodes/stats?filter_path=nodes.*.name,nodes.*.os.swap
```

For example, you can store this output in `nodes_stats.json` and then use the [third-party tool jq](https://jqlang.github.io/jq/) to process it:
```bash
cat nodes_stats.json | jq -rc '.nodes[]|{name:.name, swap_used:.os.swap.used_in_bytes}' | sort
```

If nodes are found to be swapping after you attempt to disable it at the Elasticsearch level, escalate to [disabling swap at the operating system level](/elastic/docs-builder/docs/3016/deploy-manage/deploy/self-managed/setup-configuration-memory#disable-swap-files) to avoid performance impact.

#### Enable compressed OOPs

<applies-to>
  - Elastic Cloud on Kubernetes: Generally available
  - Self-managed Elastic deployments: Generally available
</applies-to>

JVM performance strongly depends on having [Compressed OOPs](https://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html#compressedOop) enabled. The exact maximum heap size cutoff depends on the operating system, but it is typically around 30GB. To check whether Compressed OOPs are enabled, poll the [node information API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes):
```console
GET _nodes?filter_path=nodes.*.name,nodes.*.jvm.using_compressed_ordinary_object_pointers
```

For example, you can store this output in `nodes.json` and then use the [third-party tool jq](https://jqlang.github.io/jq/) to process it:
```bash
cat nodes.json | jq -rc '.nodes[]|{node:.name, compressed:.jvm.using_compressed_ordinary_object_pointers}'
```


#### Limit heap size to less than half of total RAM

<applies-to>
  - Elastic Cloud on Kubernetes: Generally available
  - Self-managed Elastic deployments: Generally available
</applies-to>

By default, Elasticsearch manages the JVM heap size. If manually overridden, `Xms` and `Xmx` should be equal and not more than half of total operating system RAM. Refer to [Set the JVM heap size](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/jvm-settings#set-jvm-heap-size) for detailed guidance and best practices.
To check these heap settings, poll the [node information API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes):
```console
GET _nodes?human=true&filter_path=nodes.*.name,nodes.*.jvm.mem
```

For example, you can store this output in `nodes.json` and then use the [third-party tool jq](https://jqlang.github.io/jq/) to process it:
```bash
cat nodes.json | jq -rc '.nodes[]|.name as $n|.jvm.mem|{name:$n, heap_min:.heap_init, heap_max:.heap_max}'
```
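To spot the two misconfigurations described above programmatically, here is a small sketch. The function name is mine, and the checks simply encode the guidance above (equal `Xms`/`Xmx`, heap at most half of RAM); it is not an official validation tool:

```python
def check_heap(heap_init_bytes: int, heap_max_bytes: int, total_ram_bytes: int) -> list:
    """Flag common heap misconfigurations from node info values."""
    problems = []
    if heap_init_bytes != heap_max_bytes:
        problems.append("Xms and Xmx differ; set them equal to avoid resize pauses")
    if heap_max_bytes > total_ram_bytes // 2:
        problems.append("heap exceeds half of total RAM, leaving too little "
                        "for the filesystem cache")
    return problems

gib = 1024 ** 3
print(check_heap(16 * gib, 16 * gib, 64 * gib))  # [] (no findings)
print(check_heap(8 * gib, 48 * gib, 64 * gib))   # both findings reported
```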


#### Reduce your shard count

Every shard uses memory. Usually, a small set of large shards uses fewer resources than many small shards. For tips on reducing your shard count, refer to [Size your shards](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/production-guidance/optimize-performance/size-shards).

### Common traffic issues

This section contains some common suggestions for reducing JVM memory pressure related to traffic patterns.

#### Avoid expensive searches

Expensive searches can use large amounts of memory. To better track expensive searches on your cluster, enable [slow logs](https://www.elastic.co/elastic/docs-builder/docs/3016/deploy-manage/monitor/logging-configuration/slow-logs).
Expensive searches may have a large [`size` argument](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/rest-apis/paginate-search-results), use aggregations with a large number of buckets, or include [expensive queries](/elastic/docs-builder/docs/3016/explore-analyze/query-filter/languages/querydsl#query-dsl-allow-expensive-queries). To prevent expensive searches, consider the following setting changes:
- Lower the `size` limit using the [`index.max_result_window`](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/index-settings/index-modules#index-max-result-window) index setting.
- Decrease the maximum number of allowed aggregation buckets using the [`search.max_buckets`](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/configuration-reference/search-settings#search-settings-max-buckets) cluster setting.
- Disable expensive queries using the [`search.allow_expensive_queries`](/elastic/docs-builder/docs/3016/explore-analyze/query-filter/languages/querydsl#query-dsl-allow-expensive-queries) cluster setting.
- Set a default search timeout using the [`search.default_search_timeout`](/elastic/docs-builder/docs/3016/solutions/search/the-search-api#search-timeout) cluster setting.

For example, to apply these limits, update the index and cluster settings (using a hypothetical index named `my-index-000001`):
```console
PUT my-index-000001/_settings
{
  "index.max_result_window": 5000
}
```

```console
PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000,
    "search.allow_expensive_queries": false,
    "search.default_search_timeout": "1m"
  }
}
```


#### Prevent mapping explosion

Defining too many fields or nesting fields too deeply can lead to [mapping explosions](https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/mapping-explosion) that use large amounts of memory. To prevent mapping explosions, use the [mapping limit settings](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/index-settings/mapping-limit) to limit the number of field mappings.
You can also configure the Kibana [advanced setting](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/kibana/advanced-settings) `data_views:fields_excluded_data_tiers` to improve performance by preventing Kibana from retrieving field data from specific data tiers. For example, to exclude cold and frozen tiers, typically used for searchable snapshots, set this value to `data_cold,data_frozen`. This can help Discover load fields faster, as described in [Troubleshooting guide: Solving 6 common issues in Kibana Discover load](https://www.elastic.co/blog/troubleshooting-guide-common-issues-kibana-discover-load#2.-load-fields).

#### Spread out bulk requests

While more efficient than individual requests, large [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) or [multi-search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) requests can still create high JVM memory pressure. If possible, submit smaller requests and allow more time between them.
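The idea can be sketched as a small batching helper. Here `send` stands in for whatever client call performs the bulk request in your stack; the helper name, batch size, and pause are illustrative choices, not prescribed values:

```python
import time

def spread_bulk(actions, batch_size=500, pause_s=1.0, send=lambda batch: None):
    """Submit actions as smaller bulk batches with a pause between them.

    Returns the number of batches sent.
    """
    batches = 0
    for i in range(0, len(actions), batch_size):
        send(actions[i:i + batch_size])  # one smaller bulk request
        batches += 1
        if i + batch_size < len(actions):
            time.sleep(pause_s)  # give the cluster time to recover between requests
    return batches

# With 1,200 actions and batches of 500, three requests are sent:
print(spread_bulk(list(range(1200)), batch_size=500, pause_s=0.0))  # 3
```

Tuning `batch_size` and `pause_s` against observed JVM memory pressure is more reliable than picking fixed values up front.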

#### Scale node memory

Heavy indexing and search loads can cause high JVM memory pressure. To better handle heavy workloads, upgrade your nodes to increase their memory capacity.

### Reduce field data usage

Computing [field data](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/mapping-reference/text#fielddata-mapping-param) and global ordinals can be [CPU-intensive](https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/high-cpu-usage). By default, global ordinals are computed at search time, but they can be [eagerly loaded](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/mapping-reference/eager-global-ordinals) so that they are built at index refresh instead.
Field data is loaded into the JVM heap cache and retained based on usage frequency. Field data can consume JVM heap memory up to the lower of the [field data cache size](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/configuration-reference/field-data-cache-settings) and the [field data circuit breaker limit](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/configuration-reference/circuit-breaker-settings). [Circuit breaker errors](https://www.elastic.co/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/circuit-breaker-errors) appear as [rejected requests](/elastic/docs-builder/docs/3016/troubleshoot/elasticsearch/rejected-requests#check-circuit-breakers). Setting `indices.fielddata.cache.size` too low causes thrashing and frequent evictions.
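To make the "lower value" interaction concrete, here is an illustrative calculation. The helper name is mine; the 40% figure used in the example is the default field data circuit breaker limit:

```python
def effective_fielddata_limit(heap_bytes: int, cache_size_pct: float, breaker_pct: float) -> float:
    """Field data may grow only up to the lower of the cache size and breaker limit."""
    cache_limit = heap_bytes * cache_size_pct / 100
    breaker_limit = heap_bytes * breaker_pct / 100
    return min(cache_limit, breaker_limit)

gib = 1024 ** 3
# A 16 GiB heap with indices.fielddata.cache.size at 30% and the default 40% breaker:
cap = effective_fielddata_limit(16 * gib, 30, 40)
print(round(cap / gib, 1))  # 4.8 (GiB), capped by the cache size, not the breaker
```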
To check `fielddata` evictions and determine whether field data contributes significantly to JVM memory usage, use the [cat nodes](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) API:
```console
GET _cat/nodes?v=true&h=name,heap.percent,fielddata.memory_size,fielddata.evictions
```

If the output shows that field data is a significant contributor to JVM memory usage, use the [cat fielddata](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-fielddata) API to determine which fields are using field data and how much per node.
```console
GET _cat/fielddata?v=true&s=size:desc
```

You can use the [clear cache](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-clear-cache) API to clear the field data cache and temporarily reduce memory usage. For example:
- To clear the fielddata cache only for fields `fieldname1` and `fieldname2` on the `my-index-000001` index:
  ```console
  POST my-index-000001/_cache/clear?fielddata=true&fields=fieldname1,fieldname2
  ```
- To clear any fielddata cache across all indices:
  ```console
  POST _cache/clear?fielddata=true
  ```

Common causes of high field data memory usage include:
- Enabling [`fielddata` on `text` fields](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/mapping-reference/text#enable-fielddata-text-fields). Instead, use a [multi-field](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/mapping-reference/multi-fields) and search against the [keyword field](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/mapping-reference/keyword).
- Either [aggregating](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/query-filter/aggregations) or [sorting](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/rest-apis/sort-search-results) on high cardinality fields which have computed [global ordinals](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/mapping-reference/eager-global-ordinals#_what_are_global_ordinals). For example, this can occur from Kibana autocomplete on non-`text` fields or loading [visualizations](https://www.elastic.co/elastic/docs-builder/docs/3016/explore-analyze/visualize/visualize-library). Refer to [avoiding global ordinal loading](https://docs-v3-preview.elastic.dev/elastic/docs-builder/docs/3016/reference/elasticsearch/mapping-reference/eager-global-ordinals#_avoiding_global_ordinal_loading) for guidance.