High JVM memory pressure

High JVM memory usage can degrade cluster performance and trigger circuit breaker errors. To prevent this, we recommend taking steps to reduce memory pressure if a node’s JVM memory usage consistently exceeds 85%.

Simplify monitoring with AutoOps

AutoOps is a monitoring tool that simplifies cluster management through performance recommendations, resource utilization visibility, and real-time issue detection with resolution paths. Learn more about AutoOps.

From your deployment menu, click Elasticsearch. Under Instances, each instance displays a JVM memory pressure indicator. When the JVM memory pressure reaches 75%, the indicator turns red.

You can also use the nodes stats API to calculate the current JVM memory pressure for each node.

    GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old

Use the response to calculate memory pressure as follows:

JVM Memory Pressure = used_in_bytes / max_in_bytes
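
For example (using invented round numbers), if a node's old pool reports used_in_bytes of 1,717,986,918 and max_in_bytes of 2,147,483,648, that node's JVM memory pressure works out to:

    JVM Memory Pressure = 1717986918 / 2147483648 ≈ 0.80 (80%)

At roughly 80%, that node is approaching the 85% level at which we recommend taking action.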

As memory usage increases, garbage collection becomes more frequent and takes longer. You can track the frequency and length of garbage collection events in elasticsearch.log. For example, the following event states Elasticsearch spent more than 50% (21 seconds) of the last 40 seconds performing garbage collection.

    [timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]

For optimal JVM performance, garbage collection (GC) should meet these criteria:

GC type     Completion time    Frequency
Young GC    <50ms              ~once per 10 seconds
Old GC      <1s                ≤once per 10 minutes
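
If you prefer API output to scanning logs, the nodes stats API also exposes cumulative garbage collection counters per node. As a rough sketch (the filter_path below is one reasonable choice, not the only one), divide collection_time_in_millis by collection_count for the young and old collectors to estimate average pause length, and compare collection_count across two calls to estimate frequency:

    GET _nodes/stats/jvm?filter_path=nodes.*.jvm.gc.collectors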

To determine the exact reason for the high JVM memory pressure, capture and review a heap dump of the JVM while its memory usage is high.

If you have an Elastic subscription, you can [request Elastic's assistance](/troubleshoot.md#contact-us) with reviewing this output. When doing so, please:

  • Grant written permission for Elastic to review your uploaded heap dumps within the support case.
  • Share these files only after receiving any necessary business approvals, as they might contain private information. Files are handled according to Elastic's privacy statement.
  • Share heap dumps through our secure Support Portal. If your files are too large to upload, you can request a secure URL in the support case.
  • Share the garbage collector logs covering the same time period.

This section contains some common suggestions for reducing JVM memory pressure.

Reduce your shard count

Every shard uses memory. In most cases, a small set of large shards uses fewer resources than many small shards. For tips on reducing your shard count, see Size your shards.
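
To get a quick picture of how many shards you have and how large they are before consolidating, the cat shards API is a convenient starting point (the column selection below is just one example):

    GET _cat/shards?v=true&h=index,shard,prirep,store,node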

Avoid expensive searches

Expensive searches can use large amounts of memory. To better track expensive searches on your cluster, enable slow logs.
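
For example, the following sketch enables warn-level search slow logs on a hypothetical index named my-index; the thresholds are placeholders, so choose values that match your own latency expectations:

    PUT my-index/_settings
    {
      "index.search.slowlog.threshold.query.warn": "10s",
      "index.search.slowlog.threshold.fetch.warn": "1s"
    }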

Expensive searches may have a large size argument, use aggregations with a large number of buckets, or include expensive queries. To prevent expensive searches, consider the following setting changes:

    PUT _settings
    {
      "index.max_result_window": 5000
    }

    PUT _cluster/settings
    {
      "persistent": {
        "search.max_buckets": 20000,
        "search.allow_expensive_queries": false
      }
    }

Prevent mapping explosions

Defining too many fields or nesting fields too deeply can lead to mapping explosions that use large amounts of memory. To prevent mapping explosions, use the mapping limit settings to limit the number of field mappings.
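
For example, the following sketch tightens two of those limits on a hypothetical index named my-index (the values are illustrative; by default Elasticsearch allows 1000 total fields and a mapping depth of 20):

    PUT my-index/_settings
    {
      "index.mapping.total_fields.limit": 500,
      "index.mapping.depth.limit": 5
    }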

Spread out bulk requests

While more efficient than individual requests, large bulk indexing or multi-search requests can still create high JVM memory pressure. If possible, submit smaller requests and allow more time between them.
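
As a rough illustration, rather than indexing tens of thousands of documents in a single request, split them into smaller bulk calls like the sketch below (the index name and documents are placeholders) and pause between batches:

    POST _bulk
    { "index": { "_index": "my-index" } }
    { "message": "first document" }
    { "index": { "_index": "my-index" } }
    { "message": "second document" }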

Upgrade node memory

Heavy indexing and search loads can cause high JVM memory pressure. To better handle heavy workloads, upgrade your nodes to increase their memory capacity.
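
Before resizing, it can help to confirm how much heap each node currently has and how much of it is in use. The cat nodes API shows both; the column selection below is just one example:

    GET _cat/nodes?v=true&h=name,heap.percent,heap.max,ram.max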