Elasticsearch's parent circuit breaker prevents requests that would exceed memory limits. This error occurs when operations consume too much JVM heap memory, threatening cluster stability. Solutions include increasing heap size, optimizing queries, and adjusting circuit breaker thresholds.
The CircuitBreakingException is Elasticsearch's protective mechanism to prevent out-of-memory (OOM) errors. The circuit breaker acts like a fuse—when memory usage approaches limits, it trips and rejects requests before they can crash the cluster. The "parent" circuit breaker is the overall circuit breaker that monitors total memory consumption across all operations in your cluster. When it estimates that a request would push total memory usage beyond 95% of JVM heap (by default), it stops the operation and returns this error. This error has an HTTP 429 (Too Many Requests) status code and typically includes the estimated memory size that would be needed versus the limit. For example: "data for [request] would be [123MB], which is larger than the limit of [117.5MB]".
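For illustration, the response body for a rejected request looks roughly like the following (trimmed for readability; the exact structure and wording vary by version, and the byte counts here simply mirror the 123MB / 117.5MB example above):
{
  "error": {
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<http_request>] would be [123mb], which is larger than the limit of [117.5mb]",
    "bytes_wanted": 128974848,
    "bytes_limit": 123207680,
    "durability": "TRANSIENT"
  },
  "status": 429
}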
First, understand your cluster's memory situation. Use the nodes stats API to see current memory usage:
curl -X GET "localhost:9200/_nodes/stats/jvm?pretty"Look for the "mem" section which shows JVM memory usage. Also check circuit breaker status:
curl -X GET "localhost:9200/_nodes/stats/breaker?pretty"This shows you which circuit breakers are being triggered and by how much. The "parent" limit is typically 95% of JVM heap. If usage is consistently above 85%, you need to act now before it hits the limit.
Increasing the JVM heap size is the recommended fix for production clusters. Edit the JVM options file:
Linux/Mac:
# Edit jvm.options (usually /etc/elasticsearch/jvm.options or $ES_HOME/config/jvm.options)
-Xms16g
-Xmx16g
Docker (docker-compose.yml):
services:
  elasticsearch:
    environment:
      - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
Important guidelines:
- Set both -Xms and -Xmx to the same value (prevents resize overhead)
- Never exceed 32 GB per node (JVM GC becomes inefficient above 32GB)
- Heap should be about 50% of total available RAM (leave other 50% for OS and Lucene caches)
- For a 64 GB server, set heap to 32GB; for 32GB server, set to 16GB
After changing, restart the Elasticsearch node. It will rejoin the cluster automatically.
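To confirm the new heap size took effect once the node is back, the _cat/nodes API can report each node's configured maximum heap alongside current usage:
# Show each node's maximum configured heap and current usage after the restart
curl -s "localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent"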
While increasing heap is best, you should also optimize your queries:
Reduce aggregation complexity:
{
  "size": 10,
  "aggs": {
    "top_errors": {
      "terms": {
        "field": "error_type.keyword",
        "size": 100
      }
    }
  }
}
Reduce the "size" parameter; you rarely need thousands of results. A smaller size means less memory is needed.
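If you only need the aggregation buckets and not the matching documents, you can also set the top-level "size" to 0 (a standard option), which skips fetching hits entirely and saves further memory. A trimmed version of the query above, with an illustrative bucket size:
{
  "size": 0,
  "aggs": {
    "top_errors": {
      "terms": { "field": "error_type.keyword", "size": 20 }
    }
  }
}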
Avoid aggregating on text fields:
Text fields require loading fielddata into memory. If you have a "message" text field and you're aggregating on it, you'll hit circuit breakers. Use keyword fields instead:
{
  "mappings": {
    "properties": {
      "status": {
        "type": "keyword"
      }
    }
  }
}
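With a keyword field like the one mapped above, a terms aggregation uses doc values on disk instead of building fielddata in heap. For example (the "by_status" name is just illustrative; "status" is the field from the mapping above):
{
  "size": 0,
  "aggs": {
    "by_status": {
      "terms": { "field": "status" }
    }
  }
}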
Use time-based filtering:
Always filter queries to a specific time range if possible. Don't query all data since the epoch.
{
  "query": {
    "range": {
      "timestamp": {
        "gte": "now-7d",
        "lte": "now"
      }
    }
  }
}
If you need immediate relief while implementing other fixes, clear the fielddata cache:
# Clear cache for a specific index
curl -X POST "localhost:9200/my_index/_cache/clear?fielddata=true"
# Clear cache across all indices
curl -X POST "localhost:9200/*/_cache/clear?fielddata=true"This frees memory used by fielddata but will slow down subsequent aggregations on text fields (they'll rebuild the cache). Use this as a temporary measure only.
Warning: Clearing cache can disrupt in-progress queries. Do this during low-traffic periods if possible.
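Before clearing, it can help to see which fields are actually holding fielddata. The _cat/fielddata API breaks usage down per node and field:
# Show fielddata memory usage per node and field
curl -s "localhost:9200/_cat/fielddata?v&h=node,field,size"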
You can adjust circuit breaker thresholds in elasticsearch.yml, but this is a band-aid solution; address the root cause (heap size, query optimization) instead.
# elasticsearch.yml
indices.breaker.total.limit: 75%
indices.breaker.request.limit: 60%
indices.breaker.fielddata.limit: 60%
Defaults:
- indices.breaker.total.limit: 95% (if use_real_memory is true) or 70% (if false)
- indices.breaker.request.limit: 60%
- indices.breaker.fielddata.limit: 40%
Increasing these just delays the problem. The real fix is increasing heap size or reducing memory consumption.
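These breaker limits are also dynamic cluster settings, so if you need a temporary adjustment without a restart you can apply one through the cluster settings API. The 80% below is purely illustrative; setting the value back to null restores the default:
# Adjust the parent breaker limit at runtime (illustrative value; revert with null)
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "indices.breaker.total.limit": "80%"
  }
}'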
For large deployments, add more nodes to distribute load:
# Check how many nodes you have
curl -X GET "localhost:9200/_cat/nodes?v"Each additional node adds heap capacity. A cluster with 3 nodes each with 16GB heap has more total memory (48GB) to distribute queries across than a single 16GB node.
This is especially useful if you have uneven query patterns or need to scale beyond 32GB per node.
Real Memory Tracking vs Estimated Memory:
In Elasticsearch 7.x+, you can disable real memory tracking via indices.breaker.total.use_real_memory. When set to false, the breaker only tracks reserved (estimated) memory instead of actual heap usage. This can reduce false positives, but the breaker's protection is then based on estimates, so genuine out-of-memory errors become more likely.
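This is a static setting, so it goes in elasticsearch.yml and requires a node restart to take effect:
# elasticsearch.yml: disable real memory tracking (static setting; restart required)
indices.breaker.total.use_real_memory: false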
Heap Over 32GB:
Never set the Elasticsearch heap above 32GB. Beyond roughly 32GB the JVM can no longer use compressed object pointers, so pointers double in size and garbage collection becomes much less efficient. Instead, run multiple nodes with smaller heaps.
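You can check whether a node is still benefiting from compressed pointers via the nodes info API, whose JVM section reports this flag:
# Check whether compressed ordinary object pointers are in use (should report "true")
curl -s "localhost:9200/_nodes/jvm?pretty" | grep compressed_ordinary_object_pointers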
SELinux and Permissions:
On Linux, SELinux policies may limit Elasticsearch's memory access. Check logs for SELinux denials and ensure the elasticsearch user has appropriate permissions.
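On systems with SELinux in enforcing mode, a quick way to look for relevant denials (standard audit tooling, not Elasticsearch-specific commands) is:
# Check SELinux mode and search recent audit logs for denials involving Elasticsearch
getenforce
sudo ausearch -m avc -ts recent | grep -i elasticsearch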
Monitoring Best Practices:
Set up alerts to notify you when memory usage exceeds 85%. Tools like Elastic Stack Monitoring, Kibana, or Grafana can track circuit breaker trips over time. Early warning allows you to scale proactively rather than reactively.
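One simple signal worth alerting on is the parent breaker's trip counter from the node stats API. A minimal sketch, assuming jq is installed, that prints the tripped count per node:
# Print how many times the parent circuit breaker has tripped on each node
curl -s "localhost:9200/_nodes/stats/breaker" | jq '.nodes[] | {node: .name, parent_tripped: .breakers.parent.tripped}'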