This error occurs when Elasticsearch cannot find any available copy of a shard to run a search against, typically because shards are unassigned after node failures, cluster rebalancing, or misconfigured replication settings. Resolving the shard allocation issue restores search functionality.
This exception means Elasticsearch attempted to execute your search query but found no available copies of the required shard to query against. Every index in Elasticsearch is divided into shards, and each shard can have multiple replicas. When a search comes in, Elasticsearch must route it to one of these shard copies. If all shard copies are unavailable—because nodes hosting them crashed, network connectivity is broken, or the cluster hasn't finished rebalancing—the search request cannot proceed. This differs from a missing index; the index exists, but the data isn't accessible on any node. The error indicates a shard allocation problem, not a programming bug in your query.
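For reference, the failing request is usually an ordinary search. Assuming the placeholder index name used throughout this guide, even a simple match_all query returns this exception for any shard that has no available copy:
curl -X GET "localhost:9200/your_index_name/_search?pretty" \
-H 'Content-Type: application/json' \
-d '{
"query": { "match_all": {} }
}'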
Use the cluster health API to see the overall status:
curl -X GET "localhost:9200/_cluster/health?pretty"Look at the response:
- status: red (primary shards missing), yellow (replicas missing), or green (all OK)
- unassigned_shards: count of shards not allocated to any node
- active_shards_percent_as_number: percentage of active shards
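The _cat health endpoint reports the same key numbers (status, unassigned shard count, active-shard percentage) in a compact one-line form:
curl -X GET "localhost:9200/_cat/health?v"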
Then view detailed shard status:
curl -X GET "localhost:9200/_cat/shards?v"Find shards with state UNASSIGNED and note the index and shard number.
The cluster allocation explain API reveals exactly why a shard is unassigned:
curl -X GET "localhost:9200/_cluster/allocation/explain?pretty" \
-H 'Content-Type: application/json' \
-d '{
"index": "your_index_name",
"shard": 0,
"primary": true
}'
The response includes:
- can_allocate: whether the shard can be assigned
- allocate_explanation: human-readable reason (e.g., "no nodes available after filtering")
- unassigned_info: reason shard became unassigned (e.g., NODE_LEFT, CLUSTER_RECOVERED)
Common reasons:
- "node does not meet allocation deciders" — incompatible disk/memory
- "no nodes available after filtering" — insufficient replicas for configured count
- "cannot allocate to same node as another copy" — all eligible nodes are taken
Check if shard allocation has been disabled (common during rolling restarts):
curl -X GET "localhost:9200/_cluster/settings?pretty"Look for cluster.routing.allocation.enable setting. If it is set to "none" or "primaries", enable it:
curl -X PUT "localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' \
-d '{
"persistent": {
"cluster.routing.allocation.enable": "all"
}
}'
Wait a few seconds and check cluster health again. Shards should begin allocating.
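Alternatively, clear the override entirely so the cluster falls back to its default of "all"; setting the value to null removes it. If the setting was applied under "transient" rather than "persistent", reset it there instead:
curl -X PUT "localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' \
-d '{
"persistent": {
"cluster.routing.allocation.enable": null
}
}'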
If you have more shard copies configured (the primary plus its replicas) than data nodes, the extra replicas can never be allocated.
Check current settings:
curl -X GET "localhost:9200/your_index_name/_settings?pretty"Look for number_of_replicas. If you have fewer nodes than replicas + 1 (primary), reduce replicas:
curl -X PUT "localhost:9200/your_index_name/_settings" \
-H 'Content-Type: application/json' \
-d '{
"index": {
"number_of_replicas": 0
}
}'
For production, the better long-term fix is to add more nodes to your cluster so replicas can spread across them for redundancy.
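Once extra nodes have joined, raise the replica count back up to restore redundancy (one replica per shard is a common baseline; adjust to your requirements):
curl -X PUT "localhost:9200/your_index_name/_settings" \
-H 'Content-Type: application/json' \
-d '{
"index": {
"number_of_replicas": 1
}
}'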
Elasticsearch will not allocate shards to nodes running low on disk or memory:
curl -X GET "localhost:9200/_nodes/stats/fs,jvm?pretty"Look at each node's:
- fs.available_in_bytes: free disk space
- jvm.mem.heap_used_percent: heap memory pressure
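For a quicker per-node overview, the _cat allocation endpoint summarizes disk usage and shard counts for every data node:
curl -X GET "localhost:9200/_cat/allocation?v"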
If disk is near capacity, you have a few options:
1. Delete old indices: curl -X DELETE "localhost:9200/old_index"
2. Add more disk space to the node
3. Increase the low disk watermark thresholds (advanced; not recommended long-term)
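If you do choose option 3 and raise the watermarks temporarily, the relevant cluster settings are sketched below; the percentages are placeholders, and the change should be reverted once disk pressure is resolved:
curl -X PUT "localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' \
-d '{
"transient": {
"cluster.routing.allocation.disk.watermark.low": "90%",
"cluster.routing.allocation.disk.watermark.high": "95%",
"cluster.routing.allocation.disk.watermark.flood_stage": "97%"
}
}'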
If heap usage is consistently high, increase the JVM heap size (-Xms and -Xmx) in jvm.options and restart the node.
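As a sketch (the paths assume a package-based install with configuration under /etc/elasticsearch and Elasticsearch 7.7 or later; adjust for your layout), a drop-in file under jvm.options.d sets both the minimum and maximum heap:
printf '%s\n' '-Xms4g' '-Xmx4g' | sudo tee /etc/elasticsearch/jvm.options.d/heap.options
sudo systemctl restart elasticsearch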
If you have data loss and need to force allocation of unassigned primary shards (risky—may cause inconsistency):
curl -X POST "localhost:9200/_cluster/reroute" \
-H 'Content-Type: application/json' \
-d '{
"commands": [{
"allocate_stale_primary": {
"index": "your_index_name",
"shard": 0,
"node": "node_id"
}
}]
}'
Only do this if: primary shard data is lost, all replicas are gone, and you accept potential data inconsistency. This should never be done in production without careful analysis.
Check node IDs with:
curl -X GET "localhost:9200/_cat/nodes?v"After making changes, monitor the cluster as shards reallocate:
curl -X GET "localhost:9200/_cluster/health?wait_for_status=green&timeout=5m&pretty"This waits up to 5 minutes for the cluster to reach green status. Watch the logs:
tail -f /var/log/elasticsearch/elasticsearch.log | grep -i "allocation"
You'll see messages like "completed allocation of shard [index_name][0]". Once status is green and unassigned_shards is 0, your searches should work again.
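If you prefer a live view while shards recover, a simple polling loop works (assuming the watch utility is installed); the second command lists only recoveries currently in flight:
watch -n 5 'curl -s "localhost:9200/_cat/health?v"; curl -s "localhost:9200/_cat/recovery?v&active_only=true"'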
- Snapshot recovery: if nodes are permanently lost, restore the affected indices from a snapshot: curl -X POST "localhost:9200/_snapshot/repo_name/snapshot_name/_restore"
- Shard filtering: use allocation awareness to pin shards to specific nodes with rack/zone awareness.
- Allocation retries: Elasticsearch retries a failed allocation up to index.allocation.max_retries times (default 5) before giving up on the shard.
- Circuit breaker: if you see frequent "no shard available" errors during heavy indexing, the circuit breaker may be rejecting operations; check indices.breaker.total.limit.
- Read preferences: use preference=_local in search queries to query local shards when available, falling back to other nodes.
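A practical companion to the retry limit mentioned above: once the underlying cause is fixed, you can ask Elasticsearch to retry shards whose allocation already hit the retry cap:
curl -X POST "localhost:9200/_cluster/reroute?retry_failed=true&pretty"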