This error occurs when Elasticsearch attempts to restore a snapshot to an index that is currently open and receiving writes. Elasticsearch prevents snapshot restoration to open indices to avoid data corruption and ensure consistency. The index must be closed first before restoration can proceed.
The "SnapshotRestoreException: cannot restore index [index] because it's open" error occurs when you attempt to restore a snapshot to an index that is currently open and active in Elasticsearch. This is a safety mechanism: when Elasticsearch restores a snapshot, it needs to completely overwrite the target index with the snapshot data, and if the index were open and receiving writes, several problems could occur:

1. **Data corruption**: concurrent writes during restoration could create inconsistent data
2. **Mapping conflicts**: the restored index might have different mappings than the current open index
3. **Version conflicts**: document versions in the snapshot could conflict with ongoing operations
4. **Shard allocation issues**: restoration requires exclusive access to the index's shards

The "[index]" placeholder in the message is the name of the specific index that must be closed before restoration can proceed. This protection keeps snapshot restoration atomic: it either completely succeeds or fails without corrupting existing data.
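As a concrete illustration, the conflict can be detected programmatically. The sketch below (repository, snapshot, index, and credential names are all placeholder assumptions) attempts a restore and checks the response body for the `snapshot_restore_exception` error type that accompanies this message:

```shell
# All names here are hypothetical; adjust repository/snapshot/index/credentials.
is_restore_conflict() {
  # The REST response for this failure carries the error type
  # "snapshot_restore_exception" in its JSON body.
  printf '%s' "$1" | grep -q 'snapshot_restore_exception'
}

RESPONSE=$(curl -s -X POST \
  "localhost:9200/_snapshot/my_repository/my_snapshot/_restore" \
  -u "username:password" -H 'Content-Type: application/json' -d'
{
  "indices": "my_index",
  "include_global_state": false
}
')

if is_restore_conflict "$RESPONSE"; then
  echo "my_index is still open; close it before restoring"
fi
```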
First, identify which indices are open and need to be closed:
# Check status of all indices
curl -X GET "localhost:9200/_cat/indices?v" -u "username:password"
# Check specific index status
curl -X GET "localhost:9200/my_index" -u "username:password"
# Use the _cat/indices API to see open/closed status
curl -X GET "localhost:9200/_cat/indices/my_index?format=json&h=index,status" -u "username:password"
# Check which indices are included in your snapshot
curl -X GET "localhost:9200/_snapshot/my_repository/my_snapshot" -u "username:password"

Look for:
- "status": "open" in index metadata
- Indices with active read/write operations
- Production indices that should not be restored over
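These checks can be wrapped in a small pre-flight helper. A sketch (index name and credentials are placeholders) that reads the `status` column from `_cat/indices`:

```shell
# Decide from the _cat/indices status column ("open" or "close")
# whether the index still needs to be closed before a restore.
needs_close() {
  [ "$1" = "open" ]
}

# Trim whitespace from the _cat output so the comparison is exact
STATUS=$(curl -s "localhost:9200/_cat/indices/my_index?h=status" \
  -u "username:password" | tr -d '[:space:]')

if needs_close "$STATUS"; then
  echo "my_index is open and must be closed before restoring"
fi
```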
Close the index that needs to be restored. This will make it temporarily unavailable for reads and writes:
# Close the specific index
curl -X POST "localhost:9200/my_index/_close" -u "username:password"
# Verify the index is closed (the status column should show "close")
curl -X GET "localhost:9200/_cat/indices/my_index?v&h=index,status" -u "username:password"
# On Elasticsearch 7.2+, GET my_index also shows "verified_before_close": "true" in the index settings after a successful close
# For multiple indices, close them all
curl -X POST "localhost:9200/index1,index2,index3/_close" -u "username:password"
# Using wildcards (be careful!)
curl -X POST "localhost:9200/logs-*/_close" -u "username:password"

Important considerations:
- Closing an index makes it temporarily unavailable
- Plan for downtime during the restore operation
- Consider business impact before closing production indices
- Backup current index data if needed before restoration
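Before closing a production index, one hedge is to snapshot its current state so the pre-restore data remains recoverable. A sketch, assuming the same `my_repository` repository exists (all names are placeholders):

```shell
# Build a unique safety-snapshot name, e.g. safety-my_index-20240101120000
safety_snapshot_name() {
  printf 'safety-%s-%s' "$1" "$(date +%Y%m%d%H%M%S)"
}

SNAP_NAME=$(safety_snapshot_name "my_index")

# Snapshot only the index that is about to be overwritten
curl -s -X PUT \
  "localhost:9200/_snapshot/my_repository/${SNAP_NAME}?wait_for_completion=true" \
  -u "username:password" -H 'Content-Type: application/json' -d'
{
  "indices": "my_index",
  "include_global_state": false
}
'
echo "requested safety snapshot ${SNAP_NAME}"
```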
Now that the index is closed, restore the snapshot:
# Basic restore to original index names
curl -X POST "localhost:9200/_snapshot/my_repository/my_snapshot/_restore?wait_for_completion=true" -u "username:password" -H 'Content-Type: application/json' -d'
{
"indices": "my_index",
"ignore_unavailable": false,
"include_global_state": false
}
'
# Restore with rename pattern (safer for production)
curl -X POST "localhost:9200/_snapshot/my_repository/my_snapshot/_restore?wait_for_completion=true" -u "username:password" -H 'Content-Type: application/json' -d'
{
"indices": "my_index",
"ignore_unavailable": false,
"include_global_state": false,
"rename_pattern": "(.+)",
"rename_replacement": "restored_$1"
}
'
# Monitor restore progress
curl -X GET "localhost:9200/_snapshot/my_repository/my_snapshot/_status" -u "username:password"

Restoration strategies:
- Restore to temporary index names first for validation
- Use wait_for_completion=true for synchronous operations
- Monitor cluster resources during large restores
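When restoring without `wait_for_completion=true`, progress can be polled instead. A sketch using the stage column of `_cat/recovery` (index name and credentials are placeholders); the loop exits once every shard reports `done`:

```shell
# Poll until no shard of the index is in a recovery stage other than "done".
# grep -qv succeeds while at least one non-"done" stage line remains.
until ! curl -s "localhost:9200/_cat/recovery/my_index?h=stage" \
        -u "username:password" | grep -qv done; do
  echo "restore still in progress..."
  sleep 5
done
echo "all shards report done"
```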
After successful restoration, reopen the index:
# Reopen the restored index
curl -X POST "localhost:9200/my_index/_open" -u "username:password"
# Verify the index is open and healthy
curl -X GET "localhost:9200/_cat/indices/my_index?v" -u "username:password"
# Check index health and document count
curl -X GET "localhost:9200/my_index/_count" -u "username:password"
curl -X GET "localhost:9200/my_index/_search?size=0" -u "username:password"
# If using renamed indices, consider reindexing or alias switching
curl -X POST "localhost:9200/_reindex?wait_for_completion=true" -u "username:password" -H 'Content-Type: application/json' -d'
{
"source": {
"index": "restored_my_index"
},
"dest": {
"index": "my_index"
}
}
'

Post-restore checks:
- Verify document counts match expectations
- Test search functionality
- Check mappings and settings
- Update index aliases if needed
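The document-count check can be scripted; a sketch (the expected count and all names are placeholder assumptions) that parses the `_count` response:

```shell
# Pull the integer out of a _count response like {"count":42,"_shards":{...}}
extract_count() {
  printf '%s' "$1" | sed -n 's/.*"count":\([0-9]*\).*/\1/p'
}

EXPECTED=42   # hypothetical count recorded before the restore
RESPONSE=$(curl -s "localhost:9200/my_index/_count" -u "username:password")
ACTUAL=$(extract_count "$RESPONSE")

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "document count matches ($ACTUAL)"
else
  echo "count mismatch: expected $EXPECTED, got ${ACTUAL:-nothing}"
fi
```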
Prevent this error with better restore workflows:
# 1. Always check index status before restore
curl -X GET "localhost:9200/_cat/indices/my_index?format=json&h=index,status" -u "username:password"
# 2. Create restore scripts with pre-flight checks
#!/bin/bash
INDEX="my_index"
STATUS=$(curl -s -X GET "localhost:9200/_cat/indices/$INDEX?format=json&h=status" -u "username:password" | jq -r '.[0].status')
if [ "$STATUS" = "open" ]; then
echo "Index $INDEX is open. Closing before restore..."
curl -X POST "localhost:9200/$INDEX/_close" -u "username:password"
fi
# 3. Use index aliases for zero-downtime restores
curl -X POST "localhost:9200/_aliases" -u "username:password" -H 'Content-Type: application/json' -d'
{
"actions": [
{
"add": {
"index": "restored_my_index",
"alias": "my_index"
}
},
{
"remove": {
"index": "old_my_index",
"alias": "my_index"
}
}
]
}
'
# 4. Schedule restores during maintenance windows
# Use cron or scheduler tools for off-hours operations

Best practices:
- Always test restores on non-production clusters first
- Maintain multiple snapshots for rollback options
- Document restore procedures for your team
- Monitor disk space during restore operations
## Advanced Restoration Scenarios
### Zero-Downtime Restore Patterns
For production systems requiring 24/7 availability:
1. Blue-Green Restoration:
- Restore to a new index (e.g., my_index_v2)
- Use index aliases to switch traffic
- Delete old index after verification
2. Index Aliasing Strategy:
# Atomically switch the alias from the old index to the restored one
curl -X POST "localhost:9200/_aliases" -u "username:password" -H 'Content-Type: application/json' -d'
{
"actions": [
{"remove": {"index": "my_index_old", "alias": "my_index"}},
{"add": {"index": "my_index_restored", "alias": "my_index"}}
]
}
'

3. Read-Only Mode:
- Set index to read-only before closing
- Allows reads during preparation phase
- Minimizes write disruption
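The read-only step can be done with a settings update; a sketch (index name and credentials are placeholders) using the standard `index.blocks.write` setting:

```shell
# Block writes while still serving reads; remove the block again afterwards
# by setting "index.blocks.write" back to false (or null).
curl -X PUT "localhost:9200/my_index/_settings" \
  -u "username:password" -H 'Content-Type: application/json' -d'
{
  "index.blocks.write": true
}
'
```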
### Partial Index Restoration
When you only need to restore specific data:
# Restore with modified index settings (e.g., drop replicas during the restore)
curl -X POST "localhost:9200/_snapshot/my_repository/my_snapshot/_restore" -u "username:password" -H 'Content-Type: application/json' -d'
{
"indices": "my_index",
"index_settings": {
"index.number_of_replicas": 0
},
"ignore_index_settings": ["index.refresh_interval"],
"include_aliases": false
}
'
# Note: Elasticsearch cannot filter documents during a restore itself;
# to keep only a subset, restore to a temporary index and reindex with a query

### Cross-Cluster Restoration
For disaster recovery scenarios:
# Register a read-only URL repository pointing at the snapshot files
# (the URL must serve the repository contents over HTTP, e.g. a file
# server in front of the shared snapshot directory, not another
# Elasticsearch endpoint)
curl -X PUT "localhost:9200/_snapshot/remote_repo" -u "username:password" -H 'Content-Type: application/json' -d'
{
"type": "url",
"settings": {
"url": "http://backup-server/snapshots/repo"
}
}
'
# Restore from remote
curl -X POST "localhost:9200/_snapshot/remote_repo/my_snapshot/_restore" -u "username:password"

### Performance Considerations
Large index restores can impact cluster performance:
1. Throttle Restoration:
# Set restore throttling
curl -X PUT "localhost:9200/_cluster/settings" -u "username:password" -H 'Content-Type: application/json' -d'
{
"transient": {
"indices.recovery.max_bytes_per_sec": "50mb"
}
}
'

2. Monitor During Restore:
- Thread pools: _cat/thread_pool?v
- Disk I/O: _cat/nodes?v&h=name,disk.*
- Network: _cat/nodes?v&h=name,transport.*
3. Resource Planning:
- Ensure sufficient disk space (snapshot + restored data)
- Monitor heap usage during large restores
- Consider restoring during off-peak hours
### Security and Compliance
For regulated environments:
1. Encrypted Snapshots:
- Use repository encryption features
- Manage encryption keys securely
- Audit restore operations
2. Access Controls:
- Limit who can close/open indices
- Restrict snapshot restore permissions
- Log all restore operations
3. Data Governance:
- Maintain restore audit trails
- Document data lineage
- Comply with retention policies
### Troubleshooting Complex Cases
Index Corruption During Restore:
If restoration fails mid-process:
1. Check cluster logs for specific errors
2. Verify repository connectivity
3. Check disk space on all nodes
4. Examine shard allocation issues
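These checks can be partially automated; a sketch (repository name and credentials are placeholders) that exercises the standard repository-verify and allocation APIs and reports which call failed:

```shell
# Run a diagnostic command and report success or failure without aborting.
check() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok: $label"
  else
    echo "FAILED: $label"
  fi
}

# Repository reachability from every node
check "repository verify" curl -sf -X POST \
  "localhost:9200/_snapshot/my_repository/_verify" -u "username:password"
# Why a shard is (or is not) being allocated
check "allocation explain" curl -sf \
  "localhost:9200/_cluster/allocation/explain" -u "username:password"
# Disk usage per node
check "disk allocation" curl -sf \
  "localhost:9200/_cat/allocation?v" -u "username:password"
```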
Version Compatibility Issues:
When restoring across Elasticsearch versions:
1. Check the version compatibility matrix (a snapshot can generally only be restored into the same or the next major version)
2. For older snapshots, restore on an intermediate-version cluster and reindex forward
3. Test the restore on staging first
Network/Storage Issues:
For cloud or remote repositories:
1. Verify network connectivity
2. Check cloud storage permissions
3. Monitor transfer rates and timeouts