The "OutOfDiskSpace: Disk space is critically low" error in MongoDB occurs when the database server runs out of available disk space for data files, journal files, or temporary operations. This critical error prevents MongoDB from writing new data and can cause database operations to fail, potentially leading to data loss or service disruption if not addressed promptly.
The "OutOfDiskSpace: Disk space is critically low" error (MongoDB error code 14031) is a critical storage-related error that occurs when MongoDB detects insufficient disk space to perform necessary operations. This error indicates that the database server cannot allocate additional disk space for:

1. **Data file growth**: MongoDB data files (.wt files for WiredTiger) need space to expand as data is inserted
2. **Journal files**: Write-ahead journal files require space for crash recovery
3. **Temporary files**: Sort operations, index builds, and aggregation pipelines create temporary files
4. **Oplog growth**: In replica sets, the oplog requires continuous disk space

When MongoDB encounters this error, it typically:

- Stops accepting write operations to prevent data corruption
- Logs the error in mongod.log with severity "F" (fatal)
- May enter a read-only state for affected databases
- Can cause replica set members to go into an error state

The error is particularly dangerous because:

- It can occur suddenly during normal operations if disk monitoring is inadequate
- It affects all databases on the same storage volume
- Recovery requires immediate administrative action to free disk space
- Prolonged disk exhaustion can lead to data file corruption

MongoDB monitors disk space and raises this error when available space falls below operational thresholds, which vary by storage engine and MongoDB version.
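From inside mongosh, `db.stats()` reports `fsUsedSize` and `fsTotalSize` for the volume hosting the database. As a rough sketch, you can classify how close a deployment is to this error; the 5% and 20% thresholds below are illustrative assumptions, not MongoDB's internal limits:

```javascript
// Classify disk pressure from the fsUsedSize/fsTotalSize fields that
// db.stats() returns in mongosh. Thresholds are illustrative assumptions.
function diskPressure(fsUsedSize, fsTotalSize) {
  var freePct = 100 * (fsTotalSize - fsUsedSize) / fsTotalSize;
  if (freePct < 5) return 'CRITICAL';  // OutOfDiskSpace territory
  if (freePct < 20) return 'WARNING';  // start freeing space now
  return 'OK';
}

// Example with a 100GB volume that is 97GB full:
console.log(diskPressure(97 * 1024 ** 3, 100 * 1024 ** 3)); // CRITICAL
```

In mongosh you would call it as `diskPressure(db.stats().fsUsedSize, db.stats().fsTotalSize)`.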
First, determine how much disk space is available and what's consuming it:
Check overall disk usage:
# Check disk usage on Linux/Unix
df -h /var/lib/mongodb # or your MongoDB data directory
# Check inode usage (important for some filesystems)
df -i /var/lib/mongodb
# Check detailed disk usage by directory
du -sh /var/lib/mongodb/*
du -sh /var/lib/mongodb/*/* 2>/dev/null | sort -hr | head -20
# For Windows
wmic logicaldisk get size,freespace,caption
dir /s "C:\Program Files\MongoDB\Server\data"

Check MongoDB-specific storage usage:
// In mongosh or mongo shell
use admin
// Check database sizes
show dbs
// Check collection sizes for each database
db.getCollectionNames().forEach(function(coll) {
  var stats = db.getCollection(coll).stats();
  print(coll + ': ' + stats.size + ' bytes (' +
    Math.round(stats.size / 1024 / 1024) + ' MB)');
});
// Check index sizes
db.getCollectionNames().forEach(function(coll) {
  var stats = db.getCollection(coll).stats();
  print(coll + ' indexes: ' + stats.totalIndexSize + ' bytes');
});

Identify largest collections:
// Run in each database
db.getCollectionNames().forEach(function(coll) {
  var size = db.getCollection(coll).stats().size;
  if (size > 100 * 1024 * 1024) { // Larger than 100MB
    print('Large collection: ' + coll + ' - ' +
      Math.round(size / 1024 / 1024) + ' MB');
  }
});

Free disk space without risking data corruption:
Remove old log files:
# Rotate and compress MongoDB logs
sudo logrotate -f /etc/logrotate.d/mongod
# Remove old log files (keep last 7 days)
find /var/log/mongodb -name "*.log*" -mtime +7 -delete
# Compress large log files
find /var/log/mongodb -name "*.log" -size +100M -exec gzip {} \;

Clean up MongoDB temporary files:
# Check for orphaned temporary files (skip WiredTiger's own .wt.tmp files)
find /var/lib/mongodb -name "*.tmp" ! -name "*.wt.tmp" -mtime +1 -delete
find /var/lib/mongodb -name "mongodb-*.sock" -delete
find /tmp -name "mongodb-*" -mtime +1 -delete

Remove unnecessary system files:
# Clean package manager cache
sudo apt-get clean # Debian/Ubuntu
sudo yum clean all # RHEL/CentOS
# Remove old kernel versions (Linux)
sudo apt-get autoremove --purge # Debian/Ubuntu
# Clear systemd journal logs
sudo journalctl --vacuum-time=3d
sudo journalctl --vacuum-size=500M

For emergency space, consider compressing existing data:
# Compress uncompressed backup files if present
find /var/backups -type f ! -name "*.gz" -exec gzip {} \;
# Check for core dumps
find / -name "core.*" -size +100M -delete 2>/dev/null

Important: Do NOT delete MongoDB data files (.wt, .wt.tmp), journal files, or the oplog while MongoDB is running.
Safely reduce MongoDB data size:
Identify candidates for archiving:
// Find old data that can be archived
use yourDatabase
// Example: Find documents older than 1 year
var oldData = db.yourCollection.countDocuments({
  createdAt: { $lt: new Date(Date.now() - 365 * 24 * 60 * 60 * 1000) }
});
print('Documents older than 1 year: ' + oldData);
// Check for duplicate or temporary data
db.yourCollection.aggregate([
  { $match: { status: 'temporary' } },
  { $count: 'temporaryDocuments' }
]);

Archive data to another location:
# Use mongodump to archive old data
mongodump --db yourDatabase --collection yourCollection --query '{"createdAt": {"$lt": {"$date": "2024-01-01T00:00:00Z"}}}' --out /path/to/archive/
# Compress the archive
tar -czf /path/to/archive/mongodb-archive-$(date +%Y%m%d).tar.gz /path/to/archive/yourDatabase

Remove archived data from MongoDB:
// Remove old data after verifying archive
use yourDatabase
// First, create index for efficient deletion
db.yourCollection.createIndex({ createdAt: 1 });
// Delete in batches to avoid overwhelming the system
var batchSize = 1000;
var cutoffDate = new Date(Date.now() - 365 * 24 * 60 * 60 * 1000);
while (true) {
  // deleteMany() has no limit option, so select a batch of _ids first
  var ids = db.yourCollection.find(
    { createdAt: { $lt: cutoffDate } }, { _id: 1 }
  ).limit(batchSize).toArray().map(function(doc) { return doc._id; });
  if (ids.length === 0) {
    break;
  }
  var result = db.yourCollection.deleteMany({ _id: { $in: ids } });
  print('Deleted ' + result.deletedCount + ' documents');
  // Small pause between batches
  sleep(100);
}

Enable TTL indexes for automatic cleanup:
// Create TTL index to automatically remove old documents
db.yourCollection.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 365 * 24 * 60 * 60 } // 1 year
);
// For time-series collections (MongoDB 5.0+)
db.createCollection("logs", {
  timeseries: {
    timeField: "timestamp",
    metaField: "metadata",
    granularity: "hours"
  },
  expireAfterSeconds: 30 * 24 * 60 * 60 // 30 days
});

Reclaim wasted space and optimize storage:
Compact collections to reclaim space:
// For WiredTiger storage engine (default)
use yourDatabase
// Run compact on specific collections
db.runCommand({ compact: 'yourCollection' });
// Monitor progress (run in another shell)
db.currentOp({ "command.compact": { $exists: true } });

Note: The compact command may require additional temporary disk space during operation. Ensure you have at least 10-20% free space before running.
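Before running compact, you can sanity-check the available headroom using the `fsUsedSize`/`fsTotalSize` fields from `db.stats()`. The 15% default below is an assumption drawn from the 10-20% guidance above, not a documented MongoDB requirement:

```javascript
// Returns true if the volume likely has enough free space to risk a compact.
// minFreeFraction defaults to 0.15 (an assumption, not a documented limit).
function hasCompactHeadroom(fsUsedSize, fsTotalSize, minFreeFraction) {
  minFreeFraction = minFreeFraction || 0.15;
  return (fsTotalSize - fsUsedSize) / fsTotalSize >= minFreeFraction;
}

console.log(hasCompactHeadroom(90, 100)); // false: only 10% free
console.log(hasCompactHeadroom(80, 100)); // true: 20% free
```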
Reclaim space from deleted documents:
// Check storage statistics before and after
var beforeStats = db.yourCollection.stats();
// Force storage reclamation (force: true allows compact on a replica set primary)
db.runCommand({
  compact: 'yourCollection',
  force: true
});
var afterStats = db.yourCollection.stats();
print('Space reclaimed: ' +
  (beforeStats.storageSize - afterStats.storageSize) + ' bytes');

Rebuild indexes to reduce size:
// Rebuild indexes on a collection by dropping and recreating them
// (reIndex() is deprecated and was removed from recent MongoDB versions)
db.yourCollection.getIndexes().forEach(function(idx) {
  if (idx.name !== '_id_') {
    db.yourCollection.dropIndex(idx.name);
    // Re-apply any original index options (unique, TTL, etc.) as needed
    db.yourCollection.createIndex(idx.key, { name: idx.name });
  }
});
// Check index sizes before and after
printjson(db.yourCollection.stats().indexSizes);

Enable compression for new collections:
// Create new collection with snappy compression (default)
db.createCollection("newCollection", {
  storageEngine: {
    wiredTiger: {
      configString: "block_compressor=snappy"
    }
  }
});
// For better compression (more CPU)
db.createCollection("highlyCompressed", {
  storageEngine: {
    wiredTiger: {
      configString: "block_compressor=zlib"
    }
  }
});

Convert existing collection to use compression:
# Changing compression requires a dump/restore cycle:
# 1. Dump the collection, 2. drop it, 3. recreate it with the desired
# compressor, 4. restore the data
mongodump --db yourDatabase --collection yourCollection --out /path/to/dump
mongosh yourDatabase --eval 'db.yourCollection.drop();
  db.createCollection("yourCollection", {
    storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } }
  });'
mongorestore --db yourDatabase --collection yourCollection /path/to/dump/yourDatabase/yourCollection.bson

Permanently resolve disk space issues:
Increase disk size (cloud environments):
# AWS EBS volume resize
aws ec2 modify-volume --volume-id vol-12345 --size 100
# Then extend filesystem
sudo growpart /dev/xvdf 1
sudo resize2fs /dev/xvdf1 # ext4
# or
sudo xfs_growfs /mount/point # xfs
# Azure Disk resize
az disk update --resource-group myRG --name myDisk --size-gb 200
# Google Cloud Persistent Disk
gcloud compute disks resize my-disk --size=200GB --zone=us-central1-a

Add additional storage volume:
# Add new volume and mount it
sudo mkdir /mnt/mongodb_data2
sudo mount /dev/sdb1 /mnt/mongodb_data2
# Update MongoDB configuration
sudo nano /etc/mongod.conf
# Change storage.dbPath to the new location, move the data files,
# and fix ownership (e.g. chown -R mongodb:mongodb) before restarting mongod

Set up disk space monitoring:
# Create monitoring script
cat > /usr/local/bin/check_mongodb_disk.sh << 'EOF'
#!/bin/bash
THRESHOLD=20                 # Minimum percentage of free space
DATA_DIR="/var/lib/mongodb"
# df --output=pcent reports USED space, so derive free space from it
USED_PCT=$(df --output=pcent "$DATA_DIR" | tail -1 | tr -d '% ')
FREE_PCT=$((100 - USED_PCT))
if [ "$FREE_PCT" -gt "$THRESHOLD" ]; then
  echo "OK: $FREE_PCT% free space in $DATA_DIR"
  exit 0
else
  echo "CRITICAL: Only $FREE_PCT% free space in $DATA_DIR"
  exit 2
fi
EOF
chmod +x /usr/local/bin/check_mongodb_disk.sh
# Schedule regular checks
echo "*/5 * * * * root /usr/local/bin/check_mongodb_disk.sh" | sudo tee /etc/cron.d/mongodb-disk-check

Configure MongoDB storage limits:
# In /etc/mongod.conf
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true   # Removed in MongoDB 6.1+; journaling is always on
  wiredTiger:
    engineConfig:
      # Cache size (default: 50% of RAM - 1GB)
      cacheSizeGB: 4
      # Journal compressor
      journalCompressor: snappy
      # Store indexes in a separate subdirectory, which can be
      # symlinked to a different volume
      # directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true

Set up alerts for disk space:
- CloudWatch alarms (AWS)
- Azure Monitor alerts
- Google Cloud Monitoring
- Nagios/Icinga checks
- Prometheus + Grafana dashboards
Prevent future disk space issues:
Implement data lifecycle management:
// Regular cleanup job
db.createCollection("cleanupJobs", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["collection", "query", "schedule"],
      properties: {
        collection: { bsonType: "string" },
        query: { bsonType: "object" },
        schedule: { bsonType: "string" },
        lastRun: { bsonType: "date" },
        enabled: { bsonType: "bool" }
      }
    }
  }
});
// Example cleanup job document
db.cleanupJobs.insertOne({
  collection: "logs",
  query: { timestamp: { $lt: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) } },
  schedule: "0 2 * * *", // Daily at 2 AM
  lastRun: null,
  enabled: true
});

Use capped collections for high-volume data:
// Create capped collection (fixed size, FIFO)
db.createCollection("recentEvents", {
  capped: true,
  size: 100 * 1024 * 1024, // 100MB
  max: 100000 // Maximum document count
});
// Convert an existing collection to capped (irreversible; back up first)
db.runCommand({ convertToCapped: "logs", size: 100 * 1024 * 1024 });

Implement sharding for horizontal scaling:
// Enable sharding on database
sh.enableSharding("yourDatabase");
// Shard collection by date range (caution: a monotonically increasing
// key concentrates inserts on one shard; consider a hashed key instead)
sh.shardCollection("yourDatabase.logs", { timestamp: 1 });
// Add shards as needed
sh.addShard("shard1/mongodb1:27017");
sh.addShard("shard2/mongodb2:27017");

Configure proper backup retention:
#!/bin/bash
# Backup script with rotation
BACKUP_DIR="/backups/mongodb"
RETENTION_DAYS=30
# Create backup
mongodump --out "$BACKUP_DIR/$(date +%Y%m%d_%H%M%S)"
# Compress backup
find "$BACKUP_DIR" -name "*.bson" -exec gzip {} \;
# Remove old backups
find "$BACKUP_DIR" -type f -mtime +$RETENTION_DAYS -delete
# Keep only last N backups
ls -t "$BACKUP_DIR"/*.gz | tail -n +6 | xargs -r rm -f  # keep newest 5

Monitor and alert on growth trends:
// Growth monitoring query
// Snapshot collection sizes and compare with the previous snapshot
// (stores snapshots in a helper sizeHistory collection)
db.getCollectionNames().forEach(function(coll) {
  if (coll === 'sizeHistory') return;
  var size = db.getCollection(coll).stats().size;
  var prev = db.sizeHistory.find({ coll: coll }).sort({ ts: -1 }).limit(1).toArray()[0];
  db.sizeHistory.insertOne({ coll: coll, size: size, ts: new Date() });
  if (prev) {
    var growthGB = (size - prev.size) / (1024 * 1024 * 1024);
    if (growthGB > 1) { // Grew more than 1GB since last snapshot
      print('Rapid growth in ' + coll + ': ' + growthGB.toFixed(2) + ' GB');
    }
  }
});

Set up regular maintenance schedule:
- Weekly: Check disk usage and growth trends
- Monthly: Review and adjust TTL indexes
- Quarterly: Review data archiving strategy
- Annually: Review storage capacity planning
Understanding MongoDB Storage Architecture:
MongoDB uses several types of files that consume disk space:
1. Data Files (.wt): WiredTiger storage engine data files
2. Journal Files: Write-ahead log for crash recovery
- Journal files are typically 100MB each (configurable)
- MongoDB keeps journal files until they're no longer needed for recovery
3. Oplog: Operation log for replication
- Default size is 5% of free disk space (min 990MB, max 50GB)
- Can be adjusted with the --oplogSize startup option or replication.oplogSizeMB in mongod.conf
4. Temporary Files: Created during:
- Sort operations that exceed memory limits
- Index builds
- Aggregation pipeline stages ($group, $sort with large datasets)
5. Diagnostic Files: Core dumps, diagnostic data collections
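The documented WiredTiger oplog default (5% of free disk space, clamped between 990MB and 50GB) can be sketched as:

```javascript
// Default WiredTiger oplog size: 5% of free disk space,
// clamped between 990MB and 50GB.
function defaultOplogSizeMB(freeDiskMB) {
  var MIN_MB = 990;
  var MAX_MB = 50 * 1024;
  return Math.min(MAX_MB, Math.max(MIN_MB, freeDiskMB * 0.05));
}

console.log(defaultOplogSizeMB(10 * 1024));   // 990  (small disk, clamped up)
console.log(defaultOplogSizeMB(100 * 1024));  // 5120 (plain 5%)
console.log(defaultOplogSizeMB(2048 * 1024)); // 51200 (2TB free, clamped down)
```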
Filesystem Considerations:
- Ext4/XFS: Recommended for MongoDB on Linux
- NTFS: Acceptable for Windows deployments
- ZFS: Good for compression but requires careful tuning
- Btrfs: Generally not recommended for production MongoDB
- Cloud Storage: EBS, Persistent Disk, Managed Disks have specific performance characteristics
Inode Exhaustion:
Even with free disk space, MongoDB can fail if inodes are exhausted. Monitor with:
df -i /var/lib/mongodb

Compression Trade-offs:
- snappy (default): Fast, moderate compression
- zlib: Better compression, higher CPU
- none: No compression, fastest
- zstd (MongoDB 4.2+): Good balance of speed and compression
WiredTiger Cache Considerations:
The WiredTiger cache (default: the larger of 50% of (RAM - 1GB) or 256MB) affects disk I/O:
- Larger cache reduces disk reads but uses more RAM
- Monitor cache usage: db.serverStatus().wiredTiger.cache
- Adjust with storage.wiredTiger.engineConfig.cacheSizeGB
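The default cache computation mentioned above (the larger of 50% of (RAM - 1GB) and 256MB) can be expressed directly:

```javascript
// Default WiredTiger internal cache size in GB:
// the larger of 50% of (RAM - 1GB) and 256MB.
function defaultWiredTigerCacheGB(ramGB) {
  return Math.max(0.5 * (ramGB - 1), 0.25);
}

console.log(defaultWiredTigerCacheGB(16)); // 7.5
console.log(defaultWiredTigerCacheGB(1));  // 0.25 (the 256MB floor)
```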
Recovery from Full Disk Scenarios:
If MongoDB crashes due to full disk:
1. Free disk space without starting MongoDB
2. Check data files for corruption:
mongod --dbpath /var/lib/mongodb --repair
3. Restore from backup if repair fails
4. Re-sync replica set members if applicable
Preventive Monitoring Metrics:
Monitor these key metrics:
- Disk free space percentage
- Disk I/O latency
- MongoDB data file growth rate
- Journal file rotation frequency
- Temporary file creation rate
- Oplog window duration
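A small sketch of how these metrics might be rolled into a single health check; all threshold values here are illustrative assumptions to tune for your deployment:

```javascript
// Evaluate a set of MongoDB disk-health metrics against thresholds.
// All thresholds are illustrative assumptions, not MongoDB defaults.
function diskHealthAlerts(metrics) {
  var alerts = [];
  if (metrics.freeSpacePct < 20) alerts.push('low disk space');
  if (metrics.ioLatencyMs > 50) alerts.push('high disk I/O latency');
  if (metrics.dailyGrowthGB > 5) alerts.push('rapid data file growth');
  if (metrics.oplogWindowHours < 24) alerts.push('short oplog window');
  return alerts;
}

console.log(diskHealthAlerts({
  freeSpacePct: 12, ioLatencyMs: 8, dailyGrowthGB: 1, oplogWindowHours: 10
})); // [ 'low disk space', 'short oplog window' ]
```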
Cloud-Specific Considerations:
AWS EBS:
- Monitor burst balance for gp2/gp3 volumes
- Consider io2 or io2 Block Express for high performance
- Use provisioned IOPS for predictable performance
Azure Managed Disks:
- Choose appropriate disk tier (P, E, S series)
- Enable bursting for intermittent high performance needs
- Monitor disk throughput limits
Google Persistent Disk:
- Choose SSD vs HDD based on performance needs
- Regional disks for higher availability
- Monitor disk read/write operations
Containerized Deployments:
- Docker/Kubernetes storage limits
- Persistent volume claims and limits
- Storage class selection for performance