This error occurs when a MongoDB aggregation pipeline stage exceeds the default 100MB memory limit and disk usage is not enabled. Enable allowDiskUse to resolve it.
MongoDB enforces a 100MB memory limit per stage in aggregation pipelines to prevent runaway queries from consuming excessive server resources. When a $group stage processes large datasets and accumulates more than 100MB of data in memory, it needs to spill temporary data to disk. If the allowDiskUse option is not enabled (the default in many MongoDB versions and drivers), the aggregation fails with this error.

The PlanExecutor is the component that executes the query plan. When it detects that the $group operation has exceeded the memory threshold and external sorting to disk is not permitted, it immediately terminates the operation to protect server stability.

This error is particularly common with grouping operations on large collections, especially when using accumulators like $push, $addToSet, or complex $group expressions that create large intermediate result sets.
The primary solution is to enable the allowDiskUse option, which allows MongoDB to write temporary files to disk when memory limits are exceeded:
```javascript
// MongoDB Node.js driver
db.collection('myCollection').aggregate([
  { $group: { _id: '$category', total: { $sum: '$amount' } } }
], { allowDiskUse: true });

// Mongoose
MyModel.aggregate([
  { $group: { _id: '$category', total: { $sum: '$amount' } } }
]).allowDiskUse(true);

// MongoDB shell
db.myCollection.aggregate([
  { $group: { _id: '$category', total: { $sum: '$amount' } } }
], { allowDiskUse: true });
```

This enables MongoDB to spill data to the _tmp directory within your data directory when memory limits are exceeded.
Review your aggregation pipeline to minimize memory consumption:
```javascript
// Instead of accumulating all items:
db.collection.aggregate([
  { $group: {
      _id: '$userId',
      items: { $push: '$$ROOT' } // Stores entire documents
  } }
]);

// Only accumulate necessary fields:
db.collection.aggregate([
  { $group: {
      _id: '$userId',
      items: { $push: { id: '$_id', name: '$name' } } // Only needed fields
  } }
]);

// Or count items instead of pushing them ({ $sum: 1 } is equivalent
// to the $count accumulator available in MongoDB 5.0+):
db.collection.aggregate([
  { $group: {
      _id: '$userId',
      itemCount: { $sum: 1 }
  } }
]);
```

Reduce the dataset size before grouping by adding $match stages early in the pipeline:
```javascript
db.collection.aggregate([
  // Filter first to reduce data volume
  { $match: {
      createdAt: { $gte: new Date('2024-01-01') },
      status: 'active'
  } },
  // Then group on the smaller dataset
  { $group: {
      _id: '$category',
      total: { $sum: '$amount' }
  } }
], { allowDiskUse: true });
```

Placing $match stages before $group leverages indexes and significantly reduces memory usage.
If you're using MongoDB Atlas, verify your cluster tier supports allowDiskUse for aggregations:
- M0/M2/M5 (free/shared tiers): Limited or no support for allowDiskUse
- M10 and above: Full support for allowDiskUse
If you're on a lower tier and still experiencing this error with allowDiskUse enabled, you may need to upgrade to M10 or higher:
```
# Check your current cluster tier in the Atlas dashboard:
# Clusters → [Your Cluster] → Configuration
```

Alternatively, optimize your query further or process data in smaller batches.
If grouping creates too many distinct groups, use $bucketAuto to limit the number of output buckets:
```javascript
db.collection.aggregate([
  { $bucketAuto: {
      groupBy: '$timestamp',
      buckets: 10, // Limits output to 10 groups
      output: {
        count: { $sum: 1 },
        avgAmount: { $avg: '$amount' }
      }
  } }
], { allowDiskUse: true });
```

This automatically distributes documents into a specified number of buckets, preventing memory explosion from high-cardinality grouping.
In MongoDB 6.0+, you can set allowDiskUseByDefault as a server parameter to enable disk usage globally without specifying it in each query. However, be aware that using disk storage significantly impacts performance due to I/O overhead: aggregations can be 10-100x slower when spilling to disk. For production systems, it's better to optimize queries to avoid disk usage when possible.
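As a sketch, the parameter can be set at runtime from mongosh (assuming MongoDB 6.0+ and a role with setParameter privileges):

```javascript
// Enable disk use for all aggregations server-wide (MongoDB 6.0+)
db.adminCommand({ setParameter: 1, allowDiskUseByDefault: true });

// Individual queries can still opt out explicitly:
db.myCollection.aggregate(
  [ { $group: { _id: '$category', total: { $sum: '$amount' } } } ],
  { allowDiskUse: false }
);
```

The same parameter can also be set at startup with `mongod --setParameter allowDiskUseByDefault=true`, which survives restarts, whereas the runtime setting does not.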
Some operators like $push and $addToSet have additional memory considerations. Even with allowDiskUse enabled, individual array accumulations are still subject to the 16MB BSON document size limit. If you're building arrays that approach this limit, consider alternative approaches like storing references instead of full documents, or using separate queries.
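One way to store references instead of full documents (a sketch using hypothetical `orders` collection and `userId` field names): accumulate only the `_id` values, then fetch the full documents for a single group on demand:

```javascript
// Group to lightweight ObjectId references rather than whole documents,
// keeping each accumulated array far smaller than the 16MB BSON limit
const groups = db.orders.aggregate([
  { $group: { _id: '$userId', orderIds: { $push: '$_id' } } }
], { allowDiskUse: true }).toArray();

// Fetch full documents for one group only when they are actually needed
const firstGroup = groups[0];
const orders = db.orders.find({ _id: { $in: firstGroup.orderIds } }).toArray();
```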
For recurring aggregations on large datasets, consider using views with pre-aggregated data, scheduled batch jobs that process data in chunks, or MongoDB's change streams to maintain materialized aggregations incrementally.
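For example, a scheduled batch job could maintain a materialized aggregation with $merge (available since MongoDB 4.2). The collection names and the `lastRun` checkpoint below are illustrative assumptions:

```javascript
// Incrementally fold new documents into a pre-aggregated summary collection
const lastRun = ISODate('2024-01-01T00:00:00Z'); // checkpoint saved by the previous run

db.orders.aggregate([
  { $match: { createdAt: { $gte: lastRun } } },   // only process new documents
  { $group: { _id: '$category', total: { $sum: '$amount' } } },
  { $merge: {
      into: 'categoryTotals',
      on: '_id',
      // Add the new partial total to the existing stored total
      whenMatched: [ { $set: { total: { $add: ['$total', '$$new.total'] } } } ],
      whenNotMatched: 'insert'
  } }
], { allowDiskUse: true });
```

Because each run only groups documents created since the last checkpoint, the per-run working set stays small and the 100MB limit is far less likely to be hit.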