The CursorNotFound error occurs when MongoDB cannot locate a cursor that your application is trying to access. This typically happens when a cursor times out after 10 minutes of inactivity, when a large result set is processed slowly, or in load-balanced sharded cluster configurations. Understanding the cursor lifecycle and timeout management is essential when handling large datasets.
The CursorNotFound error is thrown when MongoDB attempts to execute a getMore command on a cursor that no longer exists on the server. Cursors are server-side objects that manage result set iteration, fetching documents in batches to optimize memory usage and network bandwidth. MongoDB automatically closes cursors that have been idle for 10 minutes (600,000 milliseconds) by default, as configured by the cursorTimeoutMillis server parameter. When your application takes longer than this timeout to process a batch of results and then tries to fetch the next batch, the server cannot find the cursor and throws this error. This error commonly occurs in long-running batch processing jobs, data migration scripts, or applications that process large datasets with slow per-document operations. In sharded clusters behind load balancers, it can also occur when the getMore command is routed to a different mongos instance than the one that created the cursor.
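Before looking at fixes, it helps to know what the failure looks like in code. Below is a minimal sketch of catching the error in the Node.js driver; the collection name and processDocument helper are illustrative. The server reports this error as code 43 with codeName CursorNotFound:
// Sketch: recognizing CursorNotFound (server error code 43) during iteration
try {
  const cursor = db.collection("largeCollection").find({});
  for await (const doc of cursor) {
    await processDocument(doc); // hypothetical slow per-document work
  }
} catch (err) {
  if (err.code === 43 || err.codeName === "CursorNotFound") {
    // The server has already discarded the cursor; restart or resume the scan
    console.error("Cursor expired on the server:", err.message);
  } else {
    throw err;
  }
}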
Disable automatic cursor timeout for operations that need to process data slowly:
// Node.js MongoDB driver
const cursor = db.collection("largeCollection")
  .find(query, { noCursorTimeout: true });
try {
  await cursor.forEach(doc => {
    // Process each document (can take > 10 minutes total)
    processDocument(doc);
  });
} finally {
  // CRITICAL: Always close the cursor manually when using noCursorTimeout
  await cursor.close();
}
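In recent Node.js drivers you can also set the same wire-protocol flag on the cursor itself via addCursorFlag; a brief sketch, assuming driver v4 or later:
// Equivalent: set the noCursorTimeout wire-protocol flag on the cursor
const cursor = db.collection("largeCollection")
  .find(query)
  .addCursorFlag("noCursorTimeout", true);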
// Mongoose with noCursorTimeout
const cursor = Model.find(query)
  .cursor({ noCursorTimeout: true });
for await (const doc of cursor) {
  await processDocument(doc);
}
await cursor.close();
Important: When using noCursorTimeout, you MUST manually close the cursor or exhaust all results to prevent server-side resource leaks.
Note: MongoDB Atlas M0 (free tier) and M2/M5 shared clusters do NOT support noCursorTimeout.
Adjust the batch size to fetch more documents per round trip, reducing the number of getMore calls:
// The initial batch defaults to 101 documents; subsequent batches are capped by the 16 MB message size
// Set an explicit batch size
const cursor = db.collection("data")
  .find(query)
  .batchSize(1000); // Fetch 1000 documents per batch
await cursor.forEach(doc => {
  processDocument(doc);
});
// Aggregation with batch size
const cursor = db.collection("data").aggregate(
  [
    { $match: { status: "pending" } },
    { $sort: { createdAt: -1 } }
  ],
  { batchSize: 500 }
);
# Python pymongo
cursor = collection.find(query).batch_size(1000)
for doc in cursor:
    process_document(doc)
Trade-offs:
- Larger batch size = fewer getMore calls but more memory usage per batch
- Smaller batch size = more frequent server round trips but a lower memory footprint
- Find the optimal balance based on your document size and processing speed
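As a starting point, you can derive a batch size from the collection's average document size. A rough sketch in Node.js; the 4 MB per-batch budget is an arbitrary assumption, and the collStats command reports avgObjSize in bytes:
// Rough sketch: size batches from the average document size
const stats = await db.command({ collStats: "data" });
const targetBatchBytes = 4 * 1024 * 1024; // assumed 4 MB budget per batch
const batchSize = Math.max(1, Math.floor(targetBatchBytes / stats.avgObjSize));
const cursor = db.collection("data").find(query).batchSize(batchSize);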
Refactor slow processing logic to complete batches within the 10-minute window:
// BAD: Slow processing that may exceed the timeout
const cursor = db.collection("data").find(query);
for await (const doc of cursor) {
  await heavyExternalApiCall(doc); // Could take minutes per document
  await complexCalculation(doc);
}
// GOOD: Fast processing with proper batch handling
const cursor = db.collection("data")
  .find(query)
  .batchSize(100);
const batchPromises = [];
for await (const doc of cursor) {
  // Collect documents for parallel processing
  batchPromises.push(processDocumentFast(doc));
  // Process in parallel batches of 10
  if (batchPromises.length >= 10) {
    await Promise.all(batchPromises);
    batchPromises.length = 0; // Clear the array
  }
}
// Process any remaining documents
if (batchPromises.length > 0) {
  await Promise.all(batchPromises);
}
Profile your processing logic to identify bottlenecks and optimize accordingly.
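A simple way to profile is to time each stage of the per-document work; heavyExternalApiCall and complexCalculation below are the hypothetical helpers from the example above:
// Minimal timing sketch to locate the per-document bottleneck
const { performance } = require("node:perf_hooks");
async function processDocumentTimed(doc) {
  const t0 = performance.now();
  await heavyExternalApiCall(doc); // external I/O step
  const t1 = performance.now();
  await complexCalculation(doc);   // CPU-bound step
  const t2 = performance.now();
  console.log(`api=${(t1 - t0).toFixed(1)}ms calc=${(t2 - t1).toFixed(1)}ms`);
}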
For very large datasets or when cursor timeout cannot be disabled, use pagination:
// Process data in pages to avoid cursor timeout
const pageSize = 1000;
let page = 0;
let hasMore = true;
while (hasMore) {
  const documents = await db.collection("data")
    .find(query)
    .sort({ _id: 1 }) // Consistent ordering required
    .skip(page * pageSize)
    .limit(pageSize)
    .toArray();
  if (documents.length === 0) {
    hasMore = false;
    break;
  }
  // Process this page
  for (const doc of documents) {
    await processDocument(doc);
  }
  page++;
  // Optional: add a small delay between pages
  await new Promise(resolve => setTimeout(resolve, 100));
}
// BETTER: Use range-based pagination for better performance
let lastId = null;
while (true) {
  const query = lastId ? { _id: { $gt: lastId } } : {};
  const documents = await db.collection("data")
    .find(query)
    .sort({ _id: 1 })
    .limit(pageSize)
    .toArray();
  if (documents.length === 0) break;
  for (const doc of documents) {
    await processDocument(doc);
  }
  lastId = documents[documents.length - 1]._id;
}
Note: Using skip() with large offsets is inefficient because the server still scans and discards every skipped document. Range-based pagination on _id is preferred.
In sharded clusters behind load balancers, ensure getMore commands reach the same mongos:
// Connection string with readPreference and session options
const client = new MongoClient(uri, {
  readPreference: "primary",
  // Use consistent mongos routing
});
// Use explicit sessions for cursor operations
const session = client.startSession();
try {
  const cursor = db.collection("data")
    .find(query, { session })
    .batchSize(500);
  for await (const doc of cursor) {
    processDocument(doc);
  }
} finally {
  await session.endSession();
}
Infrastructure fix: Configure your load balancer for session affinity (sticky sessions) so that each client connection is pinned to a single mongos instance. This ensures all commands for a cursor go to the same mongos.
Alternative: Use direct connections to the mongos instances instead of the load balancer for cursor-heavy operations, as sketched below.
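For example, listing the mongos hosts directly in the connection string keeps every command on a known mongos; the hostnames and database name below are placeholders:
// Connect straight to the mongos instances, bypassing the load balancer
const uri = "mongodb://mongos1.example.com:27017,mongos2.example.com:27017/mydb";
const client = new MongoClient(uri);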
Adjust the server-side cursor timeout threshold if you have server access:
// Connect to the MongoDB admin database
use admin
// Check the current cursor timeout (milliseconds)
db.adminCommand({ getParameter: 1, cursorTimeoutMillis: 1 })
// Returns: { cursorTimeoutMillis: 600000, ok: 1 } (10 minutes)
// Increase the timeout to 30 minutes
db.adminCommand({
  setParameter: 1,
  cursorTimeoutMillis: 1800000
})
Permanent configuration - add to mongod.conf:
setParameter:
  cursorTimeoutMillis: 1800000
Restart mongod for configuration file changes to take effect:
sudo systemctl restart mongod
Warning: Increasing the cursor timeout server-wide can lead to resource exhaustion if applications leave cursors open. Use it judiciously and monitor cursor metrics.
The CursorNotFound error is fundamentally about managing server-side resources and client-side iteration timing. Understanding the cursor lifecycle is critical for large-scale data processing.
Cursor Lifecycle Details:
1. The initial query creates a cursor on the server with a configurable batch size (default 101 documents)
2. The client receives the first batch immediately
3. When the client requests more data, it issues a getMore command with the cursor ID
4. The server tracks cursor idle time and closes the cursor after cursorTimeoutMillis (default 600,000 ms)
5. Cursors close automatically when exhausted or when explicitly closed
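You can observe this lifecycle directly in mongosh using the underlying find and getMore commands; the collection name and batch sizes are illustrative:
// 1. find creates the server-side cursor and returns the first batch
const res = db.runCommand({ find: "data", batchSize: 2 });
const cursorId = res.cursor.id; // non-zero while the cursor remains open
// 2. getMore fetches the next batch using that cursor ID
db.runCommand({ getMore: cursorId, collection: "data", batchSize: 2 });
// 3. Once the cursor is exhausted (or idle past cursorTimeoutMillis),
//    a further getMore fails with CursorNotFound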
Memory and Performance Implications:
- Each open cursor consumes server memory to track iteration state
- After the initial batch, each subsequent batch is limited by the 16 MB message size
- Large batch sizes reduce network round trips but increase client memory usage
- In sharded clusters, cursors track state across multiple shards
Monitoring and Debugging:
Use MongoDB server logs and metrics to diagnose cursor issues:
// Check active cursors
db.serverStatus().metrics.cursor
// Monitor cursor timeouts in logs
tail -f /var/log/mongodb/mongod.log | grep "cursor timeout"
// Find long-running operations
db.currentOp({ "cursor": { $exists: true } })
Sharded Cluster Considerations:
In sharded deployments, mongos instances maintain cursor state. If your infrastructure uses multiple mongos behind a load balancer without session affinity, getMore commands may route to a different mongos that doesn't have the cursor. Solutions:
1. Enable sticky sessions on load balancer
2. Use a connection string that lists the mongos hosts directly instead of a load balancer
3. Use explicit sessions to maintain routing consistency
Atlas Limitations:
MongoDB Atlas shared clusters (M0, M2, M5) do not allow the noCursorTimeout option, for security and resource-management reasons. On these tiers, you must use pagination or optimize processing speed.
Best Practices:
- Always close cursors explicitly when using noCursorTimeout
- Monitor cursor usage with db.serverStatus().metrics.cursor
- Use aggregation pipelines with $merge or $out for large batch transformations
- Consider change streams for real-time processing instead of polling with cursors
- Implement exponential backoff and retry logic for cursor errors in production, as sketched below
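A hedged sketch of that last practice, combining range-based resume with exponential backoff; processDocument and the retry limits are illustrative:
// Sketch: resume a scan after CursorNotFound (code 43), with exponential backoff
async function scanWithRetry(collection, maxRetries = 5) {
  let lastId = null;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const filter = lastId ? { _id: { $gt: lastId } } : {};
      const cursor = collection.find(filter).sort({ _id: 1 });
      for await (const doc of cursor) {
        await processDocument(doc); // hypothetical per-document work
        lastId = doc._id;           // remember progress so we can resume
      }
      return; // completed without losing the cursor
    } catch (err) {
      if (err.code !== 43 || attempt === maxRetries) throw err;
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}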