The "VersionError: No matching document found for id" error in MongoDB occurs when using optimistic concurrency control with document versioning. This typically happens when trying to save a document that has been modified or deleted by another process since it was loaded. The error indicates a version mismatch between the document in memory and the document in the database.
The "VersionError: No matching document found for id" error is a concurrency control error that occurs in MongoDB applications using document versioning, particularly with Mongoose ODM (Object Document Mapper). This error implements optimistic concurrency control, which prevents multiple processes from overwriting each other's changes. When a document has a `__v` (version) field or similar version tracking, Mongoose (and other ODMs) use this field to detect concurrent modifications. The process works like this: 1. **Document Loading**: An application loads a document from MongoDB, which includes a version field (e.g., `__v: 2`). 2. **Local Modification**: The application modifies the document locally. 3. **Concurrent Change**: Meanwhile, another process modifies and saves the same document, incrementing its version (e.g., `__v: 3`). 4. **Save Attempt**: When the first process tries to save its changes, it includes the original version (`__v: 2`) in the update query. 5. **Version Mismatch**: MongoDB finds no document matching both the ID and the specified version, so it throws VersionError. This error protects against "lost updates" where changes from one process overwrite changes from another without detection. It's a safety mechanism that ensures data consistency in multi-user or distributed systems. Common scenarios: - **Web applications** with multiple users editing the same document - **Background jobs** processing the same data concurrently - **Microservices** accessing shared data without proper coordination - **Real-time applications** with WebSocket connections updating shared state
The most straightforward solution is to implement retry logic that reloads the document and reapplies changes when a VersionError occurs.
Mongoose example with retry:
async function saveWithRetry(document, maxRetries = 3) {
let lastError;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
// Reload the document to get current version
const freshDoc = await document.constructor.findById(document._id);
if (!freshDoc) {
throw new Error('Document no longer exists');
}
// Apply changes from our document to the fresh one,
// excluding _id and __v so the fresh document keeps its current version
const { _id, __v, ...changes } = document.toObject();
Object.assign(freshDoc, changes);
// Save with current version
return await freshDoc.save();
} catch (error) {
lastError = error;
if (error.name === 'VersionError') {
// Exponential backoff before retry
const delay = Math.min(1000 * Math.pow(2, attempt), 10000);
console.log(`VersionError on attempt ${attempt}, retrying in ${delay}ms`);
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
// Non-retryable error
throw error;
}
}
throw lastError;
}
// Usage
const user = await User.findById(userId);
user.name = 'Updated Name';
await saveWithRetry(user);
Using a library like p-retry:
npm install p-retry
const retry = require('p-retry');
async function saveDocument(document) {
const freshDoc = await document.constructor.findById(document._id);
if (!freshDoc) {
throw new Error('Document not found');
}
const { _id, __v, ...changes } = document.toObject(); // Keep the fresh _id and __v
Object.assign(freshDoc, changes);
return await freshDoc.save();
}
const result = await retry(() => saveDocument(user), {
retries: 3,
factor: 2,
minTimeout: 1000,
maxTimeout: 10000,
onFailedAttempt: error => {
if (error.name !== 'VersionError') {
throw error; // Don't retry non-version errors
}
console.log(`Attempt ${error.attemptNumber} failed. ${error.retriesLeft} retries left`);
},
});
Instead of loading, modifying, and saving documents, use atomic operations that update documents in a single database operation. This avoids version conflicts entirely.
Mongoose atomic update:
// Instead of:
const user = await User.findById(userId);
user.name = 'New Name';
user.email = 'new@example.com';
await user.save(); // Can cause VersionError
// Use atomic update:
const result = await User.findOneAndUpdate(
{ _id: userId },
{
$set: {
name: 'New Name',
email: 'new@example.com'
},
$inc: { __v: 1 } // Manually increment version if needed
},
{
new: true, // Return updated document
runValidators: true // Run schema validation
}
);
// For complex updates with conditions:
const result = await User.findOneAndUpdate(
{
_id: userId,
status: 'active', // Additional condition
__v: currentVersion // Ensure we're updating the expected version
},
{
$set: { name: 'Updated' },
$inc: { __v: 1 }
},
{ new: true }
);
if (!result) {
// Document was modified by someone else or doesn't meet conditions
throw new Error('Update failed - document may have been modified');
}
Native MongoDB driver atomic update:
const collection = db.collection('users');
const result = await collection.findOneAndUpdate(
{
_id: new ObjectId(userId),
__v: currentVersion
},
{
$set: { name: 'Updated' },
$inc: { __v: 1 }
},
{ returnDocument: 'after' }
);
if (!result.value) {
throw new Error('Concurrent modification detected');
}
Using update operators:
// Increment a counter atomically
await User.updateOne(
{ _id: userId },
{ $inc: { loginCount: 1, __v: 1 } }
);
// Add to an array atomically
await User.updateOne(
{ _id: userId },
{
$push: { tags: 'new-tag' },
$inc: { __v: 1 }
}
);
// Complex atomic operation
await User.updateOne(
{
_id: userId,
balance: { $gte: 100 } // Ensure sufficient balance
},
{
$inc: { balance: -100, __v: 1 },
$push: { transactions: transactionData }
}
);
For scenarios where you need to load, modify, and save documents, implement proper optimistic concurrency control.
Pattern 1: Version checking with reload
async function updateWithOptimisticControl(userId, updates) {
let retries = 0;
const maxRetries = 3;
while (retries < maxRetries) {
// 1. Load current document
const doc = await User.findById(userId);
if (!doc) throw new Error('Document not found');
const currentVersion = doc.__v;
// 2. The loaded document is only used to read __v; the updates themselves
// are applied atomically in the version-checked findOneAndUpdate below
// 3. Try to save with version check
try {
const result = await User.findOneAndUpdate(
{
_id: userId,
__v: currentVersion
},
{
$set: updates,
$inc: { __v: 1 }
},
{ new: true }
);
if (result) {
return result; // Success
}
// Version mismatch - increment retry count
retries++;
if (retries >= maxRetries) {
throw new Error('Max retries reached - concurrent modification');
}
// Wait before retry
await new Promise(resolve => setTimeout(resolve, 100 * retries));
} catch (error) {
if (error.name === 'VersionError') {
retries++;
if (retries >= maxRetries) throw error;
await new Promise(resolve => setTimeout(resolve, 100 * retries));
continue;
}
throw error;
}
}
}
Pattern 2: Using MongoDB transactions for multi-document updates
const session = await mongoose.startSession();
session.startTransaction();
try {
// Load documents within transaction
const doc1 = await Model1.findById(id1).session(session);
const doc2 = await Model2.findById(id2).session(session);
// Apply updates
doc1.field = 'updated';
doc2.field = 'updated';
// Save within transaction
await doc1.save({ session });
await doc2.save({ session });
// Commit transaction
await session.commitTransaction();
} catch (error) {
// Abort transaction on error
await session.abortTransaction();
throw error;
} finally {
session.endSession();
}
Pattern 3: Using MongoDB change streams for real-time conflict detection
// Watch for changes to specific document
const changeStream = collection.watch([
{
$match: {
'documentKey._id': new ObjectId(userId),
'operationType': 'update'
}
}
]);
changeStream.on('change', (change) => {
console.log('Document was modified by another process:', change);
// Notify UI or trigger reload
});
// In your update function
async function updateDocument(userId, updates) {
const changeStream = collection.watch([
{ $match: { 'documentKey._id': new ObjectId(userId) } }
]);
try {
const result = await collection.findOneAndUpdate(
{ _id: new ObjectId(userId) },
{ $set: updates },
{ returnDocument: 'after' }
);
return result;
} finally {
changeStream.close();
}
}
Configure Mongoose to better handle versioning and concurrency.
Disable versioning for specific operations:
// Disable versioning for this save
await doc.save({ versionKey: false });
// Disable versioning in findOneAndUpdate
await Model.findOneAndUpdate(
{ _id: docId },
updates,
{
new: true,
omitUndefined: true,
timestamps: false,
versionKey: false // Disable version checking
}
);
// Disable versioning globally for a schema
const schema = new mongoose.Schema({
name: String,
// ... other fields
}, {
versionKey: false // Disable __v field entirely
});
Custom version key:
const schema = new mongoose.Schema({
name: String,
version: { type: Number, default: 0 }
}, {
versionKey: 'version' // Use custom field instead of __v
});
// Use in updates
await Model.findOneAndUpdate(
{ _id: docId, version: currentVersion },
{
$set: updates,
$inc: { version: 1 }
}
);
Optimistic concurrency plugin for Mongoose:
npm install mongoose-optimistic-concurrency
const optimisticConcurrency = require('mongoose-optimistic-concurrency');
const mongoose = require('mongoose');
const schema = new mongoose.Schema({
name: String,
value: Number
});
schema.plugin(optimisticConcurrency, {
versionKey: 'version',
strategy: 'version' // or 'timestamp'
});
const Model = mongoose.model('Test', schema);
// Now saves will automatically check version
const doc = await Model.findById(id);
doc.name = 'Updated';
await doc.save(); // Will throw VersionError if modified concurrently
Schema design for reduced conflicts:
// Instead of single document with all data
const userSchema = new mongoose.Schema({
name: String,
email: String,
profile: Object, // Large object that changes frequently
settings: Object,
// ... many fields
});
// Consider splitting into multiple documents
const userSchema = new mongoose.Schema({
name: String,
email: String,
// Infrequently changed fields
});
const userProfileSchema = new mongoose.Schema({
userId: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
bio: String,
avatar: String,
// Frequently changed but isolated from core user data
});
const userSettingsSchema = new mongoose.Schema({
userId: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
notifications: Object,
preferences: Object,
// Isolated settings
});
For high-concurrency scenarios, implement application-level coordination.
Using Redis for distributed locking:
npm install redlock
const Redlock = require('redlock');
const redis = require('redis');
const client = redis.createClient();
const redlock = new Redlock([client], {
driftFactor: 0.01,
retryCount: 3,
retryDelay: 200,
retryJitter: 200
});
async function updateWithLock(documentId, updates) {
const resource = `locks:document:${documentId}`;
const ttl = 5000; // 5 second lock
let lock;
try {
// Acquire lock
lock = await redlock.acquire([resource], ttl);
// Perform update
const doc = await Model.findById(documentId);
Object.assign(doc, updates);
await doc.save();
return doc;
} finally {
// Release lock
if (lock) {
await lock.release();
}
}
}
Using message queues for serialized processing:
npm install bull
const Queue = require('bull');
const updateQueue = new Queue('document-updates');
// Producer - add updates to queue
app.post('/api/users/:id', async (req, res) => {
const job = await updateQueue.add({
documentId: req.params.id,
updates: req.body
}, {
jobId: `user-${req.params.id}-${Date.now()}`, // Unique job ID
attempts: 3,
backoff: { type: 'exponential', delay: 1000 }
});
res.json({ jobId: job.id });
});
// Consumer - process updates sequentially
updateQueue.process(async (job) => {
const { documentId, updates } = job.data;
const doc = await Model.findById(documentId);
if (!doc) {
throw new Error('Document not found');
}
Object.assign(doc, updates);
await doc.save();
return { success: true, documentId };
});
// Webhook to notify when job completes
updateQueue.on('completed', (job, result) => {
console.log(`Job ${job.id} completed:`, result);
// Notify client via WebSocket or polling
});
Using database-level locking (pessimistic locking):
// MongoDB doesn't expose application-level document locks, but you can simulate them
async function updateWithPessimisticLock(documentId, updates) {
const lockKey = `lock:${documentId}`;
// Try to create or take over the lock document. This sketch assumes a unique
// index on `key`: if another process holds a non-expired lock, the filter does
// not match and the upsert fails with a duplicate key error (E11000).
let lock;
try {
lock = await LockModel.findOneAndUpdate(
{ key: lockKey, expiresAt: { $lt: new Date() } }, // Only acquire expired locks
{
$set: {
key: lockKey,
documentId: documentId,
acquiredAt: new Date(),
expiresAt: new Date(Date.now() + 10000) // 10 second lock
}
},
{ upsert: true, new: true }
);
} catch (error) {
if (error.code === 11000) {
throw new Error('Could not acquire lock - document is locked');
}
throw error;
}
try {
// Perform update
const doc = await Model.findById(documentId);
Object.assign(doc, updates);
await doc.save();
return doc;
} finally {
// Release lock
await LockModel.deleteOne({ _id: lock._id });
}
}
Set up monitoring to understand concurrency issues and optimize accordingly.
Log VersionError occurrences:
// Middleware to log VersionError
mongoose.plugin((schema) => {
schema.post('save', function(error, doc, next) {
if (error && error.name === 'VersionError') {
console.error('VersionError occurred:', {
documentId: doc._id,
model: doc.constructor.modelName,
timestamp: new Date().toISOString(),
stack: error.stack
});
// Log to monitoring service
metrics.increment('mongoose.version_error', {
model: doc.constructor.modelName
});
}
next(error);
});
});
// Or wrap save operations
const originalSave = mongoose.Model.prototype.save;
mongoose.Model.prototype.save = function(options) {
return originalSave.call(this, options).catch(error => {
if (error.name === 'VersionError') {
console.error('VersionError on save:', {
model: this.constructor.modelName,
id: this._id,
version: this.__v
});
}
throw error;
});
};
Track concurrency metrics:
// Using a metrics library like prom-client
const client = require('prom-client');
const versionErrorCounter = new client.Counter({
name: 'mongoose_version_errors_total',
help: 'Total number of Mongoose VersionError occurrences',
labelNames: ['model', 'operation']
});
// Increment counter when VersionError occurs
app.use((err, req, res, next) => {
if (err.name === 'VersionError') {
versionErrorCounter.inc({
model: err.modelName || 'unknown',
operation: err.operation || 'save'
});
}
next(err);
});
// Expose metrics endpoint
app.get('/metrics', async (req, res) => {
res.set('Content-Type', client.register.contentType);
res.end(await client.register.metrics());
});
Analyze hot documents (frequently updated):
// Track document update frequency
const updateCounts = new Map();
mongoose.plugin((schema) => {
schema.post('save', function(doc) {
const key = `${doc.constructor.modelName}:${doc._id}`;
const count = (updateCounts.get(key) || 0) + 1;
updateCounts.set(key, count);
// Log if document is frequently updated
if (count > 10) { // Threshold
console.warn('Hot document detected:', {
model: doc.constructor.modelName,
id: doc._id,
updateCount: count
});
}
});
});
// Periodic cleanup
setInterval(() => {
updateCounts.clear();
}, 3600000); // Clear every hour
Implement circuit breaker for high-concurrency endpoints:
const CircuitBreaker = require('opossum');
const breaker = new CircuitBreaker(async (documentId, updates) => {
const doc = await Model.findById(documentId);
Object.assign(doc, updates);
return await doc.save();
}, {
timeout: 5000,
errorThresholdPercentage: 50,
resetTimeout: 30000
});
breaker.fallback(() => {
return { error: 'Service unavailable due to high concurrency' };
});
breaker.on('open', () => {
console.log('Circuit breaker opened - too many VersionErrors');
});
// Use circuit breaker
app.put('/api/documents/:id', async (req, res) => {
try {
const result = await breaker.fire(req.params.id, req.body);
res.json(result);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
Understanding Optimistic vs. Pessimistic Concurrency Control:
Optimistic Concurrency Control (OCC):
- Assumes conflicts are rare
- Allows concurrent reads and writes
- Detects conflicts at commit time (VersionError)
- Requires retry logic
- Better when conflicts are rare (typically read-heavy workloads)
Pessimistic Concurrency Control (PCC):
- Assumes conflicts are common
- Locks resources during operation
- Prevents conflicts but reduces concurrency
- Can cause deadlocks
- Better when conflicts are frequent (write-heavy contention on the same documents)
MongoDB's Approach:
MongoDB itself does not version documents; the `__v` field is added by Mongoose, which by default only includes it in the save filter for operations that can change array element positions. Enabling the `optimisticConcurrency` schema option (Mongoose 5.10+) extends the version check to every save(). For multi-document updates, MongoDB 4.0+ provides ACID transactions with snapshot isolation.
When to Use Transactions vs. Versioning:
- Use transactions for multi-document updates that must succeed or fail together
- Use versioning for single-document optimistic concurrency
- Combine both for complex scenarios requiring both single-document versioning and multi-document atomicity (see the sketch below)
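A sketch of the combined approach, assuming hypothetical `Account` and `AuditLog` models plus `accountId`, `amount`, and an `expectedVersion` read earlier: the version-checked single-document update and the related audit insert share one transaction, so they commit or roll back together.
const session = await mongoose.startSession();
try {
  await session.withTransaction(async () => {
    // Single-document optimistic check: the filter includes the expected __v
    const updated = await Account.findOneAndUpdate(
      { _id: accountId, __v: expectedVersion },
      { $inc: { balance: -amount, __v: 1 } },
      { new: true, session }
    );
    if (!updated) {
      // Throwing aborts the transaction, so the audit entry rolls back too
      throw new Error('Concurrent modification detected');
    }
    // Multi-document atomicity: the audit record commits with the update
    await AuditLog.create([{ accountId, amount, at: new Date() }], { session });
  });
} finally {
  session.endSession();
}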
Performance Considerations:
1. Version checking adds overhead to every save operation
2. Frequent VersionErrors indicate high contention - consider schema redesign
3. Transactions have higher overhead than single-document operations
4. Indexes on version fields can improve performance
Schema Design Patterns to Reduce Conflicts:
1. Embedded Documents: Keep fields that change together in the same document, so a single write covers them
2. Reference Documents: Split frequently and independently updated fields into separate documents (as in the schema-splitting example above)
3. Event Sourcing: Store changes as append-only events rather than mutating a shared document (a minimal sketch follows this list)
4. Command Query Responsibility Segregation (CQRS): Separate read and write models
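For the event-sourcing pattern, a minimal sketch assuming a hypothetical append-only `Event` model: writers only insert new documents, so there is no shared document whose version can conflict, and current state is derived by replaying events.
const eventSchema = new mongoose.Schema({
  aggregateId: { type: mongoose.Schema.Types.ObjectId, index: true },
  type: String, // e.g. 'EmailChanged', 'TagAdded'
  payload: Object,
  createdAt: { type: Date, default: Date.now }
});
const Event = mongoose.model('Event', eventSchema);
// Appending never conflicts: each change is a new document, not a mutation
await Event.create({
  aggregateId: userId,
  type: 'EmailChanged',
  payload: { email: 'user@example.com' }
});
// Derive current state by replaying events (or maintain a separate read model)
const events = await Event.find({ aggregateId: userId }).sort({ createdAt: 1 });
const currentState = events.reduce((state, e) => ({ ...state, ...e.payload }), {});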
Monitoring and Alerting:
- Set up alerts for VersionError rate spikes
- Monitor document update frequency
- Track retry attempt counts
- Measure transaction abort rates
Testing Strategies:
1. Load testing with concurrent users
2. Chaos engineering to simulate network partitions
3. Property-based testing for concurrent operations
4. Integration tests with real concurrency scenarios (a minimal sketch follows)
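A minimal integration-test sketch (Jest-style assertions assumed, with a `User` schema that has `optimisticConcurrency` enabled as in the reproduction near the top of this article):
test('concurrent saves trigger VersionError', async () => {
  const created = await User.create({ name: 'initial' });
  // Simulate two clients working from the same snapshot of the document
  const copyA = await User.findById(created._id);
  const copyB = await User.findById(created._id);
  copyA.name = 'writer A';
  copyB.name = 'writer B';
  await copyA.save();
  // The stale copy must be rejected rather than silently overwriting writer A
  await expect(copyB.save()).rejects.toThrow(/No matching document/);
});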
Common Anti-patterns to Avoid:
1. Disabling versioning without alternative conflict resolution
2. Ignoring VersionError and proceeding with stale data
3. Using global locks instead of document-level coordination
4. Not implementing retry logic for transient conflicts
5. Assuming single-user access in multi-user applications