This error occurs when multiple asynchronous save() calls are attempted on the same Mongoose document instance at the same time. It is a Mongoose-specific concurrency guard that prevents race conditions from corrupting data. The fix is to add proper synchronization or to switch to atomic update operators.
The ParallelSaveError is a Mongoose-specific error that protects against data corruption when the same document instance is saved concurrently from multiple asynchronous operations. When you retrieve a document with findOne(), findById(), or a similar method, Mongoose gives you a document instance. If you modify that instance and call save() on it multiple times in parallel (for example, from different API requests or async operations), Mongoose throws this error to prevent a race condition in which the second save could overwrite changes from the first. This check is separate from MongoDB's own concurrency controls: MongoDB guarantees that single-document writes are atomic, and Mongoose can additionally apply optimistic locking through its version key (__v). ParallelSaveError is an extra application-level safeguard that protects the in-memory document instance itself.
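As a minimal sketch of those two layers (not from the original article; the schema and userId are hypothetical, and the behavior assumes a recent Mongoose version), ParallelSaveError fires when one in-memory instance is saved twice in parallel, while Mongoose's optional optimisticConcurrency schema setting uses the __v version key to reject a save from a stale instance:

```javascript
const mongoose = require('mongoose');

// Hypothetical schema, purely for illustration
const userSchema = new mongoose.Schema(
  { name: String },
  { optimisticConcurrency: true } // save() then also checks the __v version key
);
const User = mongoose.model('User', userSchema);

async function demo(userId) {
  // Two independent instances of the same underlying document
  const a = await User.findById(userId);
  const b = await User.findById(userId);

  a.name = 'Alice';
  await a.save(); // succeeds and bumps the stored __v

  b.name = 'Bob';
  try {
    await b.save(); // b was loaded before a's save -> stale version
  } catch (err) {
    console.error(err.name); // 'VersionError' in recent Mongoose versions
  }
  // By contrast, ParallelSaveError is thrown when the *same* instance
  // (a or b alone) has save() called on it again before the first finishes.
}
```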
First, identify where you're calling save() on the same document instance. Look for patterns like:
```javascript
// Problematic pattern
const user = await User.findById(userId);

// Multiple async operations saving the same instance
await Promise.all([
  user.save(), // First save
  user.save()  // Second save - will cause ParallelSaveError
]);
```

Or in event handlers:
```javascript
// Event-driven code that might trigger multiple saves
user.on('update', async () => {
  await user.save(); // Could be called multiple times in parallel
});
```

For concurrent updates, use MongoDB's atomic operators, which handle concurrency at the database level:
```javascript
// Instead of modifying and saving:
user.name = 'New Name';
await user.save(); // Problematic in parallel

// Use findOneAndUpdate with atomic operators:
await User.findOneAndUpdate(
  { _id: userId },
  { $set: { name: 'New Name' } },
  { new: true }
);

// Or updateOne:
await User.updateOne(
  { _id: userId },
  { $set: { name: 'New Name' } }
);
```

Atomic operators like $set, $inc, and $push are applied atomically by MongoDB and don't suffer from this error.
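Counters are a common example: two parallel modify-and-save flows would race with each other, while $inc lets MongoDB apply both increments on the server. A small sketch (the loginCount field and userId are hypothetical):

```javascript
// Hypothetical counter update: two requests increment loginCount at once.
// $inc is applied atomically on the server, so neither increment is lost
// and no document instance is shared between the two calls.
await Promise.all([
  User.updateOne({ _id: userId }, { $inc: { loginCount: 1 } }),
  User.updateOne({ _id: userId }, { $inc: { loginCount: 1 } })
]);
// loginCount ends up 2 higher. A find-modify-save version of the same logic
// could lose one increment, and would throw ParallelSaveError if both callers
// reused a single document instance.
```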
If you must use save() and have legitimate concurrent operations, implement synchronization:
```javascript
// Using a simple in-memory lock (single-process only)
const locks = new Map();

async function safeSave(document) {
  const lockKey = document._id.toString();

  // Check if this document is already being saved
  if (locks.has(lockKey)) {
    throw new Error('Document is already being saved');
  }

  try {
    locks.set(lockKey, true);
    return await document.save();
  } finally {
    locks.delete(lockKey);
  }
}

// Or use a proper queue (PQueue comes from the p-queue package:
// import PQueue from 'p-queue')
const saveQueue = new PQueue({ concurrency: 1 });

async function queuedSave(document) {
  return saveQueue.add(() => document.save());
}
```

If different parts of your code need to modify the same document, fetch fresh instances:
```javascript
// Instead of sharing the same instance:
async function updateUser(userId) {
  const user = await User.findById(userId);

  // Operation A
  user.fieldA = 'valueA';
  await user.save();

  // Operation B (separate function/request)
  // DON'T reuse the same 'user' instance - fetch a fresh one instead:
  const freshUser = await User.findById(userId);
  freshUser.fieldB = 'valueB';
  await freshUser.save();
}
```

Each operation gets its own document instance, avoiding the ParallelSaveError.
When you need to update multiple documents or perform multiple operations, use bulk operations:
```javascript
// Instead of multiple save() calls:
const updates = users.map(user => {
  user.status = 'active';
  return user.save(); // Problematic if the same user instance appears more than once
});
await Promise.all(updates);

// Use bulkWrite:
await User.bulkWrite(
  users.map(user => ({
    updateOne: {
      filter: { _id: user._id },
      update: { $set: { status: 'active' } }
    }
  }))
);
```

Bulk operations are more efficient and avoid these concurrency issues.
Implement proper error handling and consider retry mechanisms:
```javascript
async function saveWithRetry(document, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await document.save();
    } catch (error) {
      if (error.name === 'ParallelSaveError' && i < maxRetries - 1) {
        // Back off briefly so the in-flight save can finish, then retry
        // save() on the same instance so pending changes aren't lost
        await new Promise(resolve => setTimeout(resolve, 100 * (i + 1)));
        continue;
      }
      throw error;
    }
  }
}

// Usage
try {
  await saveWithRetry(user);
} catch (error) {
  console.error('Failed to save after retries:', error);

  // Fallback to an atomic update; strip _id and __v so internal/immutable
  // fields aren't written back via $set
  const { _id, __v, ...fields } = user.toObject();
  await User.updateOne(
    { _id: user._id },
    { $set: fields }
  );
}
```

## Understanding Mongoose's Document Instance Model
Mongoose maintains an internal state for each document instance, tracking changes (dirty paths). When save() is called, Mongoose:
1. Calculates which fields have changed
2. Sends only those changes to MongoDB
3. Updates the document's version key (if configured)
4. Marks the document as clean
When a second save() is attempted while the first is still in progress, Mongoose detects this and throws ParallelSaveError (see the sketch after this list) because:
- The second save would use potentially stale change tracking
- It could overwrite changes from the first save
- Version key increments could conflict
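A rough sketch of what that change tracking looks like from application code (isModified() and modifiedPaths() are standard Mongoose document methods; the User model and userId are assumed from the earlier examples):

```javascript
const user = await User.findById(userId);

user.name = 'New Name';
console.log(user.isModified('name')); // true  -> 'name' is now a dirty path
console.log(user.modifiedPaths());    // ['name'] -> only this diff is sent to MongoDB

const firstSave = user.save();        // save #1 starts and is still in flight

try {
  await user.save();                  // save #2 on the same instance
} catch (err) {
  console.error(err.name);            // 'ParallelSaveError'
}

await firstSave;                      // save #1 completes normally
console.log(user.isModified('name')); // false -> the document is marked clean again
```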
## When to Use save() vs Atomic Operators
Use save() when:
- You need Mongoose middleware (pre/post hooks) to run
- You're working with complex embedded documents
- You want validation to run
- Updates are simple and non-concurrent
Use atomic operators when (see the comparison sketch after this list):
- Multiple processes might update the same document
- You need better performance
- You don't need Mongoose middleware
- Updates are simple field changes
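The sketch below contrasts the two paths under those guidelines. The schema and its pre('save') hook are hypothetical; by default, update operations skip document middleware and only run validators when you pass runValidators:

```javascript
const mongoose = require('mongoose');

// Hypothetical schema, only for illustration
const userSchema = new mongoose.Schema({
  email: { type: String, required: true },
  status: String
});

// Document middleware: runs for save(), but NOT for updateOne()/findOneAndUpdate()
userSchema.pre('save', async function () {
  this.email = this.email.trim().toLowerCase();
});

const User = mongoose.model('User', userSchema);

// save(): runs validation and the pre('save') hook above, but is subject to
// ParallelSaveError if the same instance is saved concurrently
const user = await User.findById(userId);
user.email = '  Alice@Example.com ';
await user.save();

// updateOne(): safe to call concurrently, but skips the pre('save') hook and
// only runs validators if you opt in with runValidators
await User.updateOne(
  { _id: userId },
  { $set: { status: 'active' } },
  { runValidators: true }
);
```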
## Performance Considerations
Atomic update operations are generally faster than a fetch-then-save() workflow because:
- There is no initial query to load the document
- Only the update operation is sent to MongoDB
- MongoDB handles concurrency natively
- Overall network traffic is lower
However, save() provides:
- Full document validation
- Middleware execution
- Change tracking
- Instance method availability
Choose based on your specific needs and concurrency requirements.