A MongoDB write concern can be bounded by a timeout. When a write needs acknowledgement from more nodes than the primary, the server waits for those replicas and eventually raises "WriteConcernError: waiting for replication timed out" if they cannot confirm the write before the configured wtimeout expires.
MongoDB tracks how long it waits for the configured number of nodes (`w`) to acknowledge a write. When that window exceeds the `wtimeout` value set on the write concern (by default `wtimeout` is 0, meaning the server waits indefinitely), mongod reports a WriteConcernError while the command itself still returns ok: 1. Drivers surface this as `MongoServerError: waiting for replication timed out`; the document is written on the primary but its replication is never confirmed to the application. The embedded `writeConcernError` payload tells you which write concern failed, the `code` (usually 64, `WriteConcernFailed`), and the error message. Because the timeout typically fires during periods of replication slowness or lost secondaries, the error is a signal that either the replica set cannot reach the needed nodes in time or your timeout is too aggressive.
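The payload can be inspected in application code before deciding whether to retry. A minimal sketch, assuming the response shape documented for the server (the helper name and sample payload are illustrative, not a driver API):

```javascript
// Extract the interesting fields from a writeConcernError payload that a
// driver surfaces alongside an ok: 1 command result.
function describeWriteConcernError(commandResult) {
  const wce = commandResult.writeConcernError;
  if (!wce) return null; // write was fully acknowledged
  return {
    code: wce.code,          // 64 = WriteConcernFailed
    codeName: wce.codeName,
    message: wce.errmsg,     // e.g. "waiting for replication timed out"
    // Modern servers echo the effective write concern in errInfo.
    wtimeout:
      wce.errInfo && wce.errInfo.writeConcern
        ? wce.errInfo.writeConcern.wtimeout
        : undefined,
  };
}

// Example payload modeled on the server's response format:
const result = {
  ok: 1, // the write itself succeeded on the primary
  writeConcernError: {
    code: 64,
    codeName: 'WriteConcernFailed',
    errmsg: 'waiting for replication timed out',
    errInfo: { writeConcern: { w: 'majority', wtimeout: 5000 } },
  },
};
```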
Use rs.status(), rs.printSecondaryReplicationInfo(), and db.serverStatus() to confirm every member is in the PRIMARY or SECONDARY state. Compare each member's optimeDate against the primary's, and make sure no secondary is stuck in RECOVERING or down.
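The optimeDate comparison can be automated. A minimal sketch, with the member documents abbreviated to only the fields used here (the function name is illustrative; the member shape follows the replSetGetStatus output):

```javascript
// Estimate per-secondary replication lag from an rs.status()-style document
// by comparing each secondary's optimeDate to the primary's.
function secondaryLagSeconds(status) {
  const primary = status.members.find((m) => m.stateStr === 'PRIMARY');
  if (!primary) return null; // no primary: the set has bigger problems
  return status.members
    .filter((m) => m.stateStr === 'SECONDARY')
    .map((m) => ({
      name: m.name,
      lagSeconds: (primary.optimeDate - m.optimeDate) / 1000,
    }));
}

// Sample status document (abbreviated):
const status = {
  members: [
    { name: 'rs0-a:27017', stateStr: 'PRIMARY',
      optimeDate: new Date('2024-01-01T00:00:10Z') },
    { name: 'rs0-b:27017', stateStr: 'SECONDARY',
      optimeDate: new Date('2024-01-01T00:00:04Z') },
  ],
};
```

A secondary lagging by more than your wtimeout is a likely culprit for the error.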
If a node is lagging or unreachable, restore replication by resyncing the member (an initial sync rebuilds its data from another node), restarting it, or adding capacity so it can apply oplog entries faster.
If legitimate writes take longer than your configured timeout allows, extend wtimeoutMS on the driver or set wtimeout on the individual command.
Node.js example:

```javascript
const client = new MongoClient(uri, {
  writeConcern: { w: 'majority', wtimeoutMS: 5000 },
});
```

For a single command:

```javascript
db.collection.insertOne(doc, { writeConcern: { w: 2, wtimeout: 5000 } });
```

Increasing the timeout gives the replica set more time to meet the requested acknowledgement count before raising an error.
If you cannot keep the additional nodes healthy, consider relaxing the write concern: drop a custom tagged write concern back to w: 'majority', or fall back to w: 1. With w: 1 the server does not wait for remote nodes at all, though durability drops to the primary alone.
Atlas tip: review the cluster tier's replica set size, tags, and priority settings before relaxing w.
Heavy queries, index builds, or disk saturation can delay oplog application. Keep secondaries performant by:
- Building indexes during low traffic windows
- Monitoring CPU/disk I/O spikes with Atlas/Cloud Manager
- Ensuring the oplog window is large enough
- Increasing priority or adding hidden members if you need more acknowledgement targets
Also verify network devices (firewalls, VPNs) are not closing idle connections faster than your heartbeat and wtimeout.
After fixing the root cause, retry the failed operation, keeping in mind that the write may already be durable on the primary. A wtimeout-induced writeConcernError is not automatically retried by the driver's retryable-writes machinery, so prefer idempotent writes (for example, upserts keyed on a deterministic _id) so application-level retries cannot duplicate data.
WriteConcernError is not fatal to the cluster, but it signals that durability expectations are temporarily unmet. Even though the command returns ok: 1, the driver exposes the error so you can decide whether to retry or fail the request.