MongoDB aborts any transaction that runs longer than the server parameter `transactionLifetimeLimitSeconds` (60 seconds by default). The error appears when a transaction stays open too long, usually because of large batches of work, missing indexes, or waits on long-running operations.
MongoDB tracks how long a transaction has been open and aborts it once it exceeds the configured `transactionLifetimeLimitSeconds`. The default value is 60 seconds; when the timer expires, the server returns the `TransactionExceededLifetimeLimitSeconds` error with code 291. This safeguard prevents a single transaction from holding locks and blocking the cluster for too long. The error is common when a transaction performs many heavy reads or writes, runs an aggregation that scans a large collection or index, waits on external services, or suffers slow network round trips.
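To see the limit currently in effect, you can read the parameter back with `getParameter`. The sketch below assumes a connected Node.js driver `MongoClient` named `client`:

const { transactionLifetimeLimitSeconds } = await client
  .db("admin")
  .command({ getParameter: 1, transactionLifetimeLimitSeconds: 1 });
console.log(`Transactions are aborted after ${transactionLifetimeLimitSeconds} seconds`);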
Split large batches of reads or writes into separate transactions so each one finishes well before 60 seconds. For example, commit after every few hundred documents instead of wrapping the entire job in a single transaction:
const session = client.startSession();
try {
  for (const chunk of chunks) {
    // Commit each batch in its own short transaction instead of
    // holding one transaction open for the whole job.
    await session.withTransaction(async () => {
      await collection.insertMany(chunk, { session });
    });
  }
} finally {
  await session.endSession();
}

Chunk the array or cursor data into small batches so each transaction commits quickly instead of staying open for the entire job.
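If the source data arrives as one large array, a small helper can produce the `chunks` used above. This is a hypothetical helper, not a driver API; `allDocuments` stands in for whatever array you are importing:

// Split a large array into batches of a few hundred documents each.
function toChunks(docs, size = 500) {
  const chunks = [];
  for (let i = 0; i < docs.length; i += size) {
    chunks.push(docs.slice(i, i + size));
  }
  return chunks;
}

const chunks = toChunks(allDocuments); // each batch becomes its own transaction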
Use `db.currentOp({ active: true, transaction: { $exists: true } })` in mongosh to spot transactions that are approaching the lifetime limit. Ensure the commands inside the transaction use covering indexes, keep network calls outside the transaction, and add `maxTimeMS` so you can bail out early:
await db.collection("orders")
  .find({ status: "pending" })
  .maxTimeMS(2000)
  .toArray();

If a read or write takes too long, finish the transaction, diagnose the slow query, and try to optimize it before retrying.
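For a more detailed picture than `db.currentOp()`, the `$currentOp` aggregation stage reports how long each transaction has been open. A minimal sketch using the database-level aggregate of the Node.js driver, again with the assumed `client`:

const openTxns = await client
  .db("admin")
  .aggregate([
    { $currentOp: { allUsers: true, idleSessions: true } },
    { $match: { transaction: { $exists: true } } },
    { $project: { desc: 1, "transaction.timeOpenMicros": 1, "transaction.expiryTime": 1 } },
  ])
  .toArray();
console.log(openTxns); // transactions close to expiryTime are about to hit the limit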
Raise the `transactionLifetimeLimitSeconds` server parameter when the workload legitimately needs more time, but do so carefully so that long-lived transactions do not block the cluster:
// setParameter only changes the mongod you are connected to,
// so repeat this on every member that could become primary.
await db.adminCommand({
  setParameter: 1,
  transactionLifetimeLimitSeconds: 120,
});

Or start mongod with:
mongod --setParameter transactionLifetimeLimitSeconds=120

Atlas clusters expose this parameter under the Advanced Configuration section. Always monitor locks and replication lag after raising the limit, because each transaction can now hold on to resources longer.
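One way to watch replication lag after raising the limit is to compare each secondary's optime against the primary's. A rough sketch using `replSetGetStatus` through the Node.js driver (same assumed `client` as above):

const status = await client.db("admin").command({ replSetGetStatus: 1 });
const primary = status.members.find((m) => m.stateStr === "PRIMARY");
for (const member of status.members) {
  if (member.stateStr === "SECONDARY") {
    // optimeDate is a Date, so subtraction yields milliseconds of lag.
    const lagMs = primary.optimeDate - member.optimeDate;
    console.log(`${member.name} is roughly ${lagMs} ms behind the primary`);
  }
}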
The transaction lifetime limit is a safety mechanism to prevent long-running transactions from blocking the cluster. While you can increase `transactionLifetimeLimitSeconds`, consider that each transaction holds locks and consumes resources. For workloads that genuinely need more time, also evaluate: 1) using snapshot reads instead of multi-document transactions where possible, 2) implementing client-side retry logic with exponential backoff, 3) breaking large operations into idempotent smaller transactions that can be retried independently.
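As an illustration of the retry approach, the sketch below uses a hypothetical helper (not a driver API) that retries a transactional callback with exponential backoff when the server aborts it with code 291 or labels the error as transient:

// Retry a transaction callback with exponential backoff.
async function runWithRetry(client, txnFn, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const session = client.startSession();
    try {
      // withTransaction passes the session to the callback.
      return await session.withTransaction(txnFn);
    } catch (err) {
      const retryable =
        err.code === 291 ||
        (err.errorLabels || []).includes("TransientTransactionError");
      if (!retryable || attempt === maxAttempts) throw err;
      // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
    } finally {
      await session.endSession();
    }
  }
}

Each attempt uses a fresh session so no state leaks between retries; keep the callback idempotent so a retried batch does not write the same documents twice.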