This error occurs when a Firestore transaction runs for more than 270 seconds or remains idle for more than 60 seconds. The transaction fails and no data is written to the database.
Firestore imposes two timeout constraints on transactions: a maximum total duration of 270 seconds (4.5 minutes) from transaction start to commit, and an idle timeout of 60 seconds when no reads or writes occur within the transaction. When either limit is exceeded, Firestore automatically aborts the transaction, rolling back any pending changes without writing anything to the database.

These are hard constraints enforced by the Firestore backend to prevent long-running transactions from holding locks on documents and blocking other operations. The 270-second limit applies to the total elapsed time of the transaction, while the 60-second idle timeout applies when your transaction code pauses between operations. Both timeouts protect database performance and prevent resource exhaustion from stalled or infinite-loop transactions.
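When a transaction does hit one of these limits, the failure surfaces as a rejected promise from runTransaction. A minimal sketch of detecting it, assuming the Node.js Admin SDK, where expired transactions surface as gRPC-style errors; the exact error code and message text vary by SDK version, and the helper names here are illustrative:

```javascript
// Hypothetical helper: classify an error as a transaction timeout.
// gRPC code 4 = DEADLINE_EXCEEDED, 10 = ABORTED; expired transactions
// typically mention "expired" or "deadline" in the message (assumption).
function isTransactionTimeout(err) {
  return (err.code === 4 || err.code === 10) &&
    /expired|deadline/i.test(err.message || '');
}

// Wrap runTransaction so timeouts are logged distinctly before rethrowing.
async function safeRunTransaction(db, updateFn) {
  try {
    return await db.runTransaction(updateFn);
  } catch (err) {
    if (isTransactionTimeout(err)) {
      console.error('Firestore transaction exceeded a time limit:', err.message);
    }
    throw err;
  }
}
```

Distinguishing timeouts from contention errors matters because a timeout usually signals a structural problem (too much work in one transaction) rather than a transient conflict worth retrying.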
Limit the number of documents each transaction modifies. Firestore transactions have a hard limit of 500 documents per transaction.
Break large operations into smaller batches:
// Bad: Processing 1000 documents in one transaction
await db.runTransaction(async (transaction) => {
  for (let i = 0; i < 1000; i++) {
    const docRef = db.collection('items').doc(`item-${i}`);
    transaction.update(docRef, { processed: true });
  }
});

// Good: Process in batches of 200
async function processBatch(startIndex, batchSize) {
  await db.runTransaction(async (transaction) => {
    for (let i = startIndex; i < startIndex + batchSize; i++) {
      const docRef = db.collection('items').doc(`item-${i}`);
      transaction.update(docRef, { processed: true });
    }
  });
}

for (let i = 0; i < 1000; i += 200) {
  await processBatch(i, 200);
}

Move any CPU-intensive work, API calls, or blocking operations outside the transaction to avoid idle timeouts.
// Bad: External API call inside transaction
await db.runTransaction(async (transaction) => {
  const userDoc = await transaction.get(userRef);
  const userData = userDoc.data();
  // This external call can exceed the 60s idle timeout
  const validationResult = await fetch('https://api.example.com/validate', {
    method: 'POST',
    body: JSON.stringify(userData)
  });
  transaction.update(userRef, { validated: validationResult.ok });
});

// Good: Perform external operations first
const userDoc = await userRef.get();
const userData = userDoc.data();
const validationResult = await fetch('https://api.example.com/validate', {
  method: 'POST',
  body: JSON.stringify(userData)
});

// Fast transaction with only database operations
await db.runTransaction(async (transaction) => {
  transaction.update(userRef, { validated: validationResult.ok });
});

If you don't need atomicity guarantees or read-before-write logic, use batched writes, which don't have the same timeout constraints.
// Batched write: no time limits, no retries, atomic commit
const batch = db.batch();
for (let i = 0; i < 500; i++) {
  const docRef = db.collection('items').doc(`item-${i}`);
  batch.update(docRef, { status: 'processed' });
}
await batch.commit();

Batched writes can modify up to 500 documents and don't require read validation, making them faster and more reliable for bulk updates.
For very large datasets, use Promise.all() with individual writes or small batches to parallelize operations.
const BATCH_SIZE = 500;
const PARALLEL_BATCHES = 10;

async function writeBatch(items) {
  const batch = db.batch();
  items.forEach(item => {
    const docRef = db.collection('items').doc(item.id);
    batch.set(docRef, item);
  });
  return batch.commit();
}

// Split 5000 items into 10 batches of 500, execute in parallel
const allItems = [/* 5000 items */];
const batches = [];
for (let i = 0; i < allItems.length; i += BATCH_SIZE) {
  const batchItems = allItems.slice(i, i + BATCH_SIZE);
  batches.push(batchItems);
}

// Process batches in parallel groups
for (let i = 0; i < batches.length; i += PARALLEL_BATCHES) {
  const parallelBatches = batches.slice(i, i + PARALLEL_BATCHES);
  await Promise.all(parallelBatches.map(writeBatch));
}

Ensure all reads happen at the beginning of the transaction and avoid re-reading the same document.
// Bad: Multiple reads scattered throughout
await db.runTransaction(async (transaction) => {
  const user = await transaction.get(userRef);
  // ... some logic ...
  const account = await transaction.get(accountRef);
  // ... more logic ...
  const settings = await transaction.get(settingsRef);
  transaction.update(userRef, { /* ... */ });
});

// Good: Batch all reads at start
await db.runTransaction(async (transaction) => {
  // Perform all reads upfront
  const [user, account, settings] = await Promise.all([
    transaction.get(userRef),
    transaction.get(accountRef),
    transaction.get(settingsRef)
  ]);

  // All writes at the end
  transaction.update(userRef, { /* ... */ });
  transaction.update(accountRef, { /* ... */ });
});

Transaction Retry Behavior: Firestore automatically retries failed transactions up to 5 times when they encounter contention (concurrent modifications). Each retry executes your entire transaction function again. If your transaction function contains slow operations, these retries can compound and push you over the 270-second limit. Always keep transaction functions fast and idempotent.
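If contention-driven retries are a concern, the retry count itself can be lowered. A sketch assuming the Node.js Admin SDK, whose runTransaction accepts a maxAttempts option (default 5); db and counterRef are placeholders:

```javascript
// Cap automatic retries so slow attempts fail fast instead of compounding.
async function incrementWithLimitedRetries(db, counterRef) {
  return db.runTransaction(async (transaction) => {
    const doc = await transaction.get(counterRef);
    const count = (doc.exists ? doc.data().count : 0) + 1;
    transaction.update(counterRef, { count });
    return count;
  }, { maxAttempts: 3 }); // retry at most 3 times on contention
}
```

Lowering maxAttempts trades resilience under contention for a quicker, clearer failure when something is structurally wrong with the transaction.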
Size Limits: Beyond time limits, transactions are constrained by a 10 MiB maximum size for all operations combined. This includes the size of documents being read/written plus index entry sizes. Large documents or fields with many indexed values can hit this limit before the document count limit of 500.
Idle vs Total Time: The 60-second idle timeout resets with each read or write operation. If you perform a read, wait 55 seconds, then perform another read, the transaction continues. However, the 270-second total time limit is measured from transaction start to commit, regardless of activity.
Cloud Functions Timeout: When running transactions in Cloud Functions, be aware that Cloud Functions have their own timeout limits (up to 9 minutes for 1st-gen functions and for event-driven 2nd-gen functions, and up to 60 minutes for HTTP-triggered 2nd-gen functions). Plan transaction execution time accordingly.
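If a bulk job legitimately needs more than the default function timeout, the function's own limit can be raised. A sketch assuming the 1st-gen firebase-functions API, where runWith sets timeoutSeconds (capped at 540 seconds for 1st gen); the function name is illustrative:

```javascript
const functions = require('firebase-functions');

// Give the function the full 9 minutes so batched work isn't cut off mid-run.
exports.migrateItems = functions
  .runWith({ timeoutSeconds: 540, memory: '1GB' })
  .https.onRequest(async (req, res) => {
    // ... run small transactions / batched writes here ...
    res.status(200).send('done');
  });
```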
Alternative: BulkWriter: For Node.js Admin SDK users, the BulkWriter class provides automatic throttling, batching, and retry logic for large-scale writes without transaction semantics. It's ideal for migration scripts or bulk data updates where atomicity isn't required.
const bulkWriter = db.bulkWriter();
for (let i = 0; i < 10000; i++) {
  bulkWriter.update(db.collection('items').doc(`item-${i}`), {
    migrated: true
  });
}
await bulkWriter.close();
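BulkWriter also exposes per-write error handling. A sketch assuming the Node.js Admin SDK's onWriteError hook, where returning true retries the failed write; the factory name here is illustrative:

```javascript
// Hypothetical factory: a BulkWriter that retries each failed write a
// bounded number of times before skipping that document.
function makeResilientBulkWriter(db, maxAttempts = 3) {
  const bulkWriter = db.bulkWriter();
  bulkWriter.onWriteError((error) => {
    if (error.failedAttempts < maxAttempts) {
      return true; // retry this write
    }
    console.error(`Giving up on ${error.documentRef.path}: ${error.message}`);
    return false; // drop the write and let the rest of the batch continue
  });
  return bulkWriter;
}
```

This keeps one bad document from aborting an entire migration, which is usually the right trade-off when atomicity isn't required.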