The DEADLINE_EXCEEDED error occurs when a Firestore operation exceeds the default 60-second timeout window. This commonly happens with large batch operations, complex queries, or high-volume read/write operations. Fix it by optimizing queries, breaking operations into smaller chunks, and addressing latency issues.
The DEADLINE_EXCEEDED error means your Firestore operation took longer than the default deadline (60 seconds) to complete and was terminated by the server. This is a timeout error that indicates the server aborted your request to prevent excessive resource consumption. The error can occur during read, write, or batch operations, and is often caused by increased latency in your database or network calls.
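In the Web SDK the failure surfaces as a FirestoreError whose code property is 'deadline-exceeded', so you can detect it and retry transient cases with backoff. A minimal sketch (the attempt count and delays here are illustrative, not prescribed values):

import { getDoc } from 'firebase/firestore';

// Retry a read when the server reports 'deadline-exceeded',
// backing off between attempts; rethrow anything else immediately.
async function getDocWithRetry(ref, attempts = 3) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await getDoc(ref);
    } catch (err) {
      if (err.code !== 'deadline-exceeded' || attempt === attempts) throw err;
      // Exponential backoff: 500 ms, 1 s, 2 s, ...
      await new Promise(resolve => setTimeout(resolve, 500 * 2 ** (attempt - 1)));
    }
  }
}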
Check that your Firestore security rules are not blocking requests, and make sure your network connection to Firebase is stable. (The snippets in this article use the modular v9+ Web SDK; helpers such as doc, getDoc, query, and writeBatch are imported from 'firebase/firestore'.) You can test connectivity with:
// Test basic connectivity
const testRef = doc(db, 'test', 'ping');
await getDoc(testRef).catch(err => console.error('Connection test failed:', err));

If security rules are denying access, that can also surface as delays and timeouts.
Review your queries to ensure they are efficient. Avoid fetching entire collections when you only need specific documents.
Instead of:
// Bad: fetches all users
const snapshot = await getDocs(collection(db, 'users'));

Use filters and limits:
// Good: fetches only active users with limit
const q = query(
  collection(db, 'users'),
  where('active', '==', true),
  limit(100)
);
const snapshot = await getDocs(q);

Ensure you have composite indexes on frequently queried fields.
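Equality filters on a single field are covered by Firestore's automatic indexes; a composite index is typically needed once you combine a filter with an ordering or range on a different field. A sketch using a hypothetical lastLogin field (when the index is missing, the SDK throws a failed-precondition error containing a console link that creates it):

// Combining where() on one field with orderBy() on another
// requires a composite index on (active, lastLogin)
const recentActive = query(
  collection(db, 'users'),
  where('active', '==', true),
  orderBy('lastLogin', 'desc'),
  limit(100)
);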
Instead of writing thousands of documents in one batch, split them into smaller batches of 100-500 documents each.
const documents = [/* ... */]; // array of plain objects to write

async function batchWrite(items, batchSize = 100) {
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = writeBatch(db);
    const chunk = items.slice(i, i + batchSize);
    chunk.forEach(data => {
      // doc(collection(...)) returns a new reference with an auto-generated ID
      batch.set(doc(collection(db, 'items')), data);
    });
    // Commit each chunk before starting the next
    await batch.commit();
  }
}

await batchWrite(documents);

This distributes the load and reduces the chance of hitting the deadline.
Use Promise.all() to fetch multiple documents in parallel rather than sequentially:
// Bad: sequential reads
const user1 = await getDoc(doc(db, 'users', 'id1'));
const user2 = await getDoc(doc(db, 'users', 'id2'));
// Good: parallel reads
const [user1, user2] = await Promise.all([
  getDoc(doc(db, 'users', 'id1')),
  getDoc(doc(db, 'users', 'id2'))
]);

This reduces total execution time by running the requests concurrently.
If you are hitting deadlines in Cloud Functions (1st gen), you can raise the function's timeout with runWith():
// Using runWith() to set a 120-second timeout (the 1st gen maximum is 540 seconds)
const functions = require('firebase-functions');

exports.processLargeDataset = functions
  .runWith({ timeoutSeconds: 120, memory: '2GB' })
  .onCall(async (data, context) => {
    // Your long-running operation here
  });

Note: Be cautious with very long timeouts, as they increase costs. Use them only when your operation genuinely requires more time.
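If your project is on 2nd gen Cloud Functions instead, the equivalent options are passed directly to the handler; a sketch, assuming the firebase-functions v2 API (note the memory unit becomes '2GiB'):

const { onCall } = require('firebase-functions/v2/https');

// 2nd gen: the options object is the first argument to onCall()
exports.processLargeDataset = onCall(
  { timeoutSeconds: 120, memory: '2GiB' },
  async (request) => {
    // request.data and request.auth replace the (data, context) pair
  }
);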
Instead of loading all results at once, use cursor-based pagination:
// First page
let q = query(
  collection(db, 'items'),
  orderBy('createdAt', 'desc'),
  limit(25)
);
let snapshot = await getDocs(q);

// Next page: use the last document as the cursor
if (snapshot.docs.length > 0) {
  const lastDoc = snapshot.docs[snapshot.docs.length - 1];
  q = query(
    collection(db, 'items'),
    orderBy('createdAt', 'desc'),
    startAfter(lastDoc),
    limit(25)
  );
  snapshot = await getDocs(q);
}

This ensures you only process what you need in each request.
If multiple clients write to the same document frequently, that document becomes a hot-spot: Firestore cannot scale writes to a single document horizontally (the documented guideline is roughly one sustained write per second per document). Consider using distributed counters or sharded collections:
// Instead of incrementing a single counter document,
// use a sharded approach: spread increments across several shard documents
async function incrementShardedCounter(db, counterId, shards = 10) {
  const shardId = Math.floor(Math.random() * shards);
  const shardRef = doc(db, 'counters', `${counterId}_shard_${shardId}`);
  // setDoc with { merge: true } creates a shard on first use;
  // updateDoc would throw if the shard document did not exist yet
  await setDoc(shardRef, { count: increment(1) }, { merge: true });
}

This distributes the write load across multiple documents.
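Reading the total back means summing the shards. A minimal sketch, assuming the same counters collection and shard-naming scheme as above:

import { doc, getDoc } from 'firebase/firestore';

// Sum every shard document for one logical counter;
// shards that have never been written simply contribute zero.
async function getShardedCount(db, counterId, shards = 10) {
  const reads = [];
  for (let i = 0; i < shards; i++) {
    reads.push(getDoc(doc(db, 'counters', `${counterId}_shard_${i}`)));
  }
  const snapshots = await Promise.all(reads);
  return snapshots.reduce(
    (total, snap) => total + (snap.exists() ? (snap.data().count ?? 0) : 0),
    0
  );
}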
DEADLINE_EXCEEDED errors in batch commits can be deceptive: the data may actually have been committed to Firestore despite the timeout error being thrown. If you retry and get an ALREADY_EXISTS error, your data was already committed, so always implement idempotent write operations.

Some intermittent deadline errors stem from cold starts in serverless environments or traffic spikes on Firebase infrastructure. Monitor your function execution time in the Firebase Console to separate genuine bottlenecks from infrastructure-side delays. For enterprise deployments, consider Firestore in Datastore mode, or shard collections if you have predictable hot-spot patterns.
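As a concrete example of an idempotent write, you can derive document IDs from your data instead of letting Firestore auto-generate them, so a retried chunk from the earlier batchWrite pattern overwrites the same documents rather than creating duplicates. A sketch, where item.id is a hypothetical stable identifier from your source data:

import { doc, writeBatch } from 'firebase/firestore';

// Idempotent variant of the chunked batch write: deterministic IDs
// mean a retry after DEADLINE_EXCEEDED rewrites, not duplicates.
async function idempotentBatchWrite(db, items, batchSize = 100) {
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = writeBatch(db);
    for (const item of items.slice(i, i + batchSize)) {
      batch.set(doc(db, 'items', item.id), item); // stable ID, not an auto-ID
    }
    await batch.commit();
  }
}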