This error occurs when a Firestore transaction or batched write operation exceeds the maximum request size of 10 MiB, typically due to writing too many documents or large index updates in a single operation.
This error indicates that your Firestore transaction or batched write has exceeded Firebase's hard limit of 10 MiB (mebibytes) for the total request size. The limit applies to the combined size of all documents and index entries modified within a single transaction or batch operation: the sizes of the documents being written, updated, or deleted, plus the size of every index entry that must be created or modified as a result. For delete operations, this includes the size of the target document and all index entries that need to be removed.

Even if you're only modifying a few documents, many indexed fields or many composite indexes can push the total transaction size past the 10 MiB threshold. When this error occurs, Firestore rolls back the entire transaction and no data is written to the database. You'll need to break the operation into smaller chunks or use an alternative strategy such as parallelized individual writes.
The most direct solution is to reduce the number of documents you're processing in each transaction or batch. Start by cutting your batch size in half and testing:
import { writeBatch, doc } from 'firebase/firestore';

// Before: Too many documents in a single batch
// (assumes `db` is your initialized Firestore instance and
// `largeDataObject` is the payload being written)
const batch = writeBatch(db);
for (let i = 0; i < 500; i++) {
  const docRef = doc(db, 'users', `user${i}`);
  batch.set(docRef, largeDataObject);
}
await batch.commit(); // May exceed 10 MiB

// After: Smaller batches
const BATCH_SIZE = 100; // Adjust based on document size
for (let i = 0; i < 500; i += BATCH_SIZE) {
  const batch = writeBatch(db);
  for (let j = i; j < Math.min(i + BATCH_SIZE, 500); j++) {
    const docRef = doc(db, 'users', `user${j}`);
    batch.set(docRef, largeDataObject);
  }
  await batch.commit();
}

Start with a conservative batch size and gradually increase it until you find the optimal balance.
Before batching, estimate your document sizes to determine appropriate batch limits:
import { writeBatch, doc } from 'firebase/firestore';

function estimateDocumentSize(data: any): number {
  // Rough estimation (actual size includes indexes and metadata)
  const jsonSize = JSON.stringify(data).length;
  const indexOverhead = Object.keys(data).length * 100; // Approximate per-field index cost
  return jsonSize + indexOverhead;
}

const MAX_TRANSACTION_SIZE = 10 * 1024 * 1024; // 10 MiB in bytes
const SAFETY_MARGIN = 0.8; // Use 80% of the limit to be safe

async function smartBatch(documents: any[]) {
  let currentBatch = writeBatch(db);
  let currentSize = 0;
  let batchCount = 0;

  for (const docData of documents) {
    const docSize = estimateDocumentSize(docData);

    if (currentSize + docSize > MAX_TRANSACTION_SIZE * SAFETY_MARGIN) {
      // Commit the current batch and start a new one
      await currentBatch.commit();
      console.log(`Committed batch ${++batchCount}`);
      currentBatch = writeBatch(db);
      currentSize = 0;
    }

    const docRef = doc(db, 'collection', docData.id);
    currentBatch.set(docRef, docData);
    currentSize += docSize;
  }

  // Commit the final batch
  if (currentSize > 0) {
    await currentBatch.commit();
    console.log(`Committed final batch ${++batchCount}`);
  }
}

For very large datasets, skip batching entirely and use parallelized individual writes with rate limiting:
import { setDoc, doc } from 'firebase/firestore';

async function parallelWriteWithLimit(
  documents: any[],
  concurrency: number = 50
) {
  // Split the dataset into chunks of `concurrency` documents
  const chunks: any[][] = [];
  for (let i = 0; i < documents.length; i += concurrency) {
    chunks.push(documents.slice(i, i + concurrency));
  }

  // Write each chunk in parallel, one chunk at a time
  for (const chunk of chunks) {
    await Promise.all(
      chunk.map(async (docData) => {
        const docRef = doc(db, 'collection', docData.id);
        return setDoc(docRef, docData);
      })
    );
    console.log(`Processed ${chunk.length} documents`);
  }
}

// Usage
await parallelWriteWithLimit(largeDataset, 50);

This approach sacrifices atomicity but handles unlimited data volumes without transaction size limits.
Excessive composite indexes can dramatically increase transaction sizes. Review your firestore.indexes.json:
{
  "indexes": [
    {
      "collectionGroup": "users",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "status", "order": "ASCENDING" },
        { "fieldPath": "createdAt", "order": "DESCENDING" }
      ]
    }
  ]
}

Remove unused indexes and consider:
- Using single-field indexes instead of composite indexes where possible
- Disabling auto-indexing on large text fields you don't query
- Using array-contains queries instead of complex composite indexes
- Exempting specific fields from indexing with single-field index exemptions (fieldOverrides in firestore.indexes.json or the Firebase Console); see the example below
Check the Firebase Console → Firestore → Indexes tab to review your composite indexes and single-field exemptions, and remove any indexes your queries no longer use.
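As a sketch of the exemption approach, the fieldOverrides section of firestore.indexes.json can disable single-field indexing for a field you never query (the `users` collection and `description` field here are illustrative; keep your existing composite indexes in the "indexes" array and deploy with `firebase deploy --only firestore:indexes`):

{
  "indexes": [],
  "fieldOverrides": [
    {
      "collectionGroup": "users",
      "fieldPath": "description",
      "indexes": []
    }
  ]
}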
If individual documents are very large, consider restructuring your data model:
import { setDoc, addDoc, doc, collection } from 'firebase/firestore';

// Before: One large document
const userDoc = {
  id: 'user123',
  profile: { /* large object */ },
  preferences: { /* large object */ },
  activityLog: [ /* hundreds of entries */ ],
  metadata: { /* more data */ }
};

// After: Main document + subcollections
const userDoc = {
  id: 'user123',
  // Only essential fields in the main document
};

// Split into subcollections
await setDoc(doc(db, 'users', 'user123'), userDoc);
await setDoc(doc(db, 'users/user123/details/profile'), profileData);
await setDoc(doc(db, 'users/user123/details/preferences'), preferencesData);

// Activity log entries as separate documents
for (const activity of activityLog) {
  await addDoc(collection(db, 'users/user123/activity'), activity);
}

This keeps each write operation smaller and more manageable.
Transaction Size Calculation: The 10 MiB limit covers more than document data: it includes the serialized size of all index entries created or deleted. If you have 10 composite indexes on a collection, each document write creates 10+ index entries. Use Firestore's Query Explain feature to see which indexes your queries actually rely on.
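As a hedged sketch of Query Explain (assuming a recent firebase-admin release that includes it, and reusing the `users` query shape from the index example above):

import { initializeApp } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';

initializeApp();
const db = getFirestore();

// Plan and execute the query, returning metrics about index usage and execution cost
const results = await db
  .collection('users')
  .where('status', '==', 'active')
  .orderBy('createdAt', 'desc')
  .explain({ analyze: true });

console.log(results.metrics.planSummary.indexesUsed); // Indexes the planner selected
console.log(results.metrics.executionStats);          // Documents scanned, results returned, duration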
Server vs Client SDKs: Server-side SDKs (Admin SDK) have better performance for bulk operations but are subject to the same 10 MiB limit. The Admin SDK's BulkWriter class (available in Node.js) automatically handles batching and retries, making it ideal for large-scale imports.
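A minimal BulkWriter sketch, assuming the Node.js Admin SDK and a hypothetical `users` collection:

import { initializeApp } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';

initializeApp();
const db = getFirestore();

async function bulkImport(documents: any[]) {
  const bulkWriter = db.bulkWriter();

  // Retry transient write failures up to 3 times, then give up on that document
  bulkWriter.onWriteError((error) => error.failedAttempts < 3);

  for (const docData of documents) {
    // Each write is sent independently with automatic batching and throttling,
    // so no single request approaches the 10 MiB limit (no atomicity across documents)
    bulkWriter.set(db.collection('users').doc(docData.id), docData);
  }

  // Flush remaining writes and wait for all of them to finish
  await bulkWriter.close();
}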
Firestore Data Bundles: For truly massive datasets, consider using Firestore Data Bundles to preload data on clients, or Cloud Functions to handle large write operations asynchronously with custom batching logic.
Monitoring: Enable Cloud Firestore monitoring in the Google Cloud Console to track write traffic and error rates and identify which operations are failing. Set up alerts for repeated transaction failures.
Database Migration Strategy: If migrating large datasets, use Cloud Functions with Pub/Sub to process records in manageable chunks over time rather than attempting bulk migration in a single operation.
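One way this can look, sketched with 2nd-gen Cloud Functions and a hypothetical `migration-chunks` Pub/Sub topic whose messages each carry one JSON array of records (the publisher that splits the dataset into chunks is omitted):

import { initializeApp } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';
import { onMessagePublished } from 'firebase-functions/v2/pubsub';

initializeApp();
const db = getFirestore();

// Each Pub/Sub message carries one manageable chunk of records,
// so no single invocation approaches the 10 MiB transaction limit
export const migrateChunk = onMessagePublished('migration-chunks', async (event) => {
  const records: Array<{ id: string; [key: string]: unknown }> = event.data.message.json;

  const batch = db.batch();
  for (const record of records) {
    batch.set(db.collection('users').doc(record.id), record);
  }
  await batch.commit();
});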