The Spark (free) Firebase plan caps Firestore at 50,000 document reads per 24-hour window, so once your project consumes the allotment Firestore returns a ResourceExhausted error and blocks any further reads until the quota resets. Confirm the quota usage, upgrade to Blaze if you need more headroom, and cut the number of reads (caching, aggregation, tighter queries) while setting up usage alerts so you no longer burn through the free limit.
The error means Firestore is enforcing the Spark plan's daily document read quota (50,000 reads/day) and has reached that hard limit. Every read operation counts toward the quota, including listeners, explicit get() calls, and Cloud Functions that rehydrate data. When the counter hits 50,000 the backend returns ResourceExhausted and stops serving reads for the rest of the day; the quota resets around midnight Pacific time. The quota applies to the project's single free Firestore database as a whole, so the only way to keep serving reads beyond that point is to either throttle the workload (caching, aggregation, fewer documents per request) or upgrade the project to the Blaze (pay-as-you-go) plan and link it to a Cloud Billing account.
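As a back-of-the-envelope check, it helps to see how quickly even a modest app exhausts the quota: every document returned to every listener attachment counts as one read. All the traffic numbers below are hypothetical.

```javascript
// Rough estimate of daily Firestore reads (all numbers are hypothetical).
// Each document returned to each listener attachment is one billed read.
const activeUsers = 100;           // daily active users
const docsPerQuery = 25;           // documents matched by the main listener
const refreshesPerUserPerDay = 20; // app opens / listener re-attachments

const estimatedDailyReads = activeUsers * docsPerQuery * refreshesPerUserPerDay;
console.log(estimatedDailyReads); // prints 50000, exactly the Spark daily cap
```

With only 100 users and a 25-document dashboard refreshed 20 times a day, the project sits right at the 50,000-read cap, which is why caching and aggregation (below) matter even for small apps.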
Open Firebase Console → Firestore → Usage & billing and switch the chart to Document reads with the last 24‑hour window. If the line hits 50,000 and the count stops growing, the Spark plan quota is exhausted. Also inspect Project settings → Usage and billing to confirm the project is still on the Spark (free) tier, which includes exactly one Firestore database and 50,000 reads/day (see https://firebase.google.com/docs/firestore/quotas).
In Firebase Console → Usage & billing → Manage plan, switch from Spark to Blaze and attach a Cloud Billing account. Blaze removes the 50,000/day free cap and charges per Firestore read (e.g., $0.06 per 100k reads in the U.S.), so you can serve more traffic without ResourceExhausted errors. After upgrading, create budgets/alerts in Cloud Billing so you get notified before the spend exceeds your comfort level.
For repeated reads (feature flags, config, dashboards), store the last value in memory and only refetch when it expires. Example in JavaScript:
import { doc, getDoc } from 'firebase/firestore';

// Simple in-memory cache keyed by document path.
const cache = new Map();

async function getCachedDoc(path) {
  const entry = cache.get(path);
  if (entry && entry.expiresAt > Date.now()) {
    return entry.data; // served from cache, no billed read
  }
  const snapshot = await getDoc(doc(db, path)); // db: your initialized Firestore instance
  const data = snapshot.data() || {};
  cache.set(path, { data, expiresAt: Date.now() + 60_000 }); // 60s TTL
  return data;
}

Also enable IndexedDB persistence (call enableIndexedDbPersistence() on web clients, or enable disk persistence on mobile) so Firestore automatically serves cached data and reduces network reads.
Instead of fetching an entire collection, store pre-aggregated documents that contain the values your UI or function needs and update those summaries via Cloud Functions:
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

exports.refreshDailyTotals = functions.firestore
  .document('orders/{orderId}')
  .onWrite(async (change, context) => {
    const db = admin.firestore();
    const summaryRef = db.doc('materialized/dailyTotals');
    await db.runTransaction(async (tx) => {
      const summary = (await tx.get(summaryRef)).data() || { total: 0 };
      // New amount minus old amount (0 when the order is created or deleted).
      const delta =
        (change.after.exists ? change.after.get('amount') : 0) -
        (change.before.exists ? change.before.get('amount') : 0);
      tx.set(summaryRef, { total: summary.total + delta }, { merge: true });
    });
  });

Also use select(), limit(), and where() filters that keep the number of matched documents low.
Create a Cloud Monitoring alert on the metric firestore.googleapis.com/document/read_count and set the threshold at ~45,000 reads for a rolling 24-hour window. Configure notifications (email, Slack, etc.) so your team can throttle traffic before Firestore responds with the quota error. You can also export Firestore usage to BigQuery or use the Firebase Usage page to look at top collections and Cloud Functions that consume reads.
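One way to wire up that alert is an alerting policy created with gcloud alpha monitoring policies create --policy-from-file. The JSON below is a sketch only: the display names, threshold, and alignment period are assumptions to adapt, and you still need to attach your own notification channels.

```json
{
  "displayName": "Firestore reads nearing Spark quota",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Document reads above 45k in 24h",
      "conditionThreshold": {
        "filter": "metric.type=\"firestore.googleapis.com/document/read_count\"",
        "aggregations": [
          { "alignmentPeriod": "86400s", "perSeriesAligner": "ALIGN_SUM" }
        ],
        "comparison": "COMPARISON_GT",
        "thresholdValue": 45000,
        "duration": "0s"
      }
    }
  ]
}
```

The 45,000 threshold leaves roughly 10% headroom before the 50,000 cap, giving you time to throttle traffic or upgrade before reads are rejected.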
Quota usage resets around midnight Pacific time, so after a Spark plan project hits the 50,000 read cap it automatically recovers at the next reset window (per https://firebase.google.com/docs/firestore/quotas). However, because Spark only allows one free Firestore database per project, you cannot simply create multiple databases to multiply the quota. Switching to Blaze lets you keep consuming reads (you pay per operation), but Blaze projects are still subject to the per-second and per-minute rate limits documented on the quotas page, so keep bursty workloads in check even after the upgrade.
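Until the quota resets (or while you migrate to Blaze), clients that hit the quota should back off instead of hammering the API. A minimal retry sketch, assuming the SDK surfaces quota errors with a code of 'resource-exhausted' (as the web SDK does); fetchFn is a placeholder for your own read call:

```javascript
// Retry a read with exponential backoff when Firestore reports a
// resource-exhausted (quota) error. `fetchFn` is any async function that
// performs the read and throws errors carrying a `code` property.
async function readWithBackoff(fetchFn, maxAttempts = 5, baseDelayMs = 500) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fetchFn();
    } catch (err) {
      if (err.code !== 'resource-exhausted' || attempt === maxAttempts - 1) {
        throw err; // not a quota error, or out of retries
      }
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Note that backoff only smooths short bursts: if the daily cap is truly exhausted, every retry will fail until the reset, so pair this with the caching and aggregation techniques above.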