This error occurs when a Cloud Function takes longer to execute than the maximum allowed timeout duration. By default, Firebase sets a 60-second timeout for HTTP functions and 540 seconds for background functions, with higher limits available for 2nd generation functions.
The "Function exceeded timeout limit" error means your Cloud Function did not complete within the time limit allocated to it. Firebase Cloud Functions have strict execution time limits to prevent runaway code from consuming resources indefinitely. When a function reaches its timeout, Firebase forcibly terminates the function execution and returns an error status immediately to the caller. This can happen for several reasons: - **Long-running operations**: Database queries, API calls, or data processing that takes longer than expected - **Slow external services**: Calling third-party APIs or databases that respond slowly - **Default timeout too low**: The default 60-second timeout for HTTP functions may not be sufficient for your use case - **Inefficient code**: Unoptimized algorithms, unnecessary loops, or excessive computations - **Cold starts**: Initialization and warm-up time in first-generation functions It's important to understand the timeout limits for your function type. Firebase Cloud Functions come in two generations with different limits: **1st Generation Functions:** - Default timeout: 60 seconds for HTTP, 540 seconds for background - Maximum timeout: 540 seconds (9 minutes) **2nd Generation Functions (Cloud Run):** - HTTP and callable functions: Up to 3600 seconds (60 minutes) - Event-driven functions: Up to 540 seconds (9 minutes) - Task queue functions: Up to 1800 seconds (30 minutes)
First, verify how long your function is actually taking to execute:
1. Go to the [Firebase Console](https://console.firebase.google.com/)
2. Select your project and navigate to Functions
3. Click on the failing function name
4. Go to the Logs tab
5. Look for execution duration in the logs
Check if the function is consistently close to or exceeding the timeout limit. This tells you if the timeout is too low or if there's a performance issue.
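If the logs don't show durations clearly, you can log them yourself. A minimal sketch of a timing wrapper (`timedHandler` is this guide's own illustrative helper, not a Firebase API):

```javascript
// Wrap any async handler so each invocation logs how long it took,
// making near-timeout runs easy to spot in the Functions logs.
function timedHandler(name, handler) {
  return async (...args) => {
    const start = Date.now();
    try {
      return await handler(...args);
    } finally {
      console.log(`${name} finished in ${Date.now() - start}ms`);
    }
  };
}

// Usage sketch:
// exports.myFn = functions.https.onRequest(
//   timedHandler('myFn', async (req, res) => { /* ... */ })
// );
```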
Set a longer timeout using the runWith configuration when deploying your function.
**For HTTP Functions (Node.js):**

```javascript
const functions = require('firebase-functions');

exports.myHttpFunction = functions
  .runWith({ timeoutSeconds: 300 }) // 5 minutes instead of the default 1 minute
  .https.onRequest((req, res) => {
    // Your function code
    res.send('Success');
  });
```

**For Background/Event Functions (Node.js):**
```javascript
exports.myBackgroundFunction = functions
  .runWith({ timeoutSeconds: 540 }) // Maximum for 1st gen: 9 minutes
  .firestore.document('users/{uid}')
  .onCreate(async (snap, context) => {
    // Your function code
  });
```

**For Python:**
```python
from firebase_functions import https_fn

@https_fn.on_request(timeout_sec=300)
def my_http_function(req: https_fn.Request) -> https_fn.Response:
    # Your function code
    return https_fn.Response('Success')
```

After updating your code, redeploy the function:
```shell
firebase deploy --only functions:myHttpFunction
```

Even with a longer timeout, optimize your code to execute faster:
1. Reuse connections and clients across invocations:

```javascript
// Initialize the Admin SDK once in global scope so every invocation
// reuses the same client instead of opening a new connection
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Good: reuse the shared client
exports.myFunction = functions.https.onRequest(async (req, res) => {
  const doc = await db.collection('users').doc('user1').get();
  res.json(doc.data());
});
```

2. Use Promise.all() for concurrent operations:
```javascript
// Bad: sequential operations
const user = await getUser();
const posts = await getPosts();
const comments = await getComments();

// Good: parallel operations
const [user, posts, comments] = await Promise.all([
  getUser(),
  getPosts(),
  getComments(),
]);
```

3. Avoid synchronous operations:
```javascript
// Bad: blocks the event loop
const fs = require('fs');
const data = fs.readFileSync('large-file.json');

// Good: use async I/O
const data = await fs.promises.readFile('large-file.json');
```

4. Set timeouts on external API calls:
```javascript
// Use fetch with a timeout via AbortController
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5000); // 5-second timeout

try {
  const response = await fetch(externalUrl, { signal: controller.signal });
  const data = await response.json();
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Request timed out');
  }
} finally {
  clearTimeout(timeout); // avoid a stray abort after the request completes
}
```

If you're using Firebase Cloud Functions 2nd generation (Cloud Run-based), you have access to much longer timeouts.
Check your function generation:
1. Go to Firebase Console → Functions
2. Click on your function
3. Look for "Generation" label (1st or 2nd)
If your function is (or can be) 2nd generation, define it with the v2 SDK and set a longer timeout:

```javascript
const { onRequest } = require('firebase-functions/v2/https');
const { onDocumentCreated } = require('firebase-functions/v2/firestore');

// HTTP functions: up to 3600 seconds (60 minutes)
exports.longRunningHttp = onRequest(
  { timeoutSeconds: 3600 },
  async (req, res) => {
    // Can run for up to 60 minutes
    res.json({ success: true });
  }
);

// Event-driven functions: up to 540 seconds (9 minutes)
exports.longRunningEvent = onDocumentCreated(
  { document: 'data/{id}', timeoutSeconds: 540 },
  async (event) => {
    // Can run for up to 9 minutes
  }
);
```

To migrate a 1st generation function, rewrite it against the `firebase-functions/v2` APIs (there is no deploy flag that switches generations) and redeploy:

```shell
firebase deploy --only functions
```

More memory often means a faster CPU allocation, which can reduce execution time:
```javascript
const functions = require('firebase-functions');

exports.myFunction = functions
  .runWith({
    timeoutSeconds: 300,
    memory: '512MB', // Increase from the default 256MB
  })
  .https.onRequest(async (req, res) => {
    // Faster execution with more memory
    res.json({ success: true });
  });
```

Available memory options:
- 128MB (not recommended)
- 256MB (default for HTTP)
- 512MB
- 1GB
- 2GB
- 4GB (2nd gen only)
- 8GB (2nd gen only)
- 16GB (2nd gen only)
More memory = higher cost, but can significantly speed up function execution.
For operations that are inherently long-running, break them into smaller tasks:
Example: processing a large file in chunks

```javascript
// Function 1: split the file and queue one task document per chunk
exports.processLargeFile = functions
  .runWith({ timeoutSeconds: 60 })
  .https.onRequest(async (req, res) => {
    const fileId = req.query.fileId;
    // splitFile is a placeholder for your own chunking logic
    const chunks = await splitFile(fileId, 100);
    for (const chunk of chunks) {
      // Queue each chunk for processing
      await admin.firestore().collection('tasks').add({
        fileId,
        chunkId: chunk.id,
        status: 'pending',
      });
    }
    res.json({ queued: chunks.length });
  });
```
```javascript
// Function 2: process one chunk per queued task document
// (triggered by the Firestore writes made by Function 1)
exports.processChunk = functions.firestore
  .document('tasks/{taskId}')
  .onCreate(async (snap, context) => {
    const { fileId, chunkId } = snap.data();
    // processChunkData is a placeholder for your per-chunk logic
    await processChunkData(fileId, chunkId);
  });
```

If the timeout is caused by temporary service unavailability, add retry logic:
```javascript
const functions = require('firebase-functions');

async function callExternalApiWithRetry(url, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url);
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return response.json();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      // Exponential backoff: 2s, 4s, 8s, ...
      const delay = Math.pow(2, attempt) * 1000;
      console.log(`Retry attempt ${attempt} after ${delay}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

exports.resilientFunction = functions
  .runWith({ timeoutSeconds: 300 })
  .https.onRequest(async (req, res) => {
    try {
      const data = await callExternalApiWithRetry('https://api.example.com/data');
      res.json(data);
    } catch (error) {
      res.status(503).json({ error: 'Service temporarily unavailable' });
    }
  });
```

Understanding Cold Starts:
Cold starts occur when a function hasn't been invoked recently and the runtime needs to initialize. For 1st generation functions, this initialization time counts toward the timeout, which can cause issues:
- Environment initialization: 2-5 seconds
- Module loading: 1-2 seconds
- Your actual code: the remaining time

Solution: use 2nd generation functions, or increase the timeout for 1st gen.

When NOT to increase timeout:
Increasing the timeout isn't always the right solution. If your function would take 10+ minutes, consider alternative approaches:
- **Cloud Tasks**: For work that can be queued and processed asynchronously
- **Cloud Workflows**: For multi-step orchestration with longer durations
- **Compute Engine**: For truly long-running batch jobs
- **Cloud Dataflow**: For large-scale data processing
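The core idea behind the Cloud Tasks approach is to enqueue work and return immediately, letting a separate worker process each item within its own short timeout. A toy in-process sketch of that pattern (the `TaskQueue` class here is purely illustrative; Cloud Tasks provides this as a managed, durable service):

```javascript
// Toy illustration of "enqueue now, process later": each task runs
// sequentially in the background while the caller returns immediately.
class TaskQueue {
  constructor(worker) {
    this.worker = worker;
    this.pending = Promise.resolve();
  }

  // Chain the task onto the queue without waiting for it
  enqueue(task) {
    this.pending = this.pending
      .then(() => this.worker(task))
      .catch((err) => console.error('task failed:', err));
  }

  // Await completion of everything enqueued so far
  drain() {
    return this.pending;
  }
}
```

Unlike this sketch, a real task queue survives process restarts and retries failed tasks, which is why long workloads belong there rather than in one oversized function.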
Monitoring and prevention:
Set up alerts for timeout-prone functions:
```javascript
// Log execution time to identify patterns
exports.monitoredFunction = functions
  .runWith({ timeoutSeconds: 300 })
  .https.onRequest(async (req, res) => {
    const startTime = Date.now();
    try {
      // Your code
      const result = await someOperation();
      const duration = Date.now() - startTime;
      if (duration > 240000) { // Warn when using >80% of the 300s timeout
        console.warn(`Function took ${duration}ms (near timeout)`);
      }
      res.json(result);
    } catch (error) {
      console.error('Function error:', error);
      res.status(500).json({ error: error.message });
    }
  });
```

Cost implications:
- Longer timeouts = potential higher execution costs if functions run longer
- More memory = higher cost per 100ms
- 2nd generation pricing is based on actual runtime, so short-lived optimized code is more cost-efficient
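To reason about the memory/duration tradeoff concretely, compare billed GB-seconds. A back-of-the-envelope sketch (the `gbSeconds` helper and the numbers are illustrative; consult the actual Cloud Functions pricing page for real rates):

```javascript
// Billed compute is roughly proportional to memory × duration (GB-seconds)
function gbSeconds(memoryMb, durationMs) {
  return (memoryMb / 1024) * (durationMs / 1000);
}

// If doubling memory roughly halves a CPU-bound function's duration,
// the GB-seconds billed stay about the same:
console.log(gbSeconds(256, 4000)); // 1 GB-second
console.log(gbSeconds(512, 2000)); // 1 GB-second
```

So raising memory is often close to cost-neutral for CPU-bound work, while cutting the risk of hitting the timeout.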