This error occurs when a server acting as a gateway or proxy does not receive a timely response from an upstream server. It commonly happens when your Node.js application makes requests to external APIs, databases, or microservices that take too long to respond, exceeding the configured timeout limits.
The HTTP 504 Gateway Timeout error indicates that a server, while acting as a gateway or proxy, did not receive a response in time from an upstream server needed to complete the request. This is distinct from a 502 Bad Gateway: with a 504, the gateway successfully connected to the upstream server, but the upstream failed to respond within the allowed time window.
In Node.js applications, this error typically appears when your application sits behind a reverse proxy (Nginx, Apache, AWS ELB) or when your application itself acts as a proxy making requests to other services. The timeout can occur at multiple layers: the reverse proxy waiting for your Node.js app, your app waiting for an external API, or database queries taking too long. The error is not necessarily caused by your Node.js code directly; it indicates a timing mismatch between configured timeout values and actual processing time. Understanding the timeout chain in your architecture is crucial for diagnosing and fixing this issue.
Start by determining where in the request chain the timeout occurs. Check logs at each layer:
# Check Nginx error logs
sudo tail -f /var/log/nginx/error.log
# Check Node.js application logs
pm2 logs your-app
# For AWS environments, tail load balancer logs (ELB access logs ship to S3; this assumes they are forwarded to CloudWatch Logs)
aws logs tail /aws/elasticloadbalancing/your-lb --follow
Look for timeout messages that indicate whether the proxy, application, or upstream service is timing out. The error message often reveals which layer failed.
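If the logs point at the application layer, lightweight instrumentation can show which routes approach the timeout. A minimal Express sketch; the one-second threshold is arbitrary:
import express from 'express';

const app = express();

// Log any request that takes more than 1 second to complete
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - start;
    if (duration > 1000) {
      console.warn(`Slow request: ${req.method} ${req.originalUrl} took ${duration}ms`);
    }
  });
  next();
});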
If using Nginx as a reverse proxy, increase timeout values in your configuration:
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://localhost:3000;
# Increase all proxy timeouts
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
# Important: match or exceed your app's processing time
send_timeout 300s;
}
}
For Apache, update httpd.conf or a virtual host:
<VirtualHost *:80>
ProxyPass / http://localhost:3000/
ProxyPassReverse / http://localhost:3000/
# Set timeout to 5 minutes
ProxyTimeout 300
</VirtualHost>
After changing the configuration, validate and restart the web server, e.g. sudo nginx -t && sudo systemctl restart nginx (or sudo systemctl restart apache2 for Apache).
Increase timeout settings in your Node.js HTTP server to handle long-running requests:
import express from 'express';
const app = express();
const PORT = 3000;
const server = app.listen(PORT, () => {
console.log(`Server listening on port ${PORT}`);
});
// Set server timeout to 5 minutes (300,000 ms)
server.timeout = 300000;
// Set headers timeout (should be higher than keepAliveTimeout)
server.headersTimeout = 310000;
// Set keep-alive timeout
server.keepAliveTimeout = 65000; // Higher than the proxy's keep-alive timeout
For production environments behind a load balancer, ensure the Node.js keep-alive timeout is longer than the load balancer's idle timeout; otherwise Node.js may close idle connections first, causing intermittent 502 or 504 errors.
When your Node.js app makes outgoing requests, configure appropriate timeouts:
Using Axios:
import axios from 'axios';
const api = axios.create({
timeout: 30000, // 30 second timeout
});
// Or per-request
try {
const response = await axios.get('https://api.example.com/data', {
timeout: 60000, // 60 seconds for slow endpoints
});
console.log(response.data);
} catch (error) {
if (error.code === 'ECONNABORTED') {
console.error('Request timed out');
} else if (error.response?.status === 504) {
console.error('Upstream gateway timeout');
}
throw error;
}
Using native fetch:
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 30000);
try {
const response = await fetch('https://api.example.com/data', {
signal: controller.signal,
});
  const data = await response.json();
} catch (error) {
  if (error.name === 'AbortError') {
    console.error('Request timed out after 30 seconds');
  }
} finally {
  // Clear the timer in all cases so it cannot fire after the request settles
  clearTimeout(timeoutId);
}
Match these timeouts to the expected response time of your upstream services.
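On newer runtimes (AbortSignal.timeout landed in Node.js 17.3; fetch is stable from Node.js 18) the manual timer can be dropped entirely. Note the error name is TimeoutError rather than AbortError:
try {
  const response = await fetch('https://api.example.com/data', {
    signal: AbortSignal.timeout(30_000), // aborts automatically after 30 seconds
  });
  const data = await response.json();
} catch (error) {
  if (error.name === 'TimeoutError') {
    console.error('Request timed out after 30 seconds');
  }
}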
Investigate and improve operations that exceed timeout limits:
Profile slow database queries:
// Add query logging
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient({
log: [
{ emit: 'event', level: 'query' },
],
});
prisma.$on('query', (e) => {
if (e.duration > 1000) { // Log queries over 1 second
console.warn(`Slow query (${e.duration}ms): ${e.query}`);
}
});
Implement caching for expensive operations:
import NodeCache from 'node-cache';
const cache = new NodeCache({ stdTTL: 600 }); // 10 minute cache
app.get('/api/expensive-data', async (req, res) => {
const cacheKey = 'expensive-data';
const cached = cache.get(cacheKey);
if (cached) {
return res.json(cached);
}
// Expensive operation
const data = await fetchFromSlowAPI();
cache.set(cacheKey, data);
res.json(data);
});
Move long-running tasks to background jobs:
import Queue from 'bull';
const jobQueue = new Queue('slow-tasks');
app.post('/api/process', async (req, res) => {
// Don't wait for completion
const job = await jobQueue.add({ data: req.body });
// Return immediately
res.json({ jobId: job.id, status: 'processing' });
});Add middleware to proactively handle request timeouts in Express:
Add middleware to proactively handle request timeouts in Express:
function timeoutMiddleware(seconds) {
return (req, res, next) => {
// Set timeout on request
req.setTimeout(seconds * 1000, () => {
console.error(`Request timeout after ${seconds} seconds`);
if (!res.headersSent) {
res.status(504).json({
error: 'Gateway Timeout',
message: 'Request took too long to process',
});
}
});
// Set timeout on response
res.setTimeout(seconds * 1000, () => {
console.error(`Response timeout after ${seconds} seconds`);
if (!res.headersSent) {
res.status(504).json({
error: 'Gateway Timeout',
message: 'Response took too long to send',
});
}
});
next();
};
}
// Apply globally or to specific routes
app.use(timeoutMiddleware(120)); // 2 minute timeout
// Or per-route for slow endpoints
app.get('/api/slow-report', timeoutMiddleware(300), async (req, res) => {
const report = await generateSlowReport();
res.json(report);
});
This provides better error messages and prevents requests from hanging indefinitely.
AWS Elastic Load Balancer Considerations
AWS Application Load Balancers (ALB) have a 60-second idle timeout by default; it can be raised (up to 4,000 seconds) in the load balancer attributes. If raising it is not an option and your application takes longer, you have two alternatives:
1. Stream response data to keep the connection alive:
app.get('/api/long-process', async (req, res) => {
res.setHeader('Content-Type', 'text/plain');
res.setHeader('Transfer-Encoding', 'chunked');
// Send periodic updates
const interval = setInterval(() => {
res.write('processing...\n');
}, 5000);
try {
const result = await longRunningOperation();
clearInterval(interval);
res.end(JSON.stringify(result));
} catch (error) {
clearInterval(interval);
// Headers were already sent with the first chunk, so the status code can no longer change; report the failure in the body
res.end(JSON.stringify({ error: 'processing failed' }));
}
});
2. Use asynchronous processing with webhooks or polling; a sketch of the polling side follows.
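For the polling variant, the client keeps the jobId returned by POST /api/process and asks for the result later. A minimal status endpoint for the Bull queue from the earlier example; the route shape is an assumption:
// Client polls this endpoint with the jobId returned by POST /api/process
app.get('/api/process/:jobId', async (req, res) => {
  const job = await jobQueue.getJob(req.params.jobId);
  if (!job) {
    return res.status(404).json({ error: 'Job not found' });
  }
  const state = await job.getState(); // e.g. 'waiting', 'active', 'completed', 'failed'
  if (state === 'completed') {
    return res.json({ status: state, result: job.returnvalue });
  }
  res.json({ status: state });
});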
Timeout Chain Best Practices
Configure timeouts in descending order through your stack:
- Load balancer: 60s (or cloud provider limit)
- Nginx proxy_read_timeout: 55s
- Node.js server.timeout: 50s
- Database query timeout: 45s
- External API timeout: 40s
Cascading the values this way ensures the innermost layer fails first, so your application can return a meaningful error before an outer proxy gives up and emits a bare 504 (see the sketch below).
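A minimal sketch of centralizing these budgets in one module; the numbers and the TIMEOUTS object are illustrative, not a fixed API:
import express from 'express';
import axios from 'axios';

// Single source of truth for the timeout chain (values are illustrative)
const TIMEOUTS = {
  loadBalancer: 60_000, // configured on the load balancer itself; listed for reference
  server: 50_000, // Node.js HTTP server
  database: 45_000, // database query budget
  externalApi: 40_000, // outgoing HTTP requests
};

const app = express();
const server = app.listen(3000);
server.timeout = TIMEOUTS.server;

// Every outgoing call inherits the external API budget
const api = axios.create({ timeout: TIMEOUTS.externalApi });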
Handling Timeout Errors Gracefully
Differentiate between timeout types in error handling:
app.use((err, req, res, next) => {
if (err.code === 'ETIMEDOUT' || err.code === 'ECONNABORTED') {
return res.status(504).json({
error: 'Gateway Timeout',
message: 'Upstream service timed out',
retryable: true,
});
}
if (err.status === 504) {
return res.status(504).json({
error: 'Gateway Timeout',
message: err.message,
retryable: true,
});
}
next(err);
});
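Because these responses are marked retryable, callers can recover from transient 504s with a bounded retry. A sketch using axios; the attempt count and backoff base are illustrative:
import axios from 'axios';

// Retry transient gateway timeouts with exponential backoff
async function fetchWithRetry(url, retries = 3) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await axios.get(url, { timeout: 30_000 });
    } catch (error) {
      const retryable =
        error.code === 'ECONNABORTED' || error.response?.status === 504;
      if (!retryable || attempt === retries) throw error;
      // Wait 1s, 2s, 4s, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}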
Performance Monitoring
Use APM tools to track request durations and identify bottlenecks:
import * as Sentry from '@sentry/node';
Sentry.init({
dsn: process.env.SENTRY_DSN,
tracesSampleRate: 0.1,
});
app.use(Sentry.Handlers.requestHandler());
app.use(Sentry.Handlers.tracingHandler());
// Your routes here
app.use(Sentry.Handlers.errorHandler());
Network-Level Troubleshooting
If timeouts persist after configuration changes, check network connectivity:
# Test connection latency to upstream
time curl -I https://api.upstream.com
# Check DNS resolution time
time nslookup api.upstream.com
# Monitor active connections
netstat -an | grep ESTABLISHED | wc -l
High latency or connection limits may require infrastructure changes rather than timeout adjustments.