This error occurs when async operations are being queued faster than they can be processed, causing unbounded memory growth and eventual system failure. It's a classic backpressure problem where producers outpace consumers.
This error indicates that your Node.js application is experiencing unbounded concurrency: async operations are being scheduled faster than they can complete, so the queue grows indefinitely. This is one of the more difficult issues to diagnose in Node.js because the symptoms often appear far from the cause.

When async operations pile up without proper flow control, several problems cascade: memory balloons from buffered data and pending promises, garbage collection thrashes trying to manage the heap, CPU spikes as the event loop struggles to keep up, and latency climbs dramatically at the 95th/99th percentiles. Eventually, the application runs out of memory or becomes completely unresponsive.

The root cause is almost always a missing backpressure mechanism: nothing signals upstream producers to slow down when downstream consumers can't keep up. This commonly happens with streams that ignore write() return values, promise chains without concurrency limits, or queue systems that accept work faster than workers can process it.
Check your application logs and monitoring for rapidly growing queue sizes or memory usage. Use Node.js diagnostics to find the culprit:
// Add diagnostics to track pending async operations
const pendingOps = new Set();

function trackAsync(name, promise) {
  const id = Symbol(name); // Symbol guarantees a unique key per call
  pendingOps.add(id);
  console.log(`[${name}] Pending operations: ${pendingOps.size}`);
  return promise.finally(() => pendingOps.delete(id));
}

// Use in your code
trackAsync('db-query', db.query(sql));

Look for patterns where the pending operations count grows without bound. Check streams, database operations, API calls, and queue systems.
If you're using streams, always respect the write() return value and listen for drain events:
// BAD: Ignoring backpressure
readableStream.on('data', (chunk) => {
  writableStream.write(chunk); // No check of the return value
});

// GOOD: Respecting backpressure
readableStream.on('data', (chunk) => {
  const canContinue = writableStream.write(chunk);
  if (!canContinue) {
    // Internal buffer is full, pause reading
    readableStream.pause();
  }
});

writableStream.on('drain', () => {
  // Buffer has drained, resume reading
  readableStream.resume();
});

// BETTER: Use pipe(), which handles backpressure automatically
readableStream.pipe(writableStream);

For Node.js 10+, use pipeline() for better error handling:
const { pipeline } = require('stream');

pipeline(
  readableStream,
  transformStream,
  writableStream,
  (err) => {
    if (err) console.error('Pipeline failed:', err);
  }
);
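On Node.js 15 and later, the promise-based variant from stream/promises lets you await the same pipeline directly; a minimal sketch using the streams from above:

const { pipeline } = require('stream/promises');

async function run() {
  try {
    // Resolves when all data has flowed through; rejects on any stream error
    await pipeline(readableStream, transformStream, writableStream);
    console.log('Pipeline succeeded');
  } catch (err) {
    console.error('Pipeline failed:', err);
  }
}

run();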
Use a concurrency control library to limit how many promises run simultaneously:

// BAD: Processing all items at once
const results = await Promise.all(
  items.map(item => processItem(item))
); // Can queue thousands of operations

// GOOD: Use p-limit for concurrency control
const pLimit = require('p-limit');
const limit = pLimit(5); // Only 5 concurrent operations

const results = await Promise.all(
  items.map(item => limit(() => processItem(item)))
);

Or use p-queue for more advanced scenarios:
const PQueue = require('p-queue').default;

const queue = new PQueue({
  concurrency: 5,
  interval: 1000,
  intervalCap: 10 // Rate limiting: max 10 per second
});

for (const item of items) {
  queue.add(() => processItem(item));
}

await queue.onIdle(); // Wait for all to complete

If using async queue libraries, set explicit size limits and handle overflow:
const async = require('async');

const queue = async.queue(async (task) => {
  await processTask(task);
}, 5); // 5 concurrent workers

// Set a maximum queue size
const MAX_QUEUE_SIZE = 100;

function addToQueue(task) {
  if (queue.length() >= MAX_QUEUE_SIZE) {
    throw new Error('Queue size exceeded - rejecting new work');
  }
  return new Promise((resolve, reject) => {
    queue.push(task, (err, result) => {
      if (err) reject(err);
      else resolve(result);
    });
  });
}

For production systems, consider using a proper message queue like Bull with Redis:
const Queue = require('bull');

const myQueue = new Queue('work', {
  redis: { port: 6379, host: '127.0.0.1' },
  limiter: {
    max: 100,      // Max 100 jobs
    duration: 1000 // per second
  }
});

myQueue.process(5, async (job) => {
  return processWork(job.data);
});
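On the producer side, jobs are persisted in Redis until a worker picks them up, so bursts survive restarts while the limiter above throttles processing. A sketch (the payload and retry options are illustrative):

// Enqueue work; Bull stores the job in Redis until a worker is free
await myQueue.add({ userId: 42 }, {
  attempts: 3, // Retry a failed job up to 3 times
  backoff: { type: 'exponential', delay: 1000 } // Grow the wait between retries
});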
Implement rate limiting for database queries, API calls, or any resource-intensive operations:

const Bottleneck = require('bottleneck');

// Limit to 10 concurrent operations, max 50 per minute
const limiter = new Bottleneck({
  maxConcurrent: 10,
  minTime: 1200,  // Minimum 1.2s between operations (50/min)
  reservoir: 50,  // Initial capacity
  reservoirRefreshAmount: 50,
  reservoirRefreshInterval: 60 * 1000 // Refill every minute
});

// Wrap your async functions
const rateLimitedDbQuery = limiter.wrap(async (query) => {
  return await db.execute(query);
});

// Use throughout your app
const result = await rateLimitedDbQuery('SELECT * FROM users');

Add monitoring and automatic circuit breaking to prevent cascading failures:
class MonitoredQueue {
  constructor(maxSize = 1000, concurrency = 10) {
    this.maxSize = maxSize;
    this.queue = [];
    this.processing = 0;
    this.concurrency = concurrency;
    this.circuitOpen = false;
  }

  // Synchronous: either enqueues the task or throws immediately
  add(task) {
    if (this.circuitOpen) {
      throw new Error('Circuit breaker open - queue overloaded');
    }
    if (this.queue.length >= this.maxSize) {
      this.circuitOpen = true;
      setTimeout(() => { this.circuitOpen = false; }, 60000); // Reset after 1 min
      throw new Error('Queue overflow - circuit breaker activated');
    }
    this.queue.push(task);
    this.processNext();
  }

  async processNext() {
    if (this.processing >= this.concurrency || this.queue.length === 0) {
      return;
    }
    this.processing++;
    const task = this.queue.shift();
    try {
      await task();
    } catch (err) {
      // Without this catch, a failing task becomes an unhandled rejection
      console.error('Task failed:', err);
    } finally {
      this.processing--;
      this.processNext();
    }
  }

  getStats() {
    return {
      queueSize: this.queue.length,
      processing: this.processing,
      circuitOpen: this.circuitOpen
    };
  }
}
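A minimal usage sketch for the class above (processItem and item are placeholders; wire getStats() into whatever metrics system you already run):

const queue = new MonitoredQueue(1000, 10);

// Surface queue health periodically; replace console.log with real metrics
setInterval(() => {
  console.log(queue.getStats());
}, 5000);

try {
  queue.add(() => processItem(item));
} catch (err) {
  // Overflow or open circuit: shed the work and tell callers to back off
  console.error(err.message);
}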
Understanding the Feedback Loop: The vicious cycle of queue overflow occurs because as memory grows from queued operations, garbage collection becomes more expensive. This increases GC pauses, which slows down work completion, causing more operations to queue, growing memory further. Breaking this cycle requires proactive rate limiting before the queue grows.

Choosing the Right Concurrency Limit: The optimal concurrency limit depends on your workload. For I/O-bound operations (API calls, database queries), you can typically handle 10-50 concurrent operations. For CPU-bound operations, limit concurrency to the number of CPU cores. Monitor your P99 latency and memory usage to find the sweet spot.
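A sketch of deriving those starting points in code (the I/O figure is the heuristic from this section, not a universal constant):

const os = require('os');
const pLimit = require('p-limit');

// CPU-bound work: one task per core is a sensible ceiling
const cpuLimit = pLimit(os.cpus().length);

// I/O-bound work: start in the 10-50 range and tune against P99 latency
const ioLimit = pLimit(20);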
Stream Transform Considerations: When using Transform streams with async operations, Node.js doesn't automatically handle backpressure for the async work. You need to use the callback parameter properly: call it only after the async operation completes, and call it with an error if the operation fails. This ensures the stream's internal buffering system works correctly.
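For example, an object-mode Transform doing async work per chunk, where the callback is invoked only after the work settles (enrichRecord is a hypothetical async function):

const { Transform } = require('stream');

const asyncTransform = new Transform({
  objectMode: true,
  transform(chunk, encoding, callback) {
    // Signal completion only when the async work settles, so the stream's
    // internal buffering - and therefore backpressure - stays correct
    enrichRecord(chunk)
      .then(result => callback(null, result))
      .catch(err => callback(err));
  }
});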
Event Loop Monitoring: Use process.hrtime() or libraries like event-loop-stats to monitor event loop lag. If lag exceeds 100ms consistently, you have too much work queued. Consider using setImmediate() to break up long-running operations and give the event loop breathing room.
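A rough lag probe along those lines (the 100ms threshold mirrors the guidance above; treat it as a tuning starting point):

// A timer scheduled every 100ms should fire roughly on time;
// any overshoot is time the event loop spent blocked on other work
const INTERVAL_MS = 100;
let last = process.hrtime.bigint();

setInterval(() => {
  const now = process.hrtime.bigint();
  const lagMs = Number(now - last) / 1e6 - INTERVAL_MS;
  last = now;
  if (lagMs > 100) {
    console.warn(`Event loop lag ${lagMs.toFixed(1)}ms - too much queued work`);
  }
}, INTERVAL_MS);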
Memory Profiling: Use Node.js built-in heap snapshots (--inspect flag and Chrome DevTools) to identify what objects are consuming memory during queue overflow. Often you'll find thousands of pending Promise objects or closure references keeping data in memory. Tools like clinic.js can automate this analysis.
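Snapshots can also be captured programmatically when the queue looks unhealthy; a sketch using the built-in v8 module (the size threshold is illustrative, and writeHeapSnapshot() blocks the event loop while it runs):

const v8 = require('v8');

let snapshotTaken = false;

function maybeDumpHeap(queueSize) {
  // Capture at most one snapshot, and only once pending work explodes
  if (!snapshotTaken && queueSize > 10000) {
    snapshotTaken = true;
    const file = v8.writeHeapSnapshot(); // Returns the generated filename
    console.warn(`Heap snapshot written to ${file} - inspect it in Chrome DevTools`);
  }
}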
Graceful Degradation: In production, implement graceful degradation when queues approach limits: reject new work with 503 Service Unavailable, implement exponential backoff for retries, shed less critical work first, and alert operations teams before complete failure occurs.
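In an Express app, that shedding step might look like the following sketch (the thresholds and MonitoredQueue wiring are illustrative):

const express = require('express');
const app = express();
const queue = new MonitoredQueue(1000, 10); // From the circuit-breaker example above

// Load-shedding middleware: reject new work before the queue tips over
app.use((req, res, next) => {
  const { queueSize, circuitOpen } = queue.getStats();
  if (circuitOpen || queueSize > 900) {
    res.set('Retry-After', '30'); // Hint for clients to back off before retrying
    return res.status(503).send('Service temporarily overloaded');
  }
  next();
});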