This error occurs when a Node.js writable stream fails to emit the drain event after write() returns false, indicating a backpressure handling failure. The stream's internal buffer is full but never signals when it's safe to resume writing, leading to frozen writes, memory buildup, or application hangs. Proper backpressure handling requires correctly listening for and responding to the drain event.
In Node.js streams, backpressure is a built-in flow control mechanism that prevents overwhelming the system with data faster than it can be consumed. When you write data to a writable stream using stream.write(chunk), it returns a boolean. If it returns false, the internal buffer has exceeded the highWaterMark threshold, and you should stop writing until the buffer drains. The stream signals this by emitting a "drain" event when it's ready to accept more data. This error indicates that write() returned false but the drain event was never emitted, leaving your code waiting indefinitely. This can happen due to bugs in custom stream implementations, improper handling of the stream lifecycle, race conditions in event listeners, or issues with the underlying destination that prevent the buffer from being flushed.
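As a minimal sketch of that contract (the chunks array below is just a stand-in for a real data source), a write loop that honors backpressure waits for drain whenever write() returns false:

const { once } = require('events');
const fs = require('fs');

// Sketch: write a sequence of chunks while respecting backpressure.
async function writeAll(chunks) {
  const writable = fs.createWriteStream('output.txt');
  for (const chunk of chunks) {
    if (!writable.write(chunk)) {
      // write() returned false: the buffer hit highWaterMark,
      // so wait for the 'drain' event before writing more.
      await once(writable, 'drain');
    }
  }
  writable.end();
}

writeAll(['chunk1\n', 'chunk2\n', 'chunk3\n']).catch(console.error);

The fixes below cover the same idea with explicit listeners, along with the common reasons drain never fires.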
The most common cause is forgetting to listen for the drain event. Attach the listener before you start writing data. If write() returns false, stop writing and resume only when drain fires.
const fs = require('fs');
const writable = fs.createWriteStream('output.txt');
// Attach drain listener BEFORE writing
writable.on('drain', () => {
  console.log('Buffer drained, safe to write again');
  // Resume writing here
});

function writeData(data) {
  const canContinue = writable.write(data);
  if (!canContinue) {
    console.log('Backpressure applied, waiting for drain event');
    // Stop writing and wait for drain event
  }
}

When write() returns false, you must pause writing and resume only after the drain event. Use a state variable or pause the data source until drain fires.
const writable = fs.createWriteStream('output.txt');
let canWrite = true;
writable.on('drain', () => {
  canWrite = true;
  processNextChunk(); // Resume processing
});

function writeChunk(chunk) {
  if (canWrite) {
    canWrite = writable.write(chunk);
    if (!canWrite) {
      console.log('Buffer full, waiting for drain');
    }
  } else {
    console.log('Still waiting for drain, queueing chunk');
  }
}

Instead of manually managing backpressure, use pipe(), which automatically handles the drain event and pauses the source when backpressure occurs. This is the recommended approach for most stream connections.
const fs = require('fs');
const readable = fs.createReadStream('input.txt');
const writable = fs.createWriteStream('output.txt');
// pipe() automatically handles backpressure
readable.pipe(writable);
readable.on('error', (err) => console.error('Read error:', err));
writable.on('error', (err) => console.error('Write error:', err));
writable.on('finish', () => console.log('Write completed'));

If you're implementing a custom Writable stream, ensure you call callback() in your _write() method. Node.js emits drain automatically once the buffered chunks have been flushed after write() returned false, but it can only flush them if every _write() call signals completion via its callback.
const { Writable } = require('stream');
class MyWritable extends Writable {
  _write(chunk, encoding, callback) {
    // Process the chunk
    doSomethingAsync(chunk, (err) => {
      if (err) {
        callback(err); // Signal error
      } else {
        callback(); // Signal completion - this allows drain to emit
      }
    });
  }
}

const writable = new MyWritable({ highWaterMark: 16 * 1024 });

Attach the drain listener before you start writing. If the drain event fires before you attach the listener, you'll miss it and wait forever. Also ensure you don't remove the listener prematurely.
const writable = fs.createWriteStream('output.txt');
// WRONG - listener attached after writing starts
function badExample(data) {
  writable.write(data);
  writable.on('drain', () => { /* might be too late */ });
}

// CORRECT - listener attached first
function goodExample(data) {
  writable.on('drain', handleDrain);
  writable.write(data);
}

The underlying destination might fail without emitting drain. Add error and close handlers to detect when the stream becomes unusable and handle the situation gracefully.
writable.on('error', (err) => {
  console.error('Stream error:', err);
  // Clean up and handle error
});

writable.on('close', () => {
  console.log('Stream closed');
  // Stop waiting for drain if stream is closed
});

writable.on('finish', () => {
  console.log('All writes completed');
});

Implement a timeout mechanism to detect when drain never fires. This helps diagnose the issue and prevents indefinite hangs.
function writeWithTimeout(stream, data, timeout = 5000) {
  return new Promise((resolve, reject) => {
    const onDrain = () => {
      clearTimeout(timer);
      resolve();
    };
    const timer = setTimeout(() => {
      // Remove the stale drain listener so it can't resolve later
      stream.removeListener('drain', onDrain);
      reject(new Error('Drain event timeout - backpressure not resolving'));
    }, timeout);
    const canWrite = stream.write(data);
    if (canWrite) {
      clearTimeout(timer);
      resolve();
    } else {
      stream.once('drain', onDrain);
    }
  });
}

This issue is particularly tricky because it often manifests as a silent hang rather than an immediate error. Understanding the internal stream state can help debug: the _writableState.needDrain flag indicates whether the stream needs to emit drain. In older Node.js versions (pre-v6), there was a documented race condition where rapid successive writes could reset this flag incorrectly, preventing drain from firing. Modern Node.js versions have fixed this, but custom stream implementations can still exhibit the bug.

For Transform streams, the issue is more complex because they have both readable and writable sides: if the readable side isn't being consumed (no one is reading), the writable side can't drain, creating a deadlock. Always ensure downstream consumers are actively reading. When debugging, use Node.js's built-in stream utilities like stream.finished() and stream.pipeline(), which provide better error handling and automatic cleanup (see the sketch below).

The highWaterMark setting affects when backpressure kicks in: lower values (like 16KB) trigger backpressure more frequently but provide finer flow control, while higher values (like 1MB) reduce event overhead but can consume more memory. For network streams (sockets), drain failures often indicate the remote peer has stopped consuming data or the connection has stalled. Consider implementing application-level timeouts and health checks for long-lived streams.
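As a rough illustration of that advice (the file names and the uppercase transform are placeholders, not part of the original error scenario), stream.pipeline() wires the stages together, propagates backpressure between them, and reports errors in a single callback:

const { pipeline, Transform } = require('stream');
const fs = require('fs');

// Placeholder transform: uppercases each chunk. pipeline() keeps the
// readable side consumed by the next stage, so the writable side can drain.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

pipeline(
  fs.createReadStream('input.txt'),
  upperCase,
  fs.createWriteStream('output.txt'),
  (err) => {
    if (err) {
      console.error('Pipeline failed:', err);
    } else {
      console.log('Pipeline succeeded');
    }
  }
);

pipeline() also destroys every stream in the chain when any of them fails, which avoids leaked file descriptors and dangling drain listeners.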