This error occurs when data is being written to a Node.js stream faster than it can be consumed, causing the internal buffer to fill up. The stream is experiencing backpressure because the destination cannot drain data quickly enough to keep up with the source.
Backpressure in Node.js streams is a buildup of data behind a buffer during data transfer. It happens when the receiving end performs slow or expensive work, so data from the incoming source accumulates faster than it can be consumed. When you write data to a writable stream using `write()`, it returns a boolean value. If it returns `false`, the internal buffer has exceeded the `highWaterMark` threshold (default 16KB), and you should stop writing more data until the buffer drains. Ignoring this signal can lead to excessive memory usage and eventually cause the application to crash or hang. The backpressure mechanism is Node.js's way of implementing flow control to prevent unbounded memory growth when processing large amounts of data through streams.
Always check if write() returns false and stop writing until the drain event fires:
const fs = require('fs');

function writeWithBackpressure(readable, writable) {
  readable.on('data', (chunk) => {
    const canContinue = writable.write(chunk);
    if (!canContinue) {
      // Buffer is full, pause the readable stream
      readable.pause();
      console.log('Backpressure detected - pausing read stream');
      // Wait for drain event before resuming
      writable.once('drain', () => {
        console.log('Buffer drained - resuming read stream');
        readable.resume();
      });
    }
  });
  readable.on('end', () => {
    writable.end();
  });
}

const readStream = fs.createReadStream('large-file.txt');
const writeStream = fs.createWriteStream('output.txt');
writeWithBackpressure(readStream, writeStream);

This pattern ensures you respect the stream's buffer capacity and prevent memory issues.
The simplest solution is to use pipe(), which handles backpressure automatically:
const fs = require('fs');

const readStream = fs.createReadStream('large-file.txt');
const writeStream = fs.createWriteStream('output.txt');

// pipe() automatically handles backpressure
readStream.pipe(writeStream);

writeStream.on('finish', () => {
  console.log('Write completed successfully');
});

// pipe() does not forward errors, so handle them on both streams
readStream.on('error', (err) => {
  console.error('Read error:', err);
});
writeStream.on('error', (err) => {
  console.error('Write error:', err);
});

The pipe() method automatically pauses the readable stream when the writable stream's buffer is full and resumes it when the buffer drains.
For modern async code, implement backpressure handling with promises:
const fs = require('fs');
const { pipeline } = require('stream/promises');

async function writeWithBackpressureAsync(readable, writable) {
  for await (const chunk of readable) {
    const canContinue = writable.write(chunk);
    if (!canContinue) {
      // Wait for drain event
      await new Promise(resolve => writable.once('drain', resolve));
    }
  }
  writable.end();
}

// Or use the built-in pipeline for automatic handling
async function copyFile() {
  try {
    await pipeline(
      fs.createReadStream('large-file.txt'),
      fs.createWriteStream('output.txt')
    );
    console.log('Pipeline succeeded');
  } catch (err) {
    console.error('Pipeline failed:', err);
  }
}

copyFile();

The stream/promises pipeline automatically manages backpressure and error handling.
If your streams consistently hit backpressure, consider adjusting the buffer size:
const fs = require('fs');

const readStream = fs.createReadStream('large-file.txt', {
  highWaterMark: 64 * 1024 // 64KB (this is already the default for fs read streams)
});
const writeStream = fs.createWriteStream('output.txt', {
  highWaterMark: 64 * 1024 // 64KB instead of the 16KB default, matching the reader
});

readStream.pipe(writeStream);

Note: Increasing highWaterMark uses more memory but can reduce the frequency of backpressure events. Only adjust this if profiling shows it's beneficial for your specific use case.
Always handle stream errors to prevent silent failures:
const fs = require('fs');
const { pipeline } = require('stream');

const readStream = fs.createReadStream('input.txt');
const writeStream = fs.createWriteStream('output.txt');

pipeline(
  readStream,
  writeStream,
  (err) => {
    if (err) {
      console.error('Pipeline error:', err);
      // Cleanup or retry logic here
    } else {
      console.log('Pipeline completed successfully');
    }
  }
);

// Or with stream/promises
const { pipeline: pipelineAsync } = require('stream/promises');

async function safePipeline() {
  try {
    await pipelineAsync(
      fs.createReadStream('input.txt'),
      fs.createWriteStream('output.txt')
    );
  } catch (err) {
    console.error('Pipeline failed:', err);
    throw err;
  }
}

The pipeline utility properly handles cleanup and error propagation across all streams.
Understanding highWaterMark: The highWaterMark option sets the buffer size threshold. For readable streams, it controls how much data is buffered from the source; for writable streams, it controls when write() starts returning false. The default is 16KB for byte streams (fs.createReadStream uses a larger 64KB default) and 16 objects for object mode streams.
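If you are unsure which threshold a given stream instance is using, you can read it directly off the stream. A minimal sketch, using PassThrough streams purely for illustration:

const { PassThrough } = require('stream');

const byteStream = new PassThrough();                       // byte stream, 16KB default
const objectStream = new PassThrough({ objectMode: true }); // object mode, counted in objects

console.log(byteStream.writableHighWaterMark);   // 16384 (bytes)
console.log(byteStream.readableHighWaterMark);   // 16384 (bytes)
console.log(objectStream.writableHighWaterMark); // 16 (objects)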
Transform Streams and Backpressure: When creating custom transform streams, the _transform() implementation must invoke the provided callback to signal that the chunk has been processed. Failing to do so will cause backpressure to build up indefinitely:
const { Transform } = require('stream');

const uppercase = new Transform({
  transform(chunk, encoding, callback) {
    this.push(chunk.toString().toUpperCase());
    callback(); // Must call this!
  }
});

Multiple Destinations: When piping one readable stream to multiple writable streams, backpressure from the slowest stream will affect all of them. Consider using separate readable streams or implementing custom multiplexing logic.
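As a rough illustration of the trade-off (file names are placeholders): piping one source to two destinations ties the read pace to the slower writer, while opening independent read streams lets each copy proceed at its own speed at the cost of reading the source twice.

const fs = require('fs');

// Option 1: one source, two destinations - the slower writer sets the pace for both
const source = fs.createReadStream('large-file.txt');
source.pipe(fs.createWriteStream('copy-a.txt'));
source.pipe(fs.createWriteStream('copy-b.txt'));

// Option 2: independent read streams - each copy drains at its own speed
fs.createReadStream('large-file.txt').pipe(fs.createWriteStream('copy-a.txt'));
fs.createReadStream('large-file.txt').pipe(fs.createWriteStream('copy-b.txt'));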
Object Mode Streams: For object mode streams (objectMode: true), highWaterMark counts objects, not bytes. Adjust it based on the size of your objects.
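A minimal object mode sketch: the highWaterMark of 100 here means "buffer up to 100 objects before write() returns false", and the record shape is purely illustrative.

const { Writable } = require('stream');

const recordSink = new Writable({
  objectMode: true,
  highWaterMark: 100, // buffer at most 100 objects before signalling backpressure
  write(record, encoding, callback) {
    // Process or persist the record, then signal completion
    console.log('processed record', record.id);
    callback();
  }
});

recordSink.write({ id: 1 }); // returns true until 100 objects are queued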
Monitoring Backpressure: You can monitor backpressure by listening for pause and resume events on readable streams and by checking writable.writableLength to see how much data is currently queued in the write buffer.
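One possible way to observe this during a pipe(), with placeholder file names: pipe() pauses the source whenever the destination signals backpressure, so the pause and resume events roughly track backpressure cycles.

const fs = require('fs');

const readStream = fs.createReadStream('large-file.txt');
const writeStream = fs.createWriteStream('output.txt');

readStream.on('pause', () => {
  console.log('paused; bytes queued in write buffer:', writeStream.writableLength);
});
readStream.on('resume', () => {
  console.log('resumed reading');
});

readStream.pipe(writeStream);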
Performance Considerations: While backpressure prevents memory issues, it can slow down processing. If your writable stream involves slow I/O (like network requests), consider batching operations or using worker threads for CPU-intensive transforms to maintain throughput.
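As one illustrative batching approach (BatchTransform and batchSize are made-up names, not a built-in API), grouping small chunks into larger writes can reduce per-write overhead when the destination pays a fixed cost per operation:

const { Transform } = require('stream');

// Illustrative batching transform: collects incoming chunks and emits them
// as one larger buffer, so the slow destination sees fewer, bigger writes.
class BatchTransform extends Transform {
  constructor(batchSize = 16) {
    super();
    this.batchSize = batchSize;
    this.pending = [];
  }
  _transform(chunk, encoding, callback) {
    this.pending.push(chunk);
    if (this.pending.length >= this.batchSize) {
      this.push(Buffer.concat(this.pending));
      this.pending = [];
    }
    callback();
  }
  _flush(callback) {
    // Emit whatever is left when the source ends
    if (this.pending.length > 0) {
      this.push(Buffer.concat(this.pending));
    }
    callback();
  }
}

// Usage sketch: readStream.pipe(new BatchTransform(32)).pipe(slowDestination);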