This error occurs when you configure a Node.js stream with an invalid highWaterMark value. highWaterMark controls the internal buffer threshold for stream backpressure, and it must be a non-negative integer (0 or greater). Invalid values such as negative numbers, non-integers, strings, or numbers exceeding 1GiB will trigger this RangeError.
The highWaterMark option is a critical parameter that controls how much data a Node.js stream buffers internally before applying backpressure (pausing the incoming data flow). When you create a readable, writable, or transform stream, you can pass a highWaterMark option to customize this threshold. Node.js validates this value strictly: it must be a non-negative integer (0 or any positive whole number) and cannot exceed 1GiB. If you pass an invalid value, such as a negative number, a non-integer, a string, or a value larger than 1GiB, Node.js throws a RangeError to prevent misconfiguration of your stream backpressure system.
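A minimal way to reproduce the failure (the file name here is illustrative):
const fs = require('fs');

// A negative value fails Node's highWaterMark validation and triggers the error described above
fs.createReadStream('file.txt', { highWaterMark: -1 });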
Check the stream creation code and ensure highWaterMark is a whole number greater than or equal to 0. Common valid values are:
const fs = require('fs');

// Default: fs.createReadStream uses 64KB (65536 bytes); general-purpose streams default to 16KB (16384 bytes)
const readable = fs.createReadStream('file.txt');
// Explicit positive value (32KB)
const readable32k = fs.createReadStream('file.txt', { highWaterMark: 32768 });
// For object mode (buffering objects instead of bytes), use a stream that supports objectMode; fs streams are byte-only
const { Transform } = require('stream');
const transform = new Transform({
  objectMode: true,
  highWaterMark: 16, // 16 objects, not bytes
  transform(obj, encoding, callback) {
    callback(null, obj); // pass objects through unchanged
  }
});
// Minimum valid value (0)
const writable = fs.createWriteStream('output.txt', { highWaterMark: 0 });

If you're reading highWaterMark from a configuration file, environment variable, or dynamic source, ensure it's converted to a number. Strings will cause the validation to fail:
// WRONG: String will cause RangeError
const config = { highWaterMark: '16384' }; // String, not number
const stream = fs.createReadStream('file.txt', config);
// CORRECT: Parse to integer
const config = { highWaterMark: parseInt('16384', 10) };
const stream = fs.createReadStream('file.txt', config);
// CORRECT: Load from environment (also ensure it's valid)
const highWaterMark = process.env.STREAM_BUFFER_SIZE
  ? parseInt(process.env.STREAM_BUFFER_SIZE, 10)
  : 16384;
const stream = fs.createReadStream('file.txt', { highWaterMark });

If calculating highWaterMark dynamically, ensure the result is always non-negative. Use Math.max() to guard against accidentally negative values:
// WRONG: Subtraction can result in negative
let highWaterMark = baseSize - bufferUsed; // Could be negative
const stream = fs.createReadStream('file.txt', { highWaterMark });
// CORRECT: Use Math.max to ensure non-negative
let highWaterMark = Math.max(0, baseSize - bufferUsed);
const stream = fs.createReadStream('file.txt', { highWaterMark });
// CORRECT: Always clamp to valid range
const clampHighWaterMark = (value) => {
  const MAX_HIGH_WATER_MARK = 1073741824; // 1GiB
  // Floor to an integer, then clamp into the accepted range
  return Math.max(0, Math.min(Math.floor(value), MAX_HIGH_WATER_MARK));
};
const stream = fs.createReadStream('file.txt', {
  highWaterMark: clampHighWaterMark(userValue)
});

For robust applications, validate highWaterMark before passing it to stream constructors. Use a helper function:
function validateHighWaterMark(value) {
  if (typeof value !== 'number') {
    throw new TypeError(`highWaterMark must be a number, got ${typeof value}`);
  }
  if (!Number.isInteger(value)) {
    throw new RangeError(`highWaterMark must be an integer, got ${value}`);
  }
  if (value < 0) {
    throw new RangeError(`highWaterMark must be non-negative, got ${value}`);
  }
  if (value > 1073741824) {
    throw new RangeError(`highWaterMark exceeds the 1GiB limit: ${value}`);
  }
  return value;
}
// Use in stream creation
try {
  const hwm = validateHighWaterMark(userConfig.highWaterMark);
  const stream = fs.createReadStream('file.txt', { highWaterMark: hwm });
} catch (err) {
  console.error('Invalid stream configuration:', err.message);
}

If the error comes from a third-party library (like axios, node-fetch, or custom stream wrappers), consult the library's documentation for the correct format and constraints:
// Example: axios with streams
import axios from 'axios';
import fs from 'fs';
// WRONG: Passing highWaterMark as string
axios({
  method: 'get',
  url: 'https://example.com/file.bin',
  responseType: 'stream',
  httpAgent: { highWaterMark: '65536' } // Invalid: a plain object instead of an http.Agent, and the value is a string
});
// CORRECT: Verify the library accepts highWaterMark and pass it as a number
axios({
  method: 'get',
  url: 'https://example.com/file.bin',
  responseType: 'stream',
  maxRedirects: 5,
  maxContentLength: Infinity
  // highWaterMark might be set on the response stream, not in axios options
});
// For custom stream wrapper
class StreamWrapper {
  constructor(options = {}) {
    const hwm = options.highWaterMark ?? 16384; // ?? preserves an explicit 0, unlike ||
    if (!Number.isInteger(hwm) || hwm < 0) {
      throw new RangeError('highWaterMark must be a non-negative integer');
    }
    this.stream = fs.createReadStream(options.path, { highWaterMark: hwm });
  }
}

Understanding highWaterMark's Role in Backpressure:
highWaterMark is not a hard memory limit but a signal threshold. When a readable stream's internal buffer reaches this value, it stops pulling data from its underlying source; when a writable stream's buffered data exceeds it, write() returns false and the stream emits 'drain' once the buffer empties. This prevents memory exhaustion in scenarios where data arrives faster than it can be consumed (e.g., reading a large file and sending it over a slow network).
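A minimal sketch of that signal in action, using an assumed 8-byte threshold and an artificially slow sink:
const { Writable } = require('stream');

// A deliberately slow sink: each chunk takes 100 ms to "process"
const slowSink = new Writable({
  highWaterMark: 8, // 8-byte threshold (illustrative)
  write(chunk, encoding, callback) {
    setTimeout(callback, 100);
  }
});

console.log(slowSink.write('12345'));  // true: 5 bytes pending, still under the threshold
console.log(slowSink.write('678910')); // false: 11 bytes pending, threshold exceeded
slowSink.once('drain', () => console.log('buffer drained, safe to write again'));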
objectMode vs. Binary Mode:
In binary mode (default), highWaterMark specifies bytes. In objectMode (for streams of objects), highWaterMark specifies the count of objects. This distinction is crucial: a single large object in objectMode can use far more memory than its object count suggests. If buffering large objects, carefully tune objectMode streams to avoid memory pressure.
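For illustration, a sketch of an objectMode readable whose highWaterMark counts objects rather than bytes (the values are made up):
const { Readable } = require('stream');

// highWaterMark: 2 means "buffer roughly two objects ahead", regardless of how large each object is
const objectStream = Readable.from(
  [{ id: 1 }, { id: 2 }, { id: 3 }],
  { objectMode: true, highWaterMark: 2 }
);
objectStream.on('data', (obj) => console.log(obj.id));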
Performance Tuning:
The defaults (16KB for general-purpose streams, 64KB for fs streams) are appropriate for most use cases, but high-throughput scenarios (10+ MB/s) may benefit from larger values (32KB to 256KB) to reduce event loop iterations. Conversely, memory-constrained environments may use smaller values (1KB to 4KB). Always test with realistic workloads before changing defaults.
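For example, a larger buffer for bulk file copies and a smaller one for constrained environments might look like this (the sizes are illustrative, not recommendations):
const fs = require('fs');

// High-throughput bulk copy: larger buffer, fewer read operations
const bulk = fs.createReadStream('large-file.bin', { highWaterMark: 256 * 1024 }); // 256KB

// Memory-constrained environment: smaller buffer
const constrained = fs.createReadStream('large-file.bin', { highWaterMark: 4 * 1024 }); // 4KB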
Preventing Backpressure Issues:
Always handle the return value of writable.write(). If it returns false, stop writing and wait for the 'drain' event:
const readable = fs.createReadStream('large-file.bin');
const writable = fs.createWriteStream('output.bin');
readable.on('data', (chunk) => {
  const canContinue = writable.write(chunk);
  if (!canContinue) {
    readable.pause(); // Stop reading until 'drain'
  }
});
writable.on('drain', () => {
  readable.resume(); // Resume reading
});

Alternatively, use pipe(), which handles backpressure automatically:
fs.createReadStream('large-file.bin').pipe(fs.createWriteStream('output.bin'));
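A related option: stream.pipeline() also handles backpressure and additionally forwards errors and cleans up the streams when the transfer ends or fails:
const fs = require('fs');
const { pipeline } = require('stream');

pipeline(
  fs.createReadStream('large-file.bin'),
  fs.createWriteStream('output.bin'),
  (err) => {
    if (err) console.error('Pipeline failed:', err);
  }
);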