The maxBuffer exceeded error occurs when a child process produces more output than the buffer can hold. The default buffer size is 1MB, and when stdout or stderr data exceeds this limit, Node.js terminates the child process and throws an error. This commonly happens when executing commands that produce large amounts of output.
When you use child_process.exec() or child_process.execFile(), Node.js buffers all output from the child process in memory before handing it to your callback. The maxBuffer option controls the maximum size of this buffer, defaulting to 1MB (1024 * 1024 bytes). If the child process writes more data to stdout or stderr than the limit allows, Node.js kills the process and throws an "ERR_CHILD_PROCESS_STDIO_MAXBUFFER" error. This is a safety mechanism that prevents a noisy child process from consuming unbounded memory in your application. The error means your command produced too much output for the buffered approach used by exec(): you need either a larger buffer or a streaming approach.
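To see the mechanism in action, here is a minimal sketch that reproduces the error by having a child process print roughly 2MB to stdout; the exact error message and error.code may vary slightly between Node.js versions:

const { exec } = require('child_process');

// The child writes ~2MB to stdout, exceeding the default 1MB maxBuffer
const cmd = `node -e "process.stdout.write('x'.repeat(2 * 1024 * 1024))"`;

exec(cmd, (error, stdout) => {
  if (error) {
    // Typically "stdout maxBuffer length exceeded",
    // with error.code === 'ERR_CHILD_PROCESS_STDIO_MAXBUFFER'
    console.error(error.message);
    return;
  }
  console.log(stdout.length);
});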
The best solution for handling large output is to use child_process.spawn() instead of exec(). spawn() exposes stdout and stderr as streams, allowing you to process output incrementally rather than buffering everything in memory. This is the recommended approach for commands with large or unknown output sizes.
// WRONG - buffers all output in memory
const { exec } = require('child_process');
exec('git log --all', (error, stdout) => {
  // May fail with maxBuffer exceeded
  console.log(stdout);
});

// CORRECT - streams output incrementally
const { spawn } = require('child_process');
const child = spawn('git', ['log', '--all']);
let output = '';
child.stdout.on('data', (data) => {
  output += data.toString();
  // Or process data immediately without accumulating
});
child.stderr.on('data', (data) => {
  console.error(data.toString());
});
child.on('close', (code) => {
  if (code === 0) {
    console.log('Command completed successfully');
    console.log(output);
  } else {
    console.error(`Process exited with code ${code}`);
  }
});

If you need to use exec() and the command produces predictable output that fits in memory, you can increase the maxBuffer option. However, this is not recommended for very large output, since the entire output is still held in RAM. Set maxBuffer to a value comfortably larger than your expected output size.
const { exec } = require('child_process');

// Increase buffer to 10MB
exec('npm install', { maxBuffer: 10 * 1024 * 1024 }, (error, stdout, stderr) => {
  if (error) {
    console.error(`Error: ${error.message}`);
    return;
  }
  console.log(stdout);
});

Note: This approach still buffers all output in memory, so it's not suitable for very large outputs or when the output size is unknown.
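If your codebase uses promises rather than callbacks, the same option works with the promisified form of exec() via util.promisify; a brief sketch:

const util = require('util');
const { exec } = require('child_process');
const execAsync = util.promisify(exec);

async function run() {
  try {
    // maxBuffer is passed the same way as in the callback form
    const { stdout } = await execAsync('npm install', { maxBuffer: 10 * 1024 * 1024 });
    console.log(stdout);
  } catch (error) {
    // A maxBuffer overflow rejects the promise
    console.error(`Error: ${error.message}`);
  }
}

run();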
For extremely large output that you don't need to process in Node.js, pipe the child process output directly to a file. This avoids buffering in memory entirely.
const { spawn } = require('child_process');
const fs = require('fs');

const child = spawn('docker', ['logs', 'my-container']);
const logFile = fs.createWriteStream('container-logs.txt');

// Pass { end: false } so that whichever stream finishes first
// does not close the file before the other has finished writing
child.stdout.pipe(logFile, { end: false });
child.stderr.pipe(logFile, { end: false });

child.on('close', (code) => {
  logFile.end();
  console.log(`Logs written to file. Exit code: ${code}`);
});

If possible, modify the command to produce less output. Use quiet flags, disable verbose logging, or filter output to only what you need. This is often the most efficient solution.
const { exec } = require('child_process');

// Add flags to reduce output
exec('npm install --silent', (error, stdout, stderr) => {
  // Less output, less likely to exceed the buffer
  console.log(stdout);
});

// Or limit git log output
exec('git log --oneline --max-count=100', (error, stdout) => {
  // Limited output instead of the entire history
  console.log(stdout);
});

For real-time processing, or when you need to react to output as it arrives, handle data events directly without accumulating all output. This is ideal for monitoring progress, parsing logs, or extracting specific information.
const { spawn } = require('child_process');

const child = spawn('npm', ['test']);

child.stdout.on('data', (data) => {
  // Process each chunk immediately; note that a chunk can end
  // mid-line, so production code should buffer partial lines
  const lines = data.toString().split('\n');
  lines.forEach(line => {
    if (line.includes('PASS') || line.includes('FAIL')) {
      console.log(line);
    }
  });
});

child.on('close', (code) => {
  console.log(`Tests completed with exit code ${code}`);
});

The fundamental difference between exec() and spawn() is that exec() buffers the entire output before calling the callback, while spawn() provides streaming access to output as it's produced. exec() is built on top of spawn() but adds the buffering layer for convenience. For most production use cases involving external commands, spawn() is the better choice. The default maxBuffer of 1MB is a reasonable balance between memory usage and typical command output sizes, but modern applications often need to handle larger outputs.

When using spawn() with streaming, be mindful of backpressure: if you pipe output to a slow destination (such as a network stream) and the OS pipe buffer fills, the child process will block on writes until Node.js catches up. Note also that maxBuffer applies to stdout and stderr separately, so verbose error output on stderr can trigger the limit even when stdout is small. In some cases you can redirect stderr to stdout in the command itself (using shell redirection) or handle the two streams differently.

For TypeScript users, type the data parameter in event handlers as Buffer rather than string. When debugging maxBuffer issues, log error details with console.error() and check whether stderr is the stream overflowing its buffer.
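As one concrete form of the shell-redirection technique mentioned above, a POSIX-style 2>&1 merges stderr into stdout so that a single handler (and a single buffer) sees all output; a sketch assuming a POSIX shell, with ./build.sh as a hypothetical command used only for illustration:

const { exec } = require('child_process');

// POSIX shells: 2>&1 redirects stderr into stdout, so verbose error
// output arrives through stdout instead of overflowing its own buffer.
// './build.sh' is a hypothetical command used only for illustration.
exec('./build.sh 2>&1', { maxBuffer: 5 * 1024 * 1024 }, (error, stdout) => {
  if (error) {
    console.error(`Error: ${error.message}`);
    return;
  }
  console.log(stdout); // contains both streams, in arrival order
});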