This error occurs when an HTTP/2 connection violates flow control rules by sending more data than the peer is willing to accept. Flow control in HTTP/2 prevents overwhelming the receiver by using a window-based system. The error is raised when the sender exceeds the advertised window size or mismanages window updates, causing the peer to reset the stream or terminate the connection.
HTTP/2 FLOW_CONTROL_ERROR is a protocol-level error that indicates a violation of the HTTP/2 flow control mechanism. Flow control is a fundamental feature of HTTP/2 that manages how much data can be sent at any given time between the client and server. In HTTP/2, flow control windows exist at both the stream level and the connection level. Each side advertises how much data it can receive through SETTINGS frames and WINDOW_UPDATE frames. When one side attempts to send more data than the other side's window allows, the receiver responds with an RST_STREAM frame carrying the FLOW_CONTROL_ERROR code (for stream-level violations) or a GOAWAY frame (for connection-level violations).
This is distinct from buffering issues or application-level errors - it is a strict protocol violation. The HTTP/2 specification (RFC 7540) mandates that any flow control violation result in a stream reset or connection termination. This error typically indicates a bug in how window sizes are being managed, incorrect stream handling, or resource exhaustion scenarios where data is queued faster than it can be consumed.
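To make the accounting concrete, here is a minimal illustrative model (not the actual Node.js or nghttp2 implementation) of how the two windows are tracked: every DATA frame consumes both the stream window and the connection window, WINDOW_UPDATE frames replenish them, and sending a frame larger than what remains is exactly the violation reported as FLOW_CONTROL_ERROR:
// Illustrative model of HTTP/2 window accounting - not the Node.js internals.
// Both the connection and each stream start with a 65,535-byte window.
class FlowControlWindow {
  constructor(initialSize = 65535) {
    this.available = initialSize;
  }
  // Called for every DATA frame sent on this stream/connection
  consume(bytes) {
    if (bytes > this.available) {
      // This is the condition the peer reports as FLOW_CONTROL_ERROR
      throw new Error('FLOW_CONTROL_ERROR: frame exceeds advertised window');
    }
    this.available -= bytes;
  }
  // Called when a WINDOW_UPDATE frame arrives from the peer
  replenish(increment) {
    this.available += increment;
  }
}
const connectionWindow = new FlowControlWindow();
const streamWindow = new FlowControlWindow();
// A DATA frame must fit in BOTH windows before it may be sent
function sendDataFrame(bytes) {
  streamWindow.consume(bytes);
  connectionWindow.consume(bytes);
}
sendDataFrame(16384);              // fine: both windows shrink by 16 KB
connectionWindow.replenish(16384); // WINDOW_UPDATE received for the connection
sendDataFrame(65535);              // throws: the stream window has only 49,151 bytes left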
Add debug logging to see exactly what's happening with flow control:
const http2 = require('http2');
// Enable HTTP/2 debug logging from inside the script; setting NODE_DEBUG on
// the command line (shown below) is the more reliable approach
process.env.NODE_DEBUG = 'http2';
const session = http2.connect('https://example.com', {
  // ... options
});
session.on('error', (err) => {
  console.error('Session error:', err.code, err.message);
});
// Log stream-level errors too
const req = session.request({
  ':path': '/large-file',
  ':method': 'GET',
});
req.on('error', (err) => {
  console.error('Stream error:', err.code, err.message);
});
req.on('data', (chunk) => {
  console.log(`Received chunk: ${chunk.length} bytes`);
});
Run with:
NODE_DEBUG=http2 node your-script.js
This will show window updates, stream state changes, and data flow details that reveal where the violation occurs.
In Node.js, HTTP/2 flow control is tied into the standard stream backpressure mechanism: the Http2Stream stops accepting writes once the peer's window (or the internal buffer) is full. Always check the return value of write() and handle the 'drain' event:
const http2 = require('http2');
const fs = require('fs');
const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.cert'),
});
server.on('stream', (stream, headers) => {
  const largeBuffer = Buffer.alloc(1024 * 1024);   // example payloads
  const anotherBuffer = Buffer.alloc(1024 * 1024);

  // Note: pipe() handles backpressure correctly, so this is safe:
  //   fs.createReadStream('large-file.bin').pipe(stream);

  // WRONG - manual writes without checking backpressure
  stream.write(largeBuffer);
  stream.write(anotherBuffer);

  // CORRECT - respect backpressure
  const shouldContinue = stream.write(largeBuffer);
  if (!shouldContinue) {
    // Wait for drain before writing more
    stream.once('drain', () => {
      stream.write(anotherBuffer);
    });
  }
});
server.listen(8443);
Better approach using proper piping:
const http2 = require('http2');
const fs = require('fs');
const { Transform } = require('stream');
server.on('stream', (stream, headers) => {
  // For a plain file transfer, pipe() alone handles backpressure:
  //   fs.createReadStream('large-file.bin').pipe(stream);

  // For per-chunk processing, insert a Transform stream; backpressure still
  // propagates from the HTTP/2 stream back to the file read stream
  const transform = new Transform({
    transform(chunk, encoding, callback) {
      // Process chunk if needed
      callback(null, chunk);
    }
  });
  fs.createReadStream('data.bin')
    .pipe(transform)
    .pipe(stream)
    .on('error', (err) => console.error('Stream error:', err));
});
Verify the window size settings are appropriate for your use case:
const http2 = require('http2');
const fs = require('fs');
const client = http2.connect('https://example.com');
// Check the current window sizes for the session
console.log('Client session state:', client.state);
// Includes effectiveLocalWindowSize, localWindowSize, remoteWindowSize, etc.

// For large file transfers you may need to increase the window sizes.
// Settings must be in place before streams are opened; on a server the
// simplest way is to pass them to createSecureServer() (they can also be
// applied per session via http2session.settings()):
const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.cert'),
  settings: {
    headerTableSize: 4096,
    enablePush: true,
    initialWindowSize: 65535, // 64KB default - increase for large transfers
    maxFrameSize: 16384,
    maxConcurrentStreams: 100,
    maxHeaderListSize: 8192,
  },
});
server.on('stream', (stream) => {
  // Check the stream-level window state
  console.log('Stream state before write:', stream.state);
  // initialWindowSize applies per stream (default 65,535 bytes); the connection
  // has its own window, which is tracked separately
});
For clients connecting to a server:
const client = http2.connect('https://example.com', {
  // These settings are sent to the remote server
  settings: {
    initialWindowSize: 2097152, // 2MB instead of the 64KB default for faster transfers
    maxConcurrentStreams: 50,
  }
});
Ensure you're not attempting to write to a stream that has already ended or errored:
const http2 = require('http2');
const fs = require('fs');
const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.cert'),
});
server.on('stream', (stream) => {
  let ended = false;
  // Track stream state
  stream.on('close', () => {
    ended = true;
    console.log('Stream closed');
  });
  stream.on('error', (err) => {
    ended = true;
    console.error('Stream error:', err.code);
  });
  // Only write while the stream is open
  function writeData(data) {
    if (ended) {
      console.log('Cannot write to closed stream');
      return false;
    }
    try {
      return stream.write(data);
    } catch (err) {
      console.error('Write failed:', err.message);
      return false;
    }
  }
  // Safe approach: pause the source when the HTTP/2 stream signals backpressure
  const readStream = fs.createReadStream('file.bin');
  readStream.on('data', (chunk) => {
    if (!writeData(chunk)) {
      readStream.pause();
    }
  });
  stream.on('drain', () => {
    readStream.resume();
  });
  readStream.on('end', () => {
    stream.end();
  });
});
Implement proper error handling for flow control and other protocol errors:
const http2 = require('http2');
const fs = require('fs');
const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.cert'),
});
// Handle session-level errors
server.on('sessionError', (err, session) => {
  console.error('Session error:', {
    code: err.code,
    message: err.message,
  });
  // nghttp2 names the violated rule in the error message,
  // e.g. "... NGHTTP2_FLOW_CONTROL_ERROR"
  if (err.message.includes('FLOW_CONTROL')) {
    console.error('Flow control violation detected');
  }
});
// Handle stream-level errors
server.on('stream', (stream, headers) => {
  stream.on('error', (err) => {
    console.error('Stream error:', {
      code: err.code,
      message: err.message,
      streamId: stream.id,
    });
    // Don't try to write to the stream after an error
    if (err.code === 'ERR_HTTP2_STREAM_ERROR') {
      console.error('Stream was closed with a protocol error (e.g. reset by the peer)');
    }
  });
  // Write data safely
  const data = Buffer.alloc(1024 * 1024); // 1MB
  try {
    stream.write(data);
    stream.end();
  } catch (err) {
    console.error('Failed to write:', err.message);
  }
});
server.listen(8443);
For client-side:
const http2 = require('http2');
const client = http2.connect('https://example.com');
// Handle session errors
client.on('error', (err) => {
  console.error('Client session error:', err.code, err.message);
});
const req = client.request({
  ':path': '/large-file',
  ':method': 'GET',
});
// Handle stream errors
req.on('error', (err) => {
  console.error('Request error:', err.code, err.message);
  client.destroy(); // Force close the connection
});
let received = 0;
req.on('data', (chunk) => {
  received += chunk.length;
  console.log(`Received ${received} bytes`);
});
req.on('end', () => {
  console.log(`Transfer complete: ${received} bytes`);
  client.close();
});
Add detailed logging to track window size changes and identify when limits are reached:
const http2 = require('http2');
const fs = require('fs');
const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.cert'),
});
server.on('stream', (stream) => {
  // Log initial state
  console.log('Stream created:', {
    streamId: stream.id,
    state: stream.state,
  });
  // Log window/buffer state once per second during the transfer
  const logInterval = setInterval(() => {
    console.log('Stream state:', {
      streamId: stream.id,
      state: stream.state,
      writableLength: stream.writableLength,
      writableHighWaterMark: stream.writableHighWaterMark,
    });
  }, 1000);
  stream.on('close', () => {
    clearInterval(logInterval);
  });
  // Example: serving a file with monitoring
  const file = fs.createReadStream('large-file.bin');
  let bytesWritten = 0;
  file.on('data', (chunk) => {
    const ok = stream.write(chunk);
    bytesWritten += chunk.length;
    if (!ok) {
      console.log(`Backpressure at ${bytesWritten} bytes, pausing`);
      file.pause();
    }
  });
  stream.on('drain', () => {
    console.log(`Drain event, resuming from ${bytesWritten} bytes`);
    file.resume();
  });
  file.on('end', () => {
    stream.end();
  });
});
server.listen(8443);
Understanding HTTP/2 Flow Control Windows
HTTP/2 uses a window-based flow control system. Each stream and the connection itself have:
- Receive window: How much data the receiver can accept (advertised via SETTINGS and WINDOW_UPDATE)
- Send window: How much data the sender may still transmit before it must stop and wait for a WINDOW_UPDATE
The default initial window size is 65,535 bytes per stream; the connection as a whole has its own flow control window, which also starts at 65,535 bytes and is tracked separately from the stream windows. When sending large amounts of data (multiple MB), you may need to increase these windows or properly implement backpressure handling.
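If the defaults are too small for your transfer sizes, both windows can be raised, as in this sketch (the sizes are illustrative; http2session.setLocalWindowSize() requires Node.js 14.18+ or 15.3+):
const http2 = require('http2');
const fs = require('fs');
const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.cert'),
  // Raises the per-stream window advertised to clients (default 65,535 bytes)
  settings: { initialWindowSize: 1024 * 1024 },
});
server.on('session', (session) => {
  // Raises the connection-level receive window, which is tracked separately
  session.setLocalWindowSize(4 * 1024 * 1024);
});
server.listen(8443);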
Flow Control vs Backpressure
These concepts work together:
- HTTP/2 Flow Control: Protocol-level mechanism that prevents sending too much data
- Node.js Backpressure: Stream API mechanism (write returns false, drain event)
When write() returns false, the internal buffer is full and you should wait for the 'drain' event before writing more. Ignoring this and continuing to write lets data pile up far faster than the peer's flow control window allows it to be sent, which is exactly the situation in which window mismanagement surfaces as FLOW_CONTROL_ERROR.
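A compact way to let Node.js handle both layers for you is the promise version of stream.pipeline() (available via require('stream/promises') in Node.js 15+), sketched below under the assumption that you are serving a file; the file name and status handling are illustrative:
const http2 = require('http2');
const fs = require('fs');
const { pipeline } = require('stream/promises');
const server = http2.createSecureServer({ /* key/cert options */ });
server.on('stream', async (stream, headers) => {
  stream.respond({ ':status': 200 });
  try {
    // pipeline() propagates backpressure from the HTTP/2 send window all the
    // way back to the file read stream and cleans up on error
    await pipeline(fs.createReadStream('large-file.bin'), stream);
  } catch (err) {
    console.error('Transfer failed:', err.code, err.message);
  }
});
server.listen(8443);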
Debugging with Built-in Tools
Node.js has several debug tools:
# Enable HTTP/2 protocol debugging
NODE_DEBUG=http2 node app.js
# Or combine it with other subsystems
NODE_DEBUG=http2,stream node app.js
# Trace network-related system calls (Linux)
strace -e trace=network node app.js
Common Patterns That Cause This Error
1. Uncontrolled buffering: Writing in a loop without checking backpressure
2. Resource exhaustion: Too many concurrent streams with large data transfers (a guard against this is sketched after this list)
3. Misconfigured settings: Window sizes too small for your transfer patterns
4. Stream misuse: Writing after stream ends or errors
5. Proxy issues: Intermediate proxies not properly managing HTTP/2 flow control
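For the resource exhaustion pattern, one simple guard, sketched below with an illustrative limit, is to track active streams and shed load instead of queueing ever more large transfers:
const http2 = require('http2');
const fs = require('fs');
const MAX_ACTIVE_STREAMS = 32; // illustrative limit - tune for your workload
let activeStreams = 0;
const server = http2.createSecureServer({
  key: fs.readFileSync('./server.key'),
  cert: fs.readFileSync('./server.cert'),
});
server.on('stream', (stream) => {
  if (activeStreams >= MAX_ACTIVE_STREAMS) {
    // Shed load instead of queueing yet another large transfer
    stream.respond({ ':status': 503 });
    stream.end();
    return;
  }
  activeStreams++;
  stream.on('close', () => { activeStreams--; });
  stream.respond({ ':status': 200 });
  fs.createReadStream('large-file.bin').pipe(stream);
});
server.listen(8443);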
Testing and Prevention
Before deploying large-scale HTTP/2 services:
// Load test to ensure flow control is working
const http2 = require('http2');
const { Writable } = require('stream');
const client = http2.connect('https://your-server.com');
// Create a deliberately slow data sink to exercise backpressure
const counter = new Writable({
  write(chunk, encoding, callback) {
    // Simulate slow processing to test backpressure
    setTimeout(callback, 10);
  }
});
const req = client.request({ ':path': '/', ':method': 'GET' });
req.pipe(counter);
req.on('error', (err) => {
  console.error('Request failed:', err.code);
  process.exit(1);
});