This error occurs when your Node.js server receives connections faster than it can accept them, causing the TCP backlog queue to overflow. Once the queue is full, the OS drops or refuses further connection attempts.
The "TCP backlog queue exceeded" error indicates that the listen backlog queue for your server has overflowed. When a client attempts to connect to your Node.js server, the OS places the connection in a queue (the "backlog") before your application accepts it via the `accept()` system call. Each server has a maximum backlog size. When legitimate connection requests arrive faster than your server can process them, the queue fills up. Once full, the OS refuses new incoming connections and may throw this error or drop connection attempts silently. This is a capacity issue: your server cannot keep up with the rate of incoming connections. It indicates either high traffic volume, slow connection handling, or insufficient server resources.
The default backlog value in Node.js is 511, which may be insufficient for high-traffic applications. Increase it when setting up your server.
For HTTP servers:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200);
res.end('OK');
});
// Increase backlog to 2048
server.listen(3000, 'localhost', 2048, () => {
console.log('Server listening on port 3000 with backlog 2048');
});
For net servers:
const net = require('net');
const server = net.createServer((socket) => {
socket.write('Connected!');
socket.end();
});
// Increase backlog to 2048
server.listen(3000, 'localhost', 2048);
For Express:
const express = require('express');
const app = express();
const server = app.listen(3000, 'localhost', 2048, () => {
console.log('Express server with backlog 2048');
});
Start with 2048 and increase further if needed. The value should comfortably exceed the burst of pending connections you expect between accept() calls; it does not need to match your total concurrent connection count.
The effective backlog is also capped by the operating system. On Linux, check and, if necessary, raise these kernel parameters.
View current settings:
sysctl net.core.somaxconn
sysctl net.ipv4.tcp_max_syn_backlog
Increase temporarily (until reboot):
sudo sysctl -w net.core.somaxconn=4096
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096
Make the changes permanent by editing /etc/sysctl.conf:
sudo nano /etc/sysctl.conf
Add these lines:
net.core.somaxconn=4096
net.ipv4.tcp_max_syn_backlog=4096
Apply the changes:
sudo sysctl -p
Note: The effective backlog is the minimum of:
- Your application's backlog parameter
- The OS somaxconn limit
So if you set backlog=8192 but somaxconn=1024, the actual backlog is 1024.
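As a startup safeguard, you can detect this clamping in code. A minimal sketch (Linux only; REQUESTED_BACKLOG is an illustrative name) that warns when the requested backlog exceeds the kernel cap:
const fs = require('fs');
const REQUESTED_BACKLOG = 2048;
try {
  // somaxconn is exposed via procfs on Linux
  const somaxconn = parseInt(
    fs.readFileSync('/proc/sys/net/core/somaxconn', 'utf8'), 10);
  if (REQUESTED_BACKLOG > somaxconn) {
    console.warn(`backlog ${REQUESTED_BACKLOG} will be clamped to somaxconn=${somaxconn}`);
  }
} catch {
  // /proc is unavailable on non-Linux platforms; skip the check
}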
Reduce the time your request handlers occupy connection slots. Handlers that block the event loop stall accept() calls, so new connections pile up in the backlog.
Bad: Blocking operations in request handler
const http = require('http');
const server = http.createServer(async (req, res) => {
// Synchronous sleep blocks the event loop
const start = Date.now();
while (Date.now() - start < 1000) {} // 1 second blocking!
res.writeHead(200);
res.end('OK');
});
server.listen(3000);
Good: Async/non-blocking operations
const http = require('http');
const server = http.createServer(async (req, res) => {
try {
// Non-blocking async operation
const data = await fetchDataAsync();
res.writeHead(200);
res.end(JSON.stringify(data));
} catch (error) {
res.writeHead(500);
res.end('Error');
}
});
async function fetchDataAsync() {
return new Promise(resolve => {
setTimeout(() => resolve({ status: 'ok' }), 100);
});
}
server.listen(3000);
Monitor and optimize slow endpoints:
const http = require('http');
const server = http.createServer(async (req, res) => {
const start = Date.now();
// Your request handling
res.writeHead(200);
res.end('OK');
const duration = Date.now() - start;
if (duration > 100) {
console.warn(`Slow endpoint ${req.url}: ${duration}ms`);
}
});
server.listen(3000);
Control the rate at which connections are accepted to prevent overwhelming your server.
Connection pooling with a queue:
const http = require('http');
class ConnectionPool {
  constructor(maxConnections) {
    this.maxConnections = maxConnections;
    this.activeConnections = 0;
    this.queue = [];
  }
  accept(socket) {
    if (this.activeConnections >= this.maxConnections) {
      // Pool is full: pause the socket and queue it for later
      socket.pause();
      this.queue.push(socket);
      return false;
    }
    this.activeConnections++;
    socket.on('close', () => {
      this.activeConnections--;
      this.processQueue();
    });
    return true;
  }
  processQueue() {
    while (this.queue.length > 0 && this.activeConnections < this.maxConnections) {
      // Resume a queued socket and hand it a freed slot
      const socket = this.queue.shift();
      socket.resume();
      this.accept(socket);
    }
  }
}
const pool = new ConnectionPool(100); // Max 100 concurrent
const server = http.createServer((req, res) => {
  res.writeHead(200);
  res.end('OK');
});
server.on('connection', (socket) => {
  // Sockets beyond the limit are paused and queued rather than destroyed
  pool.accept(socket);
});
server.listen(3000);
Rate limiting with express-rate-limit (shown with its default in-memory store):
const express = require('express');
const rateLimit = require('express-rate-limit');
const app = express();
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // Limit each IP to 100 requests per windowMs
});
app.use(limiter);
app.listen(3000);
Track connection metrics to understand your server's capacity and scale when needed.
Monitor backlog and connection states:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200);
res.end('OK');
});
// Log connection metrics every 10 seconds
setInterval(() => {
server.getConnections((err, count) => {
if (!err) {
console.log(`Active connections: ${count}`);
}
});
}, 10000);
server.listen(3000);
With clustering (multi-process on multi-core systems):
const cluster = require('cluster');
const os = require('os');
const http = require('http');
if (cluster.isPrimary) { // isPrimary replaces the deprecated isMaster (Node.js 16+)
const numCPUs = os.cpus().length;
// Fork workers for each CPU
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
console.log(`Primary process with ${numCPUs} workers`);
} else {
// Worker process
const server = http.createServer((req, res) => {
res.writeHead(200);
res.end('OK');
});
server.listen(3000);
console.log(`Worker ${process.pid} listening on port 3000`);
}
Load balancing in production:
Use reverse proxies like Nginx or HAProxy to distribute connections across multiple Node.js instances:
upstream node_servers {
server localhost:3001;
server localhost:3002;
server localhost:3003;
}
server {
listen 80;
location / {
proxy_pass http://node_servers;
}
}
Implement graceful shutdown so in-flight requests can finish and sockets close cleanly instead of being dropped.
Graceful shutdown example:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200);
res.end('OK');
});
server.listen(3000, () => {
console.log('Server started');
});
// Handle shutdown gracefully
process.on('SIGTERM', () => {
console.log('SIGTERM received, closing server gracefully...');
server.close(() => {
console.log('Server closed');
process.exit(0);
});
// Force close after 30 seconds
setTimeout(() => {
console.error('Forced shutdown after timeout');
process.exit(1);
}, 30000);
});
With connection tracking:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200);
res.end('OK');
});
const connections = new Set();
server.on('connection', (conn) => {
connections.add(conn);
conn.on('close', () => connections.delete(conn));
});
process.on('SIGTERM', () => {
console.log('Shutting down gracefully...');
server.close(() => {
console.log('Server closed');
process.exit(0);
});
// If connections are still open after 30 seconds, force-close them and exit
setTimeout(() => {
  for (const conn of connections) {
    conn.destroy();
  }
  process.exit(1);
}, 30000);
});
server.listen(3000);
Understanding the TCP backlog queue:
The TCP listen backlog is a kernel data structure that queues incoming connections that have completed the three-way handshake but haven't yet been accepted by the application. The backlog parameter in server.listen() sets the maximum queue length. When this queue is full, the OS may drop new connection attempts or send RST packets.
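The backlog can also be passed through the options form of server.listen(), equivalent to the positional form used earlier:
const http = require('http');
const server = http.createServer((req, res) => {
  res.writeHead(200);
  res.end('OK');
});
// Options-object form; backlog sets the same kernel queue limit
server.listen({ port: 3000, host: 'localhost', backlog: 2048 });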
Difference between backlog and active connections:
Backlog size does not directly limit the total number of concurrent connections your server can handle. It only limits the queue of connections waiting to be accepted. Once accepted, a connection occupies an open file descriptor, which is limited by OS ulimit settings (ulimit -n).
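To enforce a cap on accepted connections (as opposed to queued ones), net.Server exposes the maxConnections property; a short sketch of the distinction:
const net = require('net');
const server = net.createServer((socket) => socket.end('OK\n'));
// maxConnections limits sockets that have already been accepted;
// the backlog argument only limits the queue of not-yet-accepted ones
server.maxConnections = 1000;
server.listen(3000, 'localhost', 2048);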
SYN cookies and SYN attacks:
When tcp_syncookies is enabled on Linux, the kernel can respond to SYN packets without keeping an entry in the half-open (SYN) queue, reducing the impact of SYN flood attacks. Check it with sysctl net.ipv4.tcp_syncookies.
Docker and backlog limits:
Docker containers share the host's kernel, but net.core.somaxconn is scoped to the container's network namespace, so the host's value does not automatically apply inside the container. If running in Docker, adjust both the application's backlog parameter and the container's somaxconn setting, as shown below.
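For example, the per-container value can be set at startup with docker run's --sysctl flag (my-node-app is a placeholder image name):
docker run --sysctl net.core.somaxconn=4096 -p 3000:3000 my-node-app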
Monitoring with ss command:
ss -tan | grep LISTEN
The "Recv-Q" column shows the current backlog queue depth; "Send-Q" shows the backlog limit:
LISTEN 42 2048 0.0.0.0:3000 0.0.0.0:*
Here, 42 connections are queued against a limit of 2048.
Node.js cluster mode benefits:
Using the Node.js cluster module distributes incoming connections across multiple worker processes, so the accept queue drains faster and overall throughput improves.