This error occurs when your Node.js process has reached the operating system limit for the maximum number of file descriptors it can open simultaneously. File descriptors are used for files, network sockets, pipes, and other I/O resources.
The EMFILE error is a system-level error that indicates your application has exceeded the maximum number of file descriptors the operating system allows a single process to open at once. In Node.js, file descriptors are consumed not just by files on disk, but also by network connections, streams, pipes, and any other I/O resources. Every time your application opens a file using `fs.open()`, creates a network connection, or opens a stream, it consumes a file descriptor from the system pool. If these resources aren't properly closed after use, or if your application tries to open too many resources concurrently, you'll hit the system limit and receive this error. The default file descriptor limit varies by operating system but is typically around 1024 on Linux and macOS systems, which can be easily exceeded in high-concurrency applications or when processing many files in parallel.
First, verify your system's current file descriptor limit:
```bash
# On Linux/macOS
ulimit -n

# Check both soft and hard limits
ulimit -Sn  # Soft limit
ulimit -Hn  # Hard limit
```

The output shows the maximum number of file descriptors your process can open. A typical default is 1024, which may be too low for applications handling many files or connections.
For quick testing, increase the limit in your current terminal session:
```bash
# Set the limit to 4096
ulimit -n 4096

# Then run your Node.js application
node your-app.js
```

This change only affects the current shell session and will reset when you close the terminal.
To permanently increase the limit, edit `/etc/security/limits.conf`:

```bash
sudo nano /etc/security/limits.conf
```

Add these lines (keep `*` to apply the limit to all users, or replace it with a specific username):

```
* soft nofile 65536
* hard nofile 65536
```

Save the file and log out completely, then log back in for the changes to take effect. Verify with `ulimit -n`.
If running Node.js as a systemd service, add limits to your service file:
```ini
[Service]
LimitNOFILE=65536
```

Reload systemd and restart your service:

```bash
sudo systemctl daemon-reload
sudo systemctl restart your-service
```

Install `graceful-fs` to automatically handle EMFILE errors with backoff:

```bash
npm install graceful-fs
```

Replace the built-in fs module:
```javascript
// At the top of your entry file
const gracefulFs = require('graceful-fs');
gracefulFs.gracefulify(require('fs'));

// Or use it directly as a drop-in replacement
const fs = require('graceful-fs');
```

This module queues file operations when the limit is reached and retries them automatically.
Always close file handles after use:
```javascript
const fs = require('fs').promises;

async function readFile(path) {
  let fileHandle;
  try {
    fileHandle = await fs.open(path, 'r');
    const content = await fileHandle.readFile('utf8');
    return content;
  } finally {
    // Always close, even if reading fails
    await fileHandle?.close();
  }
}

// Or use streams, which close their descriptors automatically
// (note: createReadStream lives on the callback API, not fs.promises)
const { createReadStream } = require('fs');
const stream = createReadStream('file.txt');
stream.on('close', () => console.log('Stream closed'));
```

Use a concurrency control library like p-limit to process files in batches:
```bash
npm install p-limit
```

```javascript
// Note: p-limit v4+ is ESM-only; to use require(), install p-limit@3
const pLimit = require('p-limit');
const fs = require('fs').promises;

const limit = pLimit(10); // Process at most 10 files concurrently

async function readAll(files) {
  // Each read is wrapped so no more than 10 run at once
  const promises = files.map((file) =>
    limit(() => fs.readFile(file, 'utf8'))
  );
  return Promise.all(promises);
}
```

This prevents opening too many files at once while still maintaining good performance.
Prefer streams over loading entire files into memory:
```javascript
const fs = require('fs');
const readline = require('readline');

async function processFile() {
  const stream = fs.createReadStream('large-file.txt');
  const rl = readline.createInterface({
    input: stream,
    crlfDelay: Infinity
  });

  for await (const line of rl) {
    // Process line by line without loading the entire file
    console.log(line);
  }
}
```

Streams automatically manage resources and don't hold file descriptors longer than necessary.
Platform Differences: On macOS, the default limit is often much lower (256) than on Linux. Windows handles file descriptors differently and has much higher limits, so this error is less common on Windows systems.
Monitoring File Descriptors: You can monitor your process's open file descriptors on Linux using `lsof -p <pid> | wc -l` or by checking `/proc/<pid>/fd/`. This helps identify resource leaks.
Docker Considerations: When running Node.js in Docker containers, set ulimits in your `docker-compose.yml` or pass the `--ulimit` flag to `docker run` (it is a runtime flag, not a Dockerfile instruction). The container inherits limits from the host but can be further restricted.
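A docker-compose sketch (the service name and image are examples):

```yaml
services:
  app:
    image: node:20
    command: node server.js
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
```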
Production Best Practices: In production environments, use a process manager like PM2 or systemd that allows you to configure resource limits. Also implement monitoring and alerting for file descriptor usage to catch issues before they cause outages.
Connection Pooling: For applications making many database or HTTP connections, use connection pooling libraries that reuse connections instead of creating new ones for each operation. This significantly reduces file descriptor consumption.
Debugging Leaks: If increasing limits doesn't solve the problem long-term, you likely have a resource leak. Use tools like `wtfnode` or enable Node.js's `--trace-warnings` flag to identify handles that aren't being closed properly.