The "connection refused because too many open connections" error occurs when MongoDB or its client drivers cannot establish new connections due to hitting connection limits. This typically happens in high-traffic applications, misconfigured connection pools, or when connections are not properly closed. The error prevents new database operations until connections are freed up.
The "connection refused because too many open connections" error in MongoDB indicates that the database server or client connection pool has reached its maximum allowed concurrent connections. When this limit is exceeded, any attempt to establish a new connection will be refused until existing connections are closed or the limit is increased. This error can occur at multiple levels: 1. **MongoDB Server Limits**: MongoDB has a default maximum of 65,536 incoming connections (configurable via maxIncomingConnections). When this limit is reached, the server refuses new connections. 2. **Operating System Limits**: The underlying operating system has file descriptor limits that affect MongoDB's ability to accept connections. On Linux systems, the default ulimit for open files is often 1024, which can be insufficient for database servers. 3. **Client Connection Pools**: MongoDB drivers maintain connection pools to reuse connections efficiently. If the pool size is misconfigured or connections leak (aren't returned to the pool), the pool can exhaust available connections. 4. **Application-Level Issues**: Applications that don't properly close database connections, create new connections for each request without pooling, or have connection leaks can quickly exhaust available connections. The error is particularly common in: - High-traffic web applications with many concurrent users - Microservices architectures with many services connecting to the same database - Applications with connection leaks (connections not closed after use) - Development environments with default low connection limits - Applications using multiple database connections per request When this error occurs, it typically affects new requests or operations while existing connections continue to work, creating a partial outage scenario.
First, check how many connections are currently active and what limits are in place:
Check current connections on the MongoDB server:

```bash
# Connect to the MongoDB shell
mongosh
```

```javascript
// Check current connection counts
db.serverStatus().connections
```

Check the MongoDB server connection limit:

```javascript
// In mongosh
db.adminCommand({ getParameter: 1, maxIncomingConnections: 1 })
```

Check operating system file descriptor limits:

```bash
# Check the running mongod process's limits
cat /proc/$(pgrep mongod)/limits | grep "Max open files"

# Check the current shell's open-file limit (per-process, not system-wide)
ulimit -n

# Check the mongod user's limit (if MongoDB runs as the mongod user)
sudo -u mongod bash -c "ulimit -n"
```

For MongoDB Atlas, check connection limits in the Atlas UI:
1. Navigate to your cluster → Settings → Additional Settings
2. Look for "Connection Limits" based on your cluster tier
3. Monitor "Connections" chart in the Metrics tab
If server limits are being hit, increase them appropriately:
Increase maxIncomingConnections in mongod.conf:
```yaml
# /etc/mongod.conf
net:
  port: 27017
  # Default is 65536. The effective ceiling is usually the lower of
  # this value and the mongod process's open-file limit.
  maxIncomingConnections: 20000
```

Increase operating system file descriptor limits:
```bash
# Edit limits.conf
sudo nano /etc/security/limits.conf

# Add these lines (adjust values as needed)
mongod soft nofile 64000
mongod hard nofile 64000
*      soft nofile 64000
*      hard nofile 64000
```

```bash
# For systemd systems (Ubuntu 16.04+, CentOS 7+), create a drop-in override
sudo mkdir -p /etc/systemd/system/mongod.service.d
sudo nano /etc/systemd/system/mongod.service.d/limits.conf
```

```ini
# Contents of the drop-in file:
[Service]
LimitNOFILE=64000
LimitNPROC=64000
```

```bash
# Reload systemd and restart MongoDB
sudo systemctl daemon-reload
sudo systemctl restart mongod
```

Increase kernel-level limits:
```bash
# Check current kernel limits
sysctl fs.file-max
sysctl fs.file-nr

# Increase file-max (system-wide limit)
sudo sysctl -w fs.file-max=2097152

# Make it permanent (tee is needed so the redirect runs with root privileges)
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

For MongoDB Atlas:
Connection limits are tied to your cluster tier. To increase limits:
1. Go to Atlas → Clusters → Your Cluster → "Configuration"
2. Click "Edit Configuration"
3. Upgrade to a higher tier (M10, M20, etc.) for more connections
4. Or consider sharding for horizontal scaling
Proper connection pool configuration prevents connection exhaustion:
Node.js MongoDB driver configuration:
```javascript
const { MongoClient } = require('mongodb');

const uri = 'mongodb://localhost:27017/mydb';
const client = new MongoClient(uri, {
  maxPoolSize: 100,                // Maximum connections in the pool
  minPoolSize: 10,                 // Minimum connections to maintain
  maxIdleTimeMS: 60000,            // Close idle connections after 60s
  waitQueueTimeoutMS: 10000,       // Max time to wait for a pooled connection
  socketTimeoutMS: 30000,          // Socket operation timeout
  connectTimeoutMS: 10000,         // Connection establishment timeout
  serverSelectionTimeoutMS: 30000, // Server selection timeout
});
```

Mongoose configuration:
```javascript
const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost:27017/mydb', {
  maxPoolSize: 100,
  minPoolSize: 10,
  socketTimeoutMS: 30000,
  connectTimeoutMS: 10000,
  serverSelectionTimeoutMS: 30000,
});
```

Calculate the optimal pool size:
- Web servers: maxPoolSize = (max concurrent requests) × (queries issued in parallel per request)
- For 100 concurrent requests issuing 2 parallel queries each: 200 connections
- Add a buffer: 220-250 connections
- Monitor and adjust based on actual usage (see the sketch below)
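To make the arithmetic concrete, here is a tiny helper (hypothetical, not part of any driver) that applies the formula above with a configurable buffer:

```javascript
// Hypothetical helper illustrating the sizing formula above.
function recommendedPoolSize(maxConcurrentRequests, parallelQueriesPerRequest, bufferRatio = 0.2) {
  const base = maxConcurrentRequests * parallelQueriesPerRequest;
  return Math.ceil(base * (1 + bufferRatio));
}

// 100 concurrent requests x 2 parallel queries, 20% buffer
console.log(recommendedPoolSize(100, 2)); // 240
```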
Connection leaks occur when connections aren't returned to the pool. Common patterns to fix:
1. Always close cursors and streams:
```javascript
// BAD: partially iterated cursor is never closed and holds resources
const cursor = collection.find({});
const first = await cursor.next();
// ... returning early here leaves the cursor open
```

```javascript
// GOOD: explicitly close the cursor
const cursor = collection.find({});
try {
  const results = await cursor.toArray();
  return results;
} finally {
  await cursor.close();
}
```

```javascript
// BETTER: toArray() exhausts and closes the cursor automatically
const results = await collection.find({}).toArray();
```

2. Use connection pooling properly:
```javascript
// BAD: creating a new client (and pool) for every request
app.get('/users', async (req, res) => {
  const client = new MongoClient(uri); // new connection(s) each time
  await client.connect();
  // ... query
  await client.close(); // tears down the pool instead of reusing it
});
```

```javascript
// GOOD: reuse a single client instance for the whole process
const client = new MongoClient(uri, { maxPoolSize: 100 });
await client.connect(); // once at startup

app.get('/users', async (req, res) => {
  const db = client.db('mydb');
  const users = await db.collection('users').find({}).toArray();
  res.json(users);
  // The connection automatically returns to the pool
});
```

3. Handle errors properly:
```javascript
// BAD: session never ended -- its pinned connection leaks
async function updateUser(userId, data) {
  const session = client.startSession();
  try {
    session.startTransaction();
    await collection.updateOne({ _id: userId }, { $set: data }, { session });
    await session.commitTransaction();
  } catch (error) {
    await session.abortTransaction();
    throw error;
  }
  // Session not ended -- connection leak!
}
```

```javascript
// GOOD: always end the session
async function updateUser(userId, data) {
  const session = client.startSession();
  try {
    session.startTransaction();
    await collection.updateOne({ _id: userId }, { $set: data }, { session });
    await session.commitTransaction();
  } catch (error) {
    await session.abortTransaction();
    throw error;
  } finally {
    await session.endSession(); // always end the session
  }
}
```

Follow these practices to prevent connection exhaustion:
1. Use connection string options effectively:
```javascript
const uri = 'mongodb://host1,host2,host3/mydb?' +
  'replicaSet=myReplicaSet&' +
  'maxPoolSize=100&' +
  'minPoolSize=10&' +
  'maxIdleTimeMS=60000&' +
  'waitQueueTimeoutMS=10000&' +
  'socketTimeoutMS=30000&' +
  'connectTimeoutMS=10000&' +
  'serverSelectionTimeoutMS=30000&' +
  'retryWrites=true&' +
  'retryReads=true&' +
  'readPreference=secondaryPreferred&' +
  'maxStalenessSeconds=90';
```

2. Implement connection health checks:
```javascript
// Periodic health check
setInterval(async () => {
  try {
    await client.db('admin').command({ ping: 1 });
    console.log('Connection healthy');
  } catch (error) {
    console.error('Connection health check failed:', error);
    // Implement reconnection logic
  }
}, 30000);
```

3. Implement graceful shutdown:
```javascript
// Handle application shutdown
process.on('SIGTERM', async () => {
  console.log('SIGTERM received, closing MongoDB connections...');
  // Close all connections gracefully
  await client.close();
  // Additional cleanup
  await mongoose.disconnect();
  console.log('MongoDB connections closed');
  process.exit(0);
});
```

When a single database can't handle the connection load, consider scaling:
1. Implement read/write separation:
```javascript
// Use separate clients for reads and writes. In practice both URIs
// usually point at the same replica set; readPreference routes traffic.
const writeClient = new MongoClient('mongodb://primary-host/mydb', {
  maxPoolSize: 50,
  readPreference: 'primary',
});

const readClient = new MongoClient('mongodb://secondary-host/mydb', {
  maxPoolSize: 100,
  readPreference: 'secondary',
});

// Use the write client for mutations
async function createUser(user) {
  const db = writeClient.db('mydb');
  return await db.collection('users').insertOne(user);
}

// Use the read client for queries
async function getUsers() {
  const db = readClient.db('mydb');
  return await db.collection('users').find({}).toArray();
}
```

2. Use a connection pool per service/feature:
```javascript
// Instead of one giant pool, use specialized pools.
// Each service manages its own connection pool.
const userServiceClient = new MongoClient(uri, { maxPoolSize: 50 });
const orderServiceClient = new MongoClient(uri, { maxPoolSize: 30 });
const analyticsClient = new MongoClient(uri, { maxPoolSize: 20 });
```

3. Implement database sharding:
```javascript
// In mongosh, connected to the cluster's mongos router.
// Shard names/hosts are examples; replica-set shards are added
// in the form "replSetName/host:port".
sh.addShard("shardReplSet1/shard1.example.com:27017")
sh.addShard("shardReplSet2/shard2.example.com:27017")

// Enable sharding for the database
sh.enableSharding("mydb")

// Shard a collection on a hashed key
sh.shardCollection("mydb.users", { userId: "hashed" })
```

Understanding MongoDB Connection Architecture:
MongoDB uses a threaded connection model in which each incoming connection is handled by its own dedicated thread, so every open connection consumes a thread and its stack memory in addition to a file descriptor. Key components:
1. Connection Acceptor Thread: Listens for new connections on the port
2. Worker Threads: Handle individual connection I/O
3. Connection Pool: Maintains established connections for reuse
4. Session Manager: Tracks client sessions for transactions
Monitoring Deep Dive:
Use MongoDB diagnostic commands:
```javascript
// Detailed connection analysis
db.currentOp(true)            // Show all current operations
db.serverStatus().connections // Connection statistics
db.serverStatus().network     // Network statistics
db.serverStatus().locks       // Lock contention
db.serverStatus().wiredTiger.concurrentTransactions // WiredTiger ticket usage

// Connection source analysis (currentOp returns { inprog: [...] })
db.currentOp(true).inprog.forEach(op => {
  if (op.client) {
    print('Connection from ' + op.client + ' running ' + op.op);
  }
});

// Identify long-running operations
db.currentOp({ "secs_running": { "$gt": 60 } })
```

Operating System Tuning:
For Linux systems, optimize these parameters:
```bash
# Increase epoll limits
sudo sysctl -w fs.epoll.max_user_watches=524288
# (fs.epoll.max_user_instances was removed from modern kernels)

# TCP tuning for many connections
sudo sysctl -w net.core.somaxconn=4096
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096
sudo sysctl -w net.core.netdev_max_backlog=5000

# Memory for connections
sudo sysctl -w net.ipv4.tcp_mem='262144 524288 1048576'
sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 6291456'
sudo sysctl -w net.ipv4.tcp_wmem='4096 16384 4194304'

# Connection reuse
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_fin_timeout=30
# net.ipv4.tcp_tw_recycle was removed in Linux 4.12; leave it alone on modern kernels
```

MongoDB Atlas Specifics:
Atlas has tier-based connection limits:
- M0 (Free): 500 connections
- M2/M5: 1,000 connections
- M10: 2,000 connections
- M20: 3,000 connections
- M30: 4,000 connections
- M40+: 5,000+ connections
Atlas also enforces:
- 30-minute idle connection timeout
- Connection rate limiting
- IP-based connection limits
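Given the 30-minute idle timeout, it is worth keeping the driver's maxIdleTimeMS below Atlas's cutoff so the pool retires idle connections before the server drops them. A sketch of an Atlas-style connection string (hostname and credentials are placeholders):

```javascript
// Placeholder Atlas URI; keep maxIdleTimeMS below the 30-minute
// idle cutoff so the driver, not the server, closes idle connections.
const uri = 'mongodb+srv://user:pass@cluster0.example.mongodb.net/mydb?' +
  'maxPoolSize=100&' +
  'minPoolSize=5&' +
  'maxIdleTimeMS=600000&' + // 10 minutes, well under the 30-minute cutoff
  'waitQueueTimeoutMS=10000';
```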
When to Consider Alternative Solutions:
1. Change Data Capture (CDC): Instead of many polling connections, use MongoDB Change Streams (see the sketch after this list)
2. Message Queue: Buffer requests instead of opening direct database connections
3. Connection Proxy: Put a TCP-level proxy such as HAProxy in front of MongoDB for connection management (ProxySQL speaks the MySQL protocol and is not suitable for MongoDB)
4. Application-Level Caching: Reduce database hits with Redis or Memcached
5. Read Replicas: Distribute read load across multiple secondaries
6. Database Per Service: In a microservices architecture, give each service its own database instance
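For option 1, here is a minimal Change Streams sketch with the Node.js driver. It assumes the shared `client` from the earlier examples, an illustrative `orders` collection, and a replica set or sharded cluster (which change streams require); one persistent stream replaces what might otherwise be many polling connections:

```javascript
// Minimal Change Streams sketch: one persistent cursor instead of
// many clients polling for new documents.
const changeStream = client.db('mydb').collection('orders').watch([
  { $match: { operationType: 'insert' } }, // only new documents
]);

changeStream.on('change', (change) => {
  console.log('New order:', change.fullDocument._id);
});

// Close the stream on shutdown so its connection is released.
process.on('SIGTERM', () => changeStream.close());
```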
Emergency Response Plan:
When hitting connection limits in production:
1. Immediately: Increase OS file descriptor limits and, if necessary, free connections held by long-running operations (see the sketch after this list)
2. Short-term: Restart the application to clear leaked connections
3. Medium-term: Optimize connection pooling configuration
4. Long-term: Implement proper connection management and monitoring
5. Scale: Upgrade your MongoDB tier or implement sharding
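As a stopgap during such an incident, you can identify and terminate long-running operations from mongosh to free their connections. A cautious sketch (the 60-second threshold is arbitrary; review each operation before uncommenting the kill):

```javascript
// In mongosh: list operations running longer than 60s, then optionally
// kill them. Review the output first -- this is a blunt instrument.
db.currentOp({ secs_running: { $gt: 60 }, op: { $ne: 'none' } }).inprog.forEach((op) => {
  print(`opid ${op.opid} running ${op.secs_running}s: ${op.op} on ${op.ns}`);
  // db.killOp(op.opid); // uncomment only after review
});
```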