The "ERR max number of clients reached" error occurs when the number of active client connections exceeds Redis's configured maxclients limit (default 10,000). Common causes include connection leaks, insufficient connection pooling, and undersized Redis instances. Fix by using connection pooling, increasing maxclients, upgrading your Redis instance, or reducing connection usage.
Redis has a built-in limit on the maximum number of simultaneous client connections it accepts. When a new client tries to connect and the server has already reached this limit, the connection is immediately rejected with "ERR max number of clients reached". The default maxclients limit in Redis 2.6+ is 10,000 connections. However, managed Redis services (Heroku, Redis Cloud, AWS ElastiCache) often have lower limits depending on your plan tier—mini plans may allow only 20 clients, while premium tiers support thousands. When the limit is hit, no new connections can be established until existing connections are closed. This error is a critical signal that your application is either creating too many connections, not properly closing connections (connection leak), or your Redis instance is undersized for your workload. It requires immediate attention as it prevents new requests from accessing the database.
First, understand how many connections are currently active to identify the severity.
# Connect to Redis CLI:
redis-cli
# Check current connections:
INFO clients
# Look for the "connected_clients" field. Example output:
# connected_clients:15243
# blocked_clients:2
# maxclients:10000
If connected_clients is at or very close to maxclients, you've hit the limit. If it's consistently high (>80% of maxclients), action is needed even if you haven't hit the hard limit yet.
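To automate this check, here is a small Python sketch that parses the `INFO clients` text shown above and reports utilization as a percentage (the `connection_utilization` helper and its threshold handling are illustrative, not part of any Redis client library):

```python
def connection_utilization(info_text: str) -> float:
    """Parse `INFO clients` output and return connected_clients as a percentage of maxclients."""
    fields = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    return 100.0 * int(fields["connected_clients"]) / int(fields["maxclients"])

# Using the example output above:
info = "connected_clients:15243\nblocked_clients:2\nmaxclients:10000"
print(f"{connection_utilization(info):.0f}% of maxclients in use")
```

Feed it the raw text from `redis-cli INFO clients` and alert when the result crosses your 80% threshold.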
Use the CLIENT command to list all connections and identify heavy consumers.
# List all connected clients:
redis-cli CLIENT LIST
# Get a summary:
redis-cli CLIENT LIST | wc -l
# Filter by specific source:
redis-cli CLIENT LIST | grep "addr=<IP_ADDRESS>"
Look for patterns:
- Too many connections from a single server (indicates undersized pool or no pooling)
- Many "idle" connections stuck waiting for operations
- Connections with large "buf" values (buffered output, slow clients)
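The first pattern, too many connections from one host, is easy to spot programmatically. A Python sketch that tallies `CLIENT LIST` entries per source IP (the `addr=` field format follows the Redis docs; the sample data and helper name are invented for illustration):

```python
from collections import Counter

def connections_by_host(client_list_text: str) -> Counter:
    """Count CLIENT LIST entries per source IP (addr= field with the ephemeral port stripped)."""
    hosts = Counter()
    for line in client_list_text.splitlines():
        for field in line.split():
            if field.startswith("addr="):
                ip = field[len("addr="):].rsplit(":", 1)[0]  # drop the port
                hosts[ip] += 1
    return hosts

sample = (
    "id=3 addr=10.0.0.5:51234 name= idle=0\n"
    "id=4 addr=10.0.0.5:51240 name= idle=120\n"
    "id=5 addr=10.0.0.9:41822 name= idle=3\n"
)
print(connections_by_host(sample).most_common())
```

A host at the top of the list with dozens or hundreds of entries usually means an undersized pool or no pooling at all on that server.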
Connection pooling reuses connections instead of creating new ones for each request.
Node.js example with ioredis:
const Redis = require("ioredis");
const redis = new Redis({
host: "localhost",
port: 6379,
maxRetriesPerRequest: null,
enableReadyCheck: false,
// Note: ioredis does not pool; each instance holds a single connection
// (plus one per subscriber), so share this one instance across requests
});
// Reuse the same redis instance across your app
module.exports = redis;
Ruby/Rails example:
# config/initializers/redis.rb
require "connection_pool"
REDIS = ConnectionPool.new(size: 10, timeout: 5) do # Limit to 10 connections
  Redis.new(url: ENV["REDIS_URL"])
end
# Usage: REDIS.with { |conn| conn.get("key") }
Python example:
from redis import ConnectionPool, Redis
pool = ConnectionPool.from_url(
"redis://localhost:6379",
max_connections=20 # Limit total connections
)
redis_client = Redis(connection_pool=pool)
For queue processors (Sidekiq, Bull, Resque), ensure you've configured the concurrency and pool size appropriately.
If you've confirmed pooling is correct but still need more capacity, increase maxclients.
# Check current limit:
redis-cli CONFIG GET maxclients
# Increase it at runtime (survives until next restart):
redis-cli CONFIG SET maxclients 20000
# Verify the change:
redis-cli CONFIG GET maxclients
For persistent configuration, edit your redis.conf:
# /etc/redis/redis.conf
maxclients 20000
Then restart Redis:
sudo systemctl restart redis-server
Important: Increasing maxclients also requires sufficient OS file descriptor limits. Check: ulimit -n (should be higher than maxclients).
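As a programmatic version of that check, here is a Python sketch comparing the process's soft file-descriptor limit against a target maxclients (the 32-descriptor reserve mirrors the margin Redis keeps for its own files; treat the exact constant as an assumption, and note `fd_headroom` is an illustrative helper):

```python
import resource

def fd_headroom(maxclients: int) -> int:
    """File descriptors available beyond what maxclients needs; negative means raise ulimit -n."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Redis keeps a small reserve (~32 descriptors) for its own files and pipes
    # on top of client sockets, so budget for maxclients + 32.
    return soft - (maxclients + 32)

if fd_headroom(20000) < 0:
    print("ulimit -n is too low for maxclients 20000")
```

Run this under the same user and limits as the Redis service, since the soft limit is per-process.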
Redis can't accept more connections than the OS allows. File descriptors must exceed maxclients.
# Check current file descriptor limit:
ulimit -n
# For root/Redis service, check:
cat /proc/sys/fs/file-max
Increase if needed:
# Temporary (until reboot):
ulimit -n 65536
# Permanent (edit /etc/security/limits.conf):
echo "redis soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "redis hard nofile 65536" | sudo tee -a /etc/security/limits.conf
# Note: limits.conf changes take effect at the next login/session,
# not immediately; restart the Redis service to pick them up.
For systemd services, edit /etc/systemd/system/redis.service.d/override.conf:
[Service]
LimitNOFILE=65536
Then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart redis-server
For hosted Redis services (Heroku, Redis Cloud, AWS ElastiCache), plans have hard limits.
# Check which plan you're on:
# Heroku:
heroku redis:info
# AWS ElastiCache:
aws elasticache describe-cache-clusters --cache-cluster-id my-redis
If you're consistently at or near your plan's connection limit:
- Upgrade to a larger tier (premium plans support more connections)
- Switch providers if current rates are uneconomical
- Consider read replicas or cluster mode to scale horizontally
Migration example for Heroku:
# Provision larger Redis instance
heroku addons:create heroku-redis:premium-2
# Update your app to use new instance
# Deploy and verify
For queue systems like Sidekiq, the connection count is: (number of worker processes) * (concurrency) * (connections per worker). Sidekiq 6+ uses about 2 connections per concurrency slot. If you have 10 Sidekiq processes with concurrency=25, expect ~500 Redis connections. Calculate your needs and set maxclients with 20% headroom.
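The sizing rule above can be captured in a small helper. This sketch uses the 2-connections-per-slot factor and 20% headroom from the text as defaults; adjust both to match your actual workers:

```python
import math

def required_maxclients(processes: int, concurrency: int,
                        conns_per_slot: int = 2, headroom: float = 0.2) -> int:
    """Estimate maxclients for a queue fleet:
    processes * concurrency * connections-per-slot, plus fractional headroom, rounded up."""
    base = processes * concurrency * conns_per_slot
    return math.ceil(base * (1 + headroom))

# The 10-process, concurrency-25 example from the text: 500 connections + 20% headroom
print(required_maxclients(10, 25))
```

Remember to add the other consumers of the same Redis instance (web dynos, sentinels, monitoring) on top of this figure.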
Blocking operations (BLPOP, BRPOP, BZPOPMIN, etc.) are common culprits for connection exhaustion. If a queue processor blocks on BLPOP indefinitely, it holds a connection open the entire time. Set timeouts: BLPOP key 30 makes the call return after 30 seconds if no data arrives, so a stuck worker never holds a blocked connection indefinitely.
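A sketch of that timeout pattern, assuming a redis-py-style client whose blpop returns a (key, value) tuple or None on timeout (the `drain_queue` helper is illustrative, not a library API):

```python
def drain_queue(client, key: str, timeout: int = 30):
    """Consume items with a bounded blocking pop so the connection is never held indefinitely.

    `client` is any redis-py-style object: blpop(key, timeout=N) returns a
    (key, value) tuple, or None once `timeout` seconds pass with no data.
    """
    items = []
    while True:
        result = client.blpop(key, timeout=timeout)
        if result is None:  # timed out: the blocked call has returned, freeing the worker
            return items
        items.append(result[1])
```

In a real worker you would loop forever rather than return, but the bounded timeout is what matters: the process gets a chance every 30 seconds to shut down cleanly or recycle the connection.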
For Redis Sentinel or Cluster setups, each sentinel monitor connection counts toward maxclients. Three sentinels checking the same master means 3+ extra connections. Plan accordingly.
In Kubernetes, verify Pod resource limits and connection pooling. A single Redis service may be shared by 50 pods—each pod needs only a few connections via pooling, not 10+ direct connections.
Monitor connections continuously. Set alerts at 80% of maxclients and track trends over time; use redis-cli --latency to watch latency and SLOWLOG GET to identify slow commands (avoid MONITOR in production, as it adds significant overhead to the server).