This error occurs in ioredis when a command cannot complete because the Redis connection has been lost and reconnection attempts have exceeded the configured limit. By default, ioredis will retry each pending command up to 20 times before failing with this error.
The MaxRetriesPerRequestError is thrown by the ioredis client library when commands in the retry queue cannot be executed within the allowed number of reconnection attempts. When the connection to Redis is lost, ioredis automatically places pending commands into a queue and attempts to resend them once the connection is restored. The maxRetriesPerRequest option controls how many reconnection attempts are allowed per command before giving up. This error is a protective mechanism to prevent commands from waiting indefinitely when Redis is unavailable. Prior to ioredis v4, commands would wait forever for reconnection, which could cause applications to hang. The default limit of 20 retries typically represents about 10 seconds of waiting, depending on your retryStrategy configuration. Understanding this error is crucial for building resilient Redis integrations, as it signals that your application needs to handle temporary Redis unavailability gracefully rather than assuming immediate connectivity.
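To see the failure mode in isolation, you can point a client at a port where nothing is listening. This is a minimal sketch; the port, key, and deliberately small retry values are arbitrary choices to make the error appear quickly:
import Redis from 'ioredis';

// Deliberately unreachable address to trigger the error path
const redis = new Redis({
  host: '127.0.0.1',
  port: 6380, // assume nothing is listening here
  maxRetriesPerRequest: 2, // tiny budget so the error surfaces fast
  retryStrategy: (times) => Math.min(times * 50, 500),
});

redis.set('key', 'value').catch((err) => {
  console.error(err.name, err.message); // MaxRetriesPerRequestError
  redis.disconnect(); // stop reconnecting so the demo process can exit
});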
Check if your Redis server is running and reachable from your application:
# Test Redis connectivity
redis-cli -h your-redis-host -p 6379 ping
# Should return: PONG
# Check if Redis is running (local)
sudo systemctl status redis
# Or check Redis logs
sudo tail -f /var/log/redis/redis-server.log
If Redis is not running, start it:
# Start Redis service
sudo systemctl start redis
# Or if using Docker
docker start redis-container
Ensure network connectivity between your application and Redis:
# Test TCP connection to Redis port
telnet your-redis-host 6379
# Or use nc (netcat)
nc -zv your-redis-host 6379
# Check if firewall is blocking the port
sudo ufw status
sudo iptables -L -n | grep 6379
For cloud environments, verify security groups allow inbound connections on port 6379 from your application's IP range.
If Redis occasionally goes down but recovers quickly, increase the retry limit:
import Redis from 'ioredis';
const redis = new Redis({
  host: 'your-redis-host',
  port: 6379,
  maxRetriesPerRequest: 50, // Increased from default 20
  retryStrategy(times) {
    const delay = Math.min(times * 200, 2000);
    return delay;
  },
});
This gives commands more time to succeed during temporary outages. With this retryStrategy, the delay ramps from 200ms up to a 2000ms cap, so 50 retries gives a command roughly 90 seconds to survive an outage before failing.
For non-critical background jobs where you want commands to wait indefinitely:
import Redis from 'ioredis';
const redis = new Redis({
  host: 'your-redis-host',
  port: 6379,
  maxRetriesPerRequest: null, // Wait forever until connection restored
  retryStrategy(times) {
    const delay = Math.min(times * 200, 2000);
    return delay;
  },
});
Warning: This can cause your application to hang if Redis never recovers. Only use this when you have external health checks and restart mechanisms.
Catch and handle MaxRetriesPerRequestError gracefully:
import Redis from 'ioredis';
const redis = new Redis({
  host: 'your-redis-host',
  port: 6379,
  maxRetriesPerRequest: 20,
});

async function setWithRetry(key: string, value: string, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await redis.set(key, value);
      return; // Success
    } catch (error) {
      if (error instanceof Error && error.name === 'MaxRetriesPerRequestError') {
        console.error(`Attempt ${attempt}/${maxAttempts} failed: Redis unavailable`);
        if (attempt === maxAttempts) {
          // Final attempt failed - log to monitoring, use fallback
          throw new Error('Redis persistently unavailable');
        }
        // Wait before retrying
        await new Promise((resolve) => setTimeout(resolve, 2000 * attempt));
      } else {
        throw error; // Re-throw non-retry errors
      }
    }
  }
}
This pattern prevents application crashes and allows for graceful degradation.
Implement exponential backoff with maximum retry limits:
import Redis from 'ioredis';
const redis = new Redis({
  host: 'your-redis-host',
  port: 6379,
  maxRetriesPerRequest: 30,
  retryStrategy(times) {
    // Stop retrying after 100 total connection attempts
    if (times > 100) {
      console.error('Max connection retries exceeded');
      return null; // Stop retrying
    }
    // Exponential backoff: 200ms, 400ms, 800ms, ..., capped at 5000ms
    const delay = Math.min(2 ** times * 100, 5000);
    console.log(`Retry attempt ${times}, waiting ${delay}ms`);
    return delay;
  },
});

// Listen to connection events
redis.on('connect', () => console.log('Redis connected'));
redis.on('ready', () => console.log('Redis ready to accept commands'));
redis.on('error', (err) => console.error('Redis error:', err));
redis.on('close', () => console.log('Redis connection closed'));
redis.on('reconnecting', (delay) => console.log(`Reconnecting in ${delay}ms`));
This provides visibility into connection health and prevents infinite retry loops.
For applications with many concurrent Redis operations, use a connection pool:
import Redis, { Cluster, RedisOptions } from 'ioredis';

// For Redis Cluster
const cluster = new Cluster(
  [
    { host: 'redis-node-1', port: 6379 },
    { host: 'redis-node-2', port: 6379 },
  ],
  {
    redisOptions: {
      maxRetriesPerRequest: 20,
    },
    clusterRetryStrategy(times) {
      return Math.min(100 * times, 2000);
    },
  }
);

// For a single Redis server with connection pooling
class RedisPool {
  private connections: Redis[] = [];
  private poolSize: number;

  constructor(config: RedisOptions, poolSize = 10) {
    this.poolSize = poolSize;
    for (let i = 0; i < poolSize; i++) {
      this.connections.push(new Redis(config));
    }
  }

  getConnection(): Redis {
    // Pick a random connection to spread load across the pool
    return this.connections[Math.floor(Math.random() * this.poolSize)];
  }
}

const pool = new RedisPool({
  host: 'your-redis-host',
  port: 6379,
  maxRetriesPerRequest: 20,
}, 10);
Connection pooling distributes load across connections; Cluster mode additionally provides failover across nodes.
Understanding retry behavior in queue systems:
When using ioredis with job queue libraries like BullMQ or Bee-Queue, follow the library's connection requirements rather than a blanket rule. BullMQ, for example, requires maxRetriesPerRequest: null on its worker connections so that long-running blocking commands survive reconnections, and recent versions refuse any other value. For your own application commands, a finite limit is usually better: let them fail quickly with MaxRetriesPerRequestError and rely on the queue's built-in retry mechanisms, and make sure your shutdown path closes connections explicitly so infinite connection retries cannot block a graceful shutdown.
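A sketch of that split, assuming BullMQ; the queue name 'jobs' and the host are placeholders:
import { Worker } from 'bullmq';
import Redis from 'ioredis';

// BullMQ's blocking worker connection requires maxRetriesPerRequest: null
const worker = new Worker(
  'jobs', // hypothetical queue name
  async (job) => {
    console.log('processing', job.id);
  },
  { connection: { host: 'your-redis-host', port: 6379, maxRetriesPerRequest: null } }
);

// Separate connection for ordinary application commands: fail fast instead
const appRedis = new Redis({
  host: 'your-redis-host',
  port: 6379,
  maxRetriesPerRequest: 5,
});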
Monitoring and alerting:
Track MaxRetriesPerRequestError occurrences as a critical metric. High error rates indicate:
- Redis infrastructure problems requiring immediate attention
- Network instability between services
- Insufficient Redis resources (CPU, memory, connections)
Implement circuit breaker patterns to temporarily disable Redis-dependent features when error rates exceed thresholds, allowing your application to degrade gracefully.
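A minimal circuit-breaker sketch around a Redis read; the class name, threshold, and cool-down are illustrative values, not tuned recommendations:
import Redis from 'ioredis';

const redis = new Redis({ host: 'your-redis-host', port: 6379, maxRetriesPerRequest: 5 });

class RedisCircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 5, private coolDownMs = 30_000) {}

  async get(key: string): Promise<string | null> {
    // While the circuit is open, skip Redis entirely until the cool-down passes
    if (this.failures >= this.threshold && Date.now() - this.openedAt < this.coolDownMs) {
      return null; // degraded mode: caller falls back to its default
    }
    try {
      const value = await redis.get(key);
      this.failures = 0; // success closes the circuit
      return value;
    } catch (error) {
      this.failures++;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw error;
    }
  }
}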
Connection vs request retries:
The maxRetriesPerRequest option is distinct from the retryStrategy option:
- retryStrategy: Controls whether the client keeps trying to connect and how long it waits between attempts, both for the initial connection and after a connection is lost
- maxRetriesPerRequest: Controls how many reconnection attempts each queued command gets after connection loss
Both work together to determine overall resilience. A command can fail with MaxRetriesPerRequestError even if retryStrategy keeps trying to reconnect.
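A compact way to see the division of labor; the values here are illustrative:
import Redis from 'ioredis';

const redis = new Redis({
  // Connection level: wait 1s between reconnection attempts, forever
  retryStrategy: () => 1000,
  // Command level: each queued command survives at most 3 reconnections
  maxRetriesPerRequest: 3,
});

// Even though retryStrategy never gives up, this command still fails with
// MaxRetriesPerRequestError after 3 reconnection attempts pass without it completing.
redis.get('key').catch((err) => console.error(err.name));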
Performance considerations:
Lowering maxRetriesPerRequest (e.g., to 1 or 5) makes applications fail faster during Redis outages, which can be desirable for API endpoints where users expect quick responses. This allows you to return cached data or error messages promptly rather than making users wait for retry timeouts.
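A sketch of the fail-fast pattern for an API path; the in-process cache and key names are illustrative:
import Redis from 'ioredis';

const redis = new Redis({
  host: 'your-redis-host',
  port: 6379,
  maxRetriesPerRequest: 1, // fail fast: give up after a single reconnection attempt
});

// Hypothetical in-process fallback cache
const localCache = new Map<string, string>();

async function getPreferences(userId: string): Promise<string | null> {
  const key = `prefs:${userId}`;
  try {
    const value = await redis.get(key);
    if (value !== null) localCache.set(key, value); // keep the fallback warm
    return value;
  } catch {
    // Redis unavailable: serve possibly-stale local data instead of waiting
    return localCache.get(key) ?? null;
  }
}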
Redis Sentinel and Cluster:
When using Redis Sentinel or Cluster mode, failover typically completes within seconds. A maxRetriesPerRequest of 20-30 is usually sufficient to survive automatic failovers without application errors. However, test your specific infrastructure to determine optimal values.
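For reference, a minimal Sentinel configuration; the sentinel addresses and the master group name 'mymaster' are placeholders for your deployment:
import Redis from 'ioredis';

const redis = new Redis({
  sentinels: [
    { host: 'sentinel-1', port: 26379 },
    { host: 'sentinel-2', port: 26379 },
  ],
  name: 'mymaster', // master group name configured in Sentinel
  // Enough retries to ride out a typical automatic failover
  maxRetriesPerRequest: 30,
  retryStrategy: (times) => Math.min(times * 200, 2000),
});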