This error occurs when the redis-py client encounters a socket exception while reading data from the Redis server. It typically happens due to connection resets, server-side connection closures, or network issues.
This error is raised by the redis-py client library when it encounters a socket.error or socket.timeout exception while reading data from the Redis server connection. It indicates that the TCP socket between your Python application and the Redis server was disrupted during a read operation. The error can surface in several variations, including "Connection reset by peer", "Connection closed by server", or simply a timeout during the read.
This typically happens when the connection is unexpectedly terminated, either by the server (due to timeout policies or resource limits) or by network issues between the client and the server. In production environments, the error is often related to idle connection timeouts: cloud providers like Azure or AWS automatically close connections that have been idle for a certain period (commonly 10 minutes). It can also occur during high-concurrency scenarios where connection pool resources are exhausted, or when network hiccups cause temporary disruptions.
Enable socket keepalive in your Redis connection to prevent idle connection timeouts:
import redis
import socket

# Configure the connection with TCP keepalive.
# Note: TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux socket options;
# other platforms expose different constants.
r = redis.Redis(
    host='localhost',
    port=6379,
    socket_keepalive=True,
    socket_keepalive_options={
        socket.TCP_KEEPIDLE: 60,   # Start keepalive probes after 60s idle
        socket.TCP_KEEPINTVL: 10,  # Interval between keepalive probes
        socket.TCP_KEEPCNT: 3      # Failed probes before dropping the connection
    }
)

This ensures the connection stays alive even during idle periods.
Use the health_check_interval parameter to periodically ping the server:
import redis

# Connection pool with health checks
pool = redis.ConnectionPool(
    host='localhost',
    port=6379,
    health_check_interval=30,   # Ping connections that have been idle for more than 30s
    socket_connect_timeout=5,
    socket_timeout=5
)
r = redis.Redis(connection_pool=pool)

This proactively detects and replaces stale connections before operations fail.
Wrap Redis operations in retry logic to handle transient connection errors:
import redis
from redis.retry import Retry
from redis.backoff import ExponentialBackoff

# Configure automatic retries
retry = Retry(ExponentialBackoff(), 3)  # Retry up to 3 times with exponential backoff
r = redis.Redis(
    host='localhost',
    port=6379,
    retry=retry,
    retry_on_error=[redis.exceptions.ConnectionError, redis.exceptions.TimeoutError]
)

# Or use a manual retry wrapper
from time import sleep

def redis_operation_with_retry(func, max_retries=3):
    for attempt in range(max_retries):
        try:
            return func()
        except redis.exceptions.ConnectionError:
            if attempt == max_retries - 1:
                raise
            sleep(2 ** attempt)  # Exponential backoff

# Usage
result = redis_operation_with_retry(lambda: r.get('mykey'))

Set explicit timeout values to prevent indefinite hangs:
import redis

r = redis.Redis(
    host='localhost',
    port=6379,
    socket_connect_timeout=5,  # Timeout for establishing the connection
    socket_timeout=5,          # Timeout for socket read/write operations
    socket_keepalive=True
)

For blocking commands like BLPOP, ensure socket_timeout is higher than the command timeout:
# If using BLPOP with a 30s timeout
r = redis.Redis(
    host='localhost',
    port=6379,
    socket_timeout=35  # Higher than the BLPOP timeout
)

# Use the blocking command
result = r.blpop('myqueue', timeout=30)

Implement connection pooling to manage connections efficiently:
import redis

# Create a shared connection pool
pool = redis.ConnectionPool(
    host='localhost',
    port=6379,
    max_connections=50,  # Limit pool size
    socket_keepalive=True,
    health_check_interval=30,
    socket_connect_timeout=5,
    socket_timeout=5
)

# Reuse the pool across your application
r = redis.Redis(connection_pool=pool)

# Always close the pool when shutting down
# pool.disconnect()

Ensure you're not creating new connection pools repeatedly, which can exhaust server resources.
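One way to keep a single pool is to create it once at module level and import the client wherever it is needed. The module and names below are illustrative, not part of redis-py:

# redis_client.py -- owns the one shared pool for the whole application
import redis

_pool = redis.ConnectionPool(
    host='localhost',
    port=6379,
    max_connections=50,
    socket_keepalive=True,
    health_check_interval=30
)
client = redis.Redis(connection_pool=_pool)

# elsewhere in the application:
# from redis_client import client
# client.set('mykey', 'value')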
Verify the Redis server hasn't reached its connection limit:
# Connect with the Redis CLI
redis-cli

# Check current connections
CLIENT LIST

# Check the maxclients setting
CONFIG GET maxclients

# Increase it if needed (requires admin access)
CONFIG SET maxclients 10000

For permanent changes, edit redis.conf:

# /etc/redis/redis.conf
maxclients 10000

Then restart the Redis server.
Cloud Redis Considerations:
Cloud providers like Azure Redis and AWS ElastiCache have specific idle timeout policies. Azure Redis closes idle connections after 10 minutes by default. Always enable socket_keepalive or health_check_interval when using cloud Redis services.
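As a minimal sketch for a managed Redis service (the hostname, port, and password below are placeholders; check your provider's documentation for the correct TLS port), combine TLS with keepalive and health checks:

import redis

r = redis.Redis(
    host='your-cache.example.net',  # placeholder endpoint
    port=6380,                      # placeholder TLS port; e.g. Azure Cache for Redis uses 6380
    password='your-access-key',     # placeholder credential
    ssl=True,                       # managed services typically require TLS
    socket_keepalive=True,
    health_check_interval=30,       # well below the provider's ~10 minute idle timeout
    socket_connect_timeout=5,
    socket_timeout=5
)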
Connection Pool Sizing:
The default max_connections in a redis-py connection pool is effectively unlimited. For most applications, setting a reasonable limit (e.g., 50-100 connections) prevents resource exhaustion on both the client and server sides. Monitor actual connection usage with CLIENT LIST.
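The same check can be scripted from Python. The sketch below assumes you set client_name when creating the client (an optional redis-py parameter) so this application's connections can be told apart:

import redis

r = redis.Redis(host='localhost', port=6379, client_name='my-app')

# CLIENT LIST via redis-py: one dict per connected client
clients = r.client_list()
print(f"Total connections to the server: {len(clients)}")

# Count only the connections opened by this application
mine = [c for c in clients if c.get('name') == 'my-app']
print(f"Connections from my-app: {len(mine)}")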
Network Proxy and Load Balancer Issues:
If Redis is behind a load balancer or proxy, ensure the intermediate layer doesn't have shorter timeout values than your client configuration. Some proxies aggressively close idle connections.
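For example, if the proxy drops connections idle for 60 seconds (a value assumed here purely for illustration), keep the client-side health check interval below that so stale connections are detected before the proxy silently closes them:

import redis

PROXY_IDLE_TIMEOUT = 60  # assumed proxy/load balancer idle timeout in seconds

r = redis.Redis(
    host='redis-proxy.internal',  # placeholder proxy endpoint
    port=6379,
    socket_keepalive=True,
    health_check_interval=PROXY_IDLE_TIMEOUT // 2  # validate well before the proxy cutoff
)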
Unix Domain Sockets:
If your Python application and Redis server are on the same machine, consider using Unix domain sockets instead of TCP for better performance and fewer connection issues:

r = redis.Redis(unix_socket_path='/var/run/redis/redis.sock')

Monitoring and Alerting:
Track ConnectionError rates in your application monitoring. A sudden spike often indicates Redis server issues, network problems, or deployment changes. Use structured logging to capture the full error details including the socket error code.
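A minimal logging sketch along these lines (the logger name and the captured fields are illustrative choices, not a fixed convention):

import logging
import redis

logger = logging.getLogger('redis-errors')

def get_with_logging(r, key):
    try:
        return r.get(key)
    except redis.exceptions.ConnectionError as exc:
        # Capture the full error details, including the underlying socket error text
        logger.error(
            'redis connection error',
            extra={'key': key, 'error': str(exc), 'error_type': type(exc).__name__},
        )
        raise

r = redis.Redis(host='localhost', port=6379)
value = get_with_logging(r, 'mykey')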