A socket timeout occurs when redis-py cannot read a response from Redis within the configured time limit. This typically happens with slow networks, blocking commands, or idle connections closed by the server.
When you see this error, it means the redis-py client waited for a response from the Redis server but did not receive data within the socket_timeout interval. This happens at the network layer: the client's socket has a deadline for reading bytes from the server, and that deadline expired. The Redis server might be slow to respond, the network might be congested, or the connection might be stale (especially in cloud environments like Azure Redis, which closes idle connections after 10 minutes). Unlike connection timeouts, which occur during the initial TCP handshake, socket timeouts occur after a connection is established and a command has been sent.
The socket_timeout parameter controls how long redis-py waits for a response. Start by increasing it to allow more time for the server to respond:
import redis

# Set socket_timeout to 5 seconds (default is often too low)
r = redis.Redis(
    host="localhost",
    port=6379,
    socket_timeout=5,          # response timeout, in seconds
    socket_connect_timeout=5,  # connection (TCP handshake) timeout, in seconds
    decode_responses=True
)

try:
    result = r.get("mykey")
except redis.TimeoutError:
    print("Command timed out")

Choose a value appropriate for your use case. For most applications, 5-10 seconds is reasonable. For heavy operations, use 30+ seconds.
Cloud providers often close idle connections. Enable socket keep-alive to detect dead connections early:
import socket

import redis

r = redis.Redis(
    host="localhost",
    port=6379,
    socket_timeout=5,
    socket_keepalive=True,
    socket_keepalive_options={
        socket.TCP_KEEPIDLE: 1,   # start keep-alive probes after 1 second of idle
        socket.TCP_KEEPINTVL: 1,  # send a keep-alive probe every 1 second
        socket.TCP_KEEPCNT: 2,    # close the connection after 2 failed probes
    },
    decode_responses=True
)

This aggressively detects broken connections and reconnects, preventing long hangs. (The TCP_KEEP* constants shown are available on Linux; the names differ on some other platforms.)
redis-py 5.0+ supports automatic retries with exponential backoff for transient errors:
import redis
from redis.retry import Retry
from redis.backoff import ExponentialBackoff
from redis.exceptions import TimeoutError, ConnectionError

r = redis.Redis(
    host="localhost",
    port=6379,
    socket_timeout=5,
    retry=Retry(
        ExponentialBackoff(cap=10, base=1),
        retries=25  # retry up to 25 times
    ),
    retry_on_error=[
        ConnectionError,
        TimeoutError,
        ConnectionResetError,
    ],
    health_check_interval=1,
    decode_responses=True
)

# Automatically retries with exponential backoff on timeout
result = r.get("mykey")

This configuration retries failed commands with increasing delays (1s, 2s, 4s, ..., capped at 10s) before giving up.
When using blocking commands like BLPOP or BRPOP, the command timeout must be less than socket_timeout:
import redis

r = redis.Redis(
    host="localhost",
    port=6379,
    socket_timeout=30,  # socket timeout: 30 seconds
    decode_responses=True
)

# Command timeout (20s) is shorter than socket_timeout (30s)
# This prevents a TimeoutError from socket_timeout firing during the blocking wait
result = r.blpop("myqueue", timeout=20)

If your BLPOP timeout is 120 seconds but socket_timeout is 5 seconds, the socket will time out before the command completes. Set socket_timeout higher than your blocking command timeout, or catch the exception and retry, as sketched below.
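A minimal sketch of the catch-and-retry approach. The blpop_with_retry helper and its parameters are illustrative, not part of redis-py; it keeps each BLPOP call shorter than socket_timeout so the socket deadline normally never fires, and swallows the occasional TimeoutError so the wait can continue:

import redis

r = redis.Redis(host="localhost", port=6379, socket_timeout=5, decode_responses=True)

def blpop_with_retry(client, key, total_wait=120, per_call_timeout=3):
    """Wait up to total_wait seconds for an item, using short BLPOP calls
    so each call returns before socket_timeout (5s) can fire."""
    waited = 0
    while waited < total_wait:
        try:
            item = client.blpop(key, timeout=per_call_timeout)
            if item is not None:
                return item
        except redis.TimeoutError:
            # The socket timed out; the connection is re-established on the next call
            pass
        waited += per_call_timeout
    return None

result = blpop_with_retry(r, "myqueue")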
Sometimes the issue is not the client but the Redis server:
# Check Redis response time and memory usage
redis-cli --latency
redis-cli info stats | grep total_commands_processed
redis-cli info memory | grep used_memory_human

# In Python, measure Redis latency
import redis
import time

r = redis.Redis(host="localhost", port=6379)

start = time.time()
for _ in range(1000):
    r.ping()
latency = (time.time() - start) / 1000
print(f"Average latency: {latency*1000:.2f}ms")

If latency is consistently high (>100ms), increase socket_timeout. If the Redis server is CPU-bound or memory-constrained, upgrade resources or offload work.
For critical blocking operations, create a dedicated client without a socket_timeout to avoid interruption:
import redis

# Main client for non-blocking operations
r = redis.Redis(
    host="localhost",
    port=6379,
    socket_timeout=5,  # short timeout
    decode_responses=True
)

# Dedicated client for blocking operations (no timeout)
blocking_r = redis.Redis(
    host="localhost",
    port=6379,
    socket_timeout=None,  # no socket timeout
    decode_responses=True
)

# Non-blocking commands
value = r.get("key")

# Blocking commands (won't be interrupted by socket timeout)
result = blocking_r.blpop("queue", timeout=300)

This prevents blocking operations from being killed by socket timeouts while keeping normal commands responsive.
Understanding socket_timeout vs socket_connect_timeout: socket_connect_timeout controls the TCP handshake timeout during connection setup, while socket_timeout controls how long the client waits for data after a command is sent. In redis-py < 2.10, these were the same parameter.
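For illustration, a minimal sketch separating the two phases; the host 10.0.0.1 and the timeout values are placeholders. Because redis-py creates the connection lazily, both kinds of timeout surface when a command is issued, so the example catches both exception types:

import redis

r = redis.Redis(
    host="10.0.0.1",           # placeholder host
    port=6379,
    socket_connect_timeout=2,  # applies while the TCP connection is being established
    socket_timeout=5,          # applies while waiting for a reply to a sent command
)

try:
    r.ping()  # the connection is created lazily, on the first command
except (redis.ConnectionError, redis.TimeoutError) as exc:
    # A failure during connect or during the read both surface here
    print(f"Redis unavailable or slow: {exc}")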
Azure Redis timeout issue: Azure closes idle connections after 10 minutes by default. If you have a connection that sits unused for 600+ seconds, Azure will close it server-side, causing subsequent commands to time out. Enable socket_keepalive to send periodic probes before the connection dies.
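As a complement to keep-alive, redis-py's health_check_interval (shown earlier in the retry example) makes the client PING a connection that has sat idle longer than the interval before reusing it, so a server-closed connection is replaced instead of failing your command. A sketch, with a 60-second interval chosen as an assumption:

import redis

# If a connection has been idle for more than health_check_interval seconds,
# redis-py sends a PING before reusing it and reconnects if the PING fails.
r = redis.Redis(
    host="localhost",
    port=6379,
    socket_timeout=5,
    socket_keepalive=True,
    health_check_interval=60,  # assumption: check connections idle for >60s
    decode_responses=True
)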
Pipeline and timeout interaction: When using redis.pipeline(), the socket_timeout applies to the entire pipeline execution, not individual commands. A large pipeline with many commands can exceed the timeout even if each command is fast. Either increase socket_timeout for pipelines or split large operations into smaller batches.
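A sketch of splitting a large pipeline into smaller batches; the key names and the batch size of 500 are arbitrary assumptions to tune against your socket_timeout:

import redis

r = redis.Redis(host="localhost", port=6379, socket_timeout=5, decode_responses=True)

keys = [f"key:{i}" for i in range(10_000)]
batch_size = 500  # assumption: small enough that each batch finishes well under socket_timeout

results = []
for start in range(0, len(keys), batch_size):
    pipe = r.pipeline()
    for key in keys[start:start + batch_size]:
        pipe.get(key)
    # Each execute() sends one batch, keeping the round trip short
    results.extend(pipe.execute())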
Retry side effects: Automatic retries can mask transient issues. Monitor retry rates in production. High retry rates (>5%) indicate network or server problems that should be investigated.
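One way to get a rough retry-rate signal is to count timeouts around your own calls and export the ratio to your metrics system. A minimal sketch; the wrapper and counters are illustrative, not part of redis-py:

import redis

r = redis.Redis(host="localhost", port=6379, socket_timeout=5, decode_responses=True)

calls = 0
timeouts = 0

def timed_get(key):
    """Illustrative wrapper that counts timeouts so a timeout/retry rate can be reported."""
    global calls, timeouts
    calls += 1
    try:
        return r.get(key)
    except redis.TimeoutError:
        timeouts += 1
        raise

# Report timeouts / calls to your monitoring system;
# a sustained rate above ~5% is worth investigating.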