The "ERR timeout reading from client" error occurs when Redis cannot read a command or response from a client within the configured timeout period. This commonly happens with long-running Lua scripts, slow commands, network issues, or misconfigured timeouts. Fixing requires optimizing scripts, adjusting timeout values, and investigating network conditions.
The "ERR timeout reading from client" error indicates that Redis has been waiting for incoming data from a connected client but did not receive it within the configured timeout threshold. This can occur on the server side when Redis expects data from a client (like command continuation or pipelined commands) or on the client side when the client cannot read the response from the server in time. This error is particularly common in scenarios involving Lua script execution, where long-running scripts block the Redis server and cause other clients to timeout while waiting for responses. Since Redis is single-threaded, any operation that takes excessive time prevents the server from processing other client requests, leading to timeout errors for waiting clients. The timeout mechanism is a safety feature to prevent zombie connections and resource exhaustion. However, legitimate operations—such as complex Lua scripts, large bulk operations, or commands running under high load—may exceed the default timeout, causing this error.
The SLOWLOG command reveals which operations are exceeding time thresholds and may be causing timeouts.
# Connect to Redis:
redis-cli
# View recent slow commands:
SLOWLOG GET 10
# Output format:
1) 1) "13547312"
2) "1698765432"
3) "5234000" # Execution time in microseconds
4) "EVAL"
5) "..."
# Check the slow log configuration (max length and logging threshold):
CONFIG GET slowlog-max-len
CONFIG GET slowlog-log-slower-than
# If needed, lower the threshold to catch more slow commands:
CONFIG SET slowlog-log-slower-than 1000 # Log commands slower than 1 ms
Look for EVAL/EVALSHA commands with execution times exceeding your client timeout settings.
Client timeout values should be higher than typical server response times, especially if running complex operations.
Node.js (redis/node-redis):
import { createClient } from 'redis';
const client = createClient({
  socket: {
    host: 'localhost',
    port: 6379,
    connectTimeout: 5000, // 5 seconds to establish the connection
    socketTimeout: 30000, // 30 seconds of socket inactivity (recent node-redis versions)
  }
});
Python (redis-py):
import redis
client = redis.Redis(
    host='localhost',
    port=6379,
    socket_connect_timeout=5,
    socket_timeout=30  # 30 seconds read timeout
)
Java (Lettuce):
import io.lettuce.core.RedisURI;
import java.time.Duration;

RedisURI uri = RedisURI.builder()
    .withHost("localhost")
    .withPort(6379)
    .withTimeout(Duration.ofSeconds(30)) // command/read timeout
    .build();
Recommended starting values: 30-60 seconds for normal operations, 120+ seconds for Lua scripts.
Long-running scripts block the entire Redis server. Break them into smaller operations or use more efficient logic.
-- BEFORE: Slow script that writes 10,000 keys in one invocation
for i = 1, 10000 do
    redis.call('SET', KEYS[1] .. ':' .. i, ARGV[i])
end
return 10000

-- AFTER: Each invocation handles only a small batch; ARGV[1] is the starting
-- index and ARGV[2..] are the values, so the client calls the script
-- repeatedly instead of doing all 10,000 writes at once
local start = tonumber(ARGV[1])
for i = 2, #ARGV do
    redis.call('SET', KEYS[1] .. ':' .. (start + i - 2), ARGV[i])
end
return #ARGV - 1
-- Alternatively, move bulk operations to the client side
Best practices (a client-side pipelining sketch follows this list):
- Minimize loops and string operations inside Lua
- Use pipelining from the client for bulk operations instead of Lua
- Avoid KEYS *, SCAN, or expensive lookups inside scripts
- Consider if the logic actually needs to be in Lua for atomicity
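As a sketch of that client-side alternative, the bulk write can be pipelined with the node-redis client from the earlier example (bulkSet, baseKey, and values are illustrative names, not Redis APIs):
// Client-side pipelining: queue all SETs locally, send them in one round trip
async function bulkSet(baseKey, values) {
  const pipeline = client.multi(); // commands are buffered, not sent yet
  values.forEach((value, i) => {
    pipeline.set(`${baseKey}:${i + 1}`, value);
  });
  // execAsPipeline() flushes the queue without wrapping it in MULTI/EXEC
  return pipeline.execAsPipeline();
}
Each individual SET completes in microseconds on the server, so other clients are never starved the way they are while a long Lua loop runs.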
The lua-time-limit setting (renamed busy-reply-threshold in Redis 7) controls how long a script may run before Redis starts answering other clients with BUSY and accepts SCRIPT KILL; it does not stop the script itself. Increase it if scripts legitimately need more time.
# View current setting:
redis-cli CONFIG GET lua-time-limit
# Default: 5000 (5 seconds)
# Increase to 30 seconds:
redis-cli CONFIG SET lua-time-limit 30000
# To persist across restarts, add to redis.conf:
echo 'lua-time-limit 30000' >> /etc/redis/redis.conf
# Restart Redis:
sudo systemctl restart redis-server
CAUTION: Raising this threshold means a runaway script can hold the server longer before you can intervene, potentially worsening client timeouts. Only increase it if scripts are legitimately complex and you cannot optimize them further.
Timeouts are often transient; retrying with exponential backoff can help.
// Node.js example:
async function executeWithRetry(command, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await client.sendCommand(command);
    } catch (error) {
      if (error.message.includes('timeout') && i < maxRetries - 1) {
        const backoff = Math.pow(2, i) * 1000; // 1s, 2s, 4s
        await new Promise(resolve => setTimeout(resolve, backoff));
        continue;
      }
      throw error;
    }
  }
}
// Usage:
try {
  const result = await executeWithRetry(['SET', key, value]);
} catch (error) {
  console.error('Failed after retries:', error);
}
Don't retry indefinitely; use exponential backoff and max retry counts.
Network issues can cause timeouts even if Redis is functioning normally.
# Check latency from the client:
redis-cli -h <redis-host> --latency-history # Run from the client host so network round trips are included
# Or measure average round-trip time:
redis-cli -h <redis-host> --latency
# Check for packet loss:
ping -c 100 <redis-host> # Look for % packet loss
# Monitor network on Redis server:
sudo netstat -an | grep 6379
sudo ss -tuln | grep 6379
# Check for connection timeouts:
sudo iptables -L -n # Verify firewall rules allow Redis port
# Test from client host:
telnet <redis-host> 6379
If latency exceeds 50ms consistently, investigate network routing, firewalls, or move the client closer to Redis.
Persistence operations can block Redis temporarily, causing client timeouts.
# Check if BGSAVE or BGREWRITEAOF are running:
redis-cli INFO persistence
# Look for:
# rdb_bgsave_in_progress:0 (1 = bgsave running)
# aof_rewrite_in_progress:0 (1 = aof rewrite running)
# Check SLOWLOG for persistence-related delays
redis-cli SLOWLOG GET 5
# If snapshots are too frequent, adjust in redis.conf:
# save 900 1 # Snapshot after 900 s if at least 1 key changed
# save 300 10 # Snapshot after 300 s if at least 10 keys changed
# save 60 10000 # Snapshot after 60 s if at least 10,000 keys changed
# Or disable RDB entirely (if using AOF only):
# save "" # Remove all save linesDuring RDB snapshots, Redis is still responsive but may process commands slowly, causing timeouts for long-running operations.
For Lua script timeouts specifically: exceeding lua-time-limit does not kill the script. Redis keeps executing it and starts answering other clients with a "BUSY" error, at which point SCRIPT KILL becomes available. SCRIPT KILL only works if the script has not yet performed a write; otherwise SHUTDOWN NOSAVE is the only way to terminate it forcefully.
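A minimal sketch of reacting to that state from Node.js (assuming the node-redis client shown earlier; the key name is illustrative):
// If a command is rejected because a script is hogging the server, try SCRIPT KILL.
// It succeeds only if the script has not written anything yet.
try {
  await client.get('health:check');
} catch (error) {
  if (error.message.startsWith('BUSY')) {
    await client.sendCommand(['SCRIPT', 'KILL']);
  } else {
    throw error;
  }
}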
In clustered Redis deployments, timeout issues may stem from cross-slot operations or cluster resharding. Ensure clients use cluster-aware APIs, such as redis-py's RedisCluster or Lettuce's RedisClusterClient.
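With node-redis, the cluster-aware entry point is createCluster; a minimal sketch (node addresses are illustrative):
import { createCluster } from 'redis';

const cluster = createCluster({
  rootNodes: [
    { url: 'redis://10.0.0.1:6379' },
    { url: 'redis://10.0.0.2:6379' },
  ],
  defaults: {
    socket: { connectTimeout: 5000 }, // applied to every node connection
  },
});

await cluster.connect();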
For Azure Cache for Redis, idle connections are terminated after 10 minutes. Configure your client with a keepalive heartbeat if connections remain idle. Similarly, AWS ElastiCache enforces connection limits per node; monitor the connected_clients counter with INFO clients.
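With node-redis, an application-level heartbeat can be configured via the pingInterval option (the host below is illustrative):
const azureClient = createClient({
  url: 'rediss://<your-cache>.redis.cache.windows.net:6380',
  pingInterval: 60000, // send PING every 60 s so the connection never sits idle for 10 minutes
});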
When running Lua scripts in production, always test with timeout values matching your environment. Use redis-benchmark to profile behavior under load: redis-benchmark -t set,get -n 100000 for general throughput, or pass your script as an arbitrary command, e.g. redis-benchmark -n 10000 eval "<your script>" 0.
If timeouts occur only under load, consider implementing connection pooling and rate limiting on the client side. Monitor Redis CPU (check top, htop) to determine if the server is CPU-bound; if so, optimize commands, scale out with Redis Cluster, or offload reads to replicas (with Sentinel handling failover).
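Connection pooling depends on the client library, but a minimal in-process rate limit needs no extra dependencies. The sketch below (createLimiter and the cap of 50 are illustrative choices) caps how many Redis commands are in flight at once:
// Plain-JS concurrency limiter: excess commands wait in a queue instead of
// piling up behind one slow command on the server
function createLimiter(maxInFlight) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxInFlight || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    Promise.resolve()
      .then(fn)
      .then(resolve, reject)
      .finally(() => { active--; next(); });
  };
  return (fn) => new Promise((resolve, reject) => {
    queue.push({ fn, resolve, reject });
    next();
  });
}

// Usage: allow at most 50 commands in flight
const limited = createLimiter(50);
const value = await limited(() => client.get('some:key'));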