Redis Sentinel logs "+elected-leader: Won election for epoch" when a Sentinel successfully wins the leader election to perform a failover. This is a normal operational message indicating that the Sentinel has received a majority of votes and is now authorized to promote a replica to master. Understanding this message helps you monitor failover progress and verify that your high availability mechanisms are working correctly.
The "+elected-leader: Won election for epoch" message appears in Redis Sentinel logs when a Sentinel instance has successfully won the leader election for a specific configuration epoch. This is not an error but rather an informational message indicating that the distributed consensus protocol has completed and this Sentinel is now authorized to perform the failover operation. In Redis Sentinel's high-availability architecture, when a master is detected as down, one Sentinel must be chosen to coordinate the failover process. Sentinels use a voting mechanism similar to the Raft consensus algorithm, where each Sentinel can vote for one leader per epoch. The Sentinel that receives votes from the majority (quorum) wins the election and gains the authority to promote a replica to master. The epoch number is a monotonically increasing version counter that ensures each failover operation has a unique identifier. This prevents conflicts and ensures that all Sentinels eventually converge on the same configuration. When you see this message, it means the failover process is progressing normally and the elected Sentinel is about to promote a new master.
Check which Sentinel won the election and monitor the failover progress:
# Connect to any Sentinel instance
redis-cli -p 26379
# Check master status and current epoch
SENTINEL master <master-name>
# Look for relevant fields:
# - config-epoch: Current configuration version
# - leader-epoch: Epoch of the current/last failover leader
# - failover-state: Current failover stage (only present while a failover is in progress)
# Watch Sentinel events as they happen (Sentinel publishes them over Pub/Sub)
PSUBSCRIBE *

The config-epoch should match the epoch number from the log message. If failover-state shows a value such as select_slave or wait_promotion, the failover is actively progressing.
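The same check can be scripted. A small redis-py sketch (the connection details and the master name mymaster are placeholders) that folds the SENTINEL master reply into a dict and reads the relevant fields:

import redis

sentinel = redis.Redis(host="localhost", port=26379, decode_responses=True)

# SENTINEL MASTER returns a flat list of field/value pairs; fold it into a dict.
raw = sentinel.execute_command("SENTINEL", "MASTER", "mymaster")
state = dict(zip(raw[0::2], raw[1::2]))

print("config-epoch:  ", state["config-epoch"])
# failover-state is only reported while a failover is in progress
print("failover-state:", state.get("failover-state", "none"))
print("current master:", state["ip"], state["port"])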
Track the failover process from election to completion:
# Watch Sentinel logs in real-time
tail -f /var/log/redis/sentinel.log
# Expected sequence after +elected-leader:
# +elected-leader: Won election for epoch <N>
# +selected-slave: <replica-ip>:<port>
# +promoted-slave: <replica-ip>:<port>
# +failover-state-reconf-slaves: <master-name>
# +failover-end: <master-name> (failover finished)
# +switch-master: <master-name> <old-ip> <old-port> <new-ip> <new-port>
# Verify new master address
redis-cli -p 26379 SENTINEL get-master-addr-by-name <master-name>
# Confirm all Sentinels agree on the new master
redis-cli -p 26379 SENTINEL sentinels <master-name> | grep -i ip

The full failover typically completes within 30-60 seconds. If you see +elected-leader but no subsequent promotion messages, check for network issues or replica availability problems.
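To track that sequence end to end, you can subscribe to the Sentinel event stream and time the gap between the election and the end of the failover. A rough redis-py sketch; the host, port, and the 60-second threshold are assumptions based on the timing above.

import time
import redis

sentinel = redis.Redis(host="localhost", port=26379, decode_responses=True)
pubsub = sentinel.pubsub()
pubsub.psubscribe("*")  # every Sentinel event arrives on a channel of the same name

elected_at = None
for message in pubsub.listen():
    if message["type"] != "pmessage":
        continue
    channel, data = message["channel"], message["data"]
    if channel == "+elected-leader":
        elected_at = time.monotonic()
        print("leader elected:", data)
    elif channel == "+failover-end" and elected_at is not None:
        took = time.monotonic() - elected_at
        print(f"failover finished in {took:.1f}s:", data)
        if took > 60:
            print("WARNING: failover took longer than expected")
        elected_at = None
    elif channel == "+switch-master":
        # payload: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
        print("clients should now use:", " ".join(data.split()[-2:]))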
Ensure the election process is happening efficiently:
# Check quorum requirements
redis-cli -p 26379 SENTINEL master <master-name> | grep -A1 quorum
# View Sentinel configuration
grep -E "quorum|down-after-milliseconds|failover-timeout" /etc/redis/sentinel.conf
# Example configuration for a 3-Sentinel setup:
# quorum = 2 (a majority of 3)
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000

For N Sentinels, set the quorum to N/2+1 (e.g., 2 for 3 Sentinels, 3 for 5 Sentinels) so a failover is only triggered when a majority of Sentinels agree the master is down; note that winning the leader election always requires a majority of all Sentinels regardless of the configured quorum. If failure detection takes too long, reduce down-after-milliseconds (default 30000 ms), but avoid setting it so low that transient latency causes false positives.
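Sentinel can also report directly whether the current deployment is able to reach both the quorum and the majority needed for failover authorization, via SENTINEL CKQUORUM. A small sketch that runs the check against every Sentinel; the host list and master name are placeholders.

import redis

SENTINELS = [("sentinel1", 26379), ("sentinel2", 26379), ("sentinel3", 26379)]

for host, port in SENTINELS:
    node = redis.Redis(host=host, port=port, socket_timeout=1, decode_responses=True)
    try:
        # CKQUORUM checks that enough Sentinels are reachable to declare the
        # master down (quorum) and to authorize a failover (majority).
        reply = node.execute_command("SENTINEL", "CKQUORUM", "mymaster")
        print(f"{host}:{port} -> {reply}")
    except redis.RedisError as exc:
        print(f"{host}:{port} -> ERROR: {exc}")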
Ensure your applications successfully connect to the new master:
# Test connectivity to new master
redis-cli -h <new-master-ip> -p 6379 PING
# Check replication status on new master
redis-cli -h <new-master-ip> -p 6379 INFO replication
# Expected output:
# role:master
# connected_slaves:2 (or however many replicas you have)
# In your application, use Sentinel-aware clients:
# Node.js (ioredis):
const Redis = require('ioredis');
const redis = new Redis({
  sentinels: [
    { host: 'sentinel1', port: 26379 },
    { host: 'sentinel2', port: 26379 },
    { host: 'sentinel3', port: 26379 },
  ],
  name: 'mymaster',
});

Sentinel-aware clients automatically discover the new master after failover. Test by sending commands immediately after seeing the +elected-leader message; clients should reconnect within seconds.
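If your application is in Python rather than Node.js, redis-py ships an equivalent Sentinel-aware client. The sketch below (hostnames and the mymaster service name are placeholders) resolves the current master through the Sentinels instead of hard-coding its address:

from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("sentinel1", 26379), ("sentinel2", 26379), ("sentinel3", 26379)],
    socket_timeout=0.5,
)

# master_for() returns a client that re-resolves the master through the
# Sentinels, so it follows the promotion after a failover.
master = sentinel.master_for("mymaster", socket_timeout=0.5, decode_responses=True)
master.set("healthcheck", "ok")
print(master.get("healthcheck"))

# The master address currently advertised by the Sentinels:
print(sentinel.discover_master("mymaster"))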
Implement monitoring to track leader elections and failover events:
# Parse Sentinel logs for election events
grep -E "\+elected-leader|\+failover-end|\-failover-abort" /var/log/redis/sentinel.log
# Count elections over time (frequent elections indicate instability)
grep "+elected-leader" /var/log/redis/sentinel.log | wc -l
# Example monitoring with Prometheus/Grafana:
# - Alert on multiple elections within short timeframes
# - Track time from +elected-leader to +failover-end
# - Monitor epoch progression for anomalies
# Log aggregation query (Splunk-style; adapt to your log platform):
# message:"elected-leader" AND message:"Won election for epoch"
# | stats count by sentinel_host, epoch

A healthy Redis Sentinel setup should have zero to very few elections under normal operation. If you see multiple +elected-leader messages per day, investigate master stability, network reliability, or down-after-milliseconds tuning.
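A tiny log-parsing sketch along the same lines, counting elections per day straight from the Sentinel log; the log path and the line layout are assumptions, since Redis log formatting differs slightly between versions.

from collections import Counter

LOG_PATH = "/var/log/redis/sentinel.log"
per_day = Counter()

with open(LOG_PATH) as log:
    for line in log:
        if "+elected-leader" not in line:
            continue
        # Typical line: "<pid>:X 05 Mar 2024 12:00:00.123 # +elected-leader ..."
        parts = line.split()
        day = " ".join(parts[1:4]) if len(parts) >= 4 else "unknown"
        per_day[day] += 1

for day, count in sorted(per_day.items()):
    flag = "  <-- investigate" if count > 1 else ""
    print(f"{day}: {count} election(s){flag}")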
Redis Sentinel's election algorithm is inspired by the Raft consensus protocol but adapted for Redis's specific needs. Each Sentinel maintains a global current_epoch (tracking the latest epoch it has seen) and, for each monitored master, a failover_epoch tied to the failover attempt. When a Sentinel wins an election, it performs the promotion and then propagates the updated configuration, stamped with the new configuration epoch, to the other Sentinels over the __sentinel__:hello Pub/Sub channel.
The voting process has several safeguards: (1) each Sentinel can vote for at most one leader per epoch, (2) votes are only granted to requesters whose epoch is equal to or higher than the voter's current epoch, and (3) because votes are exclusive per epoch, no two Sentinels can both gather a majority for the same epoch. This prevents two Sentinels from running conflicting failovers for the same epoch, even during network partitions.
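These rules are small enough to model. The sketch below is a simplified illustration of the "one vote per epoch, majority wins" logic, not Redis's actual C implementation; the class and function names are invented for the example.

# Simplified model of Sentinel leader voting: each voter grants at most one
# vote per epoch, and only for requests at its current epoch or higher.
class VoterSim:
    def __init__(self):
        self.current_epoch = 0
        self.voted_for = {}  # epoch -> candidate id

    def request_vote(self, candidate, epoch):
        if epoch < self.current_epoch:
            return False                       # stale request, refuse
        self.current_epoch = epoch             # adopt the higher epoch
        if epoch not in self.voted_for:
            self.voted_for[epoch] = candidate  # first valid request gets the vote
        return self.voted_for[epoch] == candidate


def run_election(candidate, epoch, voters):
    # The candidate votes for itself, so it needs a majority of all Sentinels.
    votes = 1 + sum(v.request_vote(candidate, epoch) for v in voters)
    return votes > (len(voters) + 1) // 2


peers = [VoterSim() for _ in range(4)]                    # 5 Sentinels: 4 peers + the candidate
print(run_election("sentinel-A", epoch=1, voters=peers))  # True: 5 of 5 votes
print(run_election("sentinel-B", epoch=1, voters=peers))  # False: epoch 1 votes already granted
print(run_election("sentinel-B", epoch=2, voters=peers))  # True: a new epoch means new votes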
In production environments, consider these best practices:
1. Deploy odd numbers of Sentinels (3, 5, or 7) across different availability zones or data centers to ensure clean majority decisions
2. Set quorum carefully: For 3 Sentinels, use quorum=2; for 5 Sentinels, use quorum=3. Never set quorum=1 in production
3. Monitor epoch progression: Large gaps or rapid increases in epoch numbers indicate instability (see the sketch after this list)
4. Use Sentinel-aware clients: These automatically handle master address changes without manual intervention
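For the epoch-progression check in point 3, a periodic poll of every Sentinel is enough. The sketch below flags both disagreement between Sentinels and sudden jumps; the hosts, master name, polling interval, and jump threshold are placeholders to adapt to your monitoring stack.

import time
import redis

SENTINELS = [("sentinel1", 26379), ("sentinel2", 26379), ("sentinel3", 26379)]
last_epoch = None

while True:
    epochs = {}
    for host, port in SENTINELS:
        try:
            node = redis.Redis(host=host, port=port, socket_timeout=1, decode_responses=True)
            raw = node.execute_command("SENTINEL", "MASTER", "mymaster")
            state = dict(zip(raw[0::2], raw[1::2]))
            epochs[f"{host}:{port}"] = int(state["config-epoch"])
        except redis.RedisError as exc:
            print(f"{host}:{port} unreachable: {exc}")

    if epochs:
        if len(set(epochs.values())) > 1:
            print("Sentinels disagree on config-epoch:", epochs)
        current = max(epochs.values())
        if last_epoch is not None and current > last_epoch + 1:
            print(f"config-epoch jumped from {last_epoch} to {current}; investigate stability")
        last_epoch = current

    time.sleep(60)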
For extremely high-traffic systems (100k+ ops/sec), the brief write unavailability during failover (typically 5-15 seconds with tuned timeouts, longer with the defaults) may still impact SLAs. In these cases, consider Redis Cluster for automatic client-side routing, or implement application-level circuit breakers to gracefully handle failover windows.
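As a rough illustration of that last point, a retry wrapper with a short back-off is often enough to ride out the election window. The sketch below uses redis-py; the attempt count, delay, and key name are assumptions, and a real circuit breaker would track failure rates and open/close states rather than simply retrying.

import time
import redis
from redis.sentinel import Sentinel

sentinel = Sentinel([("sentinel1", 26379), ("sentinel2", 26379)], socket_timeout=0.5)
master = sentinel.master_for("mymaster", socket_timeout=0.5, decode_responses=True)

def with_failover_retry(op, attempts=5, delay=1.0):
    """Retry a Redis operation across a Sentinel failover window."""
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except (redis.ConnectionError, redis.TimeoutError):
            if attempt == attempts:
                raise              # give up and surface the error to the caller
            time.sleep(delay)      # wait for Sentinel to advertise the new master

value = with_failover_retry(lambda: master.get("some-key"))
print(value)

Pair this with a Sentinel-aware client, as shown earlier, so the retries only need to cover the short window before the new master is advertised.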