The 'OOM command not allowed when used memory exceeds maxmemory' error occurs when a Redis instance reaches its configured memory limit and the maxmemory policy prevents new writes. This happens when the cache is full and Redis cannot store additional data without exceeding the memory threshold.
The "OOM command not allowed when used memory exceeds maxmemory" error indicates that your Redis instance has reached the memory limit defined by the `maxmemory` configuration parameter and cannot accept new write operations. This error occurs because: - Redis has a configured memory limit (`maxmemory`) that prevents it from using more RAM than specified - Your Redis instance has stored enough data to reach this limit - The current eviction policy (if set to `noeviction`) prevents Redis from removing old data to make room for new entries - New write commands (SET, LPUSH, INCR, etc.) are rejected to prevent exceeding the memory limit This is a safety mechanism to prevent Redis from consuming unlimited memory and causing system crashes. However, it requires you to either increase the memory limit, remove unused data, or adjust the eviction policy.
First, diagnose the problem by inspecting Redis memory settings:
# Using redis-cli
redis-cli info memory
# Key metrics to check:
# - used_memory: Current memory used in bytes
# - maxmemory: Configured memory limit (in bytes)
# - maxmemory_policy: Current eviction policy
# - mem_fragmentation_ratio: Memory fragmentation level
You'll see output like:
# Memory
used_memory:1073741824
maxmemory:1073741824
maxmemory_policy:noeviction
mem_fragmentation_ratio:1.05
If used_memory equals maxmemory and maxmemory_policy is noeviction, this confirms the problem.
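The same check can be scripted. A rough sketch using redis-py against a local instance (the INFO field names match the redis-cli output above):
import redis

r = redis.Redis(host='localhost', port=6379)
mem = r.info('memory')
used = mem['used_memory']
limit = mem['maxmemory']            # 0 means no limit is configured
policy = mem['maxmemory_policy']
if limit and used >= limit and policy == 'noeviction':
    print(f'OOM condition: {used}/{limit} bytes used with the noeviction policy')
elif limit:
    print(f'Memory usage: {used / limit:.0%} of maxmemory (policy: {policy})')
else:
    print(f'No maxmemory limit set; currently using {used} bytes')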
The simplest solution is to increase the memory limit if your server has available RAM:
Option A: Using redis-cli (temporary, lost on restart)
# Connect to Redis
redis-cli
# Set new maxmemory limit (in bytes)
config set maxmemory 2147483648 # 2GB example
# Verify
config get maxmemory
Option B: Edit redis.conf (persistent)
# Find redis.conf location (usually /etc/redis/redis.conf or similar)
sudo nano /etc/redis/redis.conf
# Find the line starting with 'maxmemory'
# Uncomment and adjust:
maxmemory 2147483648
# Or if using Redis Enterprise/AWS ElastiCache, increase through their web console
Option C: Using Docker (if containerized)
# Set memory limit in Dockerfile or docker-compose.yml
services:
  redis:
    image: redis:latest
    command: redis-server --maxmemory 2gb --maxmemory-policy allkeys-lru
Restart Redis after changes (if editing config file):
# Method 1: Using systemctl
sudo systemctl restart redis-server
# Method 2: Using docker
docker restart <container_name>
# Method 3: Manual restart (WARNING: drops all data if persistence is not enabled)
redis-cli SHUTDOWN
redis-server /etc/redis/redis.conf
If increasing memory isn't feasible, change the eviction policy to automatically remove old data:
Check current policy:
redis-cli config get maxmemory-policy
# Output: maxmemory-policy: noeviction
Set a more aggressive eviction policy:
# Using redis-cli (temporary)
redis-cli config set maxmemory-policy allkeys-lru
# allkeys-lru: Removes least recently used keys from all keys
# This is usually the best choice for caches
# Other options:
# - allkeys-lfu: Removes least frequently used keys
# - allkeys-random: Removes random keys
# - volatile-lru: Removes least recently used keys WITH expiration set
# - volatile-lfu: Removes least frequently used keys WITH expiration set
# - volatile-random: Removes random keys WITH expiration set
# - volatile-ttl: Removes keys with shortest TTL
Persistent change (edit redis.conf):
sudo nano /etc/redis/redis.conf
# Change:
maxmemory-policy noeviction
# To:
maxmemory-policy allkeys-lru
If using volatile eviction, add TTL to keys:
For policies like volatile-lru to work, keys must have expiration times:
redis-cli
# Set expiration on keys (example)
SET mykey "value" EX 3600 # Expires in 1 hour
EXPIRE existingkey 3600 # Add expiration to existing key
# Check TTL
TTL mykey
If you need to quickly free up space, delete unused keys:
Option A: Flush entire cache (WARNING: destroys all data)
redis-cli FLUSHALL # Removes all data from all databases
redis-cli FLUSHDB # Removes all data from current database only
Option B: Delete specific keys
# Delete a single key
redis-cli DEL key_name
# Delete keys matching a pattern (SCAN-based; KEYS can block the server on large databases)
redis-cli --scan --pattern "pattern:*" | xargs redis-cli DEL
# Delete keys from a specific database
redis-cli -n 1 FLUSHDB # Database 1 only
Option C: Clear expired keys
redis-cli
# Redis removes expired keys in the background and lazily on access; you cannot force a full sweep
# Check how many keys exist and how many of them have a TTL
DBSIZE # Total keys in the current database
INFO keyspace # Shows keys=<total>,expires=<keys with TTL> per database
# KEYS * lists all keys but blocks the server; prefer SCAN on large databases
# Keys without a TTL will never expire automatically
# Identify and remove them manually or with a script, for example:
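A minimal sketch of such a script, assuming redis-py: it walks the keyspace with SCAN and reports keys that have no TTL, with optional actions commented out (the 3600-second TTL is an arbitrary example):
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
# scan_iter uses SCAN under the hood, so it does not block the server
for key in r.scan_iter(count=1000):
    if r.ttl(key) == -1:        # -1 means the key exists but has no expiration
        print(f'No TTL: {key!r}')
        # r.expire(key, 3600)   # uncomment to give it a 1-hour TTL
        # r.delete(key)         # or uncomment to remove it outright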
Option D: Use memory profiling to find large keys
# In redis-cli, use MEMORY DOCTOR for diagnostics
redis-cli MEMORY DOCTOR
# Find biggest keys (Redis 4.0+)
redis-cli --bigkeys
# Get memory usage per key
redis-cli --memkeys
Configure your application to set expiration times on cache keys:
Node.js/JavaScript example:
// Using the redis package (v3 callback API)
const redis = require('redis');
const client = redis.createClient();
// Set key with expiration (10 minutes = 600 seconds)
client.setex('cache_key', 600, 'cache_value', (err, reply) => {
  if (err) console.error(err);
});
// Or using async/await with redis v4+ (call await client.connect() first)
await client.setEx('cache_key', 600, 'cache_value');
Python example:
import redis
r = redis.Redis(host='localhost', port=6379, db=0)
# Set with expiration (600 seconds)
r.setex('cache_key', 600, 'cache_value')
# Or use expire()
r.set('cache_key', 'cache_value')
r.expire('cache_key', 600)
Java example:
Jedis jedis = new Jedis("localhost", 6379);
// Set with expiration time in seconds
jedis.setex("cache_key", 600, "cache_value");
bash/redis-cli example:
redis-cli SET mykey "value" EX 3600 # 1 hour expiration
Set up monitoring to prevent future OOM errors:
Check memory usage regularly:
# Get detailed memory info
redis-cli info memory
# Monitor in real-time
redis-cli --stat
Key metrics to track:
- used_memory: Actual memory consumed
- used_memory_peak: Highest memory ever used
- mem_fragmentation_ratio: Should be close to 1.0; >1.5 indicates fragmentation
- evicted_keys: Number of keys removed by eviction policy
- rejected_connections: Connections refused because the maxclients limit was hit (not a memory metric, but worth watching alongside the others)
Calculate appropriate maxmemory:
# Formula:
# maxmemory = (used_memory_peak * 1.2) + optional extra buffer
# Example:
# If peak usage was 800MB, set maxmemory to about 960MB (800 * 1.2)
# This allows 20% headroom for temporary spikes
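As a rough sketch, the same calculation can be driven from the live used_memory_peak value (assuming redis-py; the 1.2 factor mirrors the 20% headroom above):
import redis

r = redis.Redis(host='localhost', port=6379)
peak = r.info('memory')['used_memory_peak']
suggested = int(peak * 1.2)   # 20% headroom over the observed peak
print(f'Peak usage: {peak} bytes')
print(f'Suggested maxmemory: {suggested} bytes')
# Apply at runtime if desired (also persist it in redis.conf so it survives restarts):
# r.config_set('maxmemory', suggested)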
For cloud deployments (AWS, Azure, GCP):
1. Use CloudWatch/Monitoring dashboards to track memory trends
2. Set up alerts when memory usage exceeds 80% of maxmemory
3. Use read replicas to distribute load
4. Consider Redis Cluster for horizontal scaling
High memory fragmentation can cause Redis to appear full when it isn't:
Check fragmentation ratio:
redis-cli info memory | grep mem_fragmentation_ratio
# Ratio explanation:
# 1.0 = No fragmentation (ideal)
# 1.0-1.5 = Acceptable
# >1.5 = Significant fragmentation, consider action
Fix fragmentation:
# Option 1: Restart Redis (loses all data in memory-only mode)
redis-cli SHUTDOWN
redis-server
# Option 2: Enable active defragmentation (Redis 4.0+)
redis-cli
# Enable defragmentation
config set activedefrag yes
# Start active defrag once fragmentation exceeds this percentage (default 10)
config set active-defrag-threshold-lower 10
# Restart or wait for automatic defragmentation
Note: Defragmentation runs in the background and doesn't block commands, but it uses CPU. Enable it during off-peak hours if possible.
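A small sketch that checks the fragmentation ratio and turns on active defragmentation only when it looks worthwhile, assuming redis-py and a Redis build with jemalloc (active defrag is unavailable otherwise; the 1.5 threshold follows the guideline above):
import redis

r = redis.Redis(host='localhost', port=6379)
ratio = r.info('memory')['mem_fragmentation_ratio']
print(f'mem_fragmentation_ratio: {ratio}')
if ratio > 1.5:
    # CONFIG SET raises an error if this build was not compiled with jemalloc
    r.config_set('activedefrag', 'yes')
    print('Active defragmentation enabled')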
Understanding Redis Memory Management
Redis is an in-memory database, meaning all data is stored in RAM. The maxmemory setting prevents unlimited growth that would eventually crash the server.
Memory Calculation:
Redis memory includes (see the sketch after this list for measuring individual keys):
- Actual data: Key names + values
- Internal structures: Hash tables, linked lists, etc.
- Encoding overhead: Different data types use different amounts of memory
- Replication buffers: If replication is enabled
- AOF buffer: If Append-Only File is enabled
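To see what an individual key actually costs (name, value, and overhead), you can sample the keyspace with the MEMORY USAGE command. A sketch assuming redis-py, which exposes it as memory_usage(); it prints the 20 largest sampled keys:
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
sizes = []
for key in r.scan_iter(count=500):
    bytes_used = r.memory_usage(key)   # approximate bytes, including overhead
    if bytes_used is not None:         # key may have expired mid-scan
        sizes.append((bytes_used, key))
for bytes_used, key in sorted(sizes, reverse=True)[:20]:
    print(f'{bytes_used:>10} bytes  {key!r}')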
Eviction Policies Explained:
- noeviction: Reject writes when full (your current error)
- allkeys-lru: Evict any key, choosing least recently used (best for caches)
- volatile-lru: Only evict keys with TTL, choosing least recently used
- allkeys-lfu: Evict any key, choosing least frequently used
- volatile-lfu: Only evict keys with TTL, choosing least frequently used
Choose based on your use case:
- Cache: Use allkeys-lru
- Sessions with TTL: Use volatile-lru
- Queues: Use volatile-ttl (evict keys expiring soonest)
Memory Fragmentation
The memory allocator (jemalloc or libc malloc) may allocate more memory than Redis actually needs. This appears as "free" memory inside Redis's allocated space but is unavailable to other processes. Solutions:
1. Restart Redis (expensive)
2. Use jemalloc, which handles fragmentation better than libc malloc (it is the default allocator in Linux builds of Redis)
3. Enable active defragmentation (Redis 4.0+)
Troubleshooting Tips
1. `WRONGTYPE Operation against a key holding the wrong kind of value`: Can appear after eviction or expiry removes a key and another code path recreates it with a different type
2. Slow performance despite memory available: Check mem_fragmentation_ratio and eviction activity
3. Memory grows unbounded: Missing TTL on keys or inefficient data structure
AWS ElastiCache Specifics
- Use parameter groups to set maxmemory-policy
- Monitor with CloudWatch DatabaseMemoryUsagePercentage metric
- Scale vertically (larger node) or horizontally (cluster mode)
Redis 7.0+ Features
- Improved memory efficiency with faster encoding
- Better eviction policy performance
- Enhanced active defragmentation