Redis is an in-memory data structure store that requires careful memory management to maintain optimal performance. When Redis approaches its memory limit, it must decide which keys to remove to make space for new data. This process is called memory eviction.
```mermaid
flowchart TD
    A[Redis Instance] --> B{Memory Usage Check}
    B -->|Below maxmemory| C[Accept New Data]
    B -->|At maxmemory| D[Apply Eviction Policy]
    D --> E[Select Keys to Evict]
    E --> F[Remove Selected Keys]
    F --> G[Accept New Data]
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style D fill:#bbf,stroke:#333,stroke-width:2px
    style E fill:#fbb,stroke:#333,stroke-width:2px
```
**Interview Insight:** Why is memory management crucial in Redis?

- Redis stores all data in RAM for fast access
- Uncontrolled memory growth can lead to system crashes
- Proper eviction prevents OOM (Out of Memory) errors
- Eviction maintains predictable performance characteristics
## Redis Memory Eviction Policies

Redis offers 8 different eviction policies, each serving different use cases:

### LRU-Based Policies

#### allkeys-lru

Evicts the least recently used keys across all keys in the database.
```bash
# Configuration
CONFIG SET maxmemory-policy allkeys-lru

# Example scenario
SET user:1001 "John Doe"     # Time: T1
GET user:1001                # Access at T2
SET user:1002 "Jane Smith"   # Time: T3

# If memory is full, user:1001 is more likely to be evicted,
# since its last access (T2) is older than user:1002's (T3)
```
**Best Practice:** Use when you have a natural access pattern where some data is accessed more frequently than other data.
#### volatile-lru

Evicts the least recently used keys only among keys with an expiration set.

```bash
# Setup
SET session:abc123 "user_data" EX 3600   # With expiration
SET config:theme "dark"                  # Without expiration

# Only session:abc123 is eligible for LRU eviction
```

**Use Case:** Session management where you want to preserve configuration data.
### LFU-Based Policies

#### allkeys-lfu

Evicts the least frequently used keys across all keys.

```bash
# Example: Access frequency tracking
SET product:1 "laptop"     # Accessed 100 times
SET product:2 "mouse"      # Accessed 5 times
SET product:3 "keyboard"   # Accessed 50 times

# product:2 (mouse) would be evicted first due to lowest frequency
```
#### volatile-lfu

Evicts the least frequently used keys only among keys with expiration.

**Interview Insight:** When would you choose LFU over LRU?

- LFU is better for data with consistent access patterns
- LRU is better for data with temporal locality
- LFU prevents cache pollution from occasional bulk operations
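The cache-pollution point is easy to see in a toy simulation (a pure-Python sketch, not how Redis tracks recency or frequency internally): a one-off bulk scan pushes hot keys out of an LRU cache, while an LFU cache keeps them.

```python
# Toy eviction simulator contrasting LRU and LFU (illustrative sketch only)
def simulate(policy, capacity, accesses):
    """Replay an access trace and return the keys left in the cache."""
    cache = {}  # key -> [last_access_time, access_count]
    for t, key in enumerate(accesses):
        if key in cache:
            cache[key] = [t, cache[key][1] + 1]
            continue
        if len(cache) >= capacity:
            if policy == "lru":
                victim = min(cache, key=lambda k: cache[k][0])  # oldest access
            else:  # "lfu"
                victim = min(cache, key=lambda k: cache[k][1])  # lowest frequency
            del cache[victim]
        cache[key] = [t, 1]
    return set(cache)

# Hot keys hit repeatedly, then a one-off bulk scan of cold keys
trace = ["hot:1", "hot:2"] * 10 + ["scan:1", "scan:2", "scan:3"]

print(simulate("lru", 3, trace))  # the scan pushed both hot keys out
print(simulate("lfu", 3, trace))  # the hot keys survive the scan
```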
### Random Policies

#### allkeys-random

Randomly selects keys for eviction across all keys.

#### volatile-random

Randomly selects keys for eviction only among keys with expiration.

**When to Use Random Policies:**

- When access patterns are completely unpredictable
- For testing and development environments
- When you need simple, fast eviction decisions
### TTL-Based Policy

#### volatile-ttl

Evicts keys with expiration, prioritizing those with the shortest remaining TTL.

```bash
# Example scenario
SET cache:data1 "value1" EX 3600   # Expires in 1 hour
SET cache:data2 "value2" EX 1800   # Expires in 30 minutes
SET cache:data3 "value3" EX 7200   # Expires in 2 hours

# cache:data2 will be evicted first (shortest TTL)
```
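The selection rule itself fits in a few lines of Python (a simulation of the policy's behavior, not Redis code): among keys that have an expiry, evict the one with the least time left.

```python
import time

def pick_volatile_ttl_victim(expiry_times, now=None):
    """Return the key with the shortest remaining TTL, or None if
    no key has an expiry set (mirrors the volatile-ttl policy)."""
    now = now if now is not None else time.time()
    candidates = {k: t for k, t in expiry_times.items() if t is not None}
    if not candidates:
        return None  # nothing is eligible for volatile-ttl eviction
    return min(candidates, key=candidates.get)

# Expiry timestamps; None marks a key without expiration
now = 1_000_000
expiries = {
    "cache:data1": now + 3600,   # 1 hour left
    "cache:data2": now + 1800,   # 30 minutes left
    "cache:data3": now + 7200,   # 2 hours left
    "config:theme": None,        # never eligible
}
print(pick_volatile_ttl_victim(expiries, now=now))  # cache:data2
```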
### No Eviction Policy

#### noeviction

Returns errors when the memory limit is reached instead of evicting keys.

```bash
CONFIG SET maxmemory-policy noeviction

# When memory is full:
SET new_key "value"
# Error: OOM command not allowed when used memory > 'maxmemory'
```

**Use Case:** Critical systems where data loss is unacceptable.
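The semantics can be mimicked with a small wrapper (a hypothetical sketch using an item-count cap instead of a byte limit): new writes fail once the cap is reached, while reads and overwrites of existing keys still succeed.

```python
class NoEvictionCache:
    """Minimal sketch of noeviction semantics with an item-count cap."""
    def __init__(self, max_items):
        self.max_items = max_items
        self.data = {}

    def set(self, key, value):
        # Overwriting an existing key never needs new space
        if key not in self.data and len(self.data) >= self.max_items:
            raise MemoryError("OOM command not allowed when used memory > 'maxmemory'")
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

cache = NoEvictionCache(max_items=2)
cache.set("a", 1)
cache.set("b", 2)
cache.set("a", 10)        # overwrite is still allowed
try:
    cache.set("c", 3)     # new key: rejected, not evicted
except MemoryError as e:
    print("rejected:", e)
```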
## Memory Limitation Strategies

### Why Limit Cache Memory?
```mermaid
flowchart LR
    A[Unlimited Memory] --> B[System Instability]
    A --> C[Unpredictable Performance]
    A --> D[Resource Contention]
    E[Limited Memory] --> F[Predictable Behavior]
    E --> G[System Stability]
    E --> H[Better Resource Planning]
    style A fill:#fbb,stroke:#333,stroke-width:2px
    style E fill:#bfb,stroke:#333,stroke-width:2px
```
**Production Reasons:**

- **System Stability:** Prevents Redis from consuming all available RAM
- **Performance Predictability:** Maintains consistent response times
- **Multi-tenancy:** Allows multiple services to coexist
```bash
# Set maximum memory limit (512MB)
CONFIG SET maxmemory 536870912

# Set eviction policy
CONFIG SET maxmemory-policy allkeys-lru

# Check current memory usage
INFO memory
```
### Using Lua Scripts for Advanced Memory Control

#### Limiting Key-Value Pairs
```lua
-- limit_keys.lua: cap the total number of keys in the database
local max_keys = tonumber(ARGV[1])
local current_keys = redis.call('DBSIZE')
local evicted = nil

if current_keys >= max_keys then
    -- Make room by deleting a random key
    evicted = redis.call('RANDOMKEY')
    if evicted then
        redis.call('DEL', evicted)
    end
end

-- Add the new key
redis.call('SET', KEYS[1], ARGV[2])
if evicted then
    return "Evicted key: " .. evicted
end
return "Key added successfully"
```
#### Memory-Aware Writes

```lua
-- memory_aware_set.lua: Check memory before setting
local key = KEYS[1]
local value = ARGV[1]
local memory_threshold = tonumber(ARGV[2])  -- percentage, e.g. 80

-- Parse memory stats from INFO (MEMORY USAGE only reports per-key sizes)
local info = redis.call('INFO', 'memory')
local used_memory = tonumber(string.match(info, 'used_memory:(%d+)'))
local max_memory = tonumber(string.match(info, 'maxmemory:(%d+)'))

if max_memory > 0 and used_memory > (max_memory * memory_threshold / 100) then
    -- Trigger manual cleanup: sample a random key, drop it if large
    local candidate = redis.call('RANDOMKEY')
    if candidate then
        local key_memory = redis.call('MEMORY', 'USAGE', candidate)
        if key_memory and key_memory > 1000 then  -- key uses more than 1KB
            redis.call('DEL', candidate)
        end
    end
end

redis.call('SET', key, value)
return "Key set with memory check"
```
## Practical Cache Eviction Solutions

### Big Object Evict First Strategy

This strategy prioritizes evicting large objects to free the maximum amount of memory quickly.
```lua
-- big_object_evict.lua: evict the largest objects first
local function get_object_size(key)
    return redis.call('MEMORY', 'USAGE', key) or 0
end

local function evict_big_objects(count)
    -- NOTE: KEYS * blocks the server; prefer SCAN in production
    local all_keys = redis.call('KEYS', '*')
    local big_keys = {}
    for i, key in ipairs(all_keys) do
        local size = get_object_size(key)
        if size > 1000 then  -- larger than 1KB
            table.insert(big_keys, {key, size})
        end
    end

    -- Sort by size (largest first)
    table.sort(big_keys, function(a, b) return a[2] > b[2] end)

    local evicted = 0
    for i = 1, math.min(count, #big_keys) do
        redis.call('DEL', big_keys[i][1])
        evicted = evicted + 1
    end
    return evicted
end

return evict_big_objects(tonumber(ARGV[1]) or 10)
```
### Cold Data Evict First Strategy

A complementary approach tracks each key's last access time in a sorted set and evicts keys that have gone cold:

```python
import time
import redis

class ColdDataEviction:
    def __init__(self, redis_client):
        self.redis = redis_client
        self.access_tracking_key = "access_log"

    def get_with_tracking(self, key):
        # Record access time
        now = int(time.time())
        self.redis.zadd(self.access_tracking_key, {key: now})
        # Get value
        return self.redis.get(key)

    def set_with_tracking(self, key, value):
        now = int(time.time())
        # Set value and track access in one round trip
        pipe = self.redis.pipeline()
        pipe.set(key, value)
        pipe.zadd(self.access_tracking_key, {key: now})
        pipe.execute()

    def evict_cold_data(self, days_threshold=7, max_evict=100):
        """Evict data not accessed within threshold days"""
        cutoff_time = int(time.time()) - (days_threshold * 24 * 3600)

        # Get cold keys (accessed before cutoff time)
        cold_keys = self.redis.zrangebyscore(
            self.access_tracking_key, 0, cutoff_time,
            start=0, num=max_evict
        )

        evicted_count = 0
        if cold_keys:
            pipe = self.redis.pipeline()
            for key in cold_keys:
                pipe.delete(key)
                pipe.zrem(self.access_tracking_key, key)
                evicted_count += 1
            pipe.execute()
        return evicted_count

    def get_access_stats(self):
        """Get access statistics"""
        now = int(time.time())
        day_ago = now - 86400
        week_ago = now - (7 * 86400)

        recent_keys = self.redis.zrangebyscore(self.access_tracking_key, day_ago, now)
        weekly_keys = self.redis.zrangebyscore(self.access_tracking_key, week_ago, now)
        total_keys = self.redis.zcard(self.access_tracking_key)

        return {
            'total_tracked_keys': total_keys,
            'accessed_last_day': len(recent_keys),
            'accessed_last_week': len(weekly_keys),
            'cold_keys': total_keys - len(weekly_keys),
        }
```
```python
# Usage example
cold_eviction = ColdDataEviction(redis.Redis())

# Use with tracking
cold_eviction.set_with_tracking("user:1001", "user_data")
value = cold_eviction.get_with_tracking("user:1001")

# Evict data not accessed in 7 days
evicted = cold_eviction.evict_cold_data(days_threshold=7)
print(f"Evicted {evicted} cold data items")

# Get statistics
stats = cold_eviction.get_access_stats()
print(f"Access stats: {stats}")
```
## Algorithm Deep Dive

### LRU Implementation Details

Redis uses an approximate LRU algorithm for efficiency:
```mermaid
flowchart TD
    A[Key Access] --> B[Update LRU Clock]
    B --> C{Memory Full?}
    C -->|No| D[Operation Complete]
    C -->|Yes| E[Sample Random Keys]
    E --> F[Calculate LRU Score]
    F --> G[Select Oldest Key]
    G --> H[Evict Key]
    H --> I[Operation Complete]
    style E fill:#bbf,stroke:#333,stroke-width:2px
    style F fill:#fbb,stroke:#333,stroke-width:2px
```
**Interview Question:** Why doesn't Redis use true LRU?

- True LRU requires maintaining a doubly-linked list of all keys
- This would consume significant memory overhead
- Approximate LRU samples random keys and picks the best candidate
- It provides good-enough results with much better performance
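The sampling idea can be sketched in a few lines (pure Python, and only an approximation of the concept; real Redis additionally keeps a pool of good candidates across evictions). The sample size corresponds to the `maxmemory-samples` setting.

```python
import random

def approx_lru_victim(last_access, samples=5, rng=random):
    """Approximate LRU: sample `samples` random keys and evict the one
    with the oldest access time, instead of tracking a global ordering."""
    pool = rng.sample(list(last_access), min(samples, len(last_access)))
    return min(pool, key=last_access.get)

rng = random.Random(42)  # seeded so the run is reproducible
last_access = {f"key:{i}": i for i in range(100)}  # key:0 is the true LRU victim

# Larger samples make the pick converge toward the true LRU victim
for samples in (5, 10, 50):
    print(samples, approx_lru_victim(last_access, samples, rng))
```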
### LFU Implementation Details

Redis LFU uses a probabilistic counter that decays over time:
```python
# Simplified LFU counter simulation
import time
import random

class LFUCounter:
    def __init__(self):
        self.counter = 0
        self.last_access = time.time()

    def increment(self):
        # Probabilistic increment based on current counter:
        # higher counters increment less frequently
        probability = 1.0 / (self.counter * 10 + 1)
        if random.random() < probability:
            self.counter += 1
        self.last_access = time.time()

    def decay(self, decay_time_minutes=1):
        # Decay counter over time
        now = time.time()
        minutes_passed = (now - self.last_access) / 60
        if minutes_passed > decay_time_minutes:
            decay_amount = int(minutes_passed / decay_time_minutes)
            self.counter = max(0, self.counter - decay_amount)
            self.last_access = now

# Example usage
counter = LFUCounter()
for _ in range(100):
    counter.increment()
print(f"Counter after 100 accesses: {counter.counter}")
```
## Choosing the Right Eviction Policy

### Decision Matrix
```mermaid
flowchart TD
    A[Choose Eviction Policy] --> B{Data has TTL?}
    B -->|Yes| C{Preserve non-expiring data?}
    B -->|No| D{Access pattern known?}
    C -->|Yes| E[volatile-lru/lfu/ttl]
    C -->|No| F[allkeys-lru/lfu]
    D -->|Temporal locality| G[allkeys-lru]
    D -->|Frequency based| H[allkeys-lfu]
    D -->|Unknown/Random| I[allkeys-random]
    J{Can tolerate data loss?} -->|No| K[noeviction]
    J -->|Yes| L[Choose based on pattern]
    style E fill:#bfb,stroke:#333,stroke-width:2px
    style G fill:#bbf,stroke:#333,stroke-width:2px
    style H fill:#fbb,stroke:#333,stroke-width:2px
```
### Use Case Recommendations

| Use Case | Recommended Policy | Reason |
|---|---|---|
| Web session store | `volatile-lru` | Sessions have TTL, preserve config data |
| Cache layer | `allkeys-lru` | Recent data more likely to be accessed |
| Analytics cache | `allkeys-lfu` | Popular queries accessed frequently |
| Rate limiting | `volatile-ttl` | Remove expired limits first |
| Database cache | `allkeys-lfu` | Hot data accessed repeatedly |
### Production Configuration Example

```conf
# redis.conf production settings
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 10
```
Beyond the built-in policies, a Lua script can encode business rules, evicting the lowest-priority key first:

```lua
-- Custom: Evict based on business priority
local function business_priority_evict()
    local keys = redis.call('KEYS', '*')  -- use SCAN in production
    local priorities = {}
    for i, key in ipairs(keys) do
        local priority = redis.call('HGET', key .. ':meta', 'business_priority')
        if priority then
            table.insert(priorities, {key, tonumber(priority)})
        end
    end

    -- Sort ascending so the lowest priority comes first
    table.sort(priorities, function(a, b) return a[2] < b[2] end)

    if #priorities > 0 then
        redis.call('DEL', priorities[1][1])
        return priorities[1][1]
    end
    return nil
end

return business_priority_evict()
```
## Best Practices Summary

### Configuration Best Practices

- **Set appropriate maxmemory:** 80% of available RAM for dedicated Redis instances
- **Choose policy based on use case:** LRU for temporal patterns, LFU for frequency patterns
- **Monitor continuously:** Track hit ratios, eviction rates, and memory usage
- **Test under load:** Verify eviction behavior matches expectations
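The 80% rule of thumb translates directly into a `maxmemory` value (back-of-the-envelope arithmetic; the 4 GiB host size is just an example):

```python
def maxmemory_for_dedicated_instance(total_ram_bytes, fraction=0.8):
    """Suggested maxmemory: leave headroom for Redis overhead,
    copy-on-write during forks, and the OS itself."""
    return int(total_ram_bytes * fraction)

GIB = 1024 ** 3
suggested = maxmemory_for_dedicated_instance(4 * GIB)
print(f"CONFIG SET maxmemory {suggested}")  # roughly 3.2 GiB on a 4 GiB host
```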
This comprehensive guide provides the foundation for implementing effective memory eviction strategies in Redis production environments. The combination of theoretical understanding and practical implementation examples ensures robust cache management that scales with your application needs.