Redis implements multiple expiration deletion strategies to efficiently manage memory and ensure optimal performance. Understanding these mechanisms is crucial for building scalable, high-performance applications.
Interview Insight: “How does Redis handle expired keys?” - Redis uses a combination of lazy deletion and active deletion strategies. It doesn’t immediately delete expired keys but employs intelligent algorithms to balance performance and memory usage.
Core Expiration Deletion Policies
Lazy Deletion (Passive Expiration)
Lazy deletion is the primary mechanism where expired keys are only removed when they are accessed.
How it works:
When a client attempts to access a key, Redis checks if it has expired
If expired, the key is deleted immediately and a nil reply is returned
No background scanning or proactive deletion occurs
```python
# Example: Lazy deletion in action
import redis
import time

r = redis.Redis()

# Set a key with 2-second expiration
r.setex('temp_key', 2, 'temporary_value')

# Wait past the TTL
time.sleep(3)

# Key is deleted only when accessed (lazy deletion)
print(r.get('temp_key'))  # None
```
Advantages:
Minimal CPU overhead
No background processing required
Perfect for frequently accessed keys
Disadvantages:
Memory waste if expired keys are never accessed
Unpredictable memory usage patterns
Active Deletion (Proactive Scanning)
Redis periodically scans and removes expired keys to prevent memory bloat.
Algorithm Details:
Redis runs expiration cycles approximately 10 times per second
Each cycle samples 20 random keys from the expires dictionary
If more than 25% are expired, repeat the process
Maximum execution time per cycle is limited to prevent blocking
```mermaid
flowchart TD
    A[Start Expiration Cycle] --> B[Sample 20 Random Keys]
    B --> C{More than 25% expired?}
    C -->|Yes| D[Delete Expired Keys]
    D --> E{Time limit reached?}
    E -->|No| B
    E -->|Yes| F[End Cycle]
    C -->|No| F
    F --> G[Wait ~100ms]
    G --> A
```
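The sampling loop described above can be sketched in plain Python. This is a simplified model for illustration, not Redis's C implementation; `expires` stands in for the internal expires dictionary, and the 20-key sample, 25% threshold, and per-cycle time budget mirror the documented algorithm:

```python
import random
import time

def expiration_cycle(expires, now, sample_size=20, threshold=0.25,
                     time_limit_ms=25):
    """Repeatedly sample keys and delete the expired ones until the
    expired ratio drops below the threshold or the time budget is spent.

    expires: dict mapping key -> absolute expiry timestamp
    now:     current time, compared against the stored timestamps
    """
    deadline = time.monotonic() + time_limit_ms / 1000
    while expires:
        sample = random.sample(list(expires), min(sample_size, len(expires)))
        expired = [k for k in sample if expires[k] <= now]
        for k in expired:
            del expires[k]
        # Stop when 25% or fewer of the sampled keys were expired
        if len(expired) <= threshold * len(sample):
            break
        # Stop when the per-cycle time budget is exhausted
        if time.monotonic() >= deadline:
            break
    return expires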
Configuration Parameters:
```
# Redis configuration for active expiration
hz 10                   # Frequency of background tasks (10 Hz = 10 times/second)
active-expire-effort 1  # CPU effort for active expiration (1-10)
```
Timer-Based Deletion
While Redis doesn’t implement traditional timer-based deletion, you can simulate it using sorted sets:
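A minimal sketch of that approach: each member's score in the sorted set is its absolute expiry timestamp, and a small reaper task deletes whatever is due. The `expiry:index` key name and batch size are illustrative, and `r` is assumed to be a `redis.Redis`-compatible client passed in by the caller:

```python
import time

def schedule_expiry(r, key, ttl_seconds, index="expiry:index"):
    """Record the key's absolute expiry timestamp as a sorted-set score."""
    r.zadd(index, {key: time.time() + ttl_seconds})

def reap_expired(r, index="expiry:index", batch=100):
    """Timer loop body: delete any keys whose scheduled time has passed."""
    now = time.time()
    due = r.zrangebyscore(index, 0, now, start=0, num=batch)
    if due:
        r.delete(*due)       # remove the data keys themselves
        r.zrem(index, *due)  # remove their schedule entries
    return due
```

Run `reap_expired` from a periodic task (cron, a worker queue, or a sleep loop). Unlike Redis's built-in TTLs, this gives you an inspectable deletion schedule you can query with ZRANGEBYSCORE.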
Interview Insight: “What’s the difference between active and passive expiration?” - Passive (lazy) expiration only occurs when keys are accessed, while active expiration proactively scans and removes expired keys in background cycles to prevent memory bloat.
Redis Expiration Policies (Eviction Policies)
When Redis reaches memory limits, it employs eviction policies to free up space:
Available Eviction Policies
```
# Configuration in redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru
```
Policy Types:
noeviction (default)
No keys are evicted
Write operations return errors when memory limit reached
Use case: Critical data that cannot be lost
allkeys-lru
Removes least recently used keys from all keys
Use case: General caching scenarios
allkeys-lfu
Removes least frequently used keys
Use case: Applications with distinct access patterns
volatile-lru
Removes LRU keys only from keys with expiration set
Use case: Mixed persistent and temporary data
volatile-lfu
Removes LFU keys only from keys with expiration set
allkeys-random
Randomly removes keys
Use case: When access patterns are unpredictable
volatile-random
Randomly removes keys with expiration set
volatile-ttl
Removes keys with shortest TTL first
Use case: Time-sensitive data prioritization
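The policy can also be switched at runtime via CONFIG SET, without restarting the server. A sketch, assuming `r` is a `redis.Redis`-like client and the values shown are illustrative:

```python
def apply_eviction_policy(r, policy="allkeys-lru", maxmemory="2gb"):
    """Set the memory limit and eviction policy at runtime via CONFIG SET."""
    r.config_set("maxmemory", maxmemory)
    r.config_set("maxmemory-policy", policy)
    # Read back the effective setting to confirm it took effect
    return r.config_get("maxmemory-policy")
```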
Policy Selection Guide
```mermaid
flowchart TD
    A[Memory Pressure] --> B{All data equally important?}
    B -->|Yes| C[allkeys-lru/lfu]
    B -->|No| D{Temporary vs Persistent data?}
    D -->|Mixed| E[volatile-lru/lfu]
    D -->|Time-sensitive| F[volatile-ttl]
    C --> G{High access pattern variance?}
    G -->|Yes| H[allkeys-lfu]
    G -->|No| I[allkeys-lru]
```
Master-Slave Cluster Expiration Mechanisms
Replication of Expiration
In Redis clusters, expiration handling follows specific patterns:
Master-Slave Expiration Flow:
Only masters perform active expiration
Masters send explicit DEL commands to slaves
Slaves don’t independently expire keys (except for lazy deletion)
```mermaid
sequenceDiagram
    participant M as Master
    participant S1 as Slave 1
    participant S2 as Slave 2
    participant C as Client

    Note over M: Active expiration cycle
    M->>M: Check expired keys
    M->>S1: DEL expired_key
    M->>S2: DEL expired_key

    C->>S1: GET expired_key
    S1->>S1: Lazy expiration check
    S1->>C: NULL (key expired)
```
Production Example - Redis Sentinel with Expiration:
```python
import redis.sentinel

# Sentinel configuration for high availability
sentinels = [('localhost', 26379), ('localhost', 26380), ('localhost', 26381)]
sentinel = redis.sentinel.Sentinel(sentinels)

# Get master and slave connections
master = sentinel.master_for('mymaster', socket_timeout=0.1)
slave = sentinel.slave_for('mymaster', socket_timeout=0.1)

# Write to master with expiration
master.setex('session:user:1', 3600, 'session_data')

# Read from slave (expiration handled consistently)
session_data = slave.get('session:user:1')
```
Interview Insight: “How does Redis handle expiration in a cluster?” - In Redis clusters, only master nodes perform active expiration. When a master expires a key, it sends explicit DEL commands to all slaves to maintain consistency.
Durability and Expired Keys
RDB Persistence
Expired keys are handled during RDB operations:
```
# RDB configuration
save 900 1      # Save if at least 1 key changed in 900 seconds
save 300 10     # Save if at least 10 keys changed in 300 seconds
save 60 10000   # Save if at least 10000 keys changed in 60 seconds

# Expired keys are not saved to RDB files
rdbcompression yes
rdbchecksum yes
```
Hot Key Handling
When a single key receives a disproportionate share of traffic, it can overload one node. The manager below combines three mitigation strategies: local caching, key replication, and fragmentation:

```python
import redis
import random
import threading
from collections import defaultdict

class HotKeyManager:
    def __init__(self):
        self.redis = redis.Redis()
        self.access_stats = defaultdict(int)
        self.hot_key_threshold = 1000  # Requests per minute

    def get_with_hot_key_protection(self, key):
        """Get value with hot key protection"""
        self.access_stats[key] += 1
        # Check if key is hot
        if self.access_stats[key] > self.hot_key_threshold:
            return self._handle_hot_key(key)
        return self.redis.get(key)

    def _handle_hot_key(self, hot_key):
        """Handle hot key with multiple strategies"""
        strategies = [
            self._local_cache_strategy,
            self._replica_strategy,
            self._fragmentation_strategy
        ]
        # Choose strategy based on key characteristics
        return random.choice(strategies)(hot_key)

    def _local_cache_strategy(self, key):
        """Use local cache for hot keys"""
        local_cache_key = f"local:{key}"
        # Check local cache first (simulate with Redis)
        local_value = self.redis.get(local_cache_key)
        if local_value:
            return local_value
        # Get from main cache and store locally
        value = self.redis.get(key)
        if value:
            # Short TTL for local cache
            self.redis.setex(local_cache_key, 60, value)
        return value

    def _replica_strategy(self, key):
        """Create multiple replicas of hot key"""
        replica_count = 5
        replica_key = f"{key}:replica:{random.randint(1, replica_count)}"
        # Try to get from replica
        value = self.redis.get(replica_key)
        if not value:
            # Get from master and update replica
            value = self.redis.get(key)
            if value:
                self.redis.setex(replica_key, 300, value)  # 5 min TTL
        return value

    def _fragmentation_strategy(self, key):
        """Fragment hot key into smaller pieces"""
        # For large objects, split into fragments
        fragments = []
        fragment_index = 0
        while True:
            fragment_key = f"{key}:frag:{fragment_index}"
            fragment = self.redis.get(fragment_key)
            if not fragment:
                break
            fragments.append(fragment)
            fragment_index += 1
        if fragments:
            return b''.join(fragments)
        return self.redis.get(key)

# Usage example
hot_key_manager = HotKeyManager()
value = hot_key_manager.get_with_hot_key_protection('popular_product:123')
```
Predictive Caching and Pre-loading
Another optimization is predictive pre-loading: fetch data the user is likely to request next and cache it with a shorter TTL, refreshing entries in the background before they expire:

```python
import threading

class PredictiveCacheManager:
    def __init__(self, redis_client):
        self.redis = redis_client

    def preload_related_data(self, primary_key, related_keys_func, short_ttl=300):
        """
        Pre-load related data with shorter TTL
        Useful for pagination, related products, etc.
        """
        # Get related keys that might be accessed soon
        related_keys = related_keys_func(primary_key)
        pipeline = self.redis.pipeline()
        for related_key in related_keys:
            # Check if already cached
            if not self.redis.exists(related_key):
                # Pre-load with shorter TTL
                related_data = self._fetch_data(related_key)
                pipeline.setex(related_key, short_ttl, related_data)
        pipeline.execute()

    def cache_with_prefetch(self, key, value, ttl=3600, prefetch_ratio=0.1):
        """
        Cache data and trigger prefetch when TTL is near expiration
        """
        self.redis.setex(key, ttl, value)
        # Set a prefetch trigger at 90% of TTL
        prefetch_ttl = int(ttl * prefetch_ratio)
        prefetch_key = f"prefetch:{key}"
        self.redis.setex(prefetch_key, ttl - prefetch_ttl, "trigger")

    def check_and_prefetch(self, key, refresh_func):
        """Check if prefetch is needed and refresh in background"""
        prefetch_key = f"prefetch:{key}"
        if not self.redis.exists(prefetch_key):
            # Prefetch trigger expired - refresh in background
            threading.Thread(
                target=self._background_refresh,
                args=(key, refresh_func)
            ).start()

    def _background_refresh(self, key, refresh_func):
        """Refresh data in background before expiration"""
        try:
            new_value = refresh_func()
            current_ttl = self.redis.ttl(key)
            if current_ttl > 0:
                # Extend current key TTL and set new value
                self.redis.setex(key, current_ttl + 3600, new_value)
        except Exception as e:
            # Log error but don't fail main request
            print(f"Background refresh failed for {key}: {e}")

# Example usage for e-commerce
def get_related_product_keys(product_id):
    """Return keys for related products, reviews, recommendations"""
    return [
        f"product:{product_id}:reviews",
        f"product:{product_id}:recommendations",
        f"product:{product_id}:similar",
        f"category:{get_category(product_id)}:featured"
    ]

# Pre-load when user views a product
predictive_cache = PredictiveCacheManager(redis_client)
predictive_cache.preload_related_data(
    f"product:{product_id}",
    get_related_product_keys,
    short_ttl=600  # 10 minutes for related data
)
```
Q: How does Redis handle expiration in a master-slave setup, and what happens during failover?
A: In Redis replication, only the master performs expiration logic. When a key expires on the master (either through lazy or active expiration), the master sends an explicit DEL command to all slaves. Slaves never expire keys independently - they wait for the master’s instruction.
During failover, the promoted slave becomes the new master and starts handling expiration. However, there might be temporary inconsistencies because:
The old master might have expired keys that weren’t yet replicated
Clock differences can cause timing variations
Some keys might appear “unexpired” on the new master
Production applications should handle these edge cases by implementing fallback mechanisms and not relying solely on Redis for strict expiration timing.
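One such fallback is to store an application-level expiry timestamp alongside the payload and verify it on every read, so a logically expired key served by a lagging replica or a freshly promoted master is still rejected. A sketch, with an illustrative JSON record layout and `r` as a `redis.Redis`-like client:

```python
import json
import time

def set_with_deadline(r, key, value, ttl):
    """Store the value together with its own absolute expiry timestamp."""
    payload = json.dumps({"value": value, "expires_at": time.time() + ttl})
    r.setex(key, ttl, payload)

def get_if_fresh(r, key):
    """Defensive read: reject records past their application-level deadline,
    even if Redis (e.g. a replica behind on replication) still returns them."""
    raw = r.get(key)
    if raw is None:
        return None
    record = json.loads(raw)
    if record["expires_at"] <= time.time():
        return None  # treat as expired regardless of what Redis says
    return record["value"]
```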
Q: What’s the difference between eviction and expiration, and how do they interact?
A: Expiration is time-based removal of keys that have reached their TTL, while eviction is memory-pressure-based removal when Redis reaches its memory limit.
They interact in several ways:
Eviction policies like volatile-lru only consider keys with expiration set
Active expiration reduces memory pressure, potentially avoiding eviction
The volatile-ttl policy evicts keys with the shortest remaining TTL first
Proper TTL configuration can reduce eviction frequency and improve cache performance
Q: How would you optimize Redis expiration for a high-traffic e-commerce site?
A: For high-traffic e-commerce, I’d implement a multi-tier expiration strategy:
Product Catalog: Long TTL (4-24 hours) with background refresh
Inventory Counts: Short TTL (1-5 minutes) with real-time updates
User Sessions: Medium TTL (30 minutes) with sliding expiration
Shopping Carts: Longer TTL (24-48 hours) with cleanup processes
Search Results: Staggered TTL (15-60 minutes) with jitter to prevent thundering herd
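The sliding-expiration pattern mentioned for sessions can be sketched as follows. The key naming and 30-minute TTL come from the tiers above; `r` is assumed to be a `redis.Redis`-like client:

```python
def get_session(r, session_id, ttl=1800):
    """Sliding expiration: every successful read renews the 30-minute TTL,
    so a session only expires after 30 minutes of inactivity."""
    key = f"session:{session_id}"
    value = r.get(key)
    if value is not None:
        r.expire(key, ttl)  # reset the countdown on each access
    return value
```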
Key optimizations:
Use allkeys-lru eviction for cache-heavy workloads
Implement predictive pre-loading for related products
Add jitter to TTL values to prevent simultaneous expiration
Monitor hot keys and implement replication strategies
Use pipeline operations for bulk TTL updates
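The TTL-jitter optimization is a one-liner worth spelling out. The 10% spread is a typical choice for illustration, not a Redis requirement:

```python
import random

def ttl_with_jitter(base_ttl, spread=0.10):
    """Randomize TTL by +/- spread so keys cached together don't all
    expire in the same instant (avoids a thundering herd of cache misses)."""
    delta = int(base_ttl * spread)
    return base_ttl + random.randint(-delta, delta)
```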
The goal is balancing data freshness, memory usage, and system performance while handling traffic spikes gracefully.
Redis expiration deletion policies are crucial for maintaining optimal performance and memory usage in production systems. The combination of lazy deletion, active expiration, and memory eviction policies provides flexible options for different use cases.
Success in production requires understanding the trade-offs between memory usage, CPU overhead, and data consistency, especially in distributed environments. Monitoring expiration efficiency and implementing appropriate TTL strategies based on access patterns is essential for maintaining high-performance Redis deployments.
The key is matching expiration strategies to your specific use case: use longer TTLs with background refresh for stable data, shorter TTLs for frequently changing data, and implement sophisticated hot key handling for high-traffic scenarios.