Databases

ElastiCache (Redis vs Memcached)

ElastiCache is a managed in-memory caching service supporting Redis and Memcached. It reduces database load and application latency by caching frequently accessed data in memory.

Memory anchor

ElastiCache is a cheat sheet you consult before doing the full calculation (the database query). Redis is a detailed cheat sheet with notes, highlighted sections, and flashcards. Memcached is a plain sticky note — simple and fast, but no extras.

Expected depth

Redis: rich data structures (strings, hashes, lists, sets, sorted sets, streams), persistence (RDB snapshots, AOF logs), replication (primary + up to 5 replicas), clustering (hash slots for horizontal sharding), pub/sub messaging, Lua scripting, atomic operations. Memcached: simpler, multi-threaded, horizontal scaling by adding nodes, no persistence, no replication — pure cache. Choose Redis for persistence, complex data types, pub/sub, or HA. Choose Memcached only for simple string caching where you need multi-threading and horizontal scaling.
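Sorted sets are the data type behind the leaderboard use case above. As a sketch, here is a minimal pure-Python model of Redis sorted-set semantics (`ZADD` / `ZREVRANGE`), not a Redis client — with redis-py the analogous calls would be `r.zadd(...)` and `r.zrevrange(...)` against a live cluster:

```python
# Minimal in-memory model of a Redis sorted set, illustrating why
# sorted sets fit leaderboards: inserts update a score, and range
# reads come back already ordered by score.

class SortedSetModel:
    def __init__(self):
        self.scores = {}  # member -> score, like one Redis sorted-set key

    def zadd(self, member, score):
        # ZADD: insert a member or update its score
        self.scores[member] = score

    def zrevrange(self, start, stop):
        # ZREVRANGE: members ordered by score, highest first (stop inclusive)
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        return ranked[start:stop + 1]

board = SortedSetModel()
board.zadd("alice", 120)
board.zadd("bob", 95)
board.zadd("carol", 150)
print(board.zrevrange(0, 1))  # -> ['carol', 'alice']
```

Memcached has no equivalent: it stores opaque values, so ranking would have to be serialized, fetched, re-sorted, and written back on every update.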

Deep — senior internals

Redis Cluster mode: distributes data across up to 500 shards (hash slots 0–16383). Each shard has a primary and up to 5 replicas. ElastiCache Global Datastore replicates Redis to read-only replica clusters in other regions for low-latency global reads. Cache-aside (lazy loading): app checks cache, on miss queries DB and populates cache. Write-through: write to cache and DB simultaneously — no stale data but higher write cost. TTL (expiration) prevents stale data buildup. ElastiCache for Redis 7+ supports multi-AZ with auto-failover natively. Valkey (open-source Redis fork) is now an ElastiCache option following the Redis license change.

🎤 Interview-ready answer

I use Redis (via ElastiCache) for almost all caching needs — it supports persistence, replication, cluster mode for sharding, and rich data types for session storage, leaderboards, rate limiting, and pub/sub. I'd choose Memcached only for simple key-value caching on teams without Redis expertise. Cache strategy: cache-aside for read-heavy workloads, write-through for data that must always be fresh. Always set TTLs to prevent stale cache accumulation.
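The rate-limiting use case mentioned above is commonly built on Redis `INCR` plus `EXPIRE`. A minimal fixed-window sketch, with a dict standing in for Redis (in redis-py this would be `r.incr(key)` and `r.expire(key, window)` on the first increment; `allow_request` and its parameters are illustrative names):

```python
import time

# Fixed-window rate limiter: one counter key per client per time window.
# The window index in the key makes old counters irrelevant once the
# window rolls over (in Redis, EXPIRE would also reclaim the memory).

counters = {}  # stand-in for Redis; key -> request count

def allow_request(client_id, limit=5, window_seconds=60, now=None):
    now = time.time() if now is None else now
    window = int(now // window_seconds)
    key = f"rate:{client_id}:{window}"  # e.g. "rate:client-a:28934567"
    count = counters.get(key, 0) + 1    # INCR
    counters[key] = count
    return count <= limit               # allow while under the limit
```

Because `INCR` is atomic in Redis, this stays correct even with many application servers incrementing the same key concurrently.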

Common trap

Caching without TTLs leads to stale data that never expires, requiring a cache flush to fix. Always set appropriate TTLs — and handle cache misses gracefully so cache failures degrade to slower (DB) queries rather than errors.
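Graceful degradation on cache failure can be sketched like this. `FlakyCache`, `CacheDown`, and `db_query` are hypothetical stand-ins for a real client, its connection errors, and the real query:

```python
# Fail open on cache errors: a cache outage should mean slower responses
# (direct DB reads), never failed requests.

class CacheDown(Exception):
    pass

class FlakyCache:
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.store = {}

    def get(self, key):
        if not self.healthy:
            raise CacheDown("cache unreachable")
        return self.store.get(key)

def db_query(key):
    return f"row-for-{key}"  # pretend database hit

def fetch(cache, key):
    try:
        value = cache.get(key)
        if value is not None:
            return value      # cache hit
    except CacheDown:
        pass                  # degrade: skip the cache, serve from the DB
    return db_query(key)      # miss or cache failure: query the database
```

The key design choice is that the `except` path falls through to the database instead of propagating the error to the caller.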

Related concepts