17 caching solutions compared across latency, throughput, pricing, features, and architecture. No bias — we show where each solution excels and where it falls short. Updated March 2026.
The scannable overview. Click any product name for the detailed head-to-head comparison.
| Product | Type | Latency (P50) | Throughput | Pricing Model | Best For |
|---|---|---|---|---|---|
| **Open Source / Self-Hosted** | | | | | |
| Redis (OSS) | Remote, single-threaded | 0.3ms | 100-150K ops/s | Free (self-hosted compute) | Shared state, pub/sub, data structures, Lua scripting |
| Valkey | Remote, single-threaded | 0.3ms | 100-150K ops/s | Free (BSD-3, self-hosted) | Redis replacement without licensing risk |
| DragonflyDB | Remote, multi-threaded | 0.25ms | 1-4M ops/s | Free (BSL 1.1, self-hosted) | High-throughput single-node, Redis-compatible API |
| Memcached | Remote, multi-threaded | 0.2ms | 200-600K ops/s | Free (BSD, self-hosted) | Simple key-value, lowest protocol overhead |
| KeyDB | Remote, multi-threaded | 0.25ms | 300K-1M ops/s | Free (BSD-3, self-hosted) | Multi-threaded Redis with active replication |
| Garnet | Remote, multi-threaded | 0.3ms | 200K-1M ops/s | Free (MIT, self-hosted) | RESP-compatible, .NET ecosystem, Microsoft-backed |
| **Managed Cloud** | | | | | |
| AWS ElastiCache | Managed remote | 0.3-0.5ms | 100-150K ops/s/node | Hourly instance + data transfer | AWS-native apps, managed Redis/Valkey |
| Redis Enterprise | Managed remote | 0.3ms | 100K-1M ops/s/node | Subscription + usage | Enterprise clustering, active-active geo, modules |
| Azure Cache for Redis | Managed remote | 0.3-0.5ms | 100-150K ops/s/node | Hourly tier-based | Azure-native apps, managed Redis |
| Google Memorystore | Managed remote | 0.3-0.5ms | 100-150K ops/s/node | Hourly capacity-based | GCP-native apps, managed Redis/Memcached |
| Upstash | Serverless remote | 1-5ms | 1K-10K ops/s | Pay-per-request ($0.20/100K) | Serverless, edge functions, low-volume apps |
| Momento | Serverless remote | 1-5ms | Scales on demand | Pay-per-request + data transfer | Zero-config, serverless, no infrastructure mgmt |
| **Specialized** | | | | | |
| Hazelcast | Distributed data grid | 0.5-2ms | 100K-500K ops/s | OSS free / Enterprise license | Java ecosystem, distributed computing, near-cache |
| Aerospike | Hybrid SSD + memory | 0.5-1ms | 500K-2M ops/s | OSS free / Enterprise license | Large datasets on SSD, ad-tech, high cardinality |
| CloudFront | CDN edge cache | 5-50ms | Millions (edge network) | Per-request + data transfer | Static assets, global edge delivery, not a data cache |
| ReadySet | SQL query cache/proxy | 0.5-2ms | 10K-100K queries/s | OSS free / Cloud pricing | Automatic SQL materialized views, Postgres/MySQL |
| **In-Process / L1** | | | | | |
| Cachee | In-process Rust engine | 0.0015ms | 660K API / 215M+ in-process | Sidecar (near-zero marginal cost) | Sub-µs reads, ML eviction, CDC, dependency graph, vector search |
Latency numbers represent typical production conditions and include the network round-trip for all remote caches; in-process caches are measured with a direct function call. Actual performance varies with hardware, network, and workload. Numbers reflect single-node performance unless noted.
P50 and P99 latency under sustained load. Lower is better.
| Product | P50 Latency | P99 Latency | Architecture Notes |
|---|---|---|---|
| Cachee (L1) | 0.0015ms | 0.004ms | In-process, zero network hops, Rust engine |
| Memcached | 0.2ms | 0.8ms | Simplest protocol, multi-threaded, no data structures overhead |
| DragonflyDB | 0.25ms | 1.0ms | Multi-threaded shared-nothing, RESP protocol |
| KeyDB | 0.25ms | 1.0ms | Multi-threaded Redis fork, same protocol |
| Redis (OSS) | 0.3ms | 1.2ms | Single-threaded event loop, I/O threads in Redis 7+ |
| Valkey | 0.3ms | 1.2ms | Redis fork, same architecture and performance profile |
| Redis Enterprise | 0.3ms | 1.0ms | Optimized proxy layer, multi-shard parallelism |
| Garnet | 0.3ms | 1.3ms | C#/.NET runtime, RESP-compatible, epoch-based GC |
| ElastiCache | 0.3ms | 1.5ms | Managed Redis/Valkey, same-AZ latency |
| Azure Cache | 0.3ms | 1.5ms | Managed Redis, Azure-hosted |
| Memorystore | 0.3ms | 1.5ms | Managed Redis/Memcached on GCP |
| Hazelcast | 0.5ms | 3ms | JVM overhead, distributed data grid, near-cache can be faster |
| Aerospike | 0.5ms | 2ms | Optimized SSD access, hybrid memory architecture |
| ReadySet | 0.5ms | 3ms | SQL query cache, materialized view lookup |
| Upstash | 2ms | 8ms | Serverless, HTTP-based, regional routing |
| Momento | 2ms | 10ms | Serverless, gRPC, auto-scaling |
| CloudFront | 5-50ms | 50-200ms | CDN edge, varies by POP proximity (not a data cache) |
Cachee's latency advantage comes from eliminating the network round-trip entirely. All remote caches are fundamentally bounded by TCP/loopback latency (minimum ~0.1ms).
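The network floor is easy to demonstrate yourself. The Python sketch below (our own illustration, not any vendor's tooling) times a loopback TCP round-trip against a plain in-process dictionary lookup; the gap of two to three orders of magnitude is the structural advantage any L1 layer inherits.

```python
# Illustrative microbenchmark: in-process lookup vs. loopback TCP round-trip.
# The echo server stands in for a remote cache; a real cache adds parsing on top.
import socket
import threading
import time

def run_echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo bytes back until the client closes."""
    conn, _ = sock.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

# Loopback echo server on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # avoid Nagle delays
cache = {b"user:42": b"cached-value"}  # the "in-process L1"

N = 10_000

# Remote path: one request/response round-trip per read.
start = time.perf_counter()
for _ in range(N):
    client.sendall(b"user:42")
    client.recv(64)
remote_us = (time.perf_counter() - start) / N * 1e6

# In-process path: a hash-map lookup, no syscalls, no serialization.
start = time.perf_counter()
for _ in range(N):
    _ = cache[b"user:42"]
local_us = (time.perf_counter() - start) / N * 1e6

print(f"loopback round-trip: ~{remote_us:.1f} us/op")
print(f"in-process lookup:   ~{local_us:.3f} us/op")
client.close()
```

Exact numbers depend on your OS and hardware, but the ordering never changes: the loopback round-trip alone costs more than the entire in-process read.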
Operations per second on a single node. Higher is better. Multi-node clusters scale linearly for most products.
| Product | Ops/sec (Single Node) | Scaling Model | Notes |
|---|---|---|---|
| Cachee | 660K API / 215M+ in-process | Per-instance (scales with app instances) | In-process reads bypass all serialization |
| DragonflyDB | 1-4M ops/s | Vertical (more cores = more throughput) | Shared-nothing threading, highest single-node throughput of remote caches |
| Aerospike | 500K-2M ops/s | Horizontal + vertical | SSD-optimized, excellent at large working sets |
| KeyDB | 300K-1M ops/s | Vertical (multi-threaded) | Multi-threaded Redis fork, scales with cores |
| Garnet | 200K-1M ops/s | Vertical (multi-threaded) | Microsoft research project, competitive on multi-core |
| Memcached | 200-600K ops/s | Horizontal (consistent hashing) | Simple protocol, very efficient per-request |
| Hazelcast | 100-500K ops/s | Horizontal (data grid) | JVM overhead, but near-cache can be much faster |
| Redis (OSS) | 100-150K ops/s | Horizontal (Redis Cluster) | Single-threaded per shard, I/O threads in 7.x help |
| Valkey | 100-150K ops/s | Horizontal (cluster mode) | Same as Redis, exploring multi-threading in future |
| Redis Enterprise | 100K-1M+ ops/s/node | Horizontal (auto-sharding) | Multiple Redis processes per node, enterprise proxy |
| ElastiCache | 100-150K ops/s/node | Horizontal (cluster mode) | Managed Redis/Valkey, auto-scaling available |
| Azure Cache | 100-150K ops/s/node | Horizontal (clustering) | Managed Redis, tier-dependent performance |
| Memorystore | 100-150K ops/s/node | Horizontal (clustering) | Managed Redis on GCP |
| ReadySet | 10-100K queries/s | Vertical (single proxy) | Depends on query complexity and table size |
| Upstash | 1-10K ops/s | Auto-scaling (serverless) | Throttled by plan, HTTP overhead |
| Momento | Auto-scaling | Auto-scaling (serverless) | No published single-node limits, scales transparently |
| CloudFront | Millions (distributed) | Global edge network | CDN, not comparable to data caches |
DragonflyDB leads in remote single-node throughput. Redis/Valkey scale horizontally via clustering. Cachee's in-process model means throughput scales 1:1 with application instances.
Methodology: Benchmarks compiled from official documentation, published benchmarks (redis-benchmark, memtier_benchmark), and independent third-party tests. Cachee numbers from internal wrk2 benchmarks on c6g.2xlarge. See full methodology.
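The "horizontal (consistent hashing)" scaling model listed for Memcached is client-side: the client hashes each key onto a ring of virtual nodes, so adding or removing a cache server remaps only the keys that lived on it. A minimal sketch of one common (ketama-style) approach, with hypothetical node names:

```python
# Client-side consistent hashing sketch. Node addresses are hypothetical.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` positions on the ring for balance.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, key: str) -> str:
        """Walk clockwise from the key's hash to the next virtual node."""
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a:11211", "cache-b:11211", "cache-c:11211"])
before = {k: ring.node_for(k) for k in (f"user:{i}" for i in range(1000))}

# Removing one node only remaps the keys that lived on it; keys on the
# surviving nodes keep their placement. Modulo sharding would reshuffle ~2/3.
ring2 = HashRing(["cache-a:11211", "cache-b:11211"])
moved = sum(
    1 for k, n in before.items()
    if n != "cache-c:11211" and ring2.node_for(k) != n
)
print(f"keys remapped among surviving nodes: {moved}")
```

This is why Memcached fleets (and Redis Cluster, with its own slot scheme) can grow and shrink without a full cache flush.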
Green check = native support. Yellow ~ = partial or via plugin. Red X = not available. Scroll horizontally to see all products.
| Feature | Redis | Valkey | Dragonfly | Memcached | KeyDB | Garnet | ElastiCache | Redis Ent. | Azure | Memorystore | Upstash | Momento | Hazelcast | Aerospike | CloudFront | ReadySet | Cachee |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Core Cache Features | |||||||||||||||||
| Key-Value Store | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
| Data Structures (hash, list, set, sorted set) | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ~ | ~ | ✗ | ✗ | ✓ |
| Clustering / Sharding | ✓ | ✓ | ~ | ✗ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Persistence (RDB/AOF/disk) | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ |
| Lua Scripting | ✓ | ✓ | ✓ | ✗ | ✓ | ~ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Pub/Sub | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ |
| Streams | ✓ | ✓ | ✓ | ✗ | ✓ | ~ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ~ | ✗ | ✗ | ✗ | ✓ |
| Transactions | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ |
| Advanced / AI-Powered Features | |||||||||||||||||
| Vector Search | ~ (module) | ✗ | ✗ | ✗ | ✗ | ✗ | ~ (module) | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| CDC Auto-Invalidation | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ |
| Causal Dependency Graph | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Cache Contracts | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Semantic Invalidation | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Cache Fusion (multi-layer) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ~ | ✗ | ✗ | ✗ | ✓ |
| Speculative Pre-Fetch | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Self-Healing Consistency | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Federated Intelligence | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| ML-Powered Eviction | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
~ (yellow) = partial support via module, plugin, or workaround. ReadySet's CDC support is specific to SQL materialized views, not general-purpose cache invalidation.
A note on fairness: The advanced features in the bottom half of this table are capabilities Cachee pioneered. No other product claims to offer them because they represent a fundamentally different approach to caching (in-process, ML-driven, content-aware). Comparing on these dimensions alone would be misleading. The core features in the top half — clustering, persistence, pub/sub, Lua scripting — are areas where Redis, Valkey, and other remote caches are superior. Cachee does not have persistence or clustering because it is an L1 in-process layer, not a remote data store.
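In practice the two halves of the table are complementary: a common deployment is cache-aside with an in-process L1 in front of a remote L2. The sketch below is generic and hedged accordingly; it is not Cachee's actual API, and the L2 is stubbed with a dict where a real deployment would call a Redis/Valkey client.

```python
# Generic two-tier cache-aside sketch: in-process L1 over a remote L2 stub.
import time
from collections import OrderedDict

class L1Cache:
    """Tiny in-process LRU with TTL, the role an L1 layer plays."""
    def __init__(self, max_items=1024, ttl=30.0):
        self.data = OrderedDict()  # key -> (expires_at, value)
        self.max_items, self.ttl = max_items, ttl

    def get(self, key):
        entry = self.data.get(key)
        if entry is None or entry[0] < time.monotonic():
            self.data.pop(key, None)  # expired or absent
            return None
        self.data.move_to_end(key)    # mark as recently used
        return entry[1]

    def put(self, key, value):
        self.data[key] = (time.monotonic() + self.ttl, value)
        self.data.move_to_end(key)
        if len(self.data) > self.max_items:
            self.data.popitem(last=False)  # evict least recently used

l1 = L1Cache()
l2 = {}  # stand-in for a remote cache client (e.g. Redis/Valkey)

def cached_read(key, loader):
    value = l1.get(key)           # 1. in-process: no network hop
    if value is None:
        value = l2.get(key)       # 2. remote cache: one round-trip
        if value is None:
            value = loader(key)   # 3. source of truth (database)
            l2[key] = value
        l1.put(key, value)
    return value

print(cached_read("user:42", lambda k: {"id": 42}))  # miss: loads, fills both tiers
print(cached_read("user:42", lambda k: {"id": 42}))  # L1 hit: dict lookup only
```

In this layering, L1 reads avoid the network entirely, while the shared L2 keeps freshly restarted instances warm and holds state the L1 cannot share across processes.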
Estimated monthly cost for each solution at three traffic levels. Self-hosted assumes AWS us-east-1 with reserved pricing. Managed service pricing from published rate cards as of March 2026.
| Product | 10M req/mo | 100M req/mo | 1B req/mo | Pricing Model |
|---|---|---|---|---|
| **Self-Hosted (compute + memory cost)** | | | | |
| Redis (OSS) | $25-50 | $50-100 | $150-400 | Single t3.medium handles 10M; r6g.large for 1B |
| Valkey | $25-50 | $50-100 | $150-400 | Same compute as Redis, zero license cost (BSD-3) |
| DragonflyDB | $25-50 | $40-80 | $80-200 | Higher throughput/node means fewer instances |
| Memcached | $20-40 | $40-80 | $100-300 | Lowest memory overhead per key |
| KeyDB | $25-50 | $50-100 | $120-350 | Multi-threaded, slightly better utilization |
| Garnet | $25-50 | $50-100 | $120-350 | Free (MIT), similar compute to Redis |
| **Managed Cloud Services** | | | | |
| ElastiCache | $50-130 | $130-350 | $500-1,500 | cache.r6g.large min, scales with node count |
| Redis Enterprise | $65-200 | $200-600 | $800-3,000 | Subscription tiers, active-active costs more |
| Azure Cache | $55-150 | $150-400 | $600-2,000 | Standard/Premium tiers, similar to ElastiCache |
| Memorystore | $55-150 | $150-400 | $600-2,000 | Capacity-based, similar to ElastiCache |
| Upstash | $20 | $200 | $2,000 | $0.20/100K commands, linear scaling |
| Momento | $15-50 | $150-500 | $1,500-5,000 | Pay per request + data transfer, free tier available |
| **Specialized** | | | | |
| Hazelcast | $50-100 | $200-500 | $800-2,000 | OSS free, Enterprise license for production features |
| Aerospike | $30-80 | $100-300 | $300-1,000 | SSD-optimized = cheaper per GB than pure memory |
| CloudFront | $5-15 | $50-150 | $500-1,200 | Per-request + data transfer, volume discounts |
| ReadySet | $0 (OSS) | $50-100 | $150-500 | OSS free, Cloud pricing for managed |
| **In-Process / L1** | | | | |
| Cachee (Sidecar) | ~$0* | ~$0* | ~$0* | Runs inside existing compute. Subscription for managed features. |
* Cachee sidecar uses ~50-200MB of your existing application memory. No separate infrastructure cost. Managed Cachee service (CDN, dashboard, support) has separate pricing. Ranges reflect different hardware sizes and configurations.
Where others cost less: For very low-volume use cases (under 1M req/month), Upstash and Momento offer generous free tiers that may cost nothing. CloudFront is far cheaper for static asset delivery. ReadySet (OSS) is free for SQL query caching. Self-hosted Redis/Valkey on a t3.micro costs under $10/month. Cachee's value proposition scales with traffic — the higher your volume, the more you save by eliminating separate cache infrastructure.
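The per-request rows above are simple arithmetic. At the table's Upstash-style rate of $0.20 per 100K commands, for example:

```python
# Back-of-envelope check of the pay-per-request pricing rows, using the
# $0.20 per 100K commands rate quoted in the table above.
def per_request_cost(requests_per_month: int, usd_per_100k: float = 0.20) -> float:
    """Monthly cost in USD for a pure pay-per-request plan."""
    return requests_per_month / 100_000 * usd_per_100k

for reqs in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{reqs:>13,} req/mo -> ${per_request_cost(reqs):,.0f}")
```

This reproduces the $20 / $200 / $2,000 progression in the table, and makes the crossover visible: linear per-request pricing beats provisioned instances at low volume and loses badly at high volume.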
Every caching solution has a sweet spot. Here is where each product genuinely excels, and where it doesn't.
**Redis (OSS):** You need a shared, network-accessible data structure server with rich data types (hashes, sorted sets, streams, HyperLogLog), Lua scripting, and a massive ecosystem of client libraries. Redis is the most battle-tested cache on the planet, with 15+ years of production use.
✓ Largest ecosystem · ✓ Data structures · ✓ Pub/Sub · ✗ Single-threaded · ✗ License (SSPL/RSALv2)

**Valkey:** You want everything Redis offers but with a permissive BSD-3 license and backing from AWS, Google, Oracle, and the Linux Foundation. Valkey is the safe choice for organizations concerned about Redis's 2024 license change. Performance is identical.
✓ BSD-3 license · ✓ LF governance · ✓ Drop-in Redis replacement · ✗ Fewer modules than Redis Stack

**DragonflyDB:** You need maximum throughput on a single server and want to stay within the RESP protocol ecosystem. Dragonfly's multi-threaded, shared-nothing architecture can deliver 3-25x Redis throughput on high-core machines. Ideal when vertical scaling beats horizontal complexity.
✓ Highest single-node throughput · ✓ Redis-compatible API · ✗ Smaller community · ✗ BSL 1.1 license

**Memcached:** You need the simplest, fastest key-value cache without data structure overhead. Memcached's multi-threaded design is excellent for high-throughput GET/SET workloads. Perfect for session stores, page fragment caching, and anywhere you just need string key-value pairs.
✓ Simplest protocol · ✓ Multi-threaded · ✓ Lowest per-key overhead · ✗ No data structures · ✗ No persistence

**KeyDB:** You want a multi-threaded Redis fork with active replication and FLASH storage support. KeyDB adds multi-threading on top of Redis's API, giving better throughput per node. Active-active replication is simpler than Redis Cluster for some deployments.
✓ Multi-threaded Redis · ✓ Active replication · ✗ Smaller community · ✗ Slower development pace

**Garnet:** You're in the Microsoft/.NET ecosystem and want a RESP-compatible cache built on modern C# with epoch-based garbage collection. Garnet shows strong benchmark numbers on multi-core machines and is backed by Microsoft Research.
✓ MIT license · ✓ .NET ecosystem · ✗ Young project · ✗ Smaller community

**AWS ElastiCache:** You're on AWS and want fully managed Redis or Valkey without operational overhead. ElastiCache handles patching, failover, backups, and scaling. The premium over self-hosted Redis is worth it if you don't have dedicated infrastructure engineers.
✓ Fully managed · ✓ AWS integration · ✓ Auto-scaling · ✗ AWS lock-in · ✗ 2-5x self-hosted cost

**Redis Enterprise:** You need active-active geo-replication, RediSearch, RedisJSON, RedisTimeSeries, or enterprise SLA guarantees. Redis Enterprise is the premium tier of the Redis ecosystem, with capabilities no OSS fork can match.
✓ Active-active geo · ✓ Enterprise modules · ✓ 99.999% SLA · ✗ Expensive · ✗ Vendor lock-in

**Upstash:** You need serverless Redis that scales to zero and charges per request. Perfect for edge functions (Cloudflare Workers, Vercel Edge), low-traffic APIs, and projects where you don't want to manage infrastructure. The free tier is generous for small projects.
✓ Serverless · ✓ Scales to zero · ✓ Edge-compatible · ✗ Higher per-request latency · ✗ Expensive at scale

**Momento:** You want zero-configuration caching with no infrastructure decisions. Momento abstracts away nodes, clusters, and capacity planning entirely. Pay for what you use. Best for teams that want to focus on application logic, not cache operations.
✓ Zero config · ✓ No capacity planning · ✗ Less control · ✗ Expensive at high volume

**Hazelcast:** You need a distributed in-memory data grid with compute capabilities (entry processors, SQL, distributed executor). Hazelcast's near-cache feature provides L1-like performance for frequently accessed data in Java/JVM applications.
✓ Near-cache (L1) · ✓ Distributed computing · ✓ Java ecosystem · ✗ JVM overhead · ✗ Complex deployment

**Aerospike:** Your working set exceeds available RAM. Aerospike's SSD-optimized storage engine delivers sub-millisecond reads from NVMe drives, making it 10-100x cheaper per GB than pure in-memory caches for large datasets. Dominant in ad-tech and user profile stores.
✓ SSD-optimized · ✓ Cost-effective at scale · ✓ Strong consistency · ✗ Not a drop-in Redis replacement

**CloudFront:** You need global edge delivery of static assets, API responses, or media content. CloudFront is a CDN, not a data cache: it excels at reducing latency for geographically distributed end users, not at application-level key-value caching.
✓ Global edge network · ✓ Static asset delivery · ✗ Not a data cache · ✗ TTL-based invalidation only

**ReadySet:** You want to accelerate SQL queries without changing application code. ReadySet sits between your app and database (Postgres/MySQL), automatically materializing and caching query results. It watches the replication stream to keep materialized views fresh.
✓ Zero code changes · ✓ Auto-maintained views · ✓ CDC-based freshness · ✗ SQL only · ✗ Not a general cache

**Cachee:** You need sub-microsecond cache reads with zero network hops, ML-powered eviction that achieves 99%+ hit rates, and advanced invalidation (CDC, dependency graphs, semantic rules, cache contracts). Cachee deploys as an in-process engine or sidecar alongside your existing infrastructure.
✓ 0.0015ms reads · ✓ 99%+ hit rate · ✓ CDC invalidation · ✓ 12 unique features · ✗ No persistence · ✗ No clustering · ✗ Per-instance state

**When not to use Cachee:** You need shared mutable state across multiple application instances (use Redis/Valkey). You need durable persistence that survives process restarts (use Redis with AOF/RDB). You need cross-region replication (use Redis Enterprise). You need pub/sub as your primary message bus (use Redis or a dedicated message broker). Cachee is an L1 read acceleration layer, not a remote data store.
✗ No cross-instance sharing · ✗ No disk persistence · ✗ Not a message broker

12 capabilities that exist in no other caching product. Each one solves a real problem that teams currently work around with custom application code.
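To make one of those categories concrete, here is a toy sketch of dependency-graph invalidation: evicting a source key cascades to every key derived from it. The class, method names, and keys are hypothetical illustrations, not Cachee's API, and the sketch assumes the dependency graph is acyclic.

```python
# Toy dependency-graph invalidation: hypothetical API, not Cachee's.
# Assumes an acyclic dependency graph (cycles would recurse forever).
from collections import defaultdict

class DepGraphCache:
    def __init__(self):
        self.store = {}
        self.dependents = defaultdict(set)  # source key -> derived keys

    def put(self, key, value, depends_on=()):
        self.store[key] = value
        for src in depends_on:
            self.dependents[src].add(key)

    def invalidate(self, key):
        """Evict a key and, transitively, everything derived from it."""
        self.store.pop(key, None)
        for derived in self.dependents.pop(key, set()):
            self.invalidate(derived)

cache = DepGraphCache()
cache.put("user:42", {"name": "Ada"})
cache.put("profile:42", "<html>rendered profile</html>", depends_on=["user:42"])
cache.put("feed:home", ["profile:42"], depends_on=["profile:42"])

cache.invalidate("user:42")   # cascades: profile:42 and feed:home also evicted
print(sorted(cache.store))    # -> []
```

Teams typically hand-roll this pattern in application code; building it into the cache is what makes declarative rules like "invalidate every page rendered from user:42" possible.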
For in-depth analysis of each matchup, see our dedicated comparison pages with full benchmarks, architecture diagrams, and migration guides.
Original 7-product comparison → | Redis optimization tools → | Traditional vs Predictive caching →
Deploy Cachee alongside your existing cache. No migration, no data movement. Compare real numbers from your own traffic in under 10 minutes.