Same instance. Same network. Same Redis cluster. Zero code changes. Cachee Edge Proxy delivers 1.76x faster throughput and 4x lower latency than direct ElastiCache access.
redis-benchmark, 200K operations, 50 concurrent clients, 128-byte values
| Test | Direct ElastiCache | Cachee Proxy (TCP) | Cachee (Unix Socket) | Speedup |
|---|---|---|---|---|
| SET (50 clients) | 95,012 ops/s | 156,986 ops/s | 203,459 ops/s | 1.65x – 2.14x |
| GET (50 clients) | 90,703 ops/s | 159,363 ops/s | 196,078 ops/s | 1.76x – 2.16x |
| SET (pipeline 64) | 2,704,433 ops/s | 296,834 ops/s | — | 0.11x* |
| GET (pipeline 64) | 3,089,723 ops/s | 1,471,059 ops/s | — | 0.48x* |
* Pipeline mode is a synthetic benchmark that batches 64 commands per round-trip, which amortizes the extra proxy hop. Most real-world workloads issue single commands per round-trip, where Cachee delivers a 1.65–2.16x improvement. In write-behind mode, SETs are acknowledged to the client immediately while upstream writes are batched.
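The Speedup column is simply the ratio of Cachee throughput to direct throughput, with the TCP path as the low end and the Unix socket as the high end. A quick arithmetic sanity check, with figures copied from the non-pipelined rows above:

```python
# Throughput figures from the benchmark table above (ops/s)
direct = {"SET": 95_012, "GET": 90_703}
proxy_tcp = {"SET": 156_986, "GET": 159_363}
proxy_unix = {"SET": 203_459, "GET": 196_078}

for op in ("SET", "GET"):
    low = proxy_tcp[op] / direct[op]    # TCP path
    high = proxy_unix[op] / direct[op]  # Unix-socket path
    print(f"{op}: {low:.2f}x – {high:.2f}x")
# SET: 1.65x – 2.14x
# GET: 1.76x – 2.16x
```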
Same machine, zero network latency. Redis 7.0.15 standalone on c7i.xlarge (4 vCPUs).
| Test | Standard Redis | Cachee TCP | Cachee Unix | Speedup |
|---|---|---|---|---|
| SET (50 clients) | 102,407 ops/s | 128,123 ops/s | 181,818 ops/s | 1.25x – 1.78x |
| GET (50 clients) | 110,803 ops/s | 136,054 ops/s | 186,047 ops/s | 1.23x – 1.68x |
| GET (pipeline 16) | 754,717 ops/s | 1,250,000 ops/s | — | 1.66x |
Even on localhost with zero network latency, Cachee answers faster than Redis itself. SET p50: 0.223 ms (Cachee) vs 0.319 ms (Redis), 30% lower. GET p50: 0.191 ms vs 0.279 ms, 32% lower.
Measured with `redis-cli --latency-history` on the same c7i.metal-48xl instance.
The further your application is from Redis, the bigger the win. Cachee's L1 cache returns hits in 16µs regardless of where your Redis lives.
| Your App ↔ Redis | Redis Latency | Cachee L1 Hit | Speedup |
|---|---|---|---|
| Same AZ (our benchmark) | 339 µs | 16 µs | 21x |
| Cross-AZ (same region) | 1–3 ms | 16 µs | 62–187x |
| Cross-Region (e.g. us-east → eu-west) | 30–80 ms | 16 µs | 1,875–5,000x |
| Public Internet / VPN / Hybrid Cloud | 50–150 ms | 16 µs | 3,125–9,375x |
At worst-case internet latency, every Cachee L1 cache hit saves 150ms per request. At 10,000 requests/sec, that's 25 minutes of cumulative latency saved every second.
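The arithmetic behind that claim, spelled out using the worst-case row of the table above:

```python
# Worst-case row: 150 ms internet round-trip avoided on every L1 hit,
# at a sustained 10,000 requests/sec.
requests_per_sec = 10_000
saved_ms_per_hit = 150
l1_hit_us = 16

speedup = saved_ms_per_hit * 1000 / l1_hit_us
saved_minutes = requests_per_sec * saved_ms_per_hit / 1000 / 60

print(f"{speedup:,.0f}x per hit")  # 9,375x
print(f"{saved_minutes:.0f} minutes of cumulative latency saved per second")  # 25
```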
Zero code changes. Just point your app at Cachee instead of Redis.
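Repointing typically means changing a single connection URL. A minimal illustration using only the standard library — the ElastiCache hostname, proxy address, and socket path below are hypothetical placeholders, not real Cachee defaults:

```python
from urllib.parse import urlparse

# Before: the app talks straight to ElastiCache (placeholder endpoint)
DIRECT_URL = "redis://my-cluster.abc123.use1.cache.amazonaws.com:6379"
# After: same client library, same commands — only the endpoint moves
# to the local Cachee proxy (address and socket path are examples)
PROXY_TCP_URL = "redis://127.0.0.1:6379"
PROXY_UNIX_URL = "unix:///var/run/cachee/cachee.sock"

# The wire protocol is unchanged, so any Redis client accepts the new URL
before, after = urlparse(DIRECT_URL), urlparse(PROXY_TCP_URL)
assert before.scheme == after.scheme == "redis"
print(f"repointed {before.hostname} -> {after.hostname}")
```

Per the tables above, the Unix-socket path is the fastest option when the proxy runs on the same host as the application.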
Drop-in proxy. See results in under 5 minutes.
Talk To Someone · Quick Start Guide →