
The Distance Multiplier: Why Cachee Gets Faster the Further You Are from Redis

February 7, 2026 • 6 min read • Architecture

Here's something counterintuitive we discovered while benchmarking Cachee against AWS ElastiCache: the worse your network latency to Redis, the more Cachee helps.

Not a little more. Orders of magnitude more.

9,375x
faster at worst-case internet latency

Let us explain.

The Constant: 16 Microseconds

When Cachee serves a cache hit from its L1 memory, the latency is 16 microseconds. This number doesn't change based on where your Redis lives. It's a local hash table lookup—no network involved.

That 16μs is the same whether your Redis cluster is:

- In the same availability zone
- In a different AZ
- In a different region
- On the far side of a VPN or the public internet

But the latency you're avoiding changes dramatically.
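To make the "no network involved" point concrete, here is a toy sketch (not Cachee's actual code): an L1 hit is fundamentally a local hash-table lookup, and timing one shows it completes in microseconds regardless of where the origin Redis lives.

```python
import time

# A toy stand-in for an L1 cache: Cachee's real store is more
# sophisticated, but a hit is still a local hash-table lookup.
l1 = {f"user:{i}": f"payload-{i}" for i in range(100_000)}

start = time.perf_counter()
value = l1["user:42"]          # no network involved
elapsed_us = (time.perf_counter() - start) * 1e6

print(value)                   # payload-42
print(elapsed_us)              # microseconds, not milliseconds
```

The measured time here includes the timer overhead itself, and it still lands orders of magnitude below any network round-trip.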

The Variable: Network Round-Trip

Without Cachee, every Redis command requires a network round-trip. That round-trip time depends entirely on distance:

| Your App ↔ Redis | Round-Trip Latency | Cachee L1 Hit | Speedup |
|---|---|---|---|
| Same AZ | ~339 μs | 16 μs | 21x |
| Cross-AZ | 1–3 ms | 16 μs | 62–187x |
| Cross-Region | 30–80 ms | 16 μs | 1,875–5,000x |
| Public Internet / VPN | 50–150 ms | 16 μs | 3,125–9,375x |

Read the rightmost column. The speedup goes from 21x to nearly ten thousand. Same technology. Same proxy. The only variable is how far away Redis is.
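The speedup column is nothing more than the round-trip time divided by the 16μs hit latency. A quick sketch, using the round-trip figures from the table (the 2 ms cross-AZ value is an assumed midpoint of the 1–3 ms range):

```python
L1_HIT_US = 16  # Cachee L1 hit latency, in microseconds

# Representative round-trip times in microseconds
scenarios = {
    "Same AZ": 339,                    # ~339 μs
    "Cross-AZ": 2_000,                 # 2 ms (midpoint of 1-3 ms)
    "Cross-Region": 80_000,            # 80 ms
    "Public Internet / VPN": 150_000,  # 150 ms
}

for name, rtt_us in scenarios.items():
    print(f"{name}: {rtt_us / L1_HIT_US:,.0f}x")
```

Same formula across the board; only the numerator changes with distance.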

Why This Matters More Than You Think

Most Apps Don't Run Next to Redis

Our benchmark was the best-case scenario for ElastiCache: same machine, same AZ, sub-millisecond network. We still got 1.76x faster throughput and 4x lower latency.

But in production, your app probably isn't on the same rack as your Redis cluster. Common real-world scenarios:

Scenario 1: Cross-AZ Deployment (Most Common)

Your app runs in us-east-1a. Your ElastiCache primary is in us-east-1b. Every Redis call crosses an AZ boundary—typically 1–3 ms.

With Cachee: 62–187x faster

At 10,000 GET requests/second, you're saving 10–30 seconds of cumulative latency every second.
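That 10–30 seconds figure falls straight out of the arithmetic: requests per second times the round-trip avoided on each hit.

```python
requests_per_sec = 10_000

# Cross-AZ round-trip range: 1-3 ms avoided per cache hit
for rtt_ms in (1, 3):
    saved_s = requests_per_sec * rtt_ms / 1000
    print(f"{saved_s:.0f} sec of cumulative latency saved per second")
```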

Scenario 2: Multi-Region Application

You have users in Europe but your Redis is in us-east-1. Each Redis call takes 70–80 ms across the Atlantic.

With Cachee running at the edge in eu-west-1: 5,000x faster

An 80ms round-trip becomes a 16μs local lookup. Your European users get the same cache performance as if Redis were sitting next to them.

Scenario 3: Hybrid Cloud

Your app runs on-premises or in a different cloud provider. Redis traffic goes through a VPN tunnel or Direct Connect. Round-trips: 50–150 ms.

With Cachee on your app servers: 3,125–9,375x faster

This is where the distance multiplier is most dramatic. A 150ms VPN hop becomes a 16μs memory read.

The Cumulative Impact

Let's do the math for a real application. Say you handle 10,000 requests per second, and each request makes 5 Redis calls. That's 50,000 Redis operations per second.

| Scenario | Without Cachee | With Cachee | Time Saved / Second |
|---|---|---|---|
| Same AZ | 16.95 sec | 0.80 sec | 16.15 sec/sec |
| Cross-AZ (2ms) | 100 sec | 0.80 sec | 99.2 sec/sec |
| Cross-Region (70ms) | 3,500 sec | 0.80 sec | 3,499 sec/sec |
| Hybrid/VPN (100ms) | 5,000 sec | 0.80 sec | 4,999 sec/sec |

"Time saved per second" is cumulative latency across all requests. In the cross-region case, you're saving 58 minutes of cumulative wait time every single second. That translates directly to freed-up connections, lower tail latencies, and fewer timeouts.
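Every row of the table is the same multiplication: 50,000 operations per second times the per-operation latency. A sketch that reproduces the figures:

```python
OPS_PER_SEC = 50_000   # 10,000 req/s x 5 Redis calls each
L1_HIT_S = 16e-6       # 16 μs per Cachee L1 hit

for name, rtt_s in [
    ("Same AZ", 339e-6),
    ("Cross-AZ (2ms)", 2e-3),
    ("Cross-Region (70ms)", 70e-3),
    ("Hybrid/VPN (100ms)", 100e-3),
]:
    without = OPS_PER_SEC * rtt_s        # cumulative wait without Cachee
    with_cachee = OPS_PER_SEC * L1_HIT_S # cumulative wait on L1 hits
    print(f"{name}: {without:,.2f}s -> {with_cachee:.2f}s "
          f"(saves {without - with_cachee:,.1f}s per second)")
```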

Key insight: Cachee doesn't just cache your data. It eliminates the physics of distance. Light takes 70ms to cross the Atlantic and back. Memory access takes 16 microseconds. No amount of infrastructure spending can change the speed of light—but caching at the edge can sidestep it entirely.

Why Not Just Use a Local Redis?

You could spin up a local Redis replica in each region, but that means another cluster to provision, monitor, and pay for in every location, plus asynchronous replication lag between primary and replicas. Cachee gives you local-read latency without running additional Redis infrastructure.

Deploying at the Edge

The distance multiplier is most powerful when you deploy Cachee on each application server or in each region:

# Region: eu-west-1 (Redis is in us-east-1)
# Before: Every GET crosses the Atlantic (70ms)
# After:  L1 hit in 16μs, miss forwards to us-east-1

Your EU App Server
    ↓
Cachee Proxy (local, eu-west-1)
    ↓ 16μs hit         ↓ 70ms miss (first access only)
    L1 Cache            ElastiCache (us-east-1)

The first access to a key takes the full cross-region round-trip. Every subsequent access—16 microseconds. With typical key reuse patterns, your effective hit rate approaches 100%.
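The miss-then-hit pattern described above is essentially a read-through cache. A minimal sketch of the idea (not Cachee's implementation; `fetch_from_origin` is a hypothetical stand-in for the cross-region Redis call):

```python
class ReadThroughL1:
    """Toy read-through cache: the first access to a key pays the
    origin round-trip; every later access is served from local memory."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin  # e.g. a cross-region Redis GET
        self._l1 = {}
        self.origin_calls = 0

    def get(self, key):
        if key in self._l1:              # local hit: 16 μs-class lookup
            return self._l1[key]
        self.origin_calls += 1           # miss: full round-trip, once per key
        value = self._fetch(key)
        self._l1[key] = value
        return value

cache = ReadThroughL1(lambda key: f"value-for-{key}")
cache.get("session:abc")   # miss: forwarded to origin
cache.get("session:abc")   # hit: served locally
print(cache.origin_calls)  # 1
```

A production proxy also needs TTLs and invalidation to stay correct; this sketch shows only the hit-path economics that drive the distance multiplier.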

The Bottom Line

Most caching benchmarks are run in ideal conditions—same machine, same network. Cachee wins even there (1.76x faster, 4x lower latency). But the real story is what happens as distance increases.

Every millisecond of network latency you have to Redis is a millisecond that Cachee eliminates on cache hits. The further away your Redis is, the more dramatic the improvement. At cross-region distances, we're not talking about percentage improvements. We're talking about orders of magnitude.

If your app is more than a millisecond away from Redis, the distance multiplier is working against you right now. Cachee turns it into your advantage.

See the Benchmark Data

Full head-to-head results with charts, latency measurements, and infrastructure details.

View Case Study →