DoorDash recommends restaurants based on user history, location, time of day, and cuisine preferences — all via embedding similarity. The “For You” feed, search results, and “Similar Restaurants” all run vector lookups against user and restaurant embeddings. But there is a critical difference between DoorDash and other recommendation systems: users ordering food have almost zero patience. They are hungry. Every second of delay increases the chance they switch to Uber Eats or Grubhub. DoorDash needs to return personalized recommendations within a total pipeline budget of under 50ms, and if vector search alone consumes 5–25ms of that budget, the math does not work.
The Hungry User Problem
Food delivery is unique among recommendation domains because user intent is immediate and high-stakes. A Spotify user browsing music can tolerate a half-second delay before recommendations load. A DoorDash user who just decided they are hungry is making a purchase decision in the next 30–60 seconds. DoorDash’s own research and public engineering blog posts have shown that conversion rate is acutely sensitive to load time. A 100ms increase in page load time correlates with measurable drops in order completion. At DoorDash’s scale — over 30 million monthly active users, millions of daily orders — every 100ms of latency reduction translates directly into revenue.
The recommendation pipeline is the critical path. When a user opens DoorDash, the app must: determine the user’s location and delivery zone, fetch the user’s embedding (encoding order history, cuisine preferences, price sensitivity, time-of-day patterns), query restaurant embeddings for the top-K most similar restaurants in the delivery zone, apply business logic filters (open now, meets delivery time threshold, meets minimum rating), and rank the results by a final scoring model. The total budget for this pipeline is 50ms from request to response. If embedding search alone takes 5–25ms, it consumes 10–50% of the entire budget.
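To make that pipeline shape concrete, here is a minimal sketch of the hot path in Rust. The `Restaurant` struct, the filter thresholds, and the brute-force dot-product scoring are illustrative assumptions, not DoorDash's actual service code; a production index would replace the linear scan with HNSW.

```rust
// Illustrative sketch of the recommendation hot path described above.
// Struct fields, thresholds, and scoring are assumptions, not DoorDash's code.

struct Restaurant {
    id: u64,
    embedding: Vec<f32>, // 256-dim, assumed L2-normalized so dot product == cosine
    is_open: bool,
    eta_minutes: u32,
    rating: f32,
}

fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Score every restaurant in the delivery zone, apply business filters, return top-K ids.
fn recommend(user_emb: &[f32], zone: &[Restaurant], k: usize) -> Vec<u64> {
    let mut scored: Vec<(f32, u64)> = zone
        .iter()
        .filter(|r| r.is_open && r.eta_minutes <= 45 && r.rating >= 4.0)
        .map(|r| (dot(user_emb, &r.embedding), r.id))
        .collect();
    // A few thousand candidates sort in well under a millisecond; an HNSW index
    // replaces this linear scan when the candidate set grows beyond a single zone.
    scored.sort_by(|a, b| b.0.total_cmp(&a.0));
    scored.into_iter().take(k).map(|(_, id)| id).collect()
}

fn main() {
    let user = vec![0.0625_f32; 256];
    let zone = vec![Restaurant {
        id: 42,
        embedding: vec![0.0625; 256],
        is_open: true,
        eta_minutes: 30,
        rating: 4.6,
    }];
    println!("top picks: {:?}", recommend(&user, &zone, 10));
}
```

The key property of the in-process approach is that this entire function runs inside the request handler, with no network hop between candidate retrieval, filtering, and ranking.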
Why Restaurant Embeddings Fit Perfectly in L1
DoorDash operates in approximately 900 cities across the US, Canada, Australia, Japan, and Germany. In any given city, the active restaurant inventory is measured in the thousands, not millions. New York City, the largest market, has roughly 25,000 restaurants on the platform. The entire US restaurant inventory is approximately 500,000 restaurants. This is a remarkably small dataset for an in-process vector search index.
At 256 dimensions and 4 bytes per float, each restaurant embedding is 1KB. The top 500,000 restaurants come to roughly 500MB of raw vectors. With HNSW graph overhead (~30%), the total L1 index size is approximately 650MB, which is small enough to hold entirely in the memory of a single application server. The hot subset — restaurants in the user’s delivery zone, which is what actually gets queried — is typically 2,000–5,000 restaurants, only a few megabytes of embeddings. That subset fits comfortably in a modern CPU’s on-chip cache, which means vector similarity search runs at cache speeds rather than DRAM speeds.
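The sizing arithmetic is easy to reproduce. The short program below simply restates it, with the assumed inputs (dimension count, float width, 30% HNSW overhead, zone size) as named constants you can swap for your own catalog.

```rust
// Back-of-envelope index sizing using the figures quoted above (all assumed values).
fn main() {
    let dims: u64 = 256;
    let bytes_per_float: u64 = 4;
    let restaurants: u64 = 500_000;
    let hnsw_overhead = 1.30_f64;      // ~30% graph overhead
    let zone_restaurants: u64 = 5_000; // upper end of a delivery zone

    let per_embedding = dims * bytes_per_float;                 // 1 KB per restaurant
    let raw_mb = restaurants * per_embedding / 1_000_000;       // ~500 MB of raw vectors
    let index_mb = (raw_mb as f64 * hnsw_overhead) as u64;      // ~650 MB with HNSW graph
    let zone_mb = zone_restaurants * per_embedding / 1_000_000; // ~5 MB hot subset

    println!("raw: {raw_mb} MB, full index: ~{index_mb} MB, zone subset: ~{zone_mb} MB");
}
```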
The Latency Waterfall: Before and After
A DoorDash recommendation request currently looks like this: the recommendation service receives the request, fetches the user embedding from a feature store (2–4ms network round-trip), queries a centralized vector service for similar restaurants (5–15ms including serialization, network, and index query), applies geo-filtering and business rules (1–2ms), and runs a final ranking model (2–3ms). The vector search step dominates the pipeline.
(Figure: latency waterfall of the current DoorDash recommendation pipeline vs. the same pipeline with L1 in-process HNSW.)
The total pipeline drops from 15.4ms to 3.7ms. The vector search step — previously 75% of the pipeline — becomes invisible at 0.003ms combined. DoorDash now has 46ms of headroom within its 50ms budget. That headroom can be spent on richer ranking models, more personalization signals, or simply absorbed as a reliability margin for p99 latency.
The Conversion Rate Math
DoorDash processes approximately 2 million orders per day. Industry research consistently shows that every 100ms of latency reduction in e-commerce improves conversion rates by 0.5–1.0%. Moving from 15.4ms to 3.7ms is an 11.7ms improvement in the recommendation pipeline alone — but the downstream effect is larger because faster recommendations enable faster page renders, which compound across the entire session.
Conservatively estimating a 0.3% conversion rate improvement from the combined latency gains: 0.3% of 2 million daily orders = 6,000 additional orders per day. At an average order value of $35 and a ~20% take rate, that is $42,000 per day in additional gross revenue — or $15.3 million per year. And this is the conservative estimate. The actual impact is likely higher because food delivery has a uniquely steep latency-to-conversion curve: hungry users are the most impatient users on the internet.
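The revenue arithmetic in the paragraph above, written out so the assumptions (order volume, lift, order value, take rate) are explicit and easy to replace with your own figures:

```rust
// Conversion-lift arithmetic from the text above; every input is an assumption.
fn main() {
    let daily_orders = 2_000_000.0_f64; // approximate daily order volume
    let conversion_lift = 0.003;        // conservative 0.3% improvement
    let avg_order_value = 35.0;         // USD
    let take_rate = 0.20;               // platform's share of each order

    let extra_orders = daily_orders * conversion_lift;              // 6,000 orders/day
    let daily_revenue = extra_orders * avg_order_value * take_rate; // $42,000/day
    let annual_m = daily_revenue * 365.0 / 1_000_000.0;             // ~$15.3M/year

    println!("+{extra_orders:.0} orders/day -> ${daily_revenue:.0}/day -> ${annual_m:.1}M/year");
}
```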
Pre-Warming by Geography and Time
DoorDash’s data has strong temporal and geographic patterns. Lunch orders spike between 11am and 1pm. Dinner orders peak between 5pm and 8pm. Weekend brunch has its own pattern. Cachee’s predictive warming layer can pre-load restaurant embeddings based on time-of-day demand curves: sushi restaurants weighted higher at dinner, breakfast spots pre-warmed before 7am, trending restaurants boosted when they appear on social media. The L1 index dynamically reflects what users are about to search for, not just what they searched for yesterday.
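A minimal sketch of how time-of-day weighting could feed a warming layer, assuming a simple hour-by-cuisine boost table. The function name and the specific weights are illustrative; this does not show Cachee's actual warming API.

```rust
// Hypothetical time-of-day warming weights; not Cachee's actual warming API.

/// Boost applied to a cuisine's restaurant embeddings when pre-loading the L1 index.
fn warm_weight(cuisine: &str, hour: u32) -> f64 {
    match (cuisine, hour) {
        ("breakfast", 5..=10) => 3.0,       // pre-warm breakfast spots before the morning rush
        ("sushi", 17..=20) => 2.0,          // dinner peak for dinner-leaning cuisines
        (_, 11..=13) | (_, 17..=20) => 1.5, // general lunch and dinner lift
        _ => 1.0,
    }
}

fn main() {
    for hour in [6u32, 12, 19] {
        for cuisine in ["breakfast", "sushi", "pizza"] {
            println!("{cuisine:<10} @ {hour:02}:00 -> weight {}", warm_weight(cuisine, hour));
        }
    }
}
```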
For DoorDash, the question is not whether in-process vector search is worth the engineering investment. The question is how much revenue they are leaving on the table every day that hungry users wait an extra 12 milliseconds for restaurant recommendations to load. At their scale, the answer is measured in millions.
Also Read
The Numbers That Matter
Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.
- L0 hot path GET: 28.9 nanoseconds on Apple M4 Max, single-threaded against pre-warmed in-memory cache. This is the floor — there's no faster way to read a key.
- L1 CacheeLFU GET: ~89 nanoseconds on AWS Graviton4 (c8g.metal-48xl). Sharded DashMap with admission filtering.
- Sustained throughput: 32 million ops/sec single-threaded on M4 Max, 7.41 million ops/sec at 16 workers on Graviton4 c8g.16xlarge.
- L2 fallback: Sub-millisecond hits against ElastiCache Redis 7.4 over same-AZ network when L1 misses cascade through.
The compounding effect matters more than any single number. A 28-nanosecond L0 hit means your application spends almost zero time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.
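If you want to sanity-check figures like these on your own hardware, a tight single-threaded loop against a pre-warmed map gives an order-of-magnitude number. The snippet below uses a plain `HashMap` as a stand-in; it is not Cachee's benchmark harness, and real harnesses pin cores and guard against the optimizer more carefully.

```rust
// Rough single-threaded GET latency measurement against a pre-warmed std HashMap.
use std::collections::HashMap;
use std::time::Instant;

fn main() {
    let keys: u64 = 100_000;
    let mut cache = HashMap::new();
    for i in 0..keys {
        cache.insert(i, i); // pre-warm so every lookup below is a hit
    }

    let iters: u64 = 10_000_000;
    let mut hits = 0u64;
    let start = Instant::now();
    for i in 0..iters {
        if cache.get(&(i % keys)).is_some() {
            hits += 1;
        }
    }
    let ns_per_get = start.elapsed().as_nanos() as f64 / iters as f64;
    println!("{hits} hits, ~{ns_per_get:.1} ns per GET");
}
```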
When Caching Actually Helps
Caching isn't free. It introduces a consistency problem you didn't have before. Before adding any cache layer, the question to answer is whether your workload actually benefits from caching at all.
Caching helps when three conditions hold simultaneously. First, your reads dramatically outnumber your writes — typically a 10:1 ratio or higher. Second, the same keys get read repeatedly within a window where a cached value remains valid. Third, the cost of computing or fetching the underlying value is meaningfully higher than the cost of a cache lookup. Database queries that hit secondary indexes, RPC calls to slow upstream services, expensive computed aggregations, and rendered template fragments all qualify.
Caching hurts when those conditions don't hold. Write-heavy workloads suffer because every write invalidates a cache entry, multiplying your work. Workloads with poor key locality suffer because the cache wastes memory storing entries that never get reused. Workloads where the underlying fetch is already fast — well-indexed primary key lookups against a properly tuned database, for example — gain almost nothing from caching and inherit the consistency complexity for no reason.
The honest first step before any cache deployment is measuring your actual read/write ratio, key access distribution, and underlying fetch latency. If your read/write ratio is below 5:1 or your underlying database is already returning results in single-digit milliseconds, the engineering time is better spent elsewhere.
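A sketch of the kind of instrumentation that answers those questions before any cache goes in: count reads against writes and sample the latency of the underlying fetch. The counter names, the stand-in workload, and the thresholds in the comments are assumptions, not part of any particular library.

```rust
// Minimal pre-caching measurement: read/write counters plus a timed fetch sample.
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};

static READS: AtomicU64 = AtomicU64::new(0);
static WRITES: AtomicU64 = AtomicU64::new(0);

fn record_read() { READS.fetch_add(1, Ordering::Relaxed); }
fn record_write() { WRITES.fetch_add(1, Ordering::Relaxed); }

/// Wrap the real database/RPC call and return its wall-clock cost in microseconds.
fn timed_fetch<T>(fetch: impl FnOnce() -> T) -> (T, u128) {
    let start = Instant::now();
    let value = fetch();
    (value, start.elapsed().as_micros())
}

fn main() {
    // Stand-in workload: 19 reads for every write.
    for i in 0..100u32 {
        if i % 20 == 0 { record_write() } else { record_read() }
    }
    let (_value, micros) = timed_fetch(|| std::thread::sleep(Duration::from_millis(3)));

    let ratio = READS.load(Ordering::Relaxed) as f64 / WRITES.load(Ordering::Relaxed) as f64;
    println!("read/write ratio: {ratio:.1}:1, sample fetch: {micros} us");
    // Ratios below ~5:1, or fetches already in single-digit milliseconds,
    // suggest the cache layer won't pay for its consistency complexity.
}
```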
Memory Efficiency Is The Hidden Cost Lever
Throughput numbers get the headlines but memory efficiency determines your monthly bill. A cache that stores the same hot data in less RAM lets you run a smaller instance class — and on AWS that's the difference between profitable and breakeven for a lot of services.
Redis stores each key as a Simple Dynamic String with 16 bytes of header overhead, plus dictEntry pointers in the main hashtable, plus embedded TTL metadata. For 1KB values, the total per-entry footprint lands around 1,100-1,200 bytes once you account for hashtable load factor and allocator fragmentation. At a million keys, that's roughly 1.2 GB of resident memory just for the data.
Cachee's L1 layer uses sharded DashMap entries with compact packing — a 64-bit key hash, value bytes, an 8-byte expiry timestamp, and a small frequency counter for the CacheeLFU admission filter. Per-entry overhead lands at roughly 40 bytes of structural data on top of the value itself. For the same million-key workload, that's about 13% smaller resident memory. On AWS ElastiCache pricing, that gap is the difference between needing a cache.r7g.large versus a cache.r7g.xlarge for borderline workloads.
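For intuition, here is roughly what an entry with that shape costs structurally on a 64-bit target. The struct is an illustration of the layout described above, not Cachee's actual type, and the Redis figure in the output is simply the number quoted earlier.

```rust
// Illustrative compact-entry layout (not Cachee's real struct) and its structural size.
use std::mem::size_of;

struct L1Entry {
    key_hash: u64,      // 8 bytes: 64-bit hash of the key
    expires_at_ms: u64, // 8 bytes: absolute expiry timestamp
    freq: u32,          // 4 bytes: CacheeLFU-style admission/frequency counter
    value: Box<[u8]>,   // 16 bytes on 64-bit: pointer + length to the value bytes
}

fn main() {
    let structural = size_of::<L1Entry>(); // ~40 bytes after alignment padding
    let value_len = 1024;                  // the 1KB value used in the Redis comparison
    println!("structural overhead: {structural} B per entry");
    println!(
        "total per entry: ~{} B, vs ~1,100-1,200 B per entry quoted for Redis",
        structural + value_len
    );
}
```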
What This Actually Costs
Concrete pricing math beats hypotheticals. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on an AWS ElastiCache cache.r7g.xlarge primary plus a read replica — roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50-150 per month depending on access patterns.
Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.
Compounded over twelve months, that's $3,600 to $4,500 per year on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. The bigger savings usually come from eliminating cross-AZ data transfer charges, which Redis-as-a-service architectures incur on every read that crosses an availability zone.
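The monthly arithmetic behind those figures, with the quoted prices as inputs. All of them are list-price assumptions carried over from the paragraphs above, not a quote for any specific account.

```rust
// Cost arithmetic using the figures quoted above (assumed list prices, $/month).
fn main() {
    let current_nodes = 480.0_f64;        // cache.r7g.xlarge primary + read replica
    let cross_az = (50.0, 150.0);         // cross-AZ data transfer range
    let after_migration = (120.0, 180.0); // smaller L2 fallback once the hot path is in-process

    let node_savings_low = current_nodes - after_migration.1;  // $300/month
    let node_savings_high = current_nodes - after_migration.0; // $360/month
    println!("node savings: ${node_savings_low:.0}-${node_savings_high:.0}/month");
    println!(
        "plus ${:.0}-${:.0}/month of cross-AZ transfer that can disappear entirely",
        cross_az.0, cross_az.1
    );
}
```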
Hungry Users Won’t Wait. Neither Should Your Recommendations.
L1 HNSW at 0.0015ms per lookup — restaurant recommendations faster than your competitors’ apps can load.
Start Free Trial · Schedule Demo