
Real-Time Recommendations Without a Vector Database: The L1 Cache Approach

Spotify, DoorDash, Netflix, and Instacart all face the same engineering constraint: generate personalized recommendations in under 50 milliseconds, end to end. The recommendation pipeline includes candidate generation, embedding lookups, similarity scoring, ranking, and business-rule filtering. When you use a vector database for similarity search, each lookup adds 1–5 milliseconds of network latency. A single recommendation request that requires 5–10 embedding lookups burns 5–50ms on vector search alone — consuming most or all of the latency budget before ranking even starts. An L1 in-process HNSW index at 0.0015ms per lookup changes the math entirely. Ten lookups take 0.015ms. The entire recommendation pipeline fits within budget with room to spare.

The Vector Database Latency Problem

Vector databases were designed for analytical workloads: index millions of embeddings, query them at moderate throughput, return results in single-digit milliseconds. Pinecone advertises p50 latencies of 5–10ms. Weaviate benchmarks at 3–15ms depending on dataset size and query complexity. Qdrant and Milvus fall in similar ranges. For batch processing, document retrieval, or semantic search with relaxed latency requirements, these numbers are fine.

Recommendation engines do not have relaxed latency requirements. Spotify has written publicly about their vector search challenges — their two-tower recommendation model generates candidate embeddings that must be compared against a catalog of 100+ million track embeddings. The initial candidate retrieval pass needs to score hundreds of items in embedding space to feed the downstream ranking model. At 5ms per vector lookup through an external database, the math does not work. Even with batched queries and connection pooling, the network round-trip between application servers and the vector database cluster introduces latency that cannot be engineered away.

DoorDash faces a similar constraint in their real-time restaurant and item recommendations. A user opens the app and expects personalized suggestions within the time it takes the page to render — roughly 100ms total page load budget, of which the recommendation API gets 30–50ms. The recommendation pipeline must retrieve the user’s embedding, find similar users for collaborative filtering, retrieve item embeddings for the candidate set, compute similarity scores, apply business rules (restaurant open hours, delivery radius, inventory), and return ranked results. Each of those embedding operations through a vector database adds 2–5ms. Four to six lookups and the budget is gone.

Vector DB per lookup: 1–5ms
L1 HNSW per lookup: 0.0015ms
Speedup per query: 3,333×
Recommendation latency budget: <50ms

The L1 In-Process Architecture

The solution is to move hot embeddings out of the vector database and into the application process itself. Cachee’s L1 HNSW index runs in-process — no TCP connection, no serialization, no network hop. The HNSW graph lives in the same memory space as the application, and queries resolve through direct pointer traversal. A single approximate nearest neighbor lookup on a 384-dimension embedding completes in 1.5 microseconds (0.0015ms). Ten lookups complete in 15 microseconds. The entire embedding similarity phase of a recommendation pipeline takes less time than a single network packet round-trip to an external database.
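To make the in-process path concrete, here is a minimal sketch using the open-source hnswlib-node bindings rather than Cachee’s API; the 384-dimension size, the capacity, and the randomly generated vectors are illustrative assumptions, and real embeddings would come from your model.

// Minimal in-process HNSW sketch using the open-source hnswlib-node bindings.
// It illustrates the "no network hop" property, not Cachee's internals.
const { HierarchicalNSW } = require('hnswlib-node');

const DIM = 384;                                   // embedding dimensionality (assumed)
const index = new HierarchicalNSW('cosine', DIM);  // lives in the application process
index.initIndex(1_000_000);                        // capacity for the hot set

// Fabricate embeddings for the sketch; real vectors come from your model.
const randomVec = () => Array.from({ length: DIM }, () => Math.random());
for (let id = 0; id < 10_000; id++) {
  index.addPoint(randomVec(), id);
}

// The query resolves by graph traversal in the same memory space:
// no TCP connection, no serialization, microsecond-scale latency.
const { neighbors, distances } = index.searchKnn(randomVec(), 10);
console.log(neighbors, distances);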

In-process similarity search is not a new idea: Meta built FAISS for exactly this reason and uses it internally for its recommendation systems. The difference is that FAISS is a library, and it leaves index management, persistence logic, and update coordination across replicas to you. Cachee wraps HNSW in a managed cache layer with automatic TTL expiration, predictive pre-warming, and tiered fallback to an L2 vector database for cold embeddings.

The tiered architecture: L1 (in-process HNSW) holds hot embeddings — trending items, active users, popular categories. L2 (external vector DB) holds the full catalog. On L1 miss, the query falls through to L2, and the result is promoted to L1 for subsequent lookups. Hit rates of 85–95% on L1 are typical for recommendation workloads because traffic follows a power-law distribution.
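A conceptual sketch of that read path for a single embedding fetch follows; the l1 map, l1Index, and l2Client names are hypothetical stand-ins for the hot-set structures and the external vector database, not Cachee’s implementation (the managed layer performs the fall-through and promotion itself).

// Conceptual sketch of the L1 -> L2 read path for one embedding fetch.
// "l1" (a Map of hot embeddings), "l1Index" (in-process HNSW), and
// "l2Client" (external vector DB client) are assumed stand-ins.
async function getEmbedding(id) {
  // L1 hit: served from process memory, no network hop.
  const hot = l1.get(id);
  if (hot) return hot;

  // L1 miss: fall through to the external vector database (1–5ms round trip).
  const cold = await l2Client.fetchVector(id);

  // Promote the cold embedding so subsequent requests are L1 hits.
  l1.set(id, cold);
  l1Index.addPoint(cold, id);   // also searchable in the L1 HNSW graph now
  return cold;
}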

Latency Waterfall: Before and After

Here is what a typical recommendation request looks like with and without L1 caching. The scenario: a food delivery app generating 10 personalized restaurant recommendations for a user opening the home screen.

Without L1 — Vector DB for All Lookups

User embedding lookup: 3.2ms
Similar users (5 lookups): 14.5ms
Item embeddings (10 lookups): 18.3ms
Ranking model: 4.1ms
Business rules: 1.8ms
Total: 41.9ms

With L1 HNSW — Hot Embeddings In-Process

User embedding (L1): 0.0015ms
Similar users (5 × L1): 0.0075ms
Item embeddings (10 × L1): 0.015ms
Ranking model: 4.1ms
Business rules: 1.8ms
Total: 5.9ms

The vector search phase drops from 36ms to 0.024ms — a 1,500× improvement. Total pipeline latency drops from 41.9ms to 5.9ms. That is a 7× improvement end-to-end, and the pipeline now fits comfortably within even the strictest 10ms latency budget. The ranking model and business rules become the dominant cost, which is exactly where you want your latency budget spent — on logic that actually differentiates the user experience, not on network hops to retrieve data.

Pre-Warming Trending Items

The L1 cache is most effective when hot embeddings are already warm before the first request arrives. Cachee’s predictive warming system learns traffic patterns and pre-loads embeddings that are likely to be requested in the next time window. For recommendation engines, the pre-warming strategy is straightforward: load the embeddings L1 is meant to hold (trending items, active users, popular categories) ahead of the traffic that will request them.
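A minimal sketch of such a warming job follows, assuming a hypothetical getTrendingItemIds feed and the same stand-in l2Client and l1Index as above; Cachee’s predictive warming is driven by learned traffic patterns rather than the fixed schedule shown here.

// Hypothetical pre-warming job: load embeddings that are likely to be hot
// in the next window before the traffic that needs them arrives.
// "getTrendingItemIds", "l2Client", and "l1Index" are assumed stand-ins.
async function warmHotSet() {
  // Candidates for the next window: trending items, active users, popular categories.
  const hotIds = await getTrendingItemIds({ window: '15m', limit: 50_000 });

  // Bulk-fetch once from the L2 vector database so first requests
  // never pay the 1–5ms L2 round trip.
  const vectors = await l2Client.fetchVectors(hotIds);

  for (const [id, vector] of vectors) {
    l1Index.addPoint(vector, id);   // now resolvable from L1 in ~0.0015ms
  }
}

// Run ahead of peak traffic; predictive warming replaces this fixed interval.
setInterval(warmHotSet, 5 * 60 * 1000);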

With pre-warming, L1 hit rates for recommendation workloads typically reach 90–95%. The remaining 5–10% of misses fall through to the L2 vector database, are served with standard latency, and are simultaneously promoted to L1 for subsequent requests. The cold-start penalty is absorbed on the first request for any given embedding; all subsequent requests are served from L1.

// Recommendation pipeline with L1 HNSW
async function getRecommendations(userId, count = 10) {
  // L1 lookup: 0.0015ms per query
  const userEmb = await cachee.vsearch(userId, {
    index: "user-embeddings",
    topK: 1,
  });

  // Find similar users for collaborative signal
  const similar = await cachee.vsearch(userEmb.vector, {
    index: "user-embeddings",
    topK: 20,
    exclude: [userId],
  }); // 0.0015ms

  // Retrieve candidate item embeddings
  const candidates = await cachee.vsearch(userEmb.vector, {
    index: "item-embeddings",
    topK: 100,
  }); // 0.0015ms

  // Rank and filter (this is where latency should go)
  return rank(candidates, similar, count);
}
Memory footprint: 1 million 384-dim embeddings in L1 HNSW requires approximately 1.5GB of memory. 10 million embeddings requires ~15GB. Most recommendation systems only need their hot set in L1 — typically the top 1–5% of items and active users — making the memory requirement negligible on modern servers. The full catalog stays in the L2 tier.
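For reference, the arithmetic behind those figures: 1,000,000 embeddings × 384 dimensions × 4 bytes per float32 component is about 1.5GB of raw vector data before HNSW graph overhead; at 10 million embeddings the same math gives roughly 15GB.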
