Every DEX aggregator faces the same invisible tax: reading pool state. Before your routing algorithm can evaluate a single path, it needs reserves, tick data, fee tiers, and sqrtPrice from 40 to 60 liquidity pools. Each RPC call takes 0.8–1.2 milliseconds. Multiply that by 50 pools and you have spent 40–60 milliseconds — more than half your quote budget — just reading data. Your routing algorithm never gets the chance to do what it was built to do: find the best price.
Cachee is an AI-powered L1 caching layer that serves pool state in 1.5 microseconds instead of 1 millisecond. It sits in front of your existing RPC infrastructure, predicts which pools will be queried next, and pre-loads their state into in-process memory. The result: your aggregator evaluates 20–50× more routes per quote, finds splits that were previously invisible, and delivers +13 basis points better execution on every swap.
The Pool State Bottleneck
A modern DEX aggregator quoting a SOL→USDC swap does not simply check one pool. It queries Raydium concentrated liquidity positions, Orca whirlpools, Meteora DLMM bins, Jupiter limit-order books, and dozens of smaller AMMs. Each pool needs its current reserves, active tick range, and fee configuration before the router can model the output amount. For a single swap quote on Solana, this means 40–60 individual state reads.
On Ethereum and L2s, the situation is identical. Uniswap v3 and v4 tick-based pools, Curve stableswap pools, Balancer weighted pools, and emerging concentrated liquidity protocols all require the same per-pool state fetch before the router can score a route. Every pool read that goes through RPC is a network round-trip — and those round-trips dominate your quote latency.
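The invisible tax works out as quick arithmetic. A minimal sketch, using the pool count and per-read latencies quoted above and assuming reads happen serially:

```python
# Illustrative state-read budget for one swap quote. The pool count and
# per-read latencies are the figures from this article; serial reads assumed.

POOLS_PER_QUOTE = 50
RPC_READ_MS = 1.0        # midpoint of the 0.8-1.2ms RPC round-trip
L1_READ_MS = 0.0015      # 1.5 microseconds per cached read

def state_read_cost_ms(pools: int, per_read_ms: float) -> float:
    """Total time spent just reading pool state before routing starts."""
    return pools * per_read_ms

rpc_total = state_read_cost_ms(POOLS_PER_QUOTE, RPC_READ_MS)  # ~50ms
l1_total = state_read_cost_ms(POOLS_PER_QUOTE, L1_READ_MS)    # ~0.075ms

print(f"RPC:   {rpc_total:.3f} ms on state reads")
print(f"Cache: {l1_total:.3f} ms on state reads")
```

Fifty reads over RPC consume the bulk of the quote budget; the same fifty reads from in-process memory are a rounding error.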
Anatomy of a Swap Quote: Before and After
Walk through a typical aggregator quote on both stacks to see where the time goes:
- Standard Infrastructure (RPC): ~50 pool reads × ~1ms per round-trip ≈ 50ms spent on state I/O, leaving roughly 15ms of the quote budget for route scoring.
- Cachee L1 Infrastructure: the same 50 reads served from in-process memory in under 1ms, leaving roughly 64ms for route scoring.
The 49 milliseconds recovered from pool reads do not simply make the quote faster. They give the routing algorithm 49 additional milliseconds to explore paths. A router that previously had 15ms to score routes now has 64ms. That is not an incremental improvement — it is a fundamental change in how many routes can be evaluated per quote.
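The budget shift can be sketched directly, assuming the ~65ms total quote budget implied by the 50ms/15ms split in the text:

```python
# Routing-compute budget before and after. The 65ms total is an assumption
# (50ms of reads + 15ms of routing, per the text); read costs are the
# article's figures.

TOTAL_QUOTE_BUDGET_MS = 65.0

def routing_budget_ms(total_ms: float, state_read_ms: float) -> float:
    """Time left for the routing algorithm after pool-state reads."""
    return total_ms - state_read_ms

before = routing_budget_ms(TOTAL_QUOTE_BUDGET_MS, 50.0)  # 15ms for routing
after = routing_budget_ms(TOTAL_QUOTE_BUDGET_MS, 1.0)    # 64ms for routing
```

The 49ms difference is pure routing compute: time the algorithm previously spent blocked on I/O.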
More Routes = Better Prices
Route exploration is the core competitive advantage of any DEX aggregator. The aggregator that evaluates more paths finds better splits, discovers hidden liquidity in long-tail pools, and produces tighter quotes. But route exploration scales with the compute time left over after state reads. When pool reads are the bottleneck, adding more CPU to your routing engine does nothing.
With Cachee, pool reads drop from the dominant cost to a rounding error. The routing engine becomes CPU-bound instead of I/O-bound, and it can finally do what it was designed to do:
- Standard infrastructure: 200–500 routes explored per quote
- Cachee infrastructure: 10,000+ routes explored per quote, a 20–50× increase
The impact on execution quality is measurable: +13 basis points better average execution. On a $10,000 swap, that is $13. Multiply by millions of swaps per day across a major aggregator, and the numbers become very large very quickly.
Why Multi-Hop Splits Benefit Most
The RPC bottleneck is especially punishing for multi-hop routes. A simple A→B swap needs pool state from one set of pools. A multi-hop A→B→C→D route needs state from three sets. Each additional hop multiplies the number of pool reads, and under standard infrastructure, each additional hop adds 40–50ms of I/O latency. This makes the router heavily biased toward simple, direct routes even when a multi-hop split would produce a better price.
With Cachee, each additional hop adds 0.9ms instead of 50ms. Three-hop and four-hop routes become as cheap to evaluate as direct swaps. The router can explore the full solution space without penalty, finding split routes across fragmented liquidity that were previously invisible within the quote budget.
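The hop scaling above reduces to simple multiplication. A sketch using the per-hop figures from the text, with reads assumed serial:

```python
# Incremental state-read cost per additional hop, using the article's
# per-hop figures: ~50ms over RPC versus ~0.9ms with cached reads.

RPC_PER_HOP_MS = 50.0
CACHE_PER_HOP_MS = 0.9

def route_read_cost_ms(hops: int, per_hop_ms: float) -> float:
    """State-read cost for a route with the given number of hops."""
    return hops * per_hop_ms

# A three-hop A->B->C->D route:
rpc_cost = route_read_cost_ms(3, RPC_PER_HOP_MS)      # 150ms: blows the budget
cache_cost = route_read_cost_ms(3, CACHE_PER_HOP_MS)  # ~2.7ms: cheap to explore
```

At 150ms of I/O, a three-hop route can never be scored inside the quote deadline; at under 3ms, it competes on equal footing with direct swaps.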
Six DEX Use Cases
🔄 Swap Routing
Read reserves, tick data, and fee tiers from 50+ pools per quote. Cachee serves all 50 reads in under 0.1ms — faster than a single RPC call.
50 pools in <0.1ms
🔀 Multi-Hop Split Orders
Cascading pool reads for A→B→C→D routes happen at memory speed. Three-hop routes cost the same as direct swaps.
3-hop routes at 0.9ms total
🔔 Limit Orders
Monitor prices continuously across 10,000+ pools. Cachee’s 1.5µs reads enable real-time fill detection without RPC rate limits.
10K+ pools monitored in real-time
📊 LP Position Management
Track impermanent loss in real time by reading pool state and price ticks at memory speed instead of polling RPCs every few seconds.
1.5µs state reads vs 1s+ polling
⚡ JIT Liquidity
Pre-cache incoming swap sizes and pool state to provision optimal tick ranges just in time. React to mempool activity before the block lands.
Sub-µs reaction time
🌍 Cross-Chain Routing
Unify pool state from Solana, Ethereum, Arbitrum, Base, and Polygon in a single L1 cache layer. Quote cross-chain swaps without per-chain RPC latency.
5 chains, 1 unified cache
The Revenue Math
Better execution is not just a user experience metric. For aggregators, it translates directly to volume, fees, and market share. Users route through the aggregator that gives them the best price. Better infrastructure → better prices → more volume → more fees. The flywheel is mechanical.
For a top-10 DEX aggregator processing $2B+ in daily volume:
- +13 bps better execution attracts volume from competing aggregators
- +22% volume increase from consistently better prices (measured across routing benchmarks)
- $12M additional annual fee revenue from volume uplift
- $6M annual RPC cost savings from cache hits replacing redundant RPC calls
- $18M+ total annual impact at a fraction of the cost
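The line items above combine as plain arithmetic. A sketch in which every dollar figure is the article's claim, not independently derived:

```python
# Revenue math for a top-10 aggregator, per the article's figures.

def bps_saving_usd(notional_usd: float, bps: float) -> float:
    """Dollar value of a basis-point execution improvement on one swap."""
    return notional_usd * bps / 10_000

per_swap = bps_saving_usd(10_000, 13)   # $13 saved on a $10,000 swap

ADDED_FEE_REVENUE_USD = 12_000_000      # from the +22% volume uplift
RPC_COST_SAVINGS_USD = 6_000_000        # cache hits replacing RPC calls
total_impact = ADDED_FEE_REVENUE_USD + RPC_COST_SAVINGS_USD  # $18M annually
```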
Three-Minute Integration
Cachee speaks the native RESP (Redis) protocol. It drops in as an L1 layer in front of your existing cache or state store. Your routing engine does not change. Your pool indexer does not change. Your transaction builder does not change. You change two environment variables and pool reads go from milliseconds to microseconds.
Hot pools — the ones your router queries most often — serve from L1 in-process memory at 1.5µs. Warm pools serve from L2 shared cache at sub-10µs. Cold pools cascade through to your existing RPC infrastructure automatically. There is zero cold-start risk: if Cachee has not pre-warmed a pool, the read falls through to the same RPC you use today.
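The cascade can be sketched as a read-through lookup. The class and method names here are illustrative, not Cachee's actual API; the injected L2 and RPC callables stand in for real backends:

```python
from typing import Callable, Optional

class TieredPoolCache:
    """Sketch of the L1 -> L2 -> RPC fallthrough described above.
    Names and structure are hypothetical, not Cachee's real interface."""

    def __init__(self,
                 l2_lookup: Callable[[str], Optional[dict]],
                 rpc_fetch: Callable[[str], dict]):
        self.l1: dict = {}            # hot pools: in-process memory (~1.5us)
        self.l2_lookup = l2_lookup    # warm pools: shared cache (sub-10us)
        self.rpc_fetch = rpc_fetch    # cold pools: your existing RPC

    def get(self, pool: str) -> dict:
        state = self.l1.get(pool)
        if state is not None:         # L1 hit
            return state
        state = self.l2_lookup(pool)  # L2 hit
        if state is None:
            state = self.rpc_fetch(pool)  # miss: same RPC you use today
        self.l1[pool] = state         # promote so the next read is in-process
        return state
```

Stubbing the tiers shows the zero-cold-start property: the first read of a cold pool falls through to RPC, and every subsequent read of that pool is served from L1.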
The AI prediction engine learns your aggregator’s access patterns. It observes which pools are queried together, detects volume surges on specific pairs, and pre-loads pool state before the routing engine asks for it. The result is a 99.05% cache hit rate — meaning 99 out of 100 pool reads never touch RPC at all.
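That hit rate implies an effective average read latency. A sketch that treats every miss as a full RPC round-trip (a simplifying assumption; some misses would land in the sub-10µs L2 tier instead):

```python
# Effective average read latency implied by the article's 99.05% hit rate,
# charging every miss the full ~1ms RPC round-trip.

HIT_RATE = 0.9905
L1_HIT_MS = 0.0015   # 1.5us cache hit
RPC_MISS_MS = 1.0    # ~1ms fallthrough on a miss

def effective_read_ms(hit_rate: float, hit_ms: float, miss_ms: float) -> float:
    """Expected per-read latency as a hit/miss weighted average."""
    return hit_rate * hit_ms + (1 - hit_rate) * miss_ms

avg_ms = effective_read_ms(HIT_RATE, L1_HIT_MS, RPC_MISS_MS)
# roughly 0.011ms per read on average, even counting misses against RPC
```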
The Aggregator Arms Race
DEX aggregation is a winner-take-most market. The aggregator with the best quotes captures the most volume, which generates the most fee revenue, which funds better infrastructure, which produces even better quotes. It is a compounding flywheel, and the differentiator is not the routing algorithm — every serious aggregator uses some variant of Dijkstra, Bellman-Ford, or graph-based optimization. The differentiator is how much of the solution space you can explore within the quote deadline.
That is an infrastructure problem, not an algorithm problem. And it is the problem Cachee was built to solve.
When your aggregator can read pool state from 50 pools in 0.9ms instead of 50ms, when it can evaluate 10,000 routes instead of 500, when every multi-hop split is as cheap to explore as a direct swap — the routing algorithm you already built can finally operate at its full potential. The alpha was always in your code. It was just waiting for faster data.
Your Routing Algorithm Deserves Faster Data
See how 1.5µs pool reads transform your aggregator’s execution quality.
Start Free Trial
Explore DEX Solutions