Blockchain Infrastructure

How Cachee Turns RPC Nodes into Sub-Microsecond State Engines

Every Solana and Ethereum RPC node faces the same hidden bottleneck: RocksDB. Every getAccountInfo, every getBalance, every getTokenAccountsByOwner call triggers a disk read through RocksDB’s LSM tree. Under load, these reads take 1–5 milliseconds each — fast enough for casual queries, catastrophically slow for high-frequency DeFi applications, MEV searchers, and RPC providers serving millions of requests per second. The result is a fleet of expensive bare-metal nodes spending 87% of their I/O on redundant reads for data that changed less than a second ago.

Cachee is an AI-powered caching sidecar that sits between your RPC consumers and your validator’s RocksDB. It serves getAccountInfo in 31 nanoseconds instead of 1–5 milliseconds, handles 2.4 million requests per second per sidecar, reduces RocksDB I/O by 87%, and extends SSD lifespan by 5×. Zero code changes. Zero API changes. One Docker container.

31 ns getAccountInfo
2.4M Requests/Sec
87% Less RocksDB I/O
5× SSD Lifespan
$0.001 Cost Per 1M Reads

The RocksDB Problem Nobody Talks About

Solana validators store account state in RocksDB, a log-structured merge-tree database designed for write-heavy workloads. It is excellent at ingesting the 50,000+ transactions per second that Solana processes. It is terrible at serving the read-heavy workload that RPC consumers generate.

Here is the problem: RocksDB reads traverse multiple sorted string table (SST) files on disk. A single getAccountInfo call may check 3–7 SST levels, perform bloom filter lookups, decompress data blocks, and return the result. On a warm SSD, this takes 1–2 milliseconds. Under contention from hundreds of concurrent RPC consumers, it balloons to 5–12 milliseconds. P99 latency can spike to 50 milliseconds or more during high-traffic periods.
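To make the read path concrete, here is a minimal, illustrative sketch of an LSM-tree point lookup. The class and structure are invented for illustration and are not RocksDB's actual internals; a Python set stands in for a bloom filter (so there are no false positives here). The key point is that every level surviving the filter check costs a disk read.

```python
# Illustrative LSM-tree point lookup: check the in-memory memtable first,
# then each SST level top-down. A per-level filter skips levels that
# cannot hold the key; every level that passes costs one simulated disk read.

class LsmStore:
    def __init__(self, num_levels=7):
        self.memtable = {}                                  # recent writes, in memory
        self.levels = [dict() for _ in range(num_levels)]   # on-disk SST levels
        self.blooms = [set() for _ in range(num_levels)]    # per-level key filters
        self.disk_reads = 0                                 # simulated I/O counter

    def flush(self):
        # Push memtable contents into the top level (compaction elided).
        self.levels[0].update(self.memtable)
        self.blooms[0].update(self.memtable)
        self.memtable.clear()

    def get(self, key):
        if key in self.memtable:                # newest data wins
            return self.memtable[key]
        for level, bloom in zip(self.levels, self.blooms):
            if key not in bloom:                # filter says "not here": skip level
                continue
            self.disk_reads += 1                # this level must be read from disk
            if key in level:
                return level[key]
        return None

store = LsmStore()
store.memtable["hot_account"] = b"state_v1"
store.flush()
store.get("hot_account")                        # every repeat read costs disk I/O
```

Note that reading the same hot key twice doubles the disk reads: the store has no memory of having served the key before, which is exactly the redundancy described above.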

The cruel irony is that most of these reads are redundant. The hot account set on Solana — the top token mints, DEX pools, lending positions, and oracle accounts — covers approximately 85% of all RPC reads. These accounts change at most once per slot (400ms). Yet every getAccountInfo call for these accounts hits RocksDB as if the data has never been read before.

The math: a managed RPC provider serving 100M reads/day pays roughly $8.40 per million reads through RocksDB, or $840 per node per day, with 87% of it spent on redundant reads. That is roughly $267,000 per node per year in wasted disk operations. Every read for a hot account that could have been served from memory instead burns an SSD cycle, adds a millisecond of latency, and costs real money.

How Cachee Solves RPC Reads

Cachee deploys as a single Docker sidecar alongside your Solana validator or RPC node. It intercepts incoming RPC requests, checks its in-process L1 memory cache, and returns the result in nanoseconds if the account is cached. On a cache miss, it falls through to the validator’s RPC port transparently — the consumer never knows whether the response came from cache or disk.
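The read path described above is a classic cache-aside lookup. The sketch below is illustrative, not Cachee's actual API; the class and function names are invented, and a plain dict stands in for the L1 memory cache.

```python
# Cache-aside read path: serve from the in-process dict on a hit,
# fall through to the validator RPC on a miss, then cache the result.

class RpcCacheSidecar:
    def __init__(self, fetch_from_validator):
        self.cache = {}                         # pubkey -> account state
        self.fetch_from_validator = fetch_from_validator  # miss handler
        self.hits = 0
        self.misses = 0

    def get_account_info(self, pubkey):
        if pubkey in self.cache:                # L1 hit: the nanosecond path
            self.hits += 1
            return self.cache[pubkey]
        self.misses += 1                        # miss: transparent fallthrough
        state = self.fetch_from_validator(pubkey)
        self.cache[pubkey] = state              # cache for subsequent reads
        return state

# Stub validator backend that records how often "disk" is actually touched.
backend_calls = []
sidecar = RpcCacheSidecar(
    lambda pk: backend_calls.append(pk) or {"lamports": 42}
)

sidecar.get_account_info("So11111111111111111111111111111111111111112")  # miss
sidecar.get_account_info("So11111111111111111111111111111111111111112")  # hit
```

The consumer calls the same method either way, which is what makes the fallthrough transparent: only the latency differs between the two calls.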

Pre-Loading the Hot Account Set

On startup, Cachee subscribes to the validator’s accountSubscribe WebSocket feed and pre-loads the hot account set into memory. This includes the top token mints (USDC, USDT, wSOL), all major DEX pool accounts (Raydium, Orca, Meteora, Jupiter), oracle price feed accounts (Pyth, Switchboard), and the most-queried program-derived addresses. The AI prediction engine continuously refines this set based on actual query patterns, adding accounts that are frequently requested and evicting accounts that go cold.
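The refinement loop can be sketched as simple frequency tracking with decay. This is a stand-in for the prediction engine, not its actual algorithm; the class name, capacity, and halving decay rule are all invented for illustration.

```python
from collections import Counter

# Frequency-driven hot-set refinement: track per-account query counts,
# keep the top-N accounts in the hot set, and decay counts each window
# so accounts that go cold eventually fall out.

class HotSetTracker:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.counts = Counter()

    def record_query(self, pubkey):
        self.counts[pubkey] += 1

    def hot_set(self):
        # Accounts worth keeping in the L1 cache right now.
        return {pk for pk, _ in self.counts.most_common(self.capacity)}

    def end_window(self):
        # Halve all counts; accounts that reach zero are evicted entirely.
        self.counts = Counter(
            {pk: c // 2 for pk, c in self.counts.items() if c // 2 > 0}
        )

tracker = HotSetTracker(capacity=2)
for pk in ["USDC_mint"] * 3 + ["SOL_pool"] * 2 + ["rare_account"]:
    tracker.record_query(pk)
# hot_set() now holds the two most-queried accounts; after a decay
# window, the single-query account disappears from the counts.
tracker.end_window()
```

Decay-by-halving is one common way to favor recent traffic over stale popularity; a production engine would weigh recency, burstiness, and account size as well.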

Sub-Slot Invalidation

Stale data is the enemy of correctness. Cachee solves this with sub-slot invalidation: when an account’s state changes (detected via an accountSubscribe WebSocket notification), the cached entry is invalidated within microseconds. The next read for that account fetches fresh state from RocksDB and re-caches it. This means cached data is never more than one slot stale for actively changing accounts, while reads for stable accounts (which rarely change) are served from cache indefinitely at zero disk cost.
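Sketched in Python, assuming the standard shape of a Solana accountNotification message delivered over the accountSubscribe WebSocket; the cache structure and handler name are illustrative, not Cachee's implementation.

```python
# Invalidate-on-notification: when accountSubscribe reports a state
# change, evict the cached entry so the next read fetches fresh state.

cache = {}           # pubkey -> account state
subscriptions = {}   # subscription id -> pubkey (recorded at subscribe time)

def handle_account_notification(message):
    """Evict on a Solana `accountNotification` WebSocket message."""
    if message.get("method") != "accountNotification":
        return                                  # ignore unrelated messages
    sub_id = message["params"]["subscription"]
    pubkey = subscriptions[sub_id]
    cache.pop(pubkey, None)                     # next read falls through to RocksDB

# A cached (about-to-be-stale) entry, and the subscription watching it:
subscriptions[23784] = "oracle_feed_pubkey"
cache["oracle_feed_pubkey"] = {"lamports": 10}

handle_account_notification({
    "jsonrpc": "2.0",
    "method": "accountNotification",
    "params": {
        "subscription": 23784,
        "result": {"context": {"slot": 5199307}, "value": {"lamports": 12}},
    },
})
# cache no longer holds "oracle_feed_pubkey"; the next read re-caches it.
```

An equally valid variant re-caches the fresh state straight from the notification payload, trading a slightly larger handler for a guaranteed hit on the next read.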

Per-Method Latency Comparison

Every standard Solana RPC method benefits from caching, but the impact varies by method complexity:

Method | Standard RocksDB Latency | Cachee L1 Latency
getAccountInfo | 1–5 ms | 31 ns
getBalance | 1–3 ms | 12 ns
getTokenAccountsByOwner | 5–50 ms | 340 ns
getProgramAccounts (1K) | 50–500 ms | 8.2 µs
getMultipleAccounts (10) | 5–15 ms | 170 ns
getSlot / getBlockHeight | 0.5–2 ms | 5 ns

The pattern is consistent: roughly 6,000× to 400,000× faster depending on the method. Batch reads like getProgramAccounts show the smallest ratio only because even a cached thousand-account scan takes microseconds; in absolute terms they save the most time per call, because Cachee avoids the compounding disk seeks that plague RocksDB when scanning thousands of accounts sequentially.
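As a sanity check, the per-method speedups implied by the two latency columns can be computed directly from the table's numbers:

```python
# Speedup implied by the latency table: (RocksDB best/worst latency,
# in seconds) divided by the Cachee L1 latency for each method.

methods = {
    "getAccountInfo":           ((1e-3, 5e-3),    31e-9),
    "getBalance":               ((1e-3, 3e-3),    12e-9),
    "getTokenAccountsByOwner":  ((5e-3, 50e-3),   340e-9),
    "getProgramAccounts (1K)":  ((50e-3, 500e-3), 8.2e-6),
    "getMultipleAccounts (10)": ((5e-3, 15e-3),   170e-9),
    "getSlot / getBlockHeight": ((0.5e-3, 2e-3),  5e-9),
}

speedups = {
    name: (low / l1, high / l1)
    for name, ((low, high), l1) in methods.items()
}
for name, (lo, hi) in speedups.items():
    print(f"{name:26s} {lo:>9,.0f}x - {hi:>9,.0f}x")
```

The largest ratio is getSlot at up to 400,000×, and the smallest is getProgramAccounts at about 6,100×, which is still the biggest absolute saving per call since each call avoids 50–500 ms of disk time.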

Who Benefits Most

🏢 Managed RPC Providers

Helius, QuickNode, Alchemy, Chainstack — serve 10× more customers on the same hardware. A single Cachee sidecar replaces ~60 RocksDB read threads, turning every bare-metal node into a high-throughput read engine.

10× customer density

⚡ High-Frequency DeFi

Jupiter, Raydium, Marinade — applications that call getAccountInfo hundreds of thousands of times per second. Cachee eliminates the RocksDB bottleneck and lets your application logic become the throughput ceiling, not your RPC node.

2.4M reads/sec per sidecar

🖥 Validator Operators

RPC reads compete with consensus writes for SSD bandwidth. Cachee offloads 87% of reads from disk, freeing I/O for the write-ahead log, snapshot creation, and account updates. Your validator’s SSDs last 5× longer.

5× SSD lifespan

📊 Indexers & Analytics

Flipside, Dune, Nansen — analytics platforms that scan millions of accounts for dashboards and alerts. Cachee’s getProgramAccounts at 8.2µs means your indexer stays current without overwhelming the validator.

getProgramAccounts in 8.2µs

The Economics of RPC Caching

RPC infrastructure is expensive. A production Solana validator with adequate NVMe storage, memory, and network costs $3,000–$8,000 per month in bare-metal hosting. Managed RPC providers pass this cost through at $0.005–$0.010 per thousand reads. The margins are thin, and the way to improve them is not to add more hardware — it is to serve more reads from the hardware you already have.

Cost Per Million Reads

Infrastructure | Cost / 1M Reads | Throughput | P99 Latency
Standard RocksDB | $8.40 | ~50K req/sec | 12 ms
Redis Cache Layer | $1.20 | ~200K req/sec | 0.8 ms
Cachee L1 Sidecar | $0.001 | 2.4M req/sec | 42 ns

The cost reduction is not incremental — it is three orders of magnitude. At $0.001 per million reads, the cost of serving cached state becomes a rounding error in your infrastructure budget. The savings come from two sources: fewer disk operations (extending hardware life) and higher per-node throughput (serving more customers per machine).

Fleet economics: A 50-node RPC fleet caching 100M reads/day per node saves $14.6M/year in freed capacity. Alternatively, serve 10× more customers on the same fleet — without adding a single machine.
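One way to reproduce the fleet figure from the cost table's $8.40 versus $0.001 per million reads, assuming roughly 95% of reads are served from cache (the hit-rate assumption is ours, not a quoted figure):

```python
# Fleet savings: every cached read costs $0.001/1M instead of $8.40/1M.
nodes = 50
reads_per_node_per_day = 100e6
hit_rate = 0.95                      # assumed steady-state cache hit rate
rocksdb_cost_per_1m = 8.40           # from the cost comparison table
cachee_cost_per_1m = 0.001

cached_reads_per_year = nodes * reads_per_node_per_day * 365 * hit_rate
savings = cached_reads_per_year / 1e6 * (rocksdb_cost_per_1m - cachee_cost_per_1m)
print(f"${savings / 1e6:.1f}M/year")  # ≈ $14.6M/year
```

The result is insensitive to the exact hit rate: anywhere from 90% to 99% lands between $13.8M and $15.2M per year.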

Deployment: One Docker Container

Cachee deploys as a single Docker sidecar with a config file pointing to your validator’s RPC port. No code changes. No API changes. No client-side modifications. Your RPC consumers continue using the same JSON-RPC endpoints they use today; they just get responses up to five orders of magnitude faster for cached accounts.

```yaml
# docker-compose.yml — add alongside your validator
cachee-rpc:
  image: cachee/rpc-sidecar:latest
  environment:
    VALIDATOR_RPC: http://localhost:8899
    CACHEE_API_KEY: ck_live_your_key_here
    CACHE_MODE: l1-memory
  ports:
    - "8900:8900"
# Point your RPC consumers to port 8900 instead of 8899.
# Cache hits: 31ns from L1 memory.
# Cache misses: transparent fallthrough to validator RPC.
```

The sidecar pre-loads the hot account set on startup via accountSubscribe, immediately serving roughly 85% of reads from memory. The AI prediction engine refines the cache contents over the first few minutes, learning your specific query patterns and pushing the hit rate toward 99%. Within 10 minutes of deployment, 99 out of 100 reads never touch RocksDB.

Correctness Guarantees

The most common objection to RPC caching is staleness. Cachee’s answer is the sub-slot invalidation described above: a cached entry is dropped within microseconds of the corresponding on-chain change, so consumers are never served data more than one slot stale, and a cache miss always falls through to the validator for authoritative state.

The RPC Provider Arms Race

The managed RPC market is commoditizing. Every provider offers the same JSON-RPC methods, the same Solana and Ethereum support, and approximately the same uptime guarantees. The differentiator is no longer availability — it is latency and cost efficiency. The provider that serves reads faster at lower cost captures the high-frequency DeFi market, the MEV searcher market, and the analytics market — the three highest-volume, highest-margin segments.

That is an infrastructure problem. And it is the problem Cachee was built to solve. When your RPC node serves getAccountInfo in 31 nanoseconds instead of 5 milliseconds, when it handles 2.4 million requests per second instead of 50,000, when your SSDs last 5 years instead of 1 — you are not competing on the same playing field as providers running bare RocksDB. You are operating on a fundamentally different cost curve.


Your RPC Nodes Deserve an L1 Cache Layer

See how 31ns reads and 2.4M req/sec transform your RPC infrastructure economics.
