ZK-STARK · ZK-SNARK · Post-Quantum Signed

ZK Caching

Cache STARK and SNARK verification results. Verify a proof once.
Reuse the result forever. The layer that eliminates redundant proof verification.

294x STARK Speedup · 85ns Cached Lookup · 7 Proof Systems · 3 PQ Families
Definition

ZK caching eliminates repeated zero-knowledge proof verification. A STARK or SNARK proof is verified once. The verification result — valid or invalid — is cached with a computation fingerprint that binds it to the exact proof, verification key, and parameters. Subsequent requests retrieve the cached result in nanoseconds instead of re-executing the verification. STARK verification: 25 microseconds uncached, 85 nanoseconds cached. 294x speedup.

The Gap

25 microseconds vs 85 nanoseconds

The same verification result. One path recomputes it. The other remembers it.

STARK Verification (FRI + constraints + Merkle + OOD): 25,000 ns. Re-verify every time.
Cached Result (hash lookup + pointer dereference): 85 ns.
Speedup: 294x.
No polynomial evaluation. No Merkle traversal. No field arithmetic. Just truth, from cache.

ZK-STARK Caching

294x
25us verification to 85ns cached

Transparent setup. Larger proofs (10-100KB). More expensive verification. Benefits most from caching. FRI protocol is the bottleneck (37% of verification time).

ZK-SNARK Caching

~50,000x
1-3ms verification to ~60ns cached

Trusted setup required. Compact proofs (200B for Groth16). Pairing-based verification (1-3ms) still adds up at scale. Caching eliminates pairing checks entirely.

The Problem

Why ZK Proofs Need Caching

Zero-knowledge proof verification is deterministic. The same proof, verified with the same key and parameters, always produces the same result. Yet every system that consumes ZK proofs re-verifies them on every request.

A rollup validator re-verifies the state transition proof on every block. A bridge validator re-verifies the cross-chain proof on every transfer. An identity system re-verifies the credential proof on every login. The computation is identical. The result is identical. The cost is not.

System          Proof Type    Verifications/Day   Cost Without Cache       Cost With Cache
L2 Rollup       STARK         ~100K               2.5 CPU-seconds/day      8.5 ms/day
ZK Bridge       STARK/SNARK   ~50K                1.25 CPU-seconds/day     4.25 ms/day
Identity/Auth   SNARK         ~1M                 1,000 CPU-seconds/day    60 ms/day
DeFi Protocol   STARK         ~200K               5 CPU-seconds/day        17 ms/day
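The totals follow directly from the per-verification costs quoted above. A quick back-of-envelope sketch for the L2 rollup row (pure arithmetic, no Cachee API involved):

```python
# Back-of-envelope check of the L2 rollup row above.
verifications_per_day = 100_000

uncached_s_per_day = verifications_per_day * 25e-6   # 25 us per STARK verification
cached_s_per_day   = verifications_per_day * 85e-9   # 85 ns per cached lookup

print(f"{uncached_s_per_day:.2f} CPU-seconds/day")   # 2.50 CPU-seconds/day
print(f"{cached_s_per_day * 1e3:.1f} ms/day")        # 8.5 ms/day
```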
How It Works

What Gets Cached

The cached value is the verification result — not the proof itself. A verification result is a boolean (valid/invalid) plus metadata: which proof was verified, with what key, at what time. Total size: ~33 bytes per cached entry.

The proof itself (10-100KB for STARKs, 200 bytes for Groth16) is not stored in the cache unless explicitly archived. This keeps the cache small and fast.
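A minimal sketch of what such an entry could look like on the wire, assuming the ~33 bytes are a 32-byte SHA3-256 fingerprint plus a one-byte result flag, with timestamp and PQ signatures stored alongside rather than inside (the exact wire format is an assumption, not Cachee's documented layout):

```python
# Plausible 33-byte entry: 32-byte fingerprint + 1-byte result flag.
# Timestamp and the three PQ signatures are assumed to live in a
# separate signed envelope, not in these 33 bytes.
import struct

def pack_entry(fingerprint: bytes, valid: bool) -> bytes:
    assert len(fingerprint) == 32                 # SHA3-256 output
    return struct.pack(">32sB", fingerprint, 1 if valid else 0)  # 33 bytes

def unpack_entry(entry: bytes) -> tuple[bytes, bool]:
    fingerprint, flag = struct.unpack(">32sB", entry)
    return fingerprint, bool(flag)
```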

The Computation Fingerprint

The cache key is a computation fingerprint — a deterministic hash of everything that affects the verification result:

fingerprint = SHA3-256(
    proof_bytes          // the proof itself
 || verification_key     // the circuit's verification key
 || public_inputs        // any public inputs to the proof
 || constraint_set       // the constraint system identifier
 || field_parameters     // prime field, extension degree
 || verifier_version     // software version of the verifier
)

Two different proofs produce different fingerprints. The same proof verified with different parameters produces a different fingerprint. The same proof verified with the same everything produces the same fingerprint — and hits the cache.
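A runnable sketch of this fingerprint in Python, assuming simple length-prefixed concatenation of the fields (the exact serialization is an assumption; the document only fixes the hash function and the field list):

```python
# Sketch of the computation fingerprint. Length-prefixing each field
# is an assumption made so that (b"ab", b"c") != (b"a", b"bc").
import hashlib
import struct

def computation_fingerprint(proof_bytes: bytes,
                            verification_key: bytes,
                            public_inputs: bytes,
                            constraint_set: bytes,
                            field_parameters: bytes,
                            verifier_version: bytes) -> str:
    """Deterministic SHA3-256 over everything that affects the result."""
    h = hashlib.sha3_256()
    for part in (proof_bytes, verification_key, public_inputs,
                 constraint_set, field_parameters, verifier_version):
        h.update(struct.pack(">Q", len(part)))   # 8-byte length prefix
        h.update(part)
    return h.hexdigest()
```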

STARK Deep Dive

The FRI Bottleneck — Eliminated

STARK verification has four steps. All four are eliminated by caching.

FRI layer checks: 9.2 us (37%)
Constraint evaluation: 7.8 us (31%)
Merkle path verification: 5.1 us (20%)
OOD consistency check: 2.9 us (12%)
Total STARK verification: 25 us
CACHED: 85 ns

The cached path is 0.34% of the uncached total: 85 ns against 25,000 ns.

SNARK Deep Dive

Pairing Elimination

SNARK verification (Groth16) requires bilinear pairing checks — expensive elliptic curve operations that take 1-3 milliseconds. The per-proof cost looks small, but it is meaningful at scale.

SNARK proofs are compact (200 bytes for Groth16, 400-800 bytes for PLONK), so the computation fingerprint is fast to compute. The cache entry is the same 33 bytes regardless of proof system.

cachee-zk-demo
[1] Proof arrives: ethSTARK, 47KB
[2] Compute fingerprint: SHA3-256(proof || vk || constraints)
[3] Cache miss → Verify: 25us
[4] Cache result: signed with ML-DSA-65 + FALCON-512 + SLH-DSA
[5] Next request → Cache hit: 85ns
    294x faster. Zero re-verification.

Run it yourself: brew install cachee && cachee-zk-demo

Architecture

Before and After

Without ZK Caching:
Proof arrives → Deserialize proof → Verify (25us STARK / 1-3ms SNARK) → Return result
↻ Repeat on every request. Same proof. Same result. Same cost.

With ZK Caching:
Proof arrives → Compute fingerprint → Check cache (31ns) → Hit? Return instantly. Miss? Verify, cache, return.
Verify once. Serve from cache forever.

After the first verification, every subsequent check is a cache lookup. The proof is never re-verified. The math is never re-executed. The result is a signed, fingerprinted truth claim served in nanoseconds.
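In code, the pattern is a thin wrapper around the existing verifier. A minimal sketch, using an in-memory dict as a stand-in for the cache and a caller-supplied verifier callback (both illustrative, not Cachee's actual API):

```python
# Illustrative verify-once wrapper; the dict stands in for the cache
# and `verify` is whatever STARK/SNARK verifier you already run.
from typing import Callable, Dict

_cache: Dict[str, bool] = {}

def verify_cached(fingerprint: str, verify: Callable[[], bool]) -> bool:
    """Return the cached verification result; verify and store on a miss."""
    if fingerprint in _cache:        # hit: a hash lookup, no cryptography
        return _cache[fingerprint]
    result = verify()                # miss: full proof verification, once
    _cache[fingerprint] = result     # cache the boolean truth claim
    return result
```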

Compatibility

7 Proof Systems. One Cache.

System           Type    Speedup      Verify      Cached   Proof Size
ethSTARK         STARK   294x         25 us       85 ns    40-100 KB
Plonky2          STARK   176-353x     15-30 us    85 ns    20-50 KB
Polygon Miden    STARK   235-471x     20-40 us    85 ns    30-80 KB
Cairo/StarkNet   STARK   294-588x     25-50 us    85 ns    40-120 KB
Groth16          SNARK   ~50,000x     1-3 ms      60 ns    ~200 B
PLONK            SNARK   ~80,000x     2-5 ms      60 ns    400-800 B
Halo2            SNARK   ~130,000x    3-8 ms      60 ns    500-1,500 B
Trust Model

Three PQ Families. Break All Three.

A cached verification result is a truth claim signed by three independent post-quantum signature families. Forging it requires breaking all three simultaneously.

🔒 ML-DSA-65: Lattice-based (MLWE). NIST Level 3. Successor to Dilithium. 3,309-byte signatures.
🔐 FALCON-512: NTRU lattice-based. Compact signatures (656 bytes). Distinct mathematical basis from ML-DSA.
🔑 SLH-DSA: Stateless hash-based. No lattice assumptions. 17,088-byte signatures. Minimal attack surface.
Must break all three to forge a cached result
  • The computation fingerprint is deterministic. Anyone can recompute it from the proof and parameters. If the fingerprint matches, the cache entry corresponds to exactly this proof.
  • The result is signed. Three independent post-quantum signature families (ML-DSA-65, FALCON-512, SLH-DSA) attest the verification result. Forging a cached result requires breaking all three simultaneously.
  • The result is independently verifiable. The cachee-verify tool checks the cached result against the signatures and fingerprint with no network call, no Cachee account, and no trust in H33.

You don't trust the cache. You verify it once, then trust the math.
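What independent verification looks like, as a sketch: recompute the fingerprint, then check all three signatures. The entry layout and the verify callbacks are assumptions (real PQ bindings such as liboqs expose equivalent verify functions); this shows the shape of the check, not the cachee-verify implementation. It builds on the computation_fingerprint sketch above.

```python
# Sketch of verifying a cached entry offline. `verifiers` is a list of
# three hypothetical (verify_fn, signature) pairs, one per PQ family.
def check_cached_entry(entry: dict, fingerprint_inputs: tuple,
                       verifiers: list) -> bool:
    # 1. Recompute the fingerprint; a match binds the entry to this proof.
    if entry["fingerprint"] != computation_fingerprint(*fingerprint_inputs):
        return False
    # 2. All three PQ signatures must validate over fingerprint || result.
    message = bytes.fromhex(entry["fingerprint"]) + bytes([entry["result"]])
    return all(verify_fn(message, sig) for verify_fn, sig in verifiers)
```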

Applications

Where ZK Caching Applies

L2 Rollups: Every full node re-verifies the state transition proof. Cache the result at the sequencer, share it across all nodes.
🔗 ZK Bridges: Cross-chain proof verification on every transfer. Cache the verification of the source chain's state proof.
👤 Identity / Auth: Credential proofs (age verification, KYC, membership) verified on every login. Cache the verification result per credential hash.
💰 DeFi Protocols: Proof of reserves, solvency, collateral ratio — verified by every participant. Cache once, serve to all.
🔄 Recursive Proofs: Inner proofs verified during outer proof generation. Cache inner proof results to speed up recursive proving.
Install

Get Started

brew tap h33ai-postquantum/tap && brew install cachee
cachee init && cachee start

# Cache a STARK verification result
SET stark:proof_abc123 verified FP <fingerprint_hex>

# Retrieve at 85ns
GETVERIFIED stark:proof_abc123

140+ Redis-compatible commands. Drop-in for existing infrastructure. The proof verification pipeline doesn't change — you add a cache check before verification and a cache write after.
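Because the interface is Redis-compatible, a stock Redis client can drive it. A hedged sketch (host, port, and the key scheme are assumptions; GETVERIFIED is Cachee-specific, so plain GET/SET is shown):

```python
# Sketch: wrapping an existing verifier with Cachee via a stock Redis
# client. Host, port, and the key scheme are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379)

def lookup_or_verify(fingerprint: str, verify) -> bool:
    key = f"stark:{fingerprint}"
    cached = r.get(key)                    # cache check before verification
    if cached is not None:
        return cached == b"verified"
    result = verify()                      # full verification on a miss
    r.set(key, b"verified" if result else b"invalid")  # cache write after
    return result
```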

Verify once. Cache the truth. Reuse it forever.
