ZK Caching
Cache STARK and SNARK verification results. Verify a proof once.
Reuse the result forever. The layer that eliminates redundant proof verification.
ZK caching eliminates repeated zero-knowledge proof verification. A STARK or SNARK proof is verified once. The verification result — valid or invalid — is cached with a computation fingerprint that binds it to the exact proof, verification key, and parameters. Subsequent requests retrieve the cached result in nanoseconds instead of re-executing the verification. STARK verification: 25 microseconds uncached, 85 nanoseconds cached. 294x speedup.
25 microseconds vs 85 nanoseconds
The same verification result. One path recomputes it. The other remembers it.
ZK-STARK Caching
Transparent setup. Larger proofs (10-100KB). More expensive verification. Benefits most from caching. FRI protocol is the bottleneck (37% of verification time).
ZK-SNARK Caching
Trusted setup required. Compact proofs (200B for Groth16). Faster verification but still adds up at scale. Caching eliminates pairing checks entirely.
Why ZK Proofs Need Caching
Zero-knowledge proof verification is deterministic. The same proof, verified with the same key and parameters, always produces the same result. Yet every system that consumes ZK proofs re-verifies them on every request.
A rollup validator re-verifies the state transition proof on every block. A bridge validator re-verifies the cross-chain proof on every transfer. An identity system re-verifies the credential proof on every login. The computation is identical. The result is identical. The cost is not.
| System | Proof Type | Verifications/Day | Cost Without Cache | Cost With Cache |
|---|---|---|---|---|
| L2 Rollup | STARK | ~100K | 2.5 CPU-seconds/day | 8.5 ms/day |
| ZK Bridge | STARK/SNARK | ~50K | 1.25 CPU-seconds/day | 4.25 ms/day |
| Identity/Auth | SNARK | ~1M | 1,000 CPU-seconds/day | 60 ms/day |
| DeFi Protocol | STARK | ~200K | 5 CPU-seconds/day | 17 ms/day |
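The table's figures follow directly from the per-verification costs quoted on this page — 25 µs per uncached STARK verification, 85 ns per cached lookup. A quick sketch of the arithmetic:

```python
# Daily CPU cost of STARK verification, with and without caching.
# Per-verification costs are the figures quoted on this page:
UNCACHED_S = 25e-6   # 25 microseconds per uncached verification
CACHED_S = 85e-9     # 85 nanoseconds per cached lookup

def daily_cost(verifications_per_day: int) -> tuple[float, float]:
    """Return (uncached, cached) CPU-seconds per day."""
    return (verifications_per_day * UNCACHED_S,
            verifications_per_day * CACHED_S)

uncached, cached = daily_cost(100_000)   # the L2 rollup row
print(f"uncached: {uncached:.2f} s/day, cached: {cached * 1000:.2f} ms/day")
# → uncached: 2.50 s/day, cached: 8.50 ms/day
```

The same function reproduces every row of the table from its Verifications/Day column.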
What Gets Cached
The cached value is the verification result — not the proof itself. A verification result is a boolean (valid/invalid) plus metadata: which proof was verified, with what key, at what time. Total size: ~33 bytes per cached entry.
The proof itself (10-100KB for STARKs, 200 bytes for Groth16) is not stored in the cache unless explicitly archived. This keeps the cache small and fast.
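One layout that lands at 33 bytes is a 32-byte fingerprint plus a 1-byte result flag. The following sketch uses that layout; it is an illustrative assumption, not Cachee's documented wire format:

```python
import struct

# Hypothetical 33-byte cache entry: a 32-byte computation fingerprint
# followed by a 1-byte verification result (1 = valid, 0 = invalid).
# Illustrative layout only -- the real entry format may differ.
def pack_entry(fingerprint: bytes, valid: bool) -> bytes:
    assert len(fingerprint) == 32
    return struct.pack("32sB", fingerprint, 1 if valid else 0)

def unpack_entry(entry: bytes) -> tuple[bytes, bool]:
    fingerprint, flag = struct.unpack("32sB", entry)
    return fingerprint, bool(flag)

entry = pack_entry(b"\x00" * 32, True)
print(len(entry))  # → 33
```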
The Computation Fingerprint
The cache key is a computation fingerprint — a deterministic hash of everything that affects the verification result:
fingerprint = SHA3-256(
proof_bytes // the proof itself
|| verification_key // the circuit's verification key
|| public_inputs // any public inputs to the proof
|| constraint_set // the constraint system identifier
|| field_parameters // prime field, extension degree
|| verifier_version // software version of the verifier
)
Two different proofs produce different fingerprints. The same proof verified with different parameters produces a different fingerprint. The same proof verified with the same everything produces the same fingerprint — and hits the cache.
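A minimal sketch of the fingerprint in Python, assuming SHA3-256 over a length-prefixed concatenation of the fields above (the length prefixes, which prevent ambiguous field boundaries, are an illustrative choice rather than Cachee's documented encoding):

```python
import hashlib

def fingerprint(proof: bytes, vk: bytes, public_inputs: bytes,
                constraint_set: bytes, field_params: bytes,
                verifier_version: bytes) -> bytes:
    """Deterministic cache key binding a result to the exact verification.

    Each field is length-prefixed before hashing so that, e.g.,
    (b"ab", b"c") and (b"a", b"bc") cannot collide.
    """
    h = hashlib.sha3_256()
    for field in (proof, vk, public_inputs,
                  constraint_set, field_params, verifier_version):
        h.update(len(field).to_bytes(8, "big"))
        h.update(field)
    return h.digest()

# Same inputs -> same fingerprint -> cache hit.
a = fingerprint(b"proof", b"vk", b"pub", b"cs", b"params", b"v1")
b = fingerprint(b"proof", b"vk", b"pub", b"cs", b"params", b"v1")
# Any changed parameter (here: verifier version) -> cache miss.
c = fingerprint(b"proof", b"vk", b"pub", b"cs", b"params", b"v2")
assert a == b and a != c
```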
The FRI Bottleneck — Eliminated
STARK verification has four steps. Caching eliminates all four: the cached lookup (85 nanoseconds) is 0.34% of the uncached verification time (25 microseconds).
Pairing Elimination
SNARK verification (Groth16) requires bilinear pairing checks — expensive elliptic curve operations that take 1-3 milliseconds. While faster than STARK verification, the cost is meaningful at scale:
- 1M verifications/day: 1,000-3,000 CPU-seconds of pure pairing computation
- Cached: 60 milliseconds total (60 nanoseconds per lookup)
- Speedup: ~16,700-50,000x
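The speedup follows directly from the per-operation costs quoted above; a quick check of the arithmetic:

```python
# Per-operation costs from the figures above.
PAIRING_S = (1e-3, 3e-3)   # 1-3 ms per Groth16 pairing check
LOOKUP_S = 60e-9           # 60 ns per cached lookup
DAILY = 1_000_000          # 1M verifications/day

low, high = (p * DAILY for p in PAIRING_S)
cached = LOOKUP_S * DAILY
print(f"uncached: {low:.0f}-{high:.0f} s/day, cached: {cached * 1000:.0f} ms")
print(f"speedup: {low / cached:,.0f}x to {high / cached:,.0f}x")
# → uncached: 1000-3000 s/day, cached: 60 ms
# → speedup: 16,667x to 50,000x
```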
SNARK proofs are compact (200 bytes for Groth16, 400-800 bytes for PLONK), so the computation fingerprint is fast to compute. The cache entry is the same 33 bytes regardless of proof system.
Run it yourself: brew install cachee && cachee-zk-demo
Before and After
After the first verification, every subsequent check is a cache lookup. The proof is never re-verified. The math is never re-executed. The result is a signed, fingerprinted truth claim served in nanoseconds.
7 Proof Systems. One Cache.
Three PQ Families. Break All Three.
A cached verification result is a truth claim signed by three independent post-quantum signature families. Forging it requires breaking all three simultaneously.
- The computation fingerprint is deterministic. Anyone can recompute it from the proof and parameters. If the fingerprint matches, the cache entry corresponds to exactly this proof.
- The result is signed. Three independent post-quantum signature families (ML-DSA-65, FALCON-512, SLH-DSA) attest the verification result. Forging a cached result requires breaking all three simultaneously.
- The result is independently verifiable. The cachee-verify tool checks the cached result against the signatures and fingerprint with no network call, no Cachee account, and no trust in H33.
You don't trust the cache. You verify it once, then trust the math.
Get Started
brew tap h33ai-postquantum/tap && brew install cachee
cachee init && cachee start
# Cache a STARK verification result
SET stark:proof_abc123 verified FP <fingerprint_hex>
# Retrieve at 85ns
GETVERIFIED stark:proof_abc123
140+ Redis-compatible commands. Drop-in for existing infrastructure. The proof verification pipeline doesn't change — you add a cache lookup before verification and a cache write after it.