Proof Reuse
A verified proof is a mathematical fact. Re-verifying it is redundant computation.
Verify once. Serve the truth forever at 85 nanoseconds.
Proof reuse is the practice of caching the verification result of a cryptographic proof so that it is verified once and served to all subsequent consumers without re-execution. A STARK proof verified by one node becomes a reusable truth claim for every node. The cached result is bound to the exact proof via a computation fingerprint and signed by three independent post-quantum signature families. Verification: 25 microseconds. Cached truth: 85 nanoseconds. Every re-verification after the first is pure waste.
A Proof Becomes a Fact
Verification is the one-time cost of converting a proof into a fact. Once converted, the fact stands on its own.
Every system that consumes proofs re-verifies them. Every re-verification produces the same result. The computation is identical. The result is identical. The cost is not. Proof reuse eliminates the cost by caching the result of the first verification and serving it to every subsequent consumer.
Where Re-Verification Burns Compute
Every consumer re-verifies. None of them need to. Here is what that costs.
1,000 Nodes. One Verification.
Without proof reuse, every node does 25us of work. With it, one node does 25us and 999 do 85ns.
Without Proof Reuse: 1,000 independent verifications
With Proof Reuse: 1 verification + 999 cache lookups
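The cluster-wide arithmetic behind this comparison can be sketched in a few lines, using the document's headline figures (25us per verification, 85ns per lookup):

```python
# Back-of-the-envelope cost model for the 1,000-node scenario above.
VERIFY_S = 25e-6   # one STARK verification: 25 microseconds
LOOKUP_S = 85e-9   # one cache lookup: 85 nanoseconds

nodes = 1_000
without_reuse = nodes * VERIFY_S                 # every node verifies
with_reuse = VERIFY_S + (nodes - 1) * LOOKUP_S   # one verify + 999 lookups

print(f"without reuse: {without_reuse * 1e3:.1f} ms of cluster CPU time")
print(f"with reuse:    {with_reuse * 1e6:.1f} us of cluster CPU time")
```

Note that the aggregate cluster speedup (~227x here) is slightly below the per-consumer 294x, because one node still pays the full verification cost.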
How Consumers Trust the Cached Result
You do not trust the cache. You trust the math. Three layers make forgery computationally infeasible.
Computation Fingerprint (Proves Identity)
The cache key is a SHA3-256 hash of the proof bytes, verification key, public inputs, constraint set, field parameters, and verifier version. Anyone can recompute it from the original proof and parameters. If the fingerprint matches, the cache entry corresponds to exactly this proof. No collision is possible without breaking SHA3-256.
PQ Signatures (Proves Authenticity)
The verification result is signed by three independent post-quantum signature families: ML-DSA-65 (MLWE lattices), FALCON-512 (NTRU lattices), and SLH-DSA (stateless hash functions). Three independent mathematical hardness assumptions. Forging a cached result requires breaking all three simultaneously.
Independent Verification (Proves Correctness)
The cachee-verify tool checks any cached result against the signatures and fingerprint with no network call, no Cachee account, and no trust in H33. Anyone can independently verify that a cached truth claim is authentic and corresponds to the correct proof. The verifier is open-source.
// The computation fingerprint binds the cache entry to the exact proof
fingerprint = SHA3-256(
    proof_bytes          // the proof itself
    || verification_key  // the circuit's verification key
    || public_inputs     // any public inputs to the proof
    || constraint_set    // the constraint system identifier
    || field_parameters  // prime field, extension degree
    || verifier_version  // software version of the verifier
)
// Two different proofs -> different fingerprints.
// Same proof, same params -> same fingerprint -> cache hit.
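The pseudocode above can be exercised with Python's standard `hashlib`, which ships SHA3-256. The field values and the length-prefixed encoding below are illustrative assumptions, not Cachee's wire format:

```python
import hashlib

def fingerprint(proof_bytes: bytes, verification_key: bytes,
                public_inputs: bytes, constraint_set: bytes,
                field_parameters: bytes, verifier_version: bytes) -> str:
    """Illustrative fingerprint: SHA3-256 over length-prefixed fields.
    (Length prefixes avoid ambiguous concatenations like "ab"+"c" vs "a"+"bc".)"""
    h = hashlib.sha3_256()
    for part in (proof_bytes, verification_key, public_inputs,
                 constraint_set, field_parameters, verifier_version):
        h.update(len(part).to_bytes(8, "big"))
        h.update(part)
    return h.hexdigest()

fp1 = fingerprint(b"proof", b"vk", b"x=7", b"circuit-v1", b"goldilocks", b"1.0.0")
fp2 = fingerprint(b"proof", b"vk", b"x=7", b"circuit-v1", b"goldilocks", b"1.0.0")
fp3 = fingerprint(b"proof", b"vk", b"x=8", b"circuit-v1", b"goldilocks", b"1.0.0")
assert fp1 == fp2   # same proof, same params -> same fingerprint -> cache hit
assert fp1 != fp3   # different public input -> different fingerprint
```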
You don't trust the cache. You verify it once, then trust the math. The fingerprint proves identity. The signatures prove authenticity. The verifier proves correctness. Three layers, three independent guarantees.
Proof Reuse vs Proof Aggregation
They solve different problems. They are complementary. Use both.
Proof Aggregation
Reduces proof count. Takes N individual proofs and combines them into a single aggregated proof. The aggregated proof can be verified once to confirm all N constituent proofs are valid. The verification cost of N proofs becomes the verification cost of one aggregated proof.
- Input: N proofs
- Output: 1 aggregated proof
- Verification: once per aggregated proof
- Use case: reducing on-chain verification costs
Proof Reuse
Reduces verification count per proof. Takes a single proof (or a single aggregated proof), verifies it once, caches the verification result, and serves it to every subsequent consumer at cache-lookup speed. The verification cost per consumer drops from full verification to a hash lookup.
- Input: 1 proof (original or aggregated)
- Output: 1 cached verification result
- Verification: once ever, across all consumers
- Use case: eliminating redundant re-verification
The optimal pipeline: aggregate first (reduce N proofs to 1), then cache the aggregated proof's verification result (reduce M consumers to 1 verification). Aggregation eliminates N-1 proofs. Reuse eliminates M-1 verifications. Together: verify once, for all proofs, for all consumers.
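The combined pipeline's cost can be modeled directly. The aggregation time `agg_s` below is a placeholder parameter, not a measured figure:

```python
# Cost sketch of the combined pipeline: aggregate N proofs into one,
# verify the aggregate once, then serve M consumers from cache.
VERIFY_S, LOOKUP_S = 25e-6, 85e-9

def naive(n_proofs: int, m_consumers: int) -> float:
    # every consumer verifies every proof independently
    return n_proofs * m_consumers * VERIFY_S

def aggregated_and_cached(n_proofs: int, m_consumers: int, agg_s: float) -> float:
    # aggregate once, verify the aggregate once, M-1 cache lookups
    return agg_s + VERIFY_S + (m_consumers - 1) * LOOKUP_S

# 100 proofs, 1,000 consumers, assuming 1 second to aggregate:
print(naive(100, 1_000))                        # -> 2.5 (CPU-seconds)
print(aggregated_and_cached(100, 1_000, 1.0))   # just over 1 CPU-second
```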
The Proof Reuse Pipeline
Proof arrives. Fingerprint computed. Cache checked. Miss: verify, sign, cache. Hit: return the truth at 85ns.
On a cache miss (the first consumer):
1. Verify the proof (full ~25us STARK verification)
2. Sign result: ML-DSA-65 + FALCON-512 + SLH-DSA
3. Store: fingerprint -> {result, signatures, timestamp}
4. Return verified result to consumer

On a cache hit (every consumer after the first):
1. Look up the fingerprint (~85ns)
2. Return signed truth claim to consumer
3. No verification. No field arithmetic. No Merkle traversal.
4. 294x faster. Zero re-computation.
After the first consumer triggers verification, every subsequent consumer for the lifetime of the proof receives the cached result at 85 nanoseconds.
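The miss/hit flow above can be sketched in-process. Here a plain dict stands in for Cachee, and `verify_stark`/`sign_result` are stubs for the real verifier and the three PQ signers — assumptions for illustration only:

```python
import hashlib
import time

cache: dict = {}   # fingerprint -> cached verification record

def verify_stark(proof: bytes) -> bool:
    return True   # stub: the real verifier costs ~25us

def sign_result(fp: str, result: bool) -> dict:
    # stub: real entries carry ML-DSA-65 + FALCON-512 + SLH-DSA signatures
    return {"mldsa": b"", "falcon": b"", "slhdsa": b""}

def get_verified(proof: bytes, params: bytes) -> bool:
    fp = hashlib.sha3_256(proof + params).hexdigest()
    entry = cache.get(fp)
    if entry is None:                          # miss: verify, sign, store
        result = verify_stark(proof)
        entry = {"result": result,
                 "signatures": sign_result(fp, result),
                 "verified_at": int(time.time())}
        cache[fp] = entry
    return entry["result"]                     # hit: lookup only

get_verified(b"proof-bytes", b"vk|inputs|v1")   # first consumer: verifies
get_verified(b"proof-bytes", b"vk|inputs|v1")   # second consumer: cache hit
```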
The Cost of Not Reusing
Every re-verification burns CPU cycles that produce no new information. At scale, the waste is quantifiable. Here is what proof reuse saves across different verification volumes, assuming STARK verification at 25 microseconds and cached lookup at 85 nanoseconds.
| Verifications/Day | Unique Proofs | Without Reuse | With Reuse | CPU Time Saved |
|---|---|---|---|---|
| 100K | 1,000 | 2.5 CPU-sec/day | 0.025 sec + 8.4ms | ~2.5 sec |
| 1M | 5,000 | 25 CPU-sec/day | 0.125 sec + 84.6ms | ~24.8 sec |
| 10M | 10,000 | 250 CPU-sec/day | 0.25 sec + 849ms | ~248.9 sec |
| 100M | 50,000 | 2,500 CPU-sec/day | 1.25 sec + 8.5sec | ~2,490 sec (41.5 min) |
| 1B | 100,000 | 25,000 CPU-sec/day | 2.5 sec + 85sec | ~24,912 sec (6.9 hrs) |
At 1 billion verifications per day with 100,000 unique proofs, proof reuse saves nearly 7 CPU-hours daily. The savings scale linearly with the re-verification ratio (verifications per unique proof). A rollup with 1,000 validators each re-verifying the same 100 proofs per day eliminates 999 of every 1,000 verifications.
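The savings column follows from the same two constants; the rows of the table can be reproduced directly:

```python
# Reproduces the savings column of the table above.
VERIFY_S, LOOKUP_S = 25e-6, 85e-9

def cpu_seconds_saved(verifications: int, unique_proofs: int) -> float:
    without = verifications * VERIFY_S
    with_reuse = (unique_proofs * VERIFY_S
                  + (verifications - unique_proofs) * LOOKUP_S)
    return without - with_reuse

for v, u in [(100_000, 1_000), (10_000_000, 10_000), (1_000_000_000, 100_000)]:
    print(f"{v:>13,} verifications / {u:>7,} proofs: "
          f"{cpu_seconds_saved(v, u):,.1f} CPU-sec saved per day")
```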
The compute savings are only half the story. Proof reuse also eliminates the tail latency spike from verification. Every consumer gets 85ns instead of a distribution from 25us to 50us (STARK) or 1ms to 8ms (SNARK). Consistent, predictable, cache-speed latency.
Run it yourself: brew install cachee && cachee-proof-reuse-demo
What Gets Cached
The cached value is the verification result -- not the proof itself. A verification result is a boolean (valid/invalid) plus attestation metadata. Total size: approximately 45 bytes of metadata per cached entry, plus signatures.
struct CachedVerification {
    fingerprint: [u8; 32],      // SHA3-256 of proof + params
    result: bool,               // valid or invalid
    verified_at: u64,           // unix timestamp
    verifier_version: [u8; 4],  // software version
    sig_mldsa: [u8; 3309],      // ML-DSA-65 signature
    sig_falcon: [u8; 656],      // FALCON-512 signature
    sig_slhdsa: [u8; 17088],    // SLH-DSA signature
}
// Total: ~21KB per cached entry (signatures dominate).
// The proof itself (10-100KB) is NOT stored unless explicitly archived.
The signatures are the bulk of the cached entry. This is by design: the three PQ signature families provide independent mathematical guarantees that the cached result is authentic. At 10 million cached verifications, the signature storage is approximately 210 GB -- significant, but a fraction of storing the proofs themselves (roughly 1 TB for 100KB STARKs, before any archival redundancy).
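The sizing follows directly from the struct field widths above:

```python
# Entry size from the CachedVerification struct, and fleet-level storage.
METADATA = 32 + 1 + 8 + 4          # fingerprint + result + timestamp + version
SIGNATURES = 3309 + 656 + 17088    # ML-DSA-65 + FALCON-512 + SLH-DSA
ENTRY_BYTES = METADATA + SIGNATURES

entries = 10_000_000
print(f"{ENTRY_BYTES:,} bytes per entry (~21 KB)")
print(f"{entries * ENTRY_BYTES / 1e9:,.0f} GB for 10M cached verifications")
```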
When Proof Reuse Does Not Apply
Proof reuse works because verification is deterministic. In certain edge cases, the assumptions break:
- Time-bound proofs: If a proof's validity depends on the current time (e.g., "this credential is valid until 2026-12-31"), the cached result must include the expiry and be invalidated accordingly. The computation fingerprint includes the public inputs, so a proof with a different expiry produces a different fingerprint.
- Verifier upgrades: If the verifier software is upgraded and the new version would reject a proof the old version accepted (due to a bug fix or constraint change), cached results from the old verifier must be invalidated. The verifier version is part of the fingerprint, so this happens automatically.
- Privacy-sensitive contexts: If the act of verifying a proof must not be observable (zero-knowledge about the verification itself), then caching the result reveals that a verification occurred. In most architectures this is not a concern, but it matters for some anonymous credential systems.
In the first two cases, the computation fingerprint handles the boundary by construction: different public inputs or verifier versions produce different fingerprints, and TTL-based eviction covers time-bound proofs. The privacy-sensitive case is a genuine limitation rather than something the fingerprint resolves: if the act of verification itself must be unobservable, do not cache the result.
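The invalidation-by-fingerprint behavior for the first two cases can be demonstrated with a reduced fingerprint; the field list and version strings here are illustrative, not the real schema:

```python
import hashlib

def fingerprint(proof: bytes, public_inputs: bytes, verifier_version: bytes) -> str:
    # Reduced fingerprint for illustration (the real one covers more fields).
    h = hashlib.sha3_256()
    for part in (proof, public_inputs, verifier_version):
        h.update(len(part).to_bytes(4, "big") + part)
    return h.hexdigest()

cached   = fingerprint(b"proof", b"expires=2026-12-31", b"2.1.0")
upgraded = fingerprint(b"proof", b"expires=2026-12-31", b"2.2.0")  # verifier bump
reissued = fingerprint(b"proof", b"expires=2027-12-31", b"2.2.0")  # new expiry
assert len({cached, upgraded, reissued}) == 3   # three distinct cache keys:
# old entries are simply never hit again -- no explicit invalidation needed
```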
Get Started
brew tap h33ai-postquantum/tap && brew install cachee
cachee init && cachee start
# Cache a proof verification result
SET proof:0xa3f1 verified FP <fingerprint_hex>
# Retrieve the cached truth at 85ns
GETVERIFIED proof:0xa3f1
# Independently verify a cached result offline
cachee-verify --fingerprint 0xa3f1 --proof ./proof.bin --vk ./vk.bin
140+ Redis-compatible commands. Drop-in for existing infrastructure. The proof verification pipeline does not change -- you add a cache check before and after. One line of code to check, one line to store, zero lines to re-verify.
A verified proof is a fact. Stop re-verifying facts.
Verify once. Cache the result. Serve it to every consumer at 85 nanoseconds.
Install Cachee · ZK Caching Guide