ZK Proof Caching: Stop Re-Verifying What You Proved
Every zero-knowledge proof system in production today has the same problem. Proofs get verified more than once. Sometimes hundreds of times. Sometimes millions. The proof does not change. The verification logic does not change. The result does not change. But the CPU cycles burn every single time, because nobody cached the answer.
This is not a theoretical concern. A STARK proof verification takes approximately 25 microseconds on modern hardware. That sounds fast until you multiply it by the number of times that identical proof gets re-verified across your system. A rollup verifier checking state transitions. A bridge validator confirming cross-chain proofs. A DeFi protocol gate-checking user eligibility. An identity system re-verifying credentials on every request. Each of these verification events produces the same boolean result for the same proof, and each of them burns the same 25 microseconds to arrive at a conclusion that was already known.
Cached, that same verification result returns in 0.085 microseconds. That is a 294x speedup. Not by making verification faster. By not doing it again.
This post is the engineering guide to ZK proof caching. What gets cached, how computation fingerprinting binds the cached result to the exact proof and verifier, why caching a verification result is safe, and where this pattern applies in production systems. If you run any system that verifies ZK proofs more than once, you are burning compute for no reason, and this post explains how to stop.
The Redundant Verification Problem
ZK proof verification is designed to be fast relative to proof generation. Generating a STARK proof might take seconds or minutes. Verifying it takes microseconds or milliseconds. This asymmetry is the entire point of ZK systems: the prover does the hard work once, and verifiers can efficiently check the result.
But "efficient" is relative. A single verification is cheap. The aggregate cost of repeated verification is not. Consider the lifecycle of a single ZK proof in a typical system.
A rollup operator generates a state transition proof. The proof is submitted to a verifier contract or a verification service. The verifier checks the proof and confirms validity. So far, one verification. Now a bridge validator needs to confirm that same state transition before releasing funds on the destination chain. Second verification. A DeFi protocol on the destination chain checks the proof before allowing the bridged assets to enter a lending pool. Third verification. An analytics service re-verifies the proof while indexing the transaction. Fourth. A compliance system re-verifies it for audit purposes. Fifth.
Five verifications of the same proof. Same inputs. Same circuit. Same public parameters. Same result every time. The proof was valid the first time it was verified. It was valid the second time. It will be valid every subsequent time, because proofs are deterministic mathematical objects. The verification function is a pure function: given the same proof, the same public inputs, and the same verification key, it always returns the same result.
This pattern repeats across every domain that uses ZK proofs. Identity systems re-verify credential proofs on every login. Privacy-preserving payment systems re-verify transaction proofs at every validation checkpoint. Machine learning systems that use ZK proofs for model integrity re-verify the same model attestation proof on every inference call. The computational waste scales linearly with the number of verification points in your architecture, and most architectures have far more verification points than they need.
What Gets Cached (And What Does Not)
The critical distinction in ZK proof caching is this: you cache the verification result, not the proof itself. The proof is a large mathematical object, often kilobytes or megabytes in size. The verification result is a boolean: valid or invalid. Caching the proof would save network transfer but not computation. Caching the verification result saves the computation, which is where the cost actually lives.
A cached entry for a ZK proof verification contains exactly three things:
The computation fingerprint. This is a cryptographic hash that uniquely identifies the exact verification computation. It incorporates the proof bytes, the public inputs, the verification key, and the verifier version. If any of these change, the fingerprint changes, and the cached result is not returned. This is what makes ZK proof caching safe: the fingerprint is a binding commitment to the exact computation that was performed.
The verification result. A boolean indicating whether the proof verified successfully. This is the value that gets returned on a cache hit, skipping the entire verification computation.
The verification metadata. Timestamp of when the verification was performed, the verifier identity, and optional audit fields. This metadata serves operational and compliance purposes but does not affect the correctness of the cached result.
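The three fields above can be modeled as a small record type. This is an illustrative sketch of the entry shape, not Cachee's actual storage format; the field names are assumptions for the example.

```python
from dataclasses import dataclass, field
import time

@dataclass(frozen=True)
class CacheEntry:
    """Illustrative cache entry for one ZK proof verification result."""
    fingerprint: bytes       # binds the entry to the exact computation
    result: bool             # the verifier's verdict: valid or invalid
    verified_at: float = field(default_factory=time.time)  # metadata: when
    verifier_id: str = ""    # metadata: who performed the verification

# The entry is immutable: the cached result cannot be mutated after storage.
entry = CacheEntry(fingerprint=b"\x00" * 32, result=True, verifier_id="node-7")
```

Note that the metadata fields carry no weight on a cache hit: only `fingerprint` (the key) and `result` (the value) determine what gets returned.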
The Computation Fingerprint in Detail
The fingerprint is the key to making ZK proof caching both safe and correct. It is constructed as follows:
```
fingerprint = SHA3-256(
    proof_bytes         ||  // The serialized proof
    public_inputs_bytes ||  // The serialized public inputs
    verification_key    ||  // The circuit verification key
    verifier_version    ||  // Version of the verification library
    parameter_set       ||  // Any additional protocol parameters
    domain_separator        // Prevents cross-protocol collisions
)
```
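A concrete sketch of this construction in Python, using the standard library's SHA3-256. One detail worth making explicit: naively concatenating variable-length byte strings is ambiguous (`"ab" || "c"` hashes the same as `"a" || "bc"`), so this sketch length-prefixes each field. The function name and field values are illustrative, not Cachee's API.

```python
import hashlib
import struct

def compute_fingerprint(proof_bytes: bytes,
                        public_inputs: bytes,
                        verification_key: bytes,
                        verifier_version: bytes,
                        parameter_set: bytes,
                        domain_separator: bytes) -> bytes:
    """Illustrative SHA3-256 fingerprint over all verification inputs.

    Each field is length-prefixed so variable-length fields cannot be
    reinterpreted across field boundaries.
    """
    h = hashlib.sha3_256()
    for part in (proof_bytes, public_inputs, verification_key,
                 verifier_version, parameter_set, domain_separator):
        h.update(struct.pack(">Q", len(part)))  # 8-byte big-endian length
        h.update(part)
    return h.digest()

fp_v1 = compute_fingerprint(b"proof", b"inputs", b"vk", b"v1.0", b"", b"my-protocol")
fp_v2 = compute_fingerprint(b"proof", b"inputs", b"vk", b"v1.1", b"", b"my-protocol")
# Upgrading the verifier version changes the fingerprint, so the cached
# result is not returned and the proof is re-verified with the new code.
```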
This construction ensures that the cached result is returned only when every component of the verification matches exactly. If the proof changes by a single bit, the fingerprint changes. If the public inputs change, the fingerprint changes. If you upgrade your verifier library, the fingerprint changes and the proof is re-verified with the new code. If protocol parameters are updated, the fingerprint changes.
The domain separator prevents a subtle but important attack: cross-protocol cache poisoning. Without it, a proof verified in one protocol context could have its cached result served in a different protocol context where the proof might have different semantics. The domain separator binds the cached result to the specific protocol that performed the verification.
This fingerprinting approach is the same technique used in computation caching for any deterministic function. The inputs fully determine the output. If you can cryptographically bind a cache key to the exact inputs, you can safely cache the output. ZK proof verification is a pure function, which makes it a perfect candidate for this approach.
Why Caching Verification Results Is Safe
The most common objection to ZK proof caching is a security concern: "What if the proof was invalid? Would we cache an incorrect result?" The objection rests on a misunderstanding of what caching does in this context.
When you verify a ZK proof, the verifier performs a series of mathematical checks. It evaluates polynomial commitments, checks algebraic constraints, and validates that the proof satisfies the circuit's verification equation. At the end of this process, the verifier produces a result: valid or invalid. This result is the truth about that specific proof, given those specific inputs and that specific verification key.
Caching this result does not change the result. If the proof was invalid, the verification returned "invalid," and "invalid" is what gets cached. The next time someone asks about that same proof with those same inputs, the cached result correctly returns "invalid." The cache does not make invalid proofs appear valid. It stores whatever the verifier actually determined.
The deeper point is this: re-verification does not add security. It adds redundancy. If your verifier has a bug that causes it to accept an invalid proof, running the same buggy verifier again will accept it again. Running it a thousand times will accept it a thousand times. Repeated verification with the same code is not a security measure. It is a waste of compute masquerading as due diligence.
The Core Principle
A ZK proof verifier is a deterministic function. Given the same proof, the same public inputs, and the same verification key, it always returns the same result. Executing a deterministic function more than once on the same inputs is not security -- it is redundancy. Cache the result. Serve the truth. Stop re-computing what you already know.
If you are concerned about verifier bugs, the correct mitigation is not repeated verification. It is running multiple independent verifier implementations and comparing results. That is defense in depth. Running the same verifier twice on the same proof is not defense in depth. It is the same defense, applied twice, at the same depth.
There is one scenario where re-verification is warranted: when the verification key or verifier code has been updated. In this case, the computation fingerprint changes because the verifier version is part of the fingerprint. The cache naturally invalidates, and the proof is re-verified with the updated code. This is exactly the correct behavior. You re-verify when something about the verification has changed. You do not re-verify when nothing has changed.
The 294x Benchmark
The 294x speedup figure comes from measured performance on production workloads. Here is how the numbers break down.
STARK proof verification on a Graviton4 processor takes approximately 25 microseconds per proof. This includes deserializing the proof, computing the polynomial evaluations, checking the FRI commitments, and validating the algebraic constraints. It is highly optimized code running on modern hardware. There is no obvious way to make the raw verification significantly faster without changing the proof system.
A cached result lookup using an in-process cache takes 0.085 microseconds (85 nanoseconds). This is a hash computation on the fingerprint followed by a memory lookup in a concurrent hash map. The hash computation dominates at approximately 60 nanoseconds. The memory lookup adds approximately 25 nanoseconds. Total: 85 nanoseconds.
| Operation | Latency | Relative |
|---|---|---|
| Raw STARK verification | 25,000 ns | 1x (baseline) |
| Fingerprint computation | 60 ns | Part of lookup |
| Cache memory lookup | 25 ns | Part of lookup |
| Total cached lookup | 85 ns | 294x faster |
The 294x factor is the ratio: 25,000 / 85 = 294.1. This is not a synthetic benchmark. It is the measured difference between running a STARK verifier and looking up a precomputed result in a concurrent hash map. The proof sizes in the benchmark ranged from 8 KB to 128 KB, which covers the range of typical STARK proofs in production. The fingerprint computation time is constant regardless of proof size because SHA3-256 processes the proof bytes in streaming fashion.
At scale, this speedup translates directly into cost savings and throughput gains. A system verifying 100,000 proofs per second spends 2.5 seconds of CPU time per second on verification. With a 90% cache hit rate, verification CPU drops to 0.25 seconds per second -- a 10x reduction in dedicated verification compute. The remaining 10% of cache misses (new proofs, updated verifiers) are verified normally and their results are cached for subsequent lookups.
Architecture: Before and After
Understanding the architectural change helps clarify where ZK proof caching fits in a production system. Below are text representations of the before and after architectures for a typical system that verifies ZK proofs at multiple points.
Before: Redundant Verification at Every Point
```
Proof Submitted
      |
      v
[Verifier A] --- 25us ---> Valid/Invalid
      |
      v
[Verifier B] --- 25us ---> Valid/Invalid  (same proof, same result)
      |
      v
[Verifier C] --- 25us ---> Valid/Invalid  (same proof, same result)
      |
      v
[Verifier D] --- 25us ---> Valid/Invalid  (same proof, same result)
      |
      v
[Audit Log]  --- 25us ---> Valid/Invalid  (same proof, same result)

Total: 125us per proof, 5 independent verifications
All 5 produce the same result for the same proof
```
After: Verify Once, Cache the Result
```
Proof Submitted
      |
      v
[Cache Lookup] --- 0.085us ---> Miss (first time)
      |
      v
[Verifier A] --- 25us ---> Valid/Invalid
      |
      v
[Cache Store] --- result cached with fingerprint
      |
      v
[Verifier B] --- Cache Hit --- 0.085us ---> Valid/Invalid
[Verifier C] --- Cache Hit --- 0.085us ---> Valid/Invalid
[Verifier D] --- Cache Hit --- 0.085us ---> Valid/Invalid
[Audit Log]  --- Cache Hit --- 0.085us ---> Valid/Invalid

Total: 25.34us per proof (25us verify + 4 x 0.085us cache hits)
Savings: 99.7us per proof, 79.7% reduction
```
The first verification proceeds normally. Its result is cached with the computation fingerprint as the key. Every subsequent verification of the same proof hits the cache and returns in 85 nanoseconds. The more verification points in your architecture, the larger the savings. A system with 10 verification points saves 224.2 microseconds per proof. A system with 100 verification points saves 2,466.6 microseconds per proof.
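The arithmetic behind these per-proof savings is simple enough to check directly: without caching, every verification point pays the full verification cost; with caching, the first point verifies and the rest pay only the lookup cost.

```python
VERIFY_US = 25.0    # raw STARK verification, microseconds
LOOKUP_US = 0.085   # cached result lookup, microseconds

def savings_us(verification_points: int) -> float:
    """Microseconds saved per proof: every point verifying independently,
    versus verifying once and serving the remaining points from cache."""
    before = verification_points * VERIFY_US
    after = VERIFY_US + (verification_points - 1) * LOOKUP_US
    return before - after

# Each additional verification point saves 25 - 0.085 = 24.915 us,
# so savings grow linearly with the number of verification points.
assert abs(savings_us(5) - 99.66) < 1e-6       # 5 points, as in the diagram
assert abs(savings_us(10) - 224.235) < 1e-6    # 10 points
assert abs(savings_us(100) - 2466.585) < 1e-6  # 100 points
```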
Where ZK Proof Caching Applies
Four major categories of systems benefit from ZK proof caching. Each has a different access pattern, but the underlying principle is identical: the same proof gets verified more than once.
1. Rollup Verifiers
Layer 2 rollups generate state transition proofs that must be verified by the Layer 1 chain. But the proof is also verified by bridge operators, indexers, block explorers, and any service that needs to confirm the rollup state. A single rollup batch proof might be verified 50-200 times across the ecosystem before it is considered "settled." With ZK proof caching, it is verified once by the first verifier, and every subsequent check returns the cached result.
The cache hit rate for rollup verifiers is typically 95% or higher because the same batch proof is checked by many independent parties within a short time window. The proof does not change between checks. The verification key does not change between checks. The only thing that changes is which party is asking, and that is not part of the verification computation.
2. Bridge Validators
Cross-chain bridges use ZK proofs to attest to state on the source chain. Bridge validators must verify these proofs before releasing funds on the destination chain. In a decentralized bridge with 20 validators, each validator independently verifies the same proof. That is 20 verifications of the same proof producing the same result.
With a shared verification cache, the first validator verifies the proof and caches the result. The remaining 19 validators hit the cache. Total verification cost drops from 20 * 25 microseconds = 500 microseconds to 25 + 19 * 0.085 = 26.6 microseconds. A 94.7% reduction in verification compute for every bridge attestation. For a bridge processing 10,000 attestations per day with 20 validators each, that is 200,000 verifications reduced to approximately 10,000 verifications plus 190,000 cache lookups.
3. DeFi Protocol Checks
DeFi protocols increasingly use ZK proofs for compliance gates, eligibility verification, and privacy-preserving transaction validation. A user who proves their eligibility once should not need to re-prove it on every interaction with the protocol. With ZK proof caching, the eligibility proof is verified on first submission, and every subsequent protocol interaction checks the cached result.
This pattern is especially powerful for identity-based proofs. A ZK proof of KYC compliance, age verification, or jurisdiction eligibility does not change between transactions. The user's credential is the same. The circuit is the same. The result is the same. Caching the verification result allows the protocol to enforce compliance checks on every transaction without burning verification compute on every transaction.
4. Identity Verification
ZK-based identity systems generate proofs that attest to user attributes without revealing the attributes themselves. These proofs are verified on every authentication event. A user who logs in 10 times per day triggers 10 verification events for a proof that has not changed since it was issued. Over a month, that is 300 redundant verifications per user. For a system with 100,000 active users, that is 30 million redundant verifications per month.
With ZK proof caching and a cache TTL aligned to the credential validity period, each user's proof is verified once and cached for the credential's lifetime. The 30 million monthly verifications drop to approximately 100,000 (one per user, or slightly more if credentials are refreshed mid-period). Verification compute drops by 99.7%.
Addressing the Objections
Three objections come up repeatedly when engineering teams evaluate ZK proof caching. Each has a concrete answer.
Objection: "What if the proof was invalid?"
You verified it. The verifier returned its result. If the result was "invalid," then "invalid" is what gets cached. The cache stores the truth as determined by your verifier. If your verifier is correct, the cached result is correct. If your verifier has a bug, running it again will not fix the bug. The correct response to verifier bugs is fixing the verifier and invalidating the cache (which happens automatically because the verifier version is part of the fingerprint).
Objection: "What about proof expiration?"
Some ZK proofs have time-bounded validity. A proof of current account balance is only meaningful as of a specific block height. This is handled by including the relevant time or block height parameter in the public inputs. Since the public inputs are part of the computation fingerprint, a proof verified at block height 1000 has a different fingerprint than a proof verified at block height 1001. There is no risk of serving a stale cached result for a time-bounded proof, because the time parameter changes the fingerprint.
Additionally, the cache itself supports TTL-based expiration. For proofs with known validity periods, the cache entry can be set to expire when the proof's validity period ends. This provides a secondary safety net beyond the fingerprint mechanism.
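The TTL mechanism can be sketched as a minimal in-memory cache with per-entry expiration. This is an illustrative toy, not Cachee's implementation; a production cache would also handle concurrency and memory-pressure eviction.

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry TTL (illustrative only)."""
    def __init__(self) -> None:
        self._store: dict = {}  # fingerprint -> (result, expires_at)

    def set(self, fingerprint: bytes, result: bool, ttl_seconds: float) -> None:
        self._store[fingerprint] = (result, time.monotonic() + ttl_seconds)

    def get(self, fingerprint: bytes):
        entry = self._store.get(fingerprint)
        if entry is None:
            return None
        result, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[fingerprint]  # lazily evict the expired entry
            return None                   # expired entries behave as misses
        return result

cache = TTLCache()
cache.set(b"fp", True, ttl_seconds=0.05)  # very short TTL, for demonstration
assert cache.get(b"fp") is True           # fresh entry: served from cache
time.sleep(0.06)
assert cache.get(b"fp") is None           # past TTL: treated as a cache miss
```

After expiration the next lookup misses, the proof is re-verified, and the fresh result is re-cached, exactly as on a first-time verification.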
Objection: "Does this break the security model?"
No. The security model of a ZK proof system is that the verifier accepts valid proofs and rejects invalid proofs. Caching the verifier's output does not change this model. The verifier still runs on every new proof. It still rejects invalid proofs. The only difference is that it does not re-run on proofs it has already checked. The security guarantee is identical. The computational cost is lower.
If your security model requires that every verification event independently executes the verifier (for example, in a consensus protocol where each node must independently verify), then ZK proof caching applies within each node, not across nodes. Each node verifies the proof once and caches the result for its own subsequent lookups. The inter-node verification independence is preserved.
Implementation With Cachee
Implementing ZK proof caching with Cachee requires three steps: computing the fingerprint, checking the cache, and storing the result on a miss.
```
# Install Cachee
brew tap h33ai-postquantum/tap
brew install cachee

# Initialize with ZK proof caching mode
cachee init --mode zkp

# Start Cachee
cachee start
```
The integration code follows a standard cache-aside pattern, with the fingerprint serving as the cache key.
```
// Pseudocode for ZK proof caching
fn verify_with_cache(proof: &Proof, inputs: &PublicInputs, vk: &VerifyingKey) -> bool {
    // Step 1: Compute fingerprint
    let fingerprint = sha3_256(
        proof.to_bytes(),
        inputs.to_bytes(),
        vk.to_bytes(),
        VERIFIER_VERSION,
        DOMAIN_SEPARATOR,
    );

    // Step 2: Check cache
    if let Some(result) = cachee.get(fingerprint) {
        return result; // 0.085 microseconds
    }

    // Step 3: Verify (cache miss)
    let result = stark_verify(proof, inputs, vk); // 25 microseconds

    // Step 4: Cache the result
    cachee.set(fingerprint, result, ttl: 86400);
    return result;
}
```
The TTL of 86,400 seconds (24 hours) is a reasonable default for most ZK proof workloads. Proofs that remain valid for the lifetime of the system can use a longer TTL or no expiration. Proofs with known expiration times should use a TTL matched to their validity period. The CacheeLFU admission policy ensures that frequently verified proofs remain in cache while rarely accessed proofs are evicted when memory pressure requires it.
Cost Savings at Scale
The cost impact of ZK proof caching depends on three variables: the number of proofs verified per second, the cache hit rate, and the cost of the compute running the verifier.
| Scale | Verifications/sec | CPU Cost (no cache) | CPU Cost (90% hit rate) | Monthly Savings (at $0.10/core-hour) |
|---|---|---|---|---|
| Small | 1,000 | 0.025 cores | 0.0025 cores | $1.62 |
| Medium | 100,000 | 2.5 cores | 0.25 cores | $162 |
| Large | 10,000,000 | 250 cores | 25 cores | $16,200 |
At the large scale (10 million verifications per second), ZK proof caching with a 90% hit rate saves 225 CPU cores of verification compute. At $0.10/core-hour on cloud compute, that is $16,200 per month in direct compute savings. Over a year, $194,400 in verification compute alone. And this does not account for the throughput benefit: your system can process 10x more verification requests on the same hardware, because 90% of them resolve in 85 nanoseconds instead of 25 microseconds.
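The cost model behind these numbers can be reproduced in a few lines. The assumptions here match the text: $0.10/core-hour and a 720-hour (30-day) month. This version also charges for the cache lookups themselves, so it lands slightly below the headline figure, which treats the 85 ns hits as free.

```python
VERIFY_US = 25.0       # raw STARK verification, microseconds
LOOKUP_US = 0.085      # cached lookup, microseconds
CORE_HOUR_USD = 0.10   # assumed cloud price per core-hour
HOURS_PER_MONTH = 720  # 30-day month

def monthly_savings_usd(verifications_per_sec: float, hit_rate: float) -> float:
    """Monthly compute savings from caching, including lookup overhead."""
    cores_before = verifications_per_sec * VERIFY_US / 1e6
    cores_after = verifications_per_sec * (
        (1 - hit_rate) * VERIFY_US + hit_rate * LOOKUP_US) / 1e6
    return (cores_before - cores_after) * CORE_HOUR_USD * HOURS_PER_MONTH

# 10M verifications/sec at a 90% hit rate: ~224.2 cores saved,
# just under the ~225-core / $16,200 headline figure because cache
# hits still cost 85 nanoseconds each.
assert abs(monthly_savings_usd(10_000_000, 0.9) - 16144.92) < 0.5
```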
For systems where verification latency is on the critical path (bridge validators, DeFi protocol gates, real-time identity checks), the latency improvement is more valuable than the cost savings. Reducing verification latency from 25 microseconds to 85 nanoseconds can take an end-to-end transaction from "noticeable delay" to "instant" from the user's perspective. In DeFi, where transaction ordering matters, faster verification can directly impact execution quality.
When Not to Cache
Do not cache verification results for proofs in adversarial multi-prover settings where you need each prover's submission independently verified against a changing state root. In these cases, the public inputs (including the state root) change per submission, so the fingerprint changes naturally and the cache produces misses. The caching system handles this correctly -- it simply provides no speedup in this specific scenario, because each verification is genuinely unique.
The Pattern Generalizes
ZK proof caching is a specific instance of a general principle: deterministic computations should be cached. Any function where the same inputs always produce the same output is a candidate for computation caching. ZK proof verification is an especially good candidate because the computation is expensive (25 microseconds), the inputs are large but hashable (kilobytes of proof data), and the output is small (one boolean). The ratio of computation cost to cache lookup cost is nearly 300x, which means even modest cache hit rates produce significant savings.
The same principle applies to any cryptographic verification: signature verification, hash chain validation, Merkle proof checking, certificate chain verification. All of these are deterministic functions that get called repeatedly on the same inputs. All of them benefit from the same computation fingerprinting and result caching approach. ZK proofs are simply the most expensive of the group, which makes the savings most dramatic.
If your system verifies the same proof more than once, you are paying for security theater instead of security. The proof was verified. The result is known. Cache the truth. Serve it in 85 nanoseconds. Stop burning 25 microseconds to re-discover what you already know.
The Bottom Line
ZK proof verification is a pure function. Same proof, same inputs, same key, same result -- always. STARK verification takes 25 microseconds. A cached result lookup takes 0.085 microseconds. That is a 294x speedup with zero reduction in security. The computation fingerprint binds the cached result to the exact proof, inputs, verification key, and verifier version. Cache invalidation is automatic when any input changes. Stop re-verifying what you already proved.
294x faster ZK proof verification. Verify once, cache the result, serve in 85 nanoseconds.