PayPal processes over 25 billion transactions per year. Block (Square, Cash App) processes billions more. Both companies run ML fraud scoring on every single transaction. The scoring pipeline is identical in structure: collect features (user history, transaction patterns, device data, merchant risk), run the model, return a decision. And in both cases, the same bottleneck dominates: feature collection takes 15–30ms while model inference takes just 1–2ms. Feature fetching is 90%+ of the total fraud scoring latency. L1 caching collapses the pipeline from 20ms to 2.015ms — a 10x improvement that changes what is architecturally possible.
The Shared Architecture Problem
PayPal and Block have different business models, different user bases, and different product lines. But their fraud scoring architectures share the same fundamental design: a real-time feature pipeline feeding an ML model. When a Cash App user sends $50 to a friend, the fraud scoring service needs to assemble a context vector before the model can make a decision. That context vector includes the sender’s transaction history encoding (embedding), the recipient’s risk profile, the device fingerprint, velocity features (how many transactions this user has made in the last minute, hour, day), geographic anomaly scores, and relationship graph features (is this a known contact? first-time recipient? connected to flagged accounts?).
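In code, that context vector is just a flat bundle of numeric features handed to the model. Here is a minimal sketch in Rust; the field names are hypothetical stand-ins for the hundreds of features a production pipeline actually carries.

```rust
/// Illustrative context vector assembled before model inference.
/// Field names are hypothetical; real fraud pipelines carry hundreds of features.
struct FraudContext {
    sender_embedding: [f32; 64],  // learned encoding of the sender's transaction history
    recipient_risk: f32,          // recipient risk profile score
    device_risk: f32,             // device fingerprint reputation score
    velocity_1m: u32,             // transactions by this sender in the last minute
    velocity_1h: u32,             // ...in the last hour
    velocity_24h: u32,            // ...in the last day
    geo_anomaly: f32,             // geographic anomaly score
    is_known_contact: bool,       // relationship graph: existing contact?
    first_time_recipient: bool,   // relationship graph: first transfer to this recipient?
    flagged_neighbor_count: u16,  // relationship graph: connections to flagged accounts
}
```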
At PayPal, the same pattern applies at even larger scale. Every Venmo transfer, every PayPal checkout, every Braintree-processed transaction runs through the fraud pipeline. PayPal has publicly discussed their risk scoring architecture, which relies on hundreds of features computed in real time. But real-time computation is limited by the speed of feature retrieval. Each feature lives in a different storage system: user embeddings in a vector store, velocity counters in a time-series database, graph features in a relationship database, device data in a fingerprinting service. Each lookup is a network round-trip.
Breaking Down the 20ms Pipeline
A typical fraud scoring request at PayPal or Block follows this timeline. The transaction event arrives at the fraud service at 0ms, and the service issues parallel requests to multiple feature stores:

- User embedding lookup: 2–4ms (feature store network round-trip)
- Recipient risk profile: 2–3ms
- Device fingerprint lookup: 1–3ms
- Velocity feature aggregation: 3–5ms (querying a time-series store and computing sliding-window aggregates)
- Graph feature computation: 4–8ms (traversing relationship edges)
- Merchant category risk: 1–2ms
- Account age and history features: 1–2ms
Even with aggressive parallelization, the total feature assembly time is bounded by the slowest feature. Graph features at 4–8ms and velocity aggregates at 3–5ms are the long poles. The p50 feature assembly time is approximately 8ms. The p99 — which is what matters for SLA compliance — hits 18–25ms due to tail latency amplification across multiple parallel requests.
Current PayPal/Block fraud scoring pipeline
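The shape of that pipeline is easy to see in a small async sketch. This assumes a tokio-based service and uses sleeps to stand in for the network round-trips; the stubbed latencies mirror the p50 figures above.

```rust
use std::time::{Duration, Instant};
use tokio::time::sleep;

// Stubbed feature-store round-trips; the sleeps stand in for network latency.
async fn fetch_user_embedding() -> Vec<f32> {
    sleep(Duration::from_millis(3)).await;
    vec![0.0; 64]
}
async fn fetch_velocity_aggregates() -> (u32, u32, u32) {
    sleep(Duration::from_millis(4)).await;
    (2, 9, 31)
}
async fn fetch_graph_features() -> (bool, u16) {
    sleep(Duration::from_millis(6)).await;
    (true, 0)
}

#[tokio::main]
async fn main() {
    let start = Instant::now();
    // The three lookups run concurrently, but assembly still waits on the
    // slowest one -- the graph traversal -- not on the sum of all three.
    let (embedding, velocity, graph) = tokio::join!(
        fetch_user_embedding(),
        fetch_velocity_aggregates(),
        fetch_graph_features(),
    );
    println!(
        "assembled features in {:?} (embedding dims: {}, velocity: {:?}, graph: {:?})",
        start.elapsed(),
        embedding.len(),
        velocity,
        graph
    );
}
```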
L1 Caching: 10 Features in 15 Microseconds
The critical insight is that fraud features at PayPal and Block exhibit extreme access locality. Active users — the ones actually making transactions right now — represent a small fraction of the total user base. PayPal has 430 million active accounts, but on any given day, perhaps 20–30 million users are transacting. The embeddings and features for those active users are accessed repeatedly throughout the day. Popular merchants (Amazon, Walmart, Target, Uber, Starbucks) have their risk profiles queried thousands of times per second. Device fingerprints repeat across sessions for the same user.
An L1 in-process cache stores these hot features directly in the fraud scoring service’s memory. Each lookup completes in 1.5 microseconds. Ten feature lookups take 15 microseconds. Not milliseconds — microseconds. The total fraud scoring pipeline becomes: 15µs for feature assembly + 2ms for model inference = 2.015ms total. That is a 10–15x reduction from the current 20–32ms p99.
With L1 caching: 10 features in 15 microseconds
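A minimal sketch of that hot path, assuming a sharded in-process map per feature family (the struct and method names are illustrative, not Cachee's actual API). On any miss the service falls back to the feature store and back-fills the cache.

```rust
use dashmap::DashMap;
use std::sync::Arc;

/// Hypothetical L1 feature cache: one sharded in-process map per feature family.
struct L1FeatureCache {
    embeddings: Arc<DashMap<u64, Vec<f32>>>, // user_id -> history embedding
    risk_scores: Arc<DashMap<u64, f32>>,     // account_id -> risk profile score
    device_risk: Arc<DashMap<u64, f32>>,     // device fingerprint hash -> score
}

impl L1FeatureCache {
    /// Hot path: each lookup is an in-process hash probe measured in
    /// microseconds, so ten features cost microseconds, not milliseconds.
    /// Returns None on any miss, signalling a fallback to the L2 feature store.
    fn assemble(&self, sender: u64, recipient: u64, device: u64) -> Option<(Vec<f32>, f32, f32)> {
        let embedding = self.embeddings.get(&sender)?.value().clone();
        let recipient_risk = *self.risk_scores.get(&recipient)?.value();
        let device_score = *self.device_risk.get(&device)?.value();
        Some((embedding, recipient_risk, device_score))
    }
}
```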
What 10x Faster Means at PayPal/Block Scale
The downstream effects of a 10x latency reduction at this scale are substantial. First, more transactions per server per second. When fraud scoring takes 20ms, each server thread is occupied for 20ms per transaction. At 2ms, the same thread handles 10x more transactions. This translates directly into fewer servers needed to handle peak volume. At PayPal’s scale (25B+ transactions/year, with peaks during Black Friday, Cyber Monday, and holiday shopping), the infrastructure savings from a 10x smaller fraud-scoring fleet are measured in tens of millions of dollars annually.
| Metric | Current (20ms) | With L1 (2ms) | Improvement |
|---|---|---|---|
| Fraud scoring latency (p99) | 20-32ms | 2.015ms | 10-15x faster |
| Feature fetch time | 18-30ms | 0.015ms | 1,200-2,000x faster |
| Transactions/server/sec | ~50 | ~500 | 10x throughput |
| Server fleet size (peak) | N servers | N/10 servers | 90% reduction |
Second, latency headroom enables more complex models. When the fraud pipeline takes 20ms and the SLA is 100ms, there is limited room for model complexity. At 2ms total pipeline, there is 98ms of headroom. PayPal and Block can run ensemble models (multiple models voting on each transaction), add more features (graph features that were previously too slow to compute in real time), or implement multi-stage scoring (a fast first-pass model followed by a detailed second-pass for borderline cases). Each of these improvements directly increases fraud detection accuracy without violating latency SLAs.
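One way that headroom gets spent is multi-stage scoring. The sketch below is illustrative (the thresholds and model calls are placeholders, not a documented PayPal or Block design), but it shows how a cheap first pass can clear most traffic while borderline transactions pay for a heavier ensemble.

```rust
enum Decision {
    Approve,
    Decline,
}

// Placeholder model calls; in a real service these would invoke the deployed models.
fn fast_model_score(_features: &[f32]) -> f32 {
    0.42 // cheap first-pass score, ~1-2ms of inference
}
fn ensemble_score(_features: &[f32]) -> f32 {
    0.17 // heavier ensemble, affordable once the pipeline leaves ~98ms of headroom
}

/// Hypothetical two-stage scoring flow enabled by the freed-up latency budget.
fn score_transaction(features: &[f32]) -> Decision {
    let fast = fast_model_score(features);
    if fast < 0.10 {
        return Decision::Approve; // clearly benign: skip the expensive pass
    }
    if fast > 0.90 {
        return Decision::Decline; // clearly fraudulent: skip the expensive pass
    }
    // Borderline: spend part of the remaining budget on the detailed model.
    if ensemble_score(features) > 0.50 {
        Decision::Decline
    } else {
        Decision::Approve
    }
}
```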
Velocity Features: The Special Case
Velocity features — transaction counts and amounts over sliding time windows — are the most latency-sensitive fraud features because they change with every transaction. A user’s 1-minute velocity counter increments every time they transact. Traditional architectures compute these in real time by querying a time-series store, which adds 3–5ms per lookup. With L1 caching, the velocity counter lives in-process and updates atomically on each transaction. The current value is always available at memory speed. Asynchronous write-back to the persistence layer ensures durability without blocking the scoring path. This pattern eliminates the single slowest feature lookup in the entire pipeline.
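A sketch of that pattern, assuming a sharded map of atomic counters; window expiry and rotation are omitted for brevity, and the types are illustrative rather than a specific library's API.

```rust
use dashmap::DashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

/// Hypothetical in-process velocity counters: updated synchronously on the
/// scoring path, persisted by a background task. Window rotation is omitted.
struct VelocityCounters {
    per_user: DashMap<u64, Arc<AtomicU64>>, // user_id -> rolling transaction count
}

impl VelocityCounters {
    /// Called on every transaction: a single atomic increment, no network I/O.
    fn record(&self, user_id: u64) -> u64 {
        let counter = self
            .per_user
            .entry(user_id)
            .or_insert_with(|| Arc::new(AtomicU64::new(0)))
            .clone();
        counter.fetch_add(1, Ordering::Relaxed) + 1
    }

    /// Asynchronous write-back: snapshot the counters and persist them off the
    /// scoring path, so durability never blocks a fraud decision.
    fn flush(&self, persist: impl Fn(u64, u64)) {
        for entry in self.per_user.iter() {
            persist(*entry.key(), entry.value().load(Ordering::Relaxed));
        }
    }
}
```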
For both PayPal and Block, the path from 20ms to 2ms fraud scoring is not a theoretical exercise. It is a concrete architectural change: move hot features from network storage into process memory, use L1 caching with predictive warming, and let the feature store serve as L2 for cold data. The model becomes the bottleneck. The feature pipeline disappears. And the savings — in infrastructure, in latency, in fraud losses prevented by better models — compound across billions of transactions.
The Numbers That Matter
Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.
- L0 hot path GET: 28.9 nanoseconds on Apple M4 Max, single-threaded against pre-warmed in-memory cache. This is the floor — there's no faster way to read a key.
- L1 CacheeLFU GET: ~89 nanoseconds on AWS Graviton4 (c8g.metal-48xl). Sharded DashMap with admission filtering.
- Sustained throughput: 32 million ops/sec single-threaded on M4 Max, 7.41 million ops/sec at 16 workers on Graviton4 c8g.16xlarge.
- L2 fallback: Sub-millisecond hits against ElastiCache Redis 7.4 over a same-AZ network when a lookup misses L1 and falls through to L2.
The compounding effect matters more than any single number. A 28-nanosecond L0 hit means your application spends almost zero time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.
When Caching Actually Helps
Caching isn't free. It introduces a consistency problem you didn't have before. Before adding any cache layer, the question to answer is whether your workload actually benefits from caching at all.
Caching helps when three conditions hold simultaneously. First, your reads dramatically outnumber your writes — typically a 10:1 ratio or higher. Second, the same keys get read repeatedly within a window where a cached value remains valid. Third, the cost of computing or fetching the underlying value is meaningfully higher than the cost of a cache lookup. Database queries that hit secondary indexes, RPC calls to slow upstream services, expensive computed aggregations, and rendered template fragments all qualify.
Caching hurts when those conditions don't hold. Write-heavy workloads suffer because every write invalidates a cache entry, multiplying your work. Workloads with poor key locality suffer because the cache wastes memory storing entries that never get reused. Workloads where the underlying fetch is already fast — well-indexed primary key lookups against a properly tuned database, for example — gain almost nothing from caching and inherit the consistency complexity for no reason.
The honest first step before any cache deployment is measuring your actual read/write ratio, key access distribution, and underlying fetch latency. If your read/write ratio is below 5:1 or your underlying database is already returning results in single-digit milliseconds, the engineering time is better spent elsewhere.
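That measurement doesn't require much machinery. Here is a rough sketch of the kind of instrumentation to wrap around the existing data access path before committing; the struct is made up for illustration.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Instant;

/// Illustrative instrumentation: count reads, writes, and fetch latency on the
/// existing data path to decide whether a cache layer is worth the complexity.
#[derive(Default)]
struct AccessStats {
    reads: AtomicU64,
    writes: AtomicU64,
    fetch_nanos: AtomicU64,
}

impl AccessStats {
    /// Call after each underlying fetch completes.
    fn record_read(&self, fetch_started: Instant) {
        self.reads.fetch_add(1, Ordering::Relaxed);
        self.fetch_nanos
            .fetch_add(fetch_started.elapsed().as_nanos() as u64, Ordering::Relaxed);
    }

    fn record_write(&self) {
        self.writes.fetch_add(1, Ordering::Relaxed);
    }

    /// Returns (read/write ratio, mean fetch latency in ms) -- the two numbers
    /// that decide whether a cache pays for its consistency complexity.
    fn summary(&self) -> (f64, f64) {
        let reads = self.reads.load(Ordering::Relaxed).max(1);
        let writes = self.writes.load(Ordering::Relaxed).max(1);
        let mean_fetch_ms =
            self.fetch_nanos.load(Ordering::Relaxed) as f64 / reads as f64 / 1_000_000.0;
        (reads as f64 / writes as f64, mean_fetch_ms)
    }
}
```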
Memory Efficiency Is The Hidden Cost Lever
Throughput numbers get the headlines but memory efficiency determines your monthly bill. A cache that stores the same hot data in less RAM lets you run a smaller instance class — and on AWS that's the difference between profitable and breakeven for a lot of services.
Redis stores each key as a Simple Dynamic String with roughly 16 bytes of header overhead, plus dictEntry pointers in the main hashtable, plus embedded TTL metadata. For 1KB values, the total per-entry footprint lands around 1,100-1,200 bytes once you account for hashtable load factor and allocator fragmentation. At a million keys, that's roughly 1.2 GB of resident memory for about 1 GB of actual payload.
Cachee's L1 layer uses sharded DashMap entries with compact packing — a 64-bit key hash, value bytes, an 8-byte expiry timestamp, and a small frequency counter for the CacheeLFU admission filter. Per-entry overhead lands at roughly 40 bytes of structural data on top of the value itself. For the same million-key workload, that's about 13% smaller resident memory. On AWS ElastiCache pricing, that gap is the difference between needing a cache.r7g.large versus a cache.r7g.xlarge for borderline workloads.
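A quick back-of-envelope check of that claim, using the upper end of the Redis per-entry overhead estimate; actual overhead varies with Redis version, encoding, and allocator behavior.

```rust
// Back-of-envelope resident-memory comparison for 1 million entries with 1 KB values.
// The overhead figures are the estimates from the text, not measured constants.
fn main() {
    let entries: u64 = 1_000_000;
    let value_bytes: u64 = 1_000;

    let redis_overhead: u64 = 200; // SDS header + dictEntry + TTL + load factor (upper estimate)
    let l1_overhead: u64 = 40;     // key hash + expiry timestamp + frequency counter

    let redis_total = entries * (value_bytes + redis_overhead);
    let l1_total = entries * (value_bytes + l1_overhead);

    println!("Redis resident: {:.2} GB", redis_total as f64 / 1e9);
    println!("L1 resident:    {:.2} GB", l1_total as f64 / 1e9);
    println!("Difference:     {:.0}%", 100.0 * (1.0 - l1_total as f64 / redis_total as f64));
}
```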
What This Actually Costs
Concrete pricing math beats hypothetical. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on AWS ElastiCache cache.r7g.xlarge primary plus a read replica — roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50-150 per month depending on access patterns.
Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.
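The library-style pattern is a tiered read: probe the in-process L1 first and only go over the network on a miss. A sketch follows, with the L2 fetch abstracted as a closure so it isn't tied to any particular Redis client.

```rust
use dashmap::DashMap;
use std::sync::Arc;

/// Illustrative tiered read: in-process L1 first, remote L2 only on a miss.
struct TieredCache {
    l1: Arc<DashMap<String, Vec<u8>>>,
}

impl TieredCache {
    fn get<F>(&self, key: &str, fetch_l2: F) -> Option<Vec<u8>>
    where
        F: Fn(&str) -> Option<Vec<u8>>, // in practice: a call to ElastiCache
    {
        // Hot path: in-process probe, no network I/O.
        if let Some(hit) = self.l1.get(key) {
            return Some(hit.value().clone());
        }
        // Miss: one network round-trip to L2, then back-fill L1 so the next
        // read for this key stays in-process.
        let value = fetch_l2(key)?;
        self.l1.insert(key.to_string(), value.clone());
        Some(value)
    }
}
```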
Over twelve months, that's $3,600 to $4,500 saved on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. The bigger savings usually come from eliminating cross-AZ data transfer charges, which Redis-as-a-service architectures incur on every read that crosses an availability zone.
Cut Fraud Scoring From 20ms to 2ms.
L1 feature caching at 1.5µs per lookup eliminates the feature pipeline bottleneck. See the impact at your transaction volume.