L1 in-process caching is fast. A DashMap lookup with CacheeLFU admission completes in about 1.5 microseconds. For most workloads, that is more than enough. But there is a class of workload where 1.5 microseconds is not fast enough — and where the real problem is not speed but architecture. Python ML serving. Gunicorn workers. Multiprocessing pools. Every one of them runs into the same wall: the GIL forces a multi-process architecture, and multi-process architecture forces cache duplication. L0 eliminates both problems with a single primitive: shared memory mapped across processes, readable at sub-nanosecond speed, with zero copies.
The GIL Problem Nobody Talks About
Every serious Python deployment runs multi-process. Not by choice — by necessity. Python's Global Interpreter Lock prevents true multi-threaded parallelism for CPU-bound work. If you need to serve ML inferences across 8 CPU cores, you run 8 processes. Gunicorn pre-forks them. Uvicorn does the same. Celery spawns worker processes. The multiprocessing module forks explicitly.
This works for compute. It does not work for caching.
Each process has its own virtual address space. Each process maintains its own copy of every cached value. If your feature store holds 2GB of embedding vectors and you run 8 Gunicorn workers, you are using 16GB of RAM for 2GB of data. The duplication scales linearly with worker count. At 32 workers — common on production inference servers — you are at 64GB for 2GB of features.
This is not a theoretical problem. It is the reason ML teams over-provision memory by 4–8x, the reason feature stores get moved to external services (adding network latency), and the reason some teams give up on caching features locally altogether and pay the Redis round-trip on every inference.
Current Solutions Are All Compromises
The standard approaches to multi-process caching in Python are all tradeoffs:
- Per-worker caches: Each worker maintains its own in-process cache. Memory usage scales linearly with workers. 8 workers = 8 copies. This is the default and the worst option for memory efficiency.
- Redis as shared cache: Workers connect to a shared Redis instance. Eliminates duplication, but every feature lookup is a network round-trip: 1–5ms per GET. At millions of inferences per second, the cumulative latency is enormous. You traded a memory problem for a latency problem.
- Unix domain sockets / pipes: Workers communicate through IPC to a shared cache process. Better than TCP (no network stack), but still 10–100 microseconds per operation due to context switches and kernel-space copying. Not fast enough for hot-path ML feature access.
- multiprocessing.shared_memory: Python 3.8 added basic shared memory support. It works, but it provides raw bytes with no cache semantics — no eviction, no concurrency control, no hash table. You would have to build the entire cache yourself on top of a byte buffer (the sketch after this list shows just how bare that primitive is).
None of these give you what you actually need: a proper cache with eviction, concurrency, and hash-based lookups, shared across processes, readable at hardware speed.
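To make that last point concrete, here is what the standard-library primitive actually gives you: a raw byte buffer and nothing else. This is a minimal, stdlib-only sketch — the segment size and offsets are arbitrary example values, and none of this is Cachee-specific.

```python
# multiprocessing.shared_memory hands you raw shared bytes -- no hash table,
# no eviction, no concurrency control. Everything cache-like is yours to build.
from multiprocessing import shared_memory
import numpy as np

# Create a 1 MiB shared segment; another process attaches with
# SharedMemory(name=shm.name).
shm = shared_memory.SharedMemory(create=True, size=1 << 20)

# A writable buffer of raw bytes...
shm.buf[0:5] = b"hello"

# ...which you can view as a NumPy array without copying, but "keys", "values",
# TTLs, and eviction do not exist at this level.
view = np.ndarray((256,), dtype=np.float32, buffer=shm.buf, offset=1024)
view[0] = 3.14

del view          # release the exported buffer before closing the segment
shm.close()
shm.unlink()
```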
L0: Shared Memory, Pointer Dereference Reads
L0 is Cachee's answer. At startup, Cachee allocates a memory-mapped region using shm_open + mmap with MAP_SHARED. The region contains a pre-allocated hash table with open addressing and linear probing. Keys and values are stored inline — no heap pointers, no indirection.
When Gunicorn's master process forks workers, each worker inherits the memory mapping. No additional setup. The shared region is live the moment the worker starts.
The read path is three operations: hash the key, index into the shared memory region, return the value. No system call. No memory copy. No serialization. It is a pointer dereference into memory that is already in the process's virtual address space. On modern hardware, this completes in 0.3–0.8 nanoseconds — a hit in the CPU's L1 cache.
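A minimal Python sketch of that layout and read path, assuming Linux and a file under /dev/shm as the shm_open analog. The slot sizes, field layout, and function names are invented for illustration — this is not Cachee's engine — and a Python slice copies the value bytes where the real zero-copy path would hand back a view.

```python
import hashlib
import mmap
import os

NUM_SLOTS = 1024                      # power of two, so masking replaces modulo
KEY_SIZE, VAL_SIZE = 32, 512          # keys and values stored inline, fixed width
SLOT_SIZE = 1 + KEY_SIZE + VAL_SIZE   # 1-byte occupied flag + key + value
REGION_SIZE = NUM_SLOTS * SLOT_SIZE

def attach(path="/dev/shm/cachee_demo"):
    """Create or open the shared file and map it with MAP_SHARED."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, REGION_SIZE)
    region = mmap.mmap(fd, REGION_SIZE, flags=mmap.MAP_SHARED)
    os.close(fd)                      # the mapping stays valid after the fd closes
    return region

def _slot(key, probe):
    # Deterministic hash so every process agrees on slot placement.
    h = int.from_bytes(hashlib.blake2b(key, digest_size=8).digest(), "little")
    return ((h + probe) & (NUM_SLOTS - 1)) * SLOT_SIZE

def put(region, key, value):
    key = key[:KEY_SIZE].ljust(KEY_SIZE, b"\0")
    value = value[:VAL_SIZE].ljust(VAL_SIZE, b"\0")
    for probe in range(NUM_SLOTS):    # linear probing for an empty or matching slot
        off = _slot(key, probe)
        if region[off] == 0 or region[off + 1:off + 1 + KEY_SIZE] == key:
            region[off + 1:off + 1 + KEY_SIZE] = key
            region[off + 1 + KEY_SIZE:off + SLOT_SIZE] = value
            region[off] = 1           # mark the slot occupied after the payload lands
            return
    raise MemoryError("table full")

def get(region, key):
    """Read path: hash, index, probe, return the value bytes."""
    key = key[:KEY_SIZE].ljust(KEY_SIZE, b"\0")
    for probe in range(NUM_SLOTS):
        off = _slot(key, probe)
        if region[off] == 0:
            return None               # empty slot: the key is absent
        if region[off + 1:off + 1 + KEY_SIZE] == key:
            return region[off + 1 + KEY_SIZE:off + SLOT_SIZE]
    return None

if __name__ == "__main__":
    region = attach()
    put(region, b"embedding:user:123", b"\x01\x02\x03\x04")
    if os.fork() == 0:                # a forked worker inherits the mapping as-is
        print("child sees:", get(region, b"embedding:user:123")[:4])
        os._exit(0)
    os.wait()
```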
Writes use the same MVCC architecture as the L1 engine. A new version is written to the next slot, the pointer is atomically swapped, and old versions remain readable until all readers advance their epoch. Readers are never blocked by writers, which is the same guarantee PostgreSQL's MVCC provides for table reads.
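A rough, ordering-only illustration of the idea, where a two-slot entry stands in for the multi-version slots and epoch bookkeeping. Python has no shared-memory atomics, so the engine's atomic pointer swap is only approximated here by writing the payload first and flipping a one-byte version index last.

```python
import mmap
import os

VAL_SIZE = 64
# One entry: [current-version byte][version-0 payload][version-1 payload]
ENTRY_SIZE = 1 + 2 * VAL_SIZE

fd = os.open("/dev/shm/mvcc_demo", os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, ENTRY_SIZE)
region = mmap.mmap(fd, ENTRY_SIZE, flags=mmap.MAP_SHARED)
os.close(fd)

def write(value):
    current = region[0]                                        # 0 or 1
    inactive = 1 - current
    off = 1 + inactive * VAL_SIZE
    region[off:off + VAL_SIZE] = value.ljust(VAL_SIZE, b"\0")  # fill the inactive slot
    region[0] = inactive                                       # publish by flipping the index last

def read():
    current = region[0]            # a reader sees either the old or the new version,
    off = 1 + current * VAL_SIZE   # never a half-written one
    return region[off:off + VAL_SIZE]

write(b"v1")
write(b"v2")
print(read()[:2])                  # b'v2'
```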
The Complete Memory Hierarchy
L0 is not a replacement for L1. It is the tier below it. Together, the four tiers form a complete caching hierarchy:
| Tier | Mechanism | Latency | Scope |
|---|---|---|---|
| L0: Zero-Copy Shared Memory | Pointer dereference (mmap) | <1ns | All processes, same machine |
| L1: In-Process RAM | DashMap + CacheeLFU | ~1.5µs | Single process |
| L1.5: NVMe SSD | io_uring async read | 10–50µs | Single machine |
| L2: Redis / ElastiCache | Network round-trip | 1–5ms | Cluster-wide |
Data moves between tiers automatically based on access frequency. The hottest features live in L0, always available at pointer speed. The working set lives in L1 with admission control. Warm data spills to NVMe. The full keyspace is backed by L2. Promotion and demotion happen without application code.
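The promotion half of that behavior can be pictured as a simple read-through loop over the tiers. This sketch is not the Cachee API: the class and method names are invented, each tier is anything with get/set, and demotion and eviction are omitted for brevity.

```python
from typing import Optional, Protocol

class Tier(Protocol):
    def get(self, key: str) -> Optional[bytes]: ...
    def set(self, key: str, value: bytes) -> None: ...

class TieredCache:
    """Check the fastest tier first; on a hit lower down, promote into the tiers above."""
    def __init__(self, tiers: list[Tier]):
        self.tiers = tiers                      # ordered fastest (L0) to slowest (L2)

    def get(self, key: str) -> Optional[bytes]:
        for i, tier in enumerate(self.tiers):
            value = tier.get(key)
            if value is not None:
                for faster in self.tiers[:i]:   # promote so the next read hits sooner
                    faster.set(key, value)
                return value
        return None

class DictTier:                                 # trivial in-memory stand-in for any tier
    def __init__(self): self.d = {}
    def get(self, key): return self.d.get(key)
    def set(self, key, value): self.d[key] = value

l0, l1, l2 = DictTier(), DictTier(), DictTier()
cache = TieredCache([l0, l1, l2])
l2.set("feature:42", b"\x01\x02")               # only present in the slowest tier
cache.get("feature:42")                         # hit in L2, promoted into L1 and L0
assert l0.get("feature:42") == b"\x01\x02"
```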
NumPy Views: Zero-Copy All the Way Down
The Python bindings are designed for ML workloads specifically. get_numpy(key, shape, dtype) returns a NumPy array whose underlying buffer is the shared memory region. Not a copy of it. The actual bytes in shared memory become the backing store for the NumPy array.
```python
from cachee import SharedCache
import numpy as np
import torch

cache = SharedCache(path="/dev/shm/cachee", size_gb=2)

# Store a 128-dim embedding
embedding_vector = np.random.rand(128).astype(np.float32)  # any float32[128] vector
cache.set_numpy("embedding:user:123", embedding_vector)

# Read it back: zero copy, zero allocation
features = cache.get_numpy("embedding:user:123", shape=(128,), dtype=np.float32)

# Pass directly to PyTorch — still zero copy
tensor = torch.from_numpy(features)
```
The entire chain from shared memory through NumPy to PyTorch involves zero memory copies. The tensor that PyTorch receives shares the same physical memory as the cached value. This is critical for ML serving where you are accessing thousands of features per inference across dozens of workers.
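For readers who want to see the underlying mechanism, this is how a zero-copy NumPy view over a shared mapping works in general, using only the standard library and NumPy. It illustrates the technique such bindings build on, not Cachee's own binding code; the path and sizes are arbitrary.

```python
import mmap
import os
import numpy as np

# Map a small file under /dev/shm with room for one float32[128] vector.
fd = os.open("/dev/shm/view_demo", os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, 128 * 4)
region = mmap.mmap(fd, 128 * 4, flags=mmap.MAP_SHARED)
os.close(fd)

# np.frombuffer wraps the mapping without copying: writes through the array land
# directly in shared memory, and any process mapping the same file sees them.
features = np.frombuffer(region, dtype=np.float32, count=128)
features[:] = np.arange(128, dtype=np.float32)

# torch.from_numpy shares the same buffer, so the chain stays zero-copy:
# import torch
# tensor = torch.from_numpy(features)
```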
When Every Nanosecond Is Revenue
L0 is not for every workload. If your cache reads are not in the hot path of a latency-critical loop, L1 at 1.5 microseconds is more than sufficient. L0 is for the workloads where the cache read is the hot path:
- Real-time ML inference: Feature lookups at millions of requests per second across worker processes. Every nanosecond between request and prediction is latency your users feel.
- Recommendation engines: Accessing user embeddings, item embeddings, and feature crosses for ranking. Hundreds of features per request, millions of requests per day.
- NLP serving: Tokenizer vocabularies, embedding lookup tables, and cached attention patterns shared across all worker processes.
- Financial feature stores: Real-time market features accessed at sub-microsecond speed for pricing and risk models.
For these workloads, L0 removes feature access from the latency budget entirely. The features are not "fetched" or "looked up." They are already in your address space. Reading them is indistinguishable from reading a local variable.
Related Reading
- Zero-Copy L0 Product Page
- MVCC: Multi-Version Concurrency Control
- Hybrid Tiering: L1 + L1.5 + L2
- AI Infrastructure Solutions
- L0 Technical Specification
The Numbers That Matter
Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.
- L0 hot path GET: 28.9 nanoseconds on Apple M4 Max, single-threaded against a pre-warmed in-memory cache. This is the floor for a complete, end-to-end GET call — there's no faster way to read a key.
- L1 CacheeLFU GET: ~89 nanoseconds on AWS Graviton4 (c8g.metal-48xl). Sharded DashMap with admission filtering.
- Sustained throughput: 32 million ops/sec single-threaded on M4 Max, 7.41 million ops/sec at 16 workers on Graviton4 c8g.16xlarge.
- L2 fallback: Sub-millisecond hits against ElastiCache Redis 7.4 over same-AZ network when L1 misses cascade through.
The compounding effect matters more than any single number. A 28-nanosecond L0 hit means your application spends almost zero time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.
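If you want to sanity-check figures like these on your own hardware, a timing loop like the one below gives a first-order number. The harness is generic — point it at whatever client you actually run — and bear in mind that a Python-level loop adds interpreter and call overhead on top of the engine's raw latency.

```python
import time

def bench_get(cache, keys, iterations=1_000_000):
    """Average nanoseconds per get() over a pre-warmed key set."""
    for k in keys:                       # warm the cache so we measure hits, not misses
        cache.get(k)
    n = len(keys)
    start = time.perf_counter_ns()
    for i in range(iterations):
        cache.get(keys[i % n])
    return (time.perf_counter_ns() - start) / iterations

# Baseline: a plain dict wrapped in the same interface. The measured figure
# includes Python call overhead, which is useful context for the numbers above.
class DictCache:
    def __init__(self, data): self.data = data
    def get(self, key): return self.data.get(key)

keys = [f"feature:{i}" for i in range(1024)]
cache = DictCache({k: b"x" * 800 for k in keys})
print(f"{bench_get(cache, keys):.1f} ns per get")
```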
When Caching Actually Helps
Caching isn't free. It introduces a consistency problem you didn't have before. Before adding any cache layer, the question to answer is whether your workload actually benefits from caching at all.
Caching helps when three conditions hold simultaneously. First, your reads dramatically outnumber your writes — typically a 10:1 ratio or higher. Second, the same keys get read repeatedly within a window where a cached value remains valid. Third, the cost of computing or fetching the underlying value is meaningfully higher than the cost of a cache lookup. Database queries that hit secondary indexes, RPC calls to slow upstream services, expensive computed aggregations, and rendered template fragments all qualify.
Caching hurts when those conditions don't hold. Write-heavy workloads suffer because every write invalidates a cache entry, multiplying your work. Workloads with poor key locality suffer because the cache wastes memory storing entries that never get reused. Workloads where the underlying fetch is already fast — well-indexed primary key lookups against a properly tuned database, for example — gain almost nothing from caching and inherit the consistency complexity for no reason.
The honest first step before any cache deployment is measuring your actual read/write ratio, key access distribution, and underlying fetch latency. If your read/write ratio is below 5:1 or your underlying database is already returning results in single-digit milliseconds, the engineering time is better spent elsewhere.
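Those three conditions translate directly into back-of-the-envelope math. The helper below encodes the rules of thumb above plus simple expected-value arithmetic; the 10x fetch-versus-lookup threshold and the default lookup cost are illustrative assumptions, so substitute your own measurements.

```python
def caching_worth_it(reads_per_sec, writes_per_sec, hit_rate, fetch_us, lookup_us=1.5):
    """Rough check of the three conditions: read/write ratio, reuse, and fetch cost."""
    ratio = reads_per_sec / max(writes_per_sec, 1e-9)
    # Expected per-read cost with the cache: every read pays the lookup,
    # and the (1 - hit_rate) misses also pay the underlying fetch.
    cached_us = lookup_us + (1 - hit_rate) * fetch_us
    return {
        "read_write_ratio": round(ratio, 1),
        "ratio_ok": ratio >= 10,                           # the ~10:1 rule of thumb above
        "fetch_vs_lookup_ok": fetch_us > 10 * lookup_us,   # illustrative threshold
        "latency_saved_per_read_us": round(fetch_us - cached_us, 1),
    }

# Example: 5,000 reads/s, 200 writes/s, 90% expected hit rate, 3 ms database fetch.
print(caching_worth_it(5_000, 200, 0.90, fetch_us=3_000))
```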
Memory Efficiency Is The Hidden Cost Lever
Throughput numbers get the headlines but memory efficiency determines your monthly bill. A cache that stores the same hot data in less RAM lets you run a smaller instance class — and on AWS that's the difference between profitable and breakeven for a lot of services.
Redis stores each key as a Simple Dynamic String with 16 bytes of header overhead, plus dictEntry pointers in the main hashtable, plus embedded TTL metadata. For 1KB values, the total per-entry footprint lands around 1,100–1,200 bytes once you account for hashtable load factor and slab fragmentation. At a million keys, that's roughly 1.2 GB of resident memory for 1 GB of actual values.
Cachee's L1 layer uses sharded DashMap entries with compact packing — a 64-bit key hash, value bytes, an 8-byte expiry timestamp, and a small frequency counter for the CacheeLFU admission filter. Per-entry overhead lands at roughly 40 bytes of structural data on top of the value itself. For the same million-key workload, that's about 13% smaller resident memory. On AWS ElastiCache pricing, that gap is the difference between needing a cache.r7g.large versus a cache.r7g.xlarge for borderline workloads.
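Spelling out the arithmetic behind that comparison, using the per-entry figures quoted above (both overhead numbers are approximate and workload-dependent):

```python
KEYS = 1_000_000
VALUE_BYTES = 1_000

redis_overhead = 200      # upper end of the 1,100-1,200 byte footprint quoted above
cachee_overhead = 40      # key hash, expiry timestamp, frequency counter

redis_resident = KEYS * (VALUE_BYTES + redis_overhead)     # ~1.20 GB
cachee_resident = KEYS * (VALUE_BYTES + cachee_overhead)   # ~1.04 GB

print(f"Redis-style:  {redis_resident / 1e9:.2f} GB")
print(f"Cachee-style: {cachee_resident / 1e9:.2f} GB")
print(f"Smaller by:   {1 - cachee_resident / redis_resident:.0%}")   # ~13%
```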
What This Actually Costs
Concrete pricing math beats hypotheticals. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on an AWS ElastiCache cache.r7g.xlarge primary plus a read replica — roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50–150 per month depending on access patterns.
Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.
Added up over twelve months, that's $3,600 to $4,500 per year on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. On top of that, moving reads in-process eliminates the cross-AZ data transfer charges that Redis-as-a-service architectures incur on every read that crosses an availability zone.
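The same figures as plain arithmetic. Prices are the article's round numbers, not a live AWS quote.

```python
MONTHS = 12

before_nodes = 480            # cache.r7g.xlarge primary + replica, per month
before_transfer = (50, 150)   # cross-AZ data transfer range, per month
after = (120, 180)            # ElastiCache retained only as a cold L2 fallback

node_savings = ((before_nodes - after[1]) * MONTHS,   # $3,600 per year
                (before_nodes - after[0]) * MONTHS)   # $4,320 per year
transfer_savings = (before_transfer[0] * MONTHS,      # $600 per year
                    before_transfer[1] * MONTHS)      # $1,800 per year

print(f"node savings:     ${node_savings[0]:,}-{node_savings[1]:,} per year")
print(f"transfer savings: ${transfer_savings[0]:,}-{transfer_savings[1]:,} per year")
```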
Cache Features at Hardware Speed. Across Every Worker.
Zero-copy shared memory. Sub-nanosecond reads. Native Python bindings with NumPy views. Purpose-built for ML inference at scale.