Cachee CLI Is Live: brew install cachee
Starting today, Cachee is installable with a single command. No manual downloads. No Docker containers. No cloning a repository and building from source. Open your terminal, run brew install cachee, and you will have a post-quantum caching service running locally in under 60 seconds.
This is the moment we have been building toward. Cachee was designed to be the fastest, most secure caching engine on the planet, and now it is also the easiest to install. One line in your terminal gives you 31-nanosecond reads, full RESP protocol compatibility, and three-family post-quantum attestation -- all running natively on your machine.
brew tap h33ai-postquantum/tap
brew install cachee
That is it. Two commands and you are running a cache that speaks the same protocol as Redis, backed by cryptographic guarantees that no other caching system on the market can match.
Why Terminal-First
Developers do not evaluate tools by reading landing pages. They evaluate tools by installing them. Every minute spent navigating a signup flow or configuring a Docker Compose file is a minute closer to giving up and staying with the status quo. We built Cachee to be tried, not demoed.
The CLI gives you the full engine. Not a sandbox. Not a feature-gated trial. The same core that delivers 31ns GET latency on production Graviton4 hardware runs on your laptop. The same RESP protocol layer that lets any Redis client connect without modification. The same post-quantum attestation pipeline that signs every cache entry with three independent cryptographic families. You evaluate the real thing, and then you decide if it belongs in your stack.
We have seen too many infrastructure tools hide behind "request a demo" buttons and sales calls. If Cachee cannot convince you in the terminal, it does not deserve to be in your architecture. That is the bar we hold ourselves to.
Three Ways to Install
Homebrew (Recommended)
The fastest path for macOS developers. Apple Silicon native binaries, no Rosetta translation.
brew tap h33ai-postquantum/tap
brew install cachee
cachee --version
Direct Download from GitHub Releases
For environments where Homebrew is not available or when you need a specific version. Prebuilt binaries for macOS ARM64, with Linux and Intel Mac coming in v0.1.1.
# Download the latest release
curl -L https://github.com/H33ai-postquantum/cachee/releases/latest/download/cachee-darwin-arm64.tar.gz -o cachee.tar.gz
# Extract and install
tar xzf cachee.tar.gz
sudo mv cachee /usr/local/bin/
cachee --version
AWS Marketplace
For enterprise teams that need consolidated billing through their existing AWS account. Same engine, same performance, with procurement-friendly licensing.
# After subscribing on AWS Marketplace:
cachee init --license aws-marketplace
cachee start
What You Get
The Cachee CLI is not a wrapper around a remote service. It is the engine itself, compiled to a single binary. Here is what each command does.
cachee init generates a local configuration file and a Dilithium keypair for attestation. Your keys never leave your machine. The config file is human-readable TOML with sensible defaults that work out of the box.
$ cachee init
Generated config at ~/.cachee/config.toml
Generated Dilithium keypair at ~/.cachee/keys/
Ready to start. Run: cachee start
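The generated file might look something like the sketch below. The section and key names here are illustrative guesses, not the actual file format; the defaults shown (port 6380, 256MB memory limit, CacheeLFU eviction, attestation off) are the ones the daemon reports at startup.

```toml
# ~/.cachee/config.toml -- hypothetical sketch; actual keys may differ
[server]
bind = "127.0.0.1"
port = 6380

[cache]
memory_limit = "256MB"
eviction = "cacheelfu"

[attestation]
enabled = false
```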
cachee start launches the daemon on port 6380. It speaks RESP, which means every Redis client library in every language already works. Connect with redis-cli -p 6380, your Python redis package, your Node ioredis client, or any other RESP-compatible tool. No SDK required.
$ cachee start
Cachee v0.1.0 listening on 127.0.0.1:6380
PQ attestation: disabled (enable with --attest)
Eviction policy: CacheeLFU
Memory limit: 256MB
cachee set and cachee get work exactly as you would expect. Or skip them entirely and use any Redis client -- the RESP layer handles everything.
$ cachee set user:1001 '{"name":"alice","role":"admin"}'
OK
$ cachee get user:1001
{"name":"alice","role":"admin"}
cachee bench runs a built-in benchmark against your local instance. No external tooling needed. You see your hardware's actual performance numbers in seconds.
$ cachee bench
Running 1M operations (50/50 read/write)...
Throughput: 32,441,208 ops/sec
P50 latency: 28ns
P99 latency: 41ns
P99.9 latency: 58ns
cachee attest --enable turns on three-family post-quantum signing. Every SET operation after this produces a 58-byte attestation receipt. This is the feature that makes Cachee fundamentally different from every other cache.
$ cachee attest --enable
PQ attestation enabled.
Signing families: ML-DSA-65, FALCON-512, SLH-DSA-SHA2-128f
Receipt size: 58 bytes per entry
Performance Out of the Box
These are not theoretical numbers from a benchmarking white paper. Run cachee bench on your own hardware and see them yourself. The 31ns GET latency comes from Cachee's lock-free L0 hot path, which keeps frequently accessed entries in CPU cache lines. The 32 million ops/sec single-thread throughput is a function of the same architecture -- no locks, no syscalls, no serialization overhead on the hot path.
The 99%+ hit rate comes from CacheeLFU, our adaptive eviction policy. Unlike traditional LFU or LRU, CacheeLFU combines frequency counts with recency weighting and access pattern analysis to make eviction decisions that reflect real workload behavior, not just textbook heuristics.
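CacheeLFU's actual algorithm is not public, but the general idea of blending frequency with recency weighting can be sketched in a few lines. In this illustrative Python sketch (class and parameter names are invented for the example), each hit adds weight, and that weight decays over time, so a key that was hot an hour ago loses out to a key that is hot right now:

```python
import time
from dataclasses import dataclass, field

# Illustrative sketch only -- not CacheeLFU's real implementation.
# Eviction score = hit count decayed by time since last access.

@dataclass
class Entry:
    value: object
    hits: int = 0
    last_access: float = field(default_factory=time.monotonic)

class DecayedLFUCache:
    def __init__(self, capacity: int, half_life: float = 60.0):
        self.capacity = capacity
        self.half_life = half_life  # seconds for a hit's weight to halve
        self.entries: dict[str, Entry] = {}

    def _score(self, e: Entry, now: float) -> float:
        # Frequency weighted by recency: stale hits count for less.
        age = now - e.last_access
        return e.hits * 0.5 ** (age / self.half_life)

    def get(self, key: str):
        e = self.entries.get(key)
        if e is None:
            return None
        e.hits += 1
        e.last_access = time.monotonic()
        return e.value

    def set(self, key: str, value) -> None:
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Evict the entry with the lowest decayed-frequency score.
            now = time.monotonic()
            victim = min(self.entries,
                         key=lambda k: self._score(self.entries[k], now))
            del self.entries[victim]
        self.entries[key] = Entry(value)
```

A pure LFU would keep a key forever once it accumulated enough hits; the decay term is what lets the policy track shifting workloads instead of textbook access distributions.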
Post-Quantum Attestation Explained
Three Independent Cryptographic Families
Every SET operation produces a receipt signed by three independent post-quantum signature families: ML-DSA-65 (lattice-based), FALCON-512 (NTRU lattice-based), and SLH-DSA-SHA2-128f-simple (stateless hash-based). Cache poisoning -- an attacker modifying cached values without detection -- requires breaking all three families simultaneously. These rely on three independent mathematical hardness assumptions: MLWE lattices, NTRU lattices, and stateless hash functions. Compromising one family leaves the other two intact.
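The "all three must verify" logic can be illustrated without a post-quantum library. In the sketch below, HMAC keys stand in for the three signature families (the real system signs with ML-DSA-65, FALCON-512, and SLH-DSA-SHA2-128f, not HMAC); the point is only the acceptance rule: an entry is trusted if and only if every family's check passes, so tampering undetected means defeating all three at once.

```python
import hashlib
import hmac
import secrets

# Conceptual sketch only: HMAC tags stand in for the three
# post-quantum signature families so the verification logic
# can be shown without a PQ crypto library.
FAMILIES = ("ml-dsa-65", "falcon-512", "slh-dsa-sha2-128f")
KEYS = {name: secrets.token_bytes(32) for name in FAMILIES}

def attest(value: bytes) -> dict[str, bytes]:
    """Produce one tag per family for a cache entry."""
    return {name: hmac.new(key, value, hashlib.sha256).digest()
            for name, key in KEYS.items()}

def verify(value: bytes, receipt: dict[str, bytes]) -> bool:
    # Accept only if every family's tag checks out: a forgery
    # must break all three families simultaneously.
    return all(
        hmac.compare_digest(
            receipt[name],
            hmac.new(KEYS[name], value, hashlib.sha256).digest())
        for name in FAMILIES)
```

Swapping any cached value invalidates all three tags at once, which is the property the independent hardness assumptions are there to protect.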
The 58-byte attestation receipt is compressed using H33 Substrate technology (patent pending -- H33 Substrate, Claims 124-125). The raw signatures from all three families total more than 16 kilobytes. Substrate compresses them to 58 bytes while preserving independent verifiability, a 285x compression ratio that makes per-entry attestation practical at cache-line scale.
This is not a theoretical security model. It is running in production today. Every attested entry in Cachee carries cryptographic proof of its integrity, verifiable by any party with the public key, resistant to every known quantum attack vector.
Attestation is off by default because not every workload needs it. A session cache for a dev environment does not require post-quantum signatures. But when you are caching financial data, authentication tokens, or any value where integrity matters, one flag turns on a security guarantee that no other caching system provides.
RESP Compatibility: Your Redis Clients Already Work
Cachee speaks RESP (REdis Serialization Protocol) natively. This is a deliberate design decision. The RESP ecosystem is enormous -- client libraries exist in every major language, monitoring tools understand the protocol, and developers already know the command set. We did not invent a new protocol because we did not need to.
If your application currently talks to Redis, pointing it at Cachee requires changing one configuration value: the port number. Your application code stays the same. Your client library stays the same. Your monitoring stays the same. You get 31ns reads, CacheeLFU eviction, and optional post-quantum attestation without rewriting a single line of application code.
# Python - just change the port
import redis
cache = redis.Redis(host='localhost', port=6380)
cache.set('key', 'value')
print(cache.get('key'))
// Node.js - same story
const Redis = require('ioredis');
const cache = new Redis({ port: 6380 });

async function main() {
  await cache.set('key', 'value');
  console.log(await cache.get('key'));
}
main();
What's Next
This v0.1.0 release targets macOS on Apple Silicon. Here is the roadmap for the next two releases:
- v0.1.1 -- Linux ARM64 and Intel Mac (x86_64) binaries. Docker image published to GitHub Container Registry. These are builds of the same engine; the core is already cross-platform.
- v0.2.0 -- crates.io publish for Rust-native embedding. D-Cachee federation support (FIG 23 from the patent), enabling distributed Cachee deployments across multiple nodes with consistent attestation. This is where Cachee moves from single-node to cluster-scale.
We are also working with early enterprise adopters on managed deployment options. If you need Cachee in your VPC with SLA guarantees, reach out at support@h33.ai.
Get Cachee Running in 60 Seconds
One command. Post-quantum security. 31ns reads. No signup required.
Install Cachee Now | Read the Documentation | Subscribe on AWS Marketplace