Frequently Asked Questions

Get instant answers to common questions about Cachee.ai, integration guides, and troubleshooting help

How do I get started with Cachee.ai?

Getting started with Cachee.ai is simple and takes less than 5 minutes:

  1. Sign up for a free 14-day trial (no credit card required)
  2. Install our SDK using your preferred language
  3. Configure your first cache policy using our AI assistant
  4. Deploy and start seeing performance improvements immediately

Our SDK supports Node.js, Python, Go, Java, PHP, and more. Check our documentation for detailed integration guides.

What programming languages and frameworks do you support?

Cachee.ai supports all major programming languages and frameworks:

Languages:

  • JavaScript/Node.js
  • Python (Django, Flask, FastAPI)
  • Go (Gin, Echo, Fiber)
  • Java (Spring Boot, Quarkus)
  • PHP (Laravel, Symfony)
  • Ruby (Rails, Sinatra)
  • C# (.NET, ASP.NET Core)

Cloud Platforms:

  • AWS (Lambda, ECS, EC2)
  • Google Cloud (Cloud Run, GKE)
  • Microsoft Azure
  • Vercel, Netlify, Heroku

How does AI-powered cache optimization work?

Our proprietary AI engine continuously learns your application's data access patterns and optimizes cache performance automatically. It predicts which data will be requested next, pre-warms the cache, and adjusts caching strategies in real time, sustaining hit rates above 95%.
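As a toy illustration of the idea (not Cachee's actual engine), predictive pre-warming can be reduced to tracking access frequency and preloading the hottest keys:

```javascript
// Toy sketch of frequency-based pre-warming (not Cachee's proprietary engine).
// Tracks how often each key is requested; the top-N keys are candidates
// for loading into the cache ahead of demand.
class PrewarmTracker {
  constructor(topN = 3) {
    this.counts = new Map();
    this.topN = topN;
  }
  recordAccess(key) {
    this.counts.set(key, (this.counts.get(key) || 0) + 1);
  }
  hottestKeys() {
    return [...this.counts.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, this.topN)
      .map(([key]) => key);
  }
}

const tracker = new PrewarmTracker(2);
['user:1', 'user:2', 'user:1', 'user:3', 'user:1', 'user:2']
  .forEach((k) => tracker.recordAccess(k));
console.log(tracker.hottestKeys()); // logs the two hottest keys: user:1, user:2
```

The production engine goes further (sequence prediction, not just counting), but the output is the same: a ranked set of keys worth warming before they are requested.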

How do I integrate Cachee.ai with my existing application?

Integration is designed to be minimal and non-invasive. Here's a quick example for Node.js:

# Install the SDK
npm install @cachee/node

// Basic integration
const cachee = require('@cachee/node');

cachee.init({
  apiKey: 'your-api-key',
  region: 'us-west-2'
});

// Use intelligent caching
const result = await cachee.get('user:123', async () => {
  return await database.getUser(123);
}, { ttl: '1h', tags: ['user'] });

The SDK automatically handles cache misses, intelligent pre-warming, and performance optimization. No changes to your existing code structure required.
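For intuition, the read-through behavior of cachee.get (serve a cached value if fresh, otherwise run the loader once and cache its result) can be sketched in plain JavaScript. This is an illustrative model, not the SDK's implementation:

```javascript
// Sketch of the read-through pattern: serve from cache when fresh,
// otherwise run the loader and store the result with a TTL.
class ReadThroughCache {
  constructor() {
    this.store = new Map(); // key -> { value, expiresAt }
  }
  async get(key, loader, { ttlMs = 60_000 } = {}) {
    const entry = this.store.get(key);
    if (entry && entry.expiresAt > Date.now()) {
      return entry.value; // cache hit
    }
    const value = await loader(); // cache miss: fall through to the source
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  }
}

// The loader runs only once; the second call is served from cache.
async function demoReadThrough() {
  const cache = new ReadThroughCache();
  let loads = 0;
  const loadUser = async () => { loads += 1; return { id: 123 }; };
  await cache.get('user:123', loadUser, { ttlMs: 3_600_000 });
  await cache.get('user:123', loadUser, { ttlMs: 3_600_000 });
  return loads;
}
demoReadThrough().then((loads) => console.log('loader calls:', loads)); // 1
```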

What happens if I exceed my plan limits?

We handle plan overages gracefully to ensure uninterrupted service:

Soft Limits (80% usage):

  • Email notification with usage summary
  • Dashboard alerts and recommendations
  • Option to upgrade before hitting limits

Plan Limits Exceeded:

  • Automatic upgrade to next tier for that billing cycle
  • No service interruption or throttling
  • Prorated billing adjustment
  • Option to downgrade for next month

Enterprise customers have custom limit arrangements and dedicated support for capacity planning.

Can I change or cancel my plan anytime?

Yes! We offer complete flexibility with your subscription:

Plan Changes:

  • Upgrades: Take effect immediately with prorated billing
  • Downgrades: Take effect at next billing cycle
  • Pause Service: Available for up to 6 months

Cancellation:

  • Cancel anytime from your dashboard
  • Service continues until end of billing period
  • Data export available for 30 days after cancellation
  • No cancellation fees or penalties

Money-Back Guarantee:

Annual plans include a 30-day money-back guarantee with full refund if you're not satisfied.

How secure is my data with Cachee.ai?

Security is our top priority. We implement enterprise-grade security measures:

Encryption:

  • In Transit: TLS 1.3 encryption for all data transmission
  • At Rest: AES-256 encryption for all stored data
  • Key Management: Hardware security modules (HSM) with automatic key rotation

Compliance:

  • SOC 2 Type II: Annual third-party security audits
  • GDPR: Full compliance with EU data protection regulations
  • ISO 27001: Information security management certification

Access Controls:

  • Multi-factor authentication required
  • Role-based access controls (RBAC)
  • IP whitelisting and VPN support
  • Audit logs for all access and changes

Do you support on-premises deployment?

Yes! Enterprise customers can deploy Cachee.ai in their own infrastructure:

Deployment Options:

  • On-Premises: Full installation in your data centers
  • Private Cloud: Dedicated instances in your VPC
  • Hybrid: Mix of cloud and on-premises components

What's Included:

  • Complete Cachee.ai platform with all features
  • White-label deployment support
  • Custom security configurations
  • Dedicated support team
  • Training and ongoing maintenance

Contact our enterprise team to discuss deployment options and pricing.

What's included in Enterprise support?

Enterprise customers receive comprehensive support designed for mission-critical applications:

Support Channels:

  • 24/7 Phone Support: Direct line to our engineering team
  • Dedicated Slack Channel: Real-time communication
  • Customer Success Manager: Dedicated point of contact
  • Technical Account Manager: Strategic guidance and optimization

Response Times:

  • Critical Issues: < 1 hour response
  • High Priority: < 4 hours response
  • Standard Issues: < 24 hours response

Additional Services:

  • Custom integration development
  • Performance optimization consulting
  • Architecture review and recommendations
  • Priority feature development

How do I troubleshoot common integration issues?

Here are solutions to the most common integration issues:

Connection Issues:

// Check your API key and region
const cachee = require('@cachee/node');

// Enable debug mode
cachee.init({
  apiKey: 'your-api-key',
  region: 'us-west-2',
  debug: true  // Shows detailed logs
});

// Test connection
const isConnected = await cachee.ping();
console.log('Connection status:', isConnected);

Performance Issues:

  • High Latency: Choose a region closer to your users
  • Low Hit Rate: Increase TTL values or add more cache tags
  • Memory Usage: Implement cache size limits and LRU eviction

Common Error Codes:

  • 401 Unauthorized: Invalid or expired API key
  • 429 Rate Limited: Exceeded plan limits (upgrade needed)
  • 503 Service Unavailable: Regional outage (check status page)

Still having issues? Contact our support team with your debug logs.

How does pricing work for high-volume applications?

We offer flexible pricing models for high-volume enterprise applications:

Enterprise Pricing Models:

  • Reserved Capacity: Fixed monthly fee for guaranteed performance
  • Hybrid Model: Base fee + overage charges
  • Custom Contracts: Multi-year agreements with special terms

Contact our sales team for a custom quote based on your specific usage patterns and requirements.

Is Cachee compatible with Redis? Can I use my existing Redis client?

Yes. Cachee speaks the full Redis RESP protocol with 133+ commands. Any Redis client library works out of the box — just point it at your Cachee endpoint:

Supported Client Libraries:

  • Python: redis-py, aioredis
  • Node.js: ioredis, node-redis
  • Java: Jedis, Lettuce
  • Go: go-redis, redigo
  • .NET: StackExchange.Redis

Supported Data Structures:

  • Strings: GET, SET, MSET, INCR, APPEND, GETDEL, GETEX + 20 more
  • Hashes: HSET, HGET, HMGET, HLEN, HEXISTS, HINCRBY, HSCAN + more
  • Lists: LPUSH, RPUSH, LPOP, RPOP, LRANGE, LTRIM, BLPOP, BRPOP, LMOVE
  • Sets: SADD, SREM, SISMEMBER, SMEMBERS, SINTER, SUNION, SDIFF
  • Sorted Sets: ZADD, ZRANGE, ZRANGEBYSCORE, ZINCRBY, ZPOPMIN/MAX, ZSCAN
  • Streams: XADD, XREAD, XRANGE, XREADGROUP, XACK, XPENDING, XCLAIM
  • HyperLogLog: PFADD, PFCOUNT, PFMERGE
  • Bitmaps: SETBIT, GETBIT, BITCOUNT, BITPOS, BITOP, BITFIELD

Plus full support for MULTI/EXEC/WATCH transactions, Lua scripting (EVAL/EVALSHA), Pub/Sub with pattern subscriptions, and SCAN/KEYS/TYPE for key discovery.
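Under the hood, RESP is a simple wire format: each command is a length-prefixed array of bulk strings. A minimal encoder shows what your Redis client sends over the socket (illustration only; client libraries handle this for you):

```javascript
// Encode a Redis command as a RESP array of bulk strings.
// e.g. ['SET', 'k', 'v'] -> "*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$1\r\nv\r\n"
function encodeRESP(args) {
  let out = `*${args.length}\r\n`; // array header: element count
  for (const arg of args) {
    const s = String(arg);
    out += `$${Buffer.byteLength(s)}\r\n${s}\r\n`; // bulk string: byte length + payload
  }
  return out;
}

console.log(JSON.stringify(encodeRESP(['PING'])));
// "*1\r\n$4\r\nPING\r\n"
```

Because Cachee speaks this protocol natively, compatibility is at the wire level: no SDK shims or translation layers are involved.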

What eviction policies does Cachee support?

Cachee supports 8 eviction policies, all configurable at runtime via CONFIG SET maxmemory-policy:

  • tiny-cachee (default) — Our proprietary adaptive policy using a Count-Min Sketch for frequency estimation with Segmented LRU for recency. Delivers optimal hit rates without manual tuning.
  • allkeys-lru — Evict least recently used keys across all types
  • volatile-lru — Evict LRU keys that have an explicit TTL
  • allkeys-lfu — Evict least frequently used keys (best for hot/cold workloads)
  • volatile-lfu — Evict LFU keys with explicit TTLs
  • allkeys-random — Random eviction across all keys
  • volatile-random — Random eviction among keys with TTLs
  • noeviction — Return errors on writes when memory is full (for safety-critical data)
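For intuition on how tiny-cachee estimates frequency, here is a simplified Count-Min Sketch (a teaching sketch, not the production implementation): several hashed counter rows whose minimum gives an estimate that may overcount on collisions but never undercounts:

```javascript
// Simplified Count-Min Sketch: `depth` hash rows of `width` counters each.
// estimate() takes the minimum across rows, so it never undercounts.
class CountMinSketch {
  constructor(width = 64, depth = 4) {
    this.width = width;
    this.rows = Array.from({ length: depth }, () => new Uint32Array(width));
  }
  hash(key, seed) {
    let h = 2166136261 ^ seed; // FNV-1a variant, seeded per row
    for (const ch of key) {
      h = Math.imul(h ^ ch.charCodeAt(0), 16777619);
    }
    return (h >>> 0) % this.width;
  }
  increment(key) {
    this.rows.forEach((row, i) => { row[this.hash(key, i)] += 1; });
  }
  estimate(key) {
    return Math.min(...this.rows.map((row, i) => row[this.hash(key, i)]));
  }
}

const cms = new CountMinSketch();
for (let i = 0; i < 5; i++) cms.increment('user:123');
cms.increment('session:9');
console.log(cms.estimate('user:123') >= 5); // true (never undercounts)
```

The appeal for eviction is constant memory: frequency for millions of keys fits in a few kilobytes of counters, which is what lets tiny-cachee blend frequency with Segmented LRU recency cheaply.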

Memory limits are set with CONFIG SET maxmemory 512mb. Cachee enforces graduated watermarks: admission filters tighten at 80%, aggressive eviction begins at 90%, and only safety-critical writes are accepted at 95%.
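Those watermarks amount to a simple decision function. The sketch below uses the thresholds above; the state names are ours, not Cachee's:

```javascript
// Illustrative memory-pressure states using the 80% / 90% / 95% watermarks.
function admissionState(usedBytes, maxBytes) {
  const usage = usedBytes / maxBytes;
  if (usage >= 0.95) return 'critical-writes-only';  // only safety-critical writes
  if (usage >= 0.90) return 'aggressive-eviction';   // evict proactively
  if (usage >= 0.80) return 'tightened-admission';   // admission filters tighten
  return 'normal';
}

console.log(admissionState(400, 512)); // 'normal' (about 78% of maxmemory)
console.log(admissionState(470, 512)); // 'aggressive-eviction' (about 92%)
```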

Does Cachee support stream consumer groups and message queues?

Yes. Cachee provides full Redis Streams support for building reliable message queues and event-driven architectures:

Stream Operations:

  • XADD, XLEN, XRANGE, XREVRANGE, XREAD, XDEL, XTRIM

Consumer Groups:

  • XGROUP CREATE/DESTROY — Create and manage consumer groups
  • XREADGROUP — Read new messages as a consumer within a group
  • XACK — Acknowledge processed messages
  • XPENDING — Inspect unacknowledged messages (summary + detail)
  • XCLAIM — Transfer ownership of stuck messages to another consumer
  • XINFO STREAM/GROUPS — Introspect stream and group state

Consumer groups enable reliable at-least-once processing across multiple consumers with automatic pending entry tracking, idle detection, and ownership transfer for failed consumers.
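The pending-entry lifecycle those commands implement (deliver, acknowledge, claim) can be modeled in a few lines. This is a toy model for intuition, not Cachee's implementation:

```javascript
// Toy model of a consumer group's pending entries list (PEL):
// delivered-but-unacknowledged messages, claimable by other consumers.
class ConsumerGroup {
  constructor() {
    this.pending = new Map(); // messageId -> { consumer, deliveredAt }
  }
  // XREADGROUP: deliver a message to a consumer; it becomes pending.
  deliver(id, consumer) {
    this.pending.set(id, { consumer, deliveredAt: Date.now() });
  }
  // XACK: acknowledge and remove from the PEL.
  ack(id) {
    return this.pending.delete(id);
  }
  // XCLAIM: transfer ownership of a stuck message to another consumer.
  claim(id, newConsumer) {
    const entry = this.pending.get(id);
    if (entry) entry.consumer = newConsumer;
    return entry;
  }
  // XPENDING: list unacknowledged messages.
  pendingIds() {
    return [...this.pending.keys()];
  }
}

const group = new ConsumerGroup();
group.deliver('1-0', 'worker-a');
group.deliver('1-1', 'worker-a');
group.ack('1-0');                // worker-a finished 1-0
group.claim('1-1', 'worker-b');  // worker-a died; worker-b takes over
console.log(group.pendingIds()); // ['1-1']
```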

How fast is Cachee's L1 cache compared to Redis?

Cachee's L1 in-process cache delivers sub-microsecond reads, typically 100–500x faster than a Redis network round-trip and orders of magnitude faster when Redis is under load:

  • L1 cache hit: <1 microsecond (in-process DashMap, zero TCP overhead)
  • Typical Redis round-trip: 100–500 microseconds (TCP + serialization)
  • Redis under load: 1–50 milliseconds (connection contention, background saves)

The L2 tier (your upstream Redis or Cachee's managed service) handles cache misses transparently. Hot data is served at memory speed while cold data falls through to the network layer automatically. The adaptive admission filter learns your access patterns and keeps high-value keys in L1.
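The read path described above can be sketched as a two-tier lookup (illustrative JavaScript; the real L1 is a lock-free in-process map):

```javascript
// Two-tier read path: check in-process L1 first, fall through to L2
// (network) on a miss, then populate L1 so the next read is local.
class TieredCache {
  constructor(l2Fetch) {
    this.l1 = new Map();     // in-process: memory-speed reads
    this.l2Fetch = l2Fetch;  // upstream: network round-trip
    this.l2Calls = 0;
  }
  async get(key) {
    if (this.l1.has(key)) return this.l1.get(key); // L1 hit
    this.l2Calls += 1;
    const value = await this.l2Fetch(key);         // L1 miss: go to L2
    this.l1.set(key, value);                       // warm L1 for next time
    return value;
  }
}

async function demoTiered() {
  const cache = new TieredCache(async (key) => `value-of-${key}`);
  await cache.get('hot'); // L2 fetch
  await cache.get('hot'); // served from L1
  return cache.l2Calls;
}
demoTiered().then((n) => console.log('L2 round-trips:', n)); // 1
```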

Architecture:

  • L1 (in-process): Lock-free concurrent reads via DashMap, atomic operations, zero serialization
  • L2 (upstream): Connection pool with circuit breaker, automatic failover
  • Eviction: O(1) constant-time decisions regardless of cache size

What observability and debugging tools does Cachee provide?

Cachee includes comprehensive built-in observability with no external dependencies required:

Prometheus Metrics (/metrics endpoint):

  • Cache hit/miss rates and ratios
  • Command latency percentiles (p50, p95, p99)
  • Eviction counts by policy
  • Memory usage and key counts by data type
  • Connection pool utilization and circuit breaker state

Runtime Commands:

  • SLOWLOG GET — Commands exceeding a configurable latency threshold, with timestamps and full command details
  • CONFIG GET/SET — Runtime tuning without restarts (maxmemory, eviction policy, slowlog threshold)
  • CLIENT LIST — All active connections with age, idle time, and subscription counts
  • MEMORY USAGE key — Per-key memory consumption in bytes
  • OBJECT ENCODING key — Internal data representation for debugging
  • INFO — Server stats, replication status, keyspace summary
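As an example of consuming the /metrics endpoint, a hit ratio can be derived from hit and miss counters in the Prometheus text format. The metric names below are placeholders, not Cachee's documented names:

```javascript
// Parse Prometheus text exposition and compute a cache hit ratio.
// Metric names here are hypothetical examples.
function hitRatio(metricsText) {
  const value = (name) => {
    const m = metricsText.match(new RegExp(`^${name}\\s+([0-9.]+)$`, 'm'));
    return m ? Number(m[1]) : 0;
  };
  const hits = value('cache_hits_total');
  const misses = value('cache_misses_total');
  return hits + misses === 0 ? 0 : hits / (hits + misses);
}

const sample = [
  '# TYPE cache_hits_total counter',
  'cache_hits_total 950',
  'cache_misses_total 50',
].join('\n');
console.log(hitRatio(sample)); // 0.95
```

In practice you would point Prometheus at the endpoint and let it do the scraping; this only shows that the exposition format is plain text and trivial to consume ad hoc.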

Still Have Questions?

Our expert support team is here to help you get the most out of Cachee.ai

Live Chat

Start a conversation
Available 24/7 for Enterprise

Email Support

support@cachee.ai
Response within 24 hours

Documentation

View full docs
Guides, tutorials, and API reference

Schedule a Call

Book expert consultation
Free technical review