Get Cachee.ai running in your AWS stack in minutes. Zero infrastructure changes, instant results.
Create your account and receive your API key instantly. No credit card required for the free trial.
curl https://api.cachee.ai/v1/signup \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "email": "your@email.com",
    "company": "Your Company",
    "name": "Your Name"
  }'

# Response:
{
  "api_key": "sk_live_xxxxxxxxxxxxx",
  "dashboard_url": "https://dashboard.cachee.ai"
}
Instant Access: Your API key is active immediately. Access the dashboard to monitor performance in real-time.
Choose your language and install the Cachee.ai SDK with a single command.
pip install cachee-sdk
Initialize the SDK with your API key in just a few lines of code.
import os

from cachee_sdk import Cachee

# Initialize Cachee.ai
cachee = Cachee(
    api_key=os.getenv("CACHEE_API_KEY"),
    region="us-east-1"  # Auto-detects from AWS metadata
)

print("✅ Cachee.ai initialized!")
Environment Variables: Store your API key in environment variables for security. Never commit API keys to version control.
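One way to enforce that advice in application code is to fail fast when the variable is missing, instead of silently initializing with a None key. A small illustrative helper (not part of the SDK):

```python
import os

def require_api_key(var: str = "CACHEE_API_KEY") -> str:
    """Return the API key from the environment, or fail with a clear message."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it in your shell or inject it "
            "via your secrets manager. Never hard-code API keys."
        )
    return key
```

You can then pass `api_key=require_api_key()` to the constructor and catch misconfiguration at startup rather than on the first request.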
Wrap your existing functions with Cachee.ai decorators/annotations. The AI handles everything automatically.
import httpx

from cachee_sdk import cachee

# Note: `db` is assumed to be an async database client configured elsewhere.

# Example: Database query with intelligent caching
@cachee.cache(
    key="user:{user_id}",
    ttl_strategy="adaptive",   # AI adjusts TTL automatically
    prefetch_related=True      # Predicts and pre-fetches related data
)
async def get_user(user_id: int):
    """
    Cachee.ai automatically:
    - Predicts when to refresh (before expiry)
    - Adjusts TTL based on access patterns
    - Pre-fetches frequently accessed related data
    - Detects anomalies in cache behavior
    """
    result = await db.query(
        "SELECT * FROM users WHERE id = $1",
        user_id
    )
    return result

# Example: API call with fallback
@cachee.cache(
    key="external_api:{endpoint}",
    fallback_on_error=True,     # Serve stale data on API failure
    stale_while_revalidate=60   # Return cached, refresh in background
)
async def call_external_api(endpoint: str):
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.example.com/{endpoint}")
    return response.json()

# Example: Complex aggregation with warming
@cachee.cache(
    key="analytics:{user_id}:{date}",
    warm_cache=True,            # Pre-compute during off-peak hours
    invalidate_on=["user_updated", "order_placed"]
)
async def get_analytics_dashboard(user_id: int, date: str):
    # Expensive aggregation query
    return await db.query("""
        SELECT DATE(created_at) AS date,
               COUNT(*) AS orders,
               SUM(amount) AS revenue
        FROM orders
        WHERE user_id = $1
        GROUP BY DATE(created_at)
    """, user_id)
AI-Powered: Cachee.ai's AI automatically optimizes TTL, predicts cache misses, and pre-fetches related data. No manual tuning required!
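The adaptive TTL itself is a black box, but the intuition is easy to sketch: keys that are read often but rarely written earn a longer TTL, while volatile keys get a shorter one. Everything below (the function name, the volatility heuristic, the bounds) is an illustrative toy, not Cachee.ai's actual algorithm:

```python
def adaptive_ttl(base_ttl: float, reads_per_min: float, writes_per_min: float,
                 lo: float = 5.0, hi: float = 3600.0) -> float:
    """Toy adaptive-TTL heuristic: shrink TTL for volatile keys, clamp to [lo, hi]."""
    # Volatility: writes relative to reads; 0 = never changes, 1+ = churns constantly
    volatility = writes_per_min / max(reads_per_min, 1.0)
    # Shrink the TTL as volatility rises (capped so TTL never hits zero)
    ttl = base_ttl * (1.0 - min(volatility, 0.9))
    return max(lo, min(hi, ttl))
```

A real system would learn these rates per key from observed traffic; the point is only that TTL becomes a function of access patterns rather than a hard-coded constant.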
View real-time metrics in your dashboard and see immediate performance improvements.
Cache Hit Rate: Monitor your hit rate improving from ~60% to 94%+
Latency Reduction: See response times drop from 50ms to 5ms
Cost Savings: Track your 42% infrastructure cost reduction
AI Predictions: View prediction accuracy and optimization suggestions
Alerts: Configure Slack/email alerts for anomalies
cachee-cli show-metrics --period 30d

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Performance Metrics (Last 30 Days)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Cache Hit Rate:  94.3% ↑ (from 62%)
Avg Latency:     6ms   ↓ (from 48ms)
P99 Latency:     12ms  ↓ (from 125ms)

💰 Cost Impact:
   Monthly Savings: $4,200 (42% reduction)
   ROI: 1,850%
   Uptime: 99.97%

🤖 AI Performance:
   Predictions: 98.7% accurate
   Cache Warming: 1.2M pre-fetches
   Auto-optimizations: 847 TTL adjustments
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AWS Marketplace Deployment: Deploy Cachee.ai directly into your VPC with one-click CloudFormation. Full control, private networking, runs on your AWS account.
Find Cachee.ai on AWS Marketplace and click "Continue to Subscribe".
https://aws.amazon.com/marketplace/pp/cachee-ai-enterprise-caching
1. Search "Cachee.ai" on AWS Marketplace
2. Click "Continue to Subscribe"
3. Accept Terms & Conditions
4. Click "Continue to Configuration"
Select your region, version, and deployment options.
Region: us-east-1 (or your preferred region)
Version: Latest (1.0.0)
Fulfillment Option: CloudFormation Template

VPC Configuration:
  VPC ID: vpc-xxxxx (your VPC)
  Subnets: subnet-xxxxx, subnet-yyyyy (private)

Cache Configuration:
  Cache Size: 100 GB
  Instance Type: cache.r6g.xlarge
  Multi-AZ: Enabled
  Encryption: AES-256 (at rest & in transit)

AI Features:
  Predictive Refresh: ✓ Enabled
  Adaptive TTL: ✓ Enabled
  Anomaly Detection: ✓ Enabled
  Auto-scaling: ✓ Enabled
CloudFormation automatically provisions all resources in your VPC.
Deployment Time: CloudFormation stack takes approximately 10-15 minutes to complete. You'll receive an email when ready.
Once deployed, retrieve your internal endpoint from CloudFormation outputs.
aws cloudformation describe-stacks \
  --stack-name cachee-production \
  --query 'Stacks[0].Outputs[?OutputKey==`CacheeEndpoint`].OutputValue' \
  --output text

# Output:
# https://cachee-internal.us-east-1.elb.amazonaws.com
Point your application to the private Cachee.ai endpoint.
from cachee_sdk import Cachee

cachee = Cachee(
    endpoint="https://cachee-internal.us-east-1.elb.amazonaws.com",
    region="us-east-1",
    # Use IAM authentication (no API key needed)
    auth_mode="iam",
    # Optional: Custom retry and timeout settings
    retry_policy={"max_retries": 3, "backoff": "exponential"},
    timeout=5000  # milliseconds
)

# Use as normal
@cachee.cache(key="user:{user_id}")
def get_user(user_id):
    return db.query("SELECT * FROM users WHERE id = ?", user_id)
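The `"backoff": "exponential"` retry setting follows the standard exponential-backoff-with-jitter pattern: each retry waits roughly twice as long as the previous one, randomized to avoid thundering herds. A toy sketch of that pattern (illustrative only; `backoff_delays` and its defaults are not part of the SDK):

```python
import random

def backoff_delays(max_retries: int = 3, base: float = 0.1, cap: float = 2.0):
    """Yield exponentially growing, jittered delays in seconds: ~0.1s, 0.2s, 0.4s, ..."""
    for attempt in range(max_retries):
        # Double the delay each attempt, cap it, then apply 50-100% jitter
        yield min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)
```

In practice the SDK sleeps for each delay between failed attempts; the cap keeps worst-case latency bounded even after several retries.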
Self-Hosted Deployment: Full control over infrastructure. Deploy Cachee.ai agents as Docker containers, ECS tasks, or Kubernetes pods in your environment.
Get the production-ready CloudFormation template for self-hosted deployment.
curl -O https://downloads.cachee.ai/cloudformation/cachee-self-hosted.yaml
# Or via AWS CLI
aws s3 cp s3://cachee-templates/cachee-self-hosted.yaml .
Deploy the stack with your custom parameters.
aws cloudformation deploy \
  --template-file cachee-self-hosted.yaml \
  --stack-name cachee-production \
  --parameter-overrides \
    Environment=production \
    VpcId=vpc-xxxxx \
    PrivateSubnetIds=subnet-xxxxx,subnet-yyyyy \
    CacheSizeGB=100 \
    RedisInstanceType=cache.r6g.xlarge \
    EnableMultiAZ=true \
    EnableEncryption=true \
  --capabilities CAPABILITY_IAM

# Monitor deployment
aws cloudformation wait stack-create-complete \
  --stack-name cachee-production

echo "✅ Cachee.ai deployed successfully!"
Run Cachee.ai as Docker containers for maximum flexibility.
# Pull the latest Cachee.ai agent image
docker pull cachee.ai/agent:latest

# Run with environment variables
docker run -d \
  --name cachee-agent \
  --network host \
  -e CACHEE_LICENSE_KEY=your-license-key \
  -e REDIS_URL=redis://localhost:6379 \
  -e AWS_REGION=us-east-1 \
  -e ENABLE_AI_FEATURES=true \
  cachee.ai/agent:latest

# Check logs
docker logs -f cachee-agent
Use Infrastructure as Code for repeatable deployments.
module "cachee" {
  source  = "cachee-ai/cachee/aws"
  version = "1.0.0"

  environment = "production"

  # Network Configuration
  vpc_id             = aws_vpc.main.id
  private_subnet_ids = aws_subnet.private[*].id

  # Cache Configuration
  cache_size_gb       = 100
  redis_instance_type = "cache.r6g.xlarge"
  enable_multi_az     = true

  # AI Features
  ai_features = {
    predictive_refresh = true
    adaptive_ttl       = true
    anomaly_detection  = true
  }

  # Security
  encryption_at_rest    = true
  encryption_in_transit = true

  tags = {
    Project = "Cachee.ai"
    Team    = "Platform"
  }
}

output "cachee_endpoint" {
  value = module.cachee.endpoint_url
}
terraform init
terraform plan
terraform apply
# Output:
# cachee_endpoint = "https://cachee-prod.internal.company.com"
Test connectivity and verify all components are running.
# Check Cachee.ai agent health
curl https://cachee-prod.internal.company.com/health
# Expected response:
{
  "status": "healthy",
  "version": "1.0.0",
  "components": {
    "redis": "connected",
    "ai_engine": "ready",
    "metrics": "collecting"
  },
  "uptime_seconds": 3600
}
# Test cache operation
cachee-cli test-connection --endpoint https://cachee-prod.internal.company.com
# Output:
✅ Connection successful
✅ Redis cluster healthy
✅ AI features enabled
✅ Ready to receive traffic
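If you script this verification step (for example in a CI pipeline), the /health payload shown above can be checked programmatically. A minimal sketch using only the standard library; the response shape is taken from the example above, and `parse_health`/`check_endpoint` are illustrative helpers, not shipped tooling:

```python
import json
import urllib.request

def parse_health(payload: dict) -> bool:
    """Return True when the agent reports healthy and every component is ready."""
    components_ok = all(
        state in ("connected", "ready", "collecting")
        for state in payload.get("components", {}).values()
    )
    return payload.get("status") == "healthy" and components_ok

def check_endpoint(endpoint: str) -> bool:
    # Requires network access to your private endpoint (VPC/VPN)
    with urllib.request.urlopen(f"{endpoint}/health", timeout=5) as resp:
        return parse_health(json.load(resp))
```

Wiring `check_endpoint` into a deploy script lets you gate traffic cutover on a healthy agent instead of checking manually.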
Join 500+ enterprises using Cachee.ai to reduce costs and improve performance.