5-Minute Integration

Quick Start Guide

Get Cachee.ai running in your AWS stack in minutes. Zero infrastructure changes, instant results.

1

Sign Up & Get Your API Key

Create your account and receive your API key instantly. No credit card required for the free trial.

cURL
curl https://api.cachee.ai/v1/signup \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "email": "your@email.com",
    "company": "Your Company",
    "name": "Your Name"
  }'

# Response:
{
  "api_key": "sk_live_xxxxxxxxxxxxx",
  "dashboard_url": "https://dashboard.cachee.ai"
}

Instant Access: Your API key is active immediately. Access the dashboard to monitor performance in real-time.

2

Install the SDK

Choose your language and install the Cachee.ai SDK with a single command.

Python
pip install cachee-sdk
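
To confirm the install, a quick import check. The __version__ attribute is an assumption, so the snippet falls back to a plain import if the package doesn't expose it.

Python
import cachee_sdk

# A clean import (and ideally a version number) confirms the SDK is installed.
print(getattr(cachee_sdk, "__version__", "installed"))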
3

Initialize Cachee.ai

Add 2 lines of code to initialize the SDK with your API key.

Python
from cachee_sdk import Cachee
import os

# Initialize Cachee.ai
cachee = Cachee(
    api_key=os.getenv("CACHEE_API_KEY"),
    region="us-east-1"  # Auto-detects from AWS metadata
)

print("✅ Cachee.ai initialized!")

Environment Variables: Store your API key in environment variables for security. Never commit API keys to version control.
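
For example, a minimal fail-fast check at application startup, using the CACHEE_API_KEY variable name from the snippet above:

Python
import os

# Refuse to start without the key rather than sending unauthenticated requests later.
api_key = os.environ.get("CACHEE_API_KEY")
if not api_key:
    raise RuntimeError("CACHEE_API_KEY is not set; export it in your shell, .env, or task definition")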

4

Add Intelligent Caching

Wrap your existing functions with Cachee.ai decorators/annotations. The AI handles everything automatically.

Python Example
import httpx  # used by the external API example below

# Reuse the `cachee` client initialized in Step 3.
# Example: Database query with intelligent caching
@cachee.cache(
    key="user:{user_id}",
    ttl_strategy="adaptive",     # AI adjusts TTL automatically
    prefetch_related=True         # Predicts and pre-fetches related data
)
async def get_user(user_id: int):
    """
    Cachee.ai automatically:
    - Predicts when to refresh (before expiry)
    - Adjusts TTL based on access patterns
    - Pre-fetches frequently accessed related data
    - Detects anomalies in cache behavior
    """
    result = await db.query(
        "SELECT * FROM users WHERE id = $1",
        user_id
    )
    return result

# Example: API call with fallback
@cachee.cache(
    key="external_api:{endpoint}",
    fallback_on_error=True,       # Serve stale on API failure
    stale_while_revalidate=60     # Return cached, refresh in background
)
async def call_external_api(endpoint: str):
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.example.com/{endpoint}")
    return response.json()

# Example: Complex aggregation with warming
@cachee.cache(
    key="analytics:{user_id}:{date}",
    warm_cache=True,              # Pre-compute during off-peak hours
    invalidate_on=["user_updated", "order_placed"]
)
async def get_analytics_dashboard(user_id: int, date: str):
    # Expensive aggregation query
    return await db.query("""
        SELECT DATE(created_at) as date,
               COUNT(*) as orders,
               SUM(amount) as revenue
        FROM orders
        WHERE user_id = $1
          AND DATE(created_at) = $2
        GROUP BY DATE(created_at)
    """, user_id, date)

AI-Powered: Cachee.ai's AI automatically optimizes TTL, predicts cache misses, and pre-fetches related data. No manual tuning required!
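
The invalidate_on hook above implies an event published from your write path. Here is a hypothetical sketch of what that might look like; the cachee.emit name and signature are assumptions, not confirmed SDK API, so check the SDK reference for the actual call.

Python
# Hypothetical: publish one of the events listed in invalidate_on so the
# analytics:{user_id}:{date} entries for this user get refreshed.
async def place_order(user_id: int, amount: float):
    order = await db.query(
        "INSERT INTO orders (user_id, amount) VALUES ($1, $2) RETURNING *",
        user_id, amount,
    )
    # Assumed API: cachee.emit(event_name, payload)
    await cachee.emit("order_placed", {"user_id": user_id})
    return order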

5

Monitor & Optimize

View real-time metrics in your dashboard and see immediate performance improvements.

Cache Hit Rate: Monitor your hit rate improving from ~60% to 94%+

Latency Reduction: See response times drop from 50ms to 5ms

Cost Savings: Track your 42% infrastructure cost reduction

AI Predictions: View prediction accuracy and optimization suggestions

Alerts: Configure Slack/email alerts for anomalies

CLI Metrics
cachee-cli show-metrics --period 30d

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Performance Metrics (Last 30 Days)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Cache Hit Rate:     94.3% ↑ (from 62%)
Avg Latency:        6ms ↓ (from 48ms)
P99 Latency:        12ms ↓ (from 125ms)

💰 Cost Impact:
Monthly Savings:    $4,200 (42% reduction)
ROI:                1,850%
Uptime:             99.97%

🤖 AI Performance:
Predictions:        98.7% accurate
Cache Warming:      1.2M pre-fetches
Auto-optimizations: 847 TTL adjustments

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Architecture: Managed SaaS

Your Application (AWS Lambda/ECS) → Cachee.ai SDK (2 lines of code) → HTTPS over TLS 1.3 → Cachee.ai API Gateway → AI Engine (SageMaker) for predictions + ElastiCache Redis for cached data → Your Database/API (on cache miss)

✓ Zero infrastructure changes   ✓ Automatic scaling   ✓ 99.9% uptime SLA   ✓ Managed by Cachee.ai

AWS Marketplace Deployment: Deploy Cachee.ai directly into your VPC with one-click CloudFormation. Full control, private networking, runs on your AWS account.

1

Subscribe on AWS Marketplace

Find Cachee.ai on AWS Marketplace and click "Continue to Subscribe".

Direct Link
https://aws.amazon.com/marketplace/pp/cachee-ai-enterprise-caching
1. Search "Cachee.ai" on AWS Marketplace
2. Click "Continue to Subscribe"
3. Accept Terms & Conditions
4. Click "Continue to Configuration"

2

Configure Deployment

Select your region, version, and deployment options.

Configuration Options
Region:              us-east-1 (or your preferred region)
Version:             Latest (1.0.0)
Fulfillment Option:  CloudFormation Template

VPC Configuration:
  VPC ID:            vpc-xxxxx (your VPC)
  Subnets:           subnet-xxxxx, subnet-yyyyy (private)

Cache Configuration:
  Cache Size:        100 GB
  Instance Type:     cache.r6g.xlarge
  Multi-AZ:          Enabled
  Encryption:        AES-256 (at rest & in transit)

AI Features:
  Predictive Refresh:    ✓ Enabled
  Adaptive TTL:          ✓ Enabled
  Anomaly Detection:     ✓ Enabled
  Auto-scaling:          ✓ Enabled
3

Launch with CloudFormation

CloudFormation automatically provisions all resources in your VPC.

Resources Created:

  • ElastiCache Redis Cluster (Multi-AZ)
  • ECS Cluster with Cachee.ai Agents
  • Application Load Balancer (internal)
  • Security Groups & IAM Roles
  • CloudWatch Dashboards & Alarms
  • VPC Endpoints for private connectivity

Deployment Time: CloudFormation stack takes approximately 10-15 minutes to complete. You'll receive an email when ready.
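
If you prefer to script the wait instead of watching the console, here is a small boto3 sketch; the stack name is whatever you chose at launch (cachee-production matches the later examples).

Python
import boto3

# Blocks until the CloudFormation stack finishes creating (raises if it fails or times out).
cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.get_waiter("stack_create_complete").wait(StackName="cachee-production")
print("Cachee.ai stack is ready")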

4

Get Your Private Endpoint

Once deployed, retrieve your internal endpoint from CloudFormation outputs.

AWS CLI
aws cloudformation describe-stacks \
  --stack-name cachee-production \
  --query 'Stacks[0].Outputs[?OutputKey==`CacheeEndpoint`].OutputValue' \
  --output text

# Output:
# https://cachee-internal.us-east-1.elb.amazonaws.com
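
The same lookup from Python with boto3, in case you want to wire the endpoint into application config at deploy time (stack name and output key as above):

Python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
stack = cfn.describe_stacks(StackName="cachee-production")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}
print(outputs["CacheeEndpoint"])  # e.g. https://cachee-internal.us-east-1.elb.amazonaws.com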
5

Configure Your Application

Point your application to the private Cachee.ai endpoint.

Python Configuration
from cachee_sdk import Cachee

cachee = Cachee(
    endpoint="https://cachee-internal.us-east-1.elb.amazonaws.com",
    region="us-east-1",

    # Use IAM authentication (no API key needed)
    auth_mode="iam",

    # Optional: Custom retry and timeout settings
    retry_policy={"max_retries": 3, "backoff": "exponential"},
    timeout=5000  # milliseconds
)

# Use as normal
@cachee.cache(key="user:{user_id}")
def get_user(user_id):
    return db.query("SELECT * FROM users WHERE id = ?", user_id)

Architecture: AWS Marketplace (VPC Deployment)

Inside your VPC (10.0.0.0/16): Your Application (ECS/Lambda) → Internal ALB → Cachee.ai Agent (ECS Tasks) → ElastiCache Redis (Multi-AZ), all over the private network (no internet). The AI module sends metrics through a VPC Endpoint to the Cachee.ai Control Plane, which sends predictions and optimizations back.

✓ Runs in your VPC   ✓ Private networking   ✓ IAM authentication   ✓ Full data control

Self-Hosted Deployment: Full control over infrastructure. Deploy Cachee.ai agents as Docker containers, ECS tasks, or Kubernetes pods in your environment.

1

Download CloudFormation Template

Get the production-ready CloudFormation template for self-hosted deployment.

Download
curl -O https://downloads.cachee.ai/cloudformation/cachee-self-hosted.yaml

# Or via AWS CLI
aws s3 cp s3://cachee-templates/cachee-self-hosted.yaml .
2

Deploy with CloudFormation

Deploy the stack with your custom parameters.

AWS CLI
aws cloudformation deploy \
  --template-file cachee-self-hosted.yaml \
  --stack-name cachee-production \
  --parameter-overrides \
    Environment=production \
    VpcId=vpc-xxxxx \
    PrivateSubnetIds=subnet-xxxxx,subnet-yyyyy \
    CacheSizeGB=100 \
    RedisInstanceType=cache.r6g.xlarge \
    EnableMultiAZ=true \
    EnableEncryption=true \
  --capabilities CAPABILITY_IAM

# Optional: wait explicitly (deploy already blocks until the stack is complete)
aws cloudformation wait stack-create-complete \
  --stack-name cachee-production

echo "✅ Cachee.ai deployed successfully!"
3

Alternative: Docker Deployment

Run Cachee.ai as Docker containers for maximum flexibility.

Docker
# Pull the latest Cachee.ai agent image
docker pull cachee.ai/agent:latest

# Run with environment variables
docker run -d \
  --name cachee-agent \
  --network host \
  -e CACHEE_LICENSE_KEY=your-license-key \
  -e REDIS_URL=redis://localhost:6379 \
  -e AWS_REGION=us-east-1 \
  -e ENABLE_AI_FEATURES=true \
  cachee.ai/agent:latest

# Check logs
docker logs -f cachee-agent
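
To point the SDK at the locally running agent, here is a minimal sketch along the lines of the VPC configuration in the previous section. The localhost port is an assumption; use whichever port the agent image actually exposes.

Python
from cachee_sdk import Cachee

# Assumes the agent listens on port 8080 on the host network (check the image docs);
# authentication against a self-hosted agent may differ from the SaaS api_key flow.
cachee = Cachee(
    endpoint="http://localhost:8080",
    region="us-east-1",
)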
4

Alternative: Terraform Deployment

Use Infrastructure as Code for repeatable deployments.

Terraform
module "cachee" {
  source = "cachee-ai/cachee/aws"
  version = "1.0.0"

  environment = "production"

  # Network Configuration
  vpc_id              = aws_vpc.main.id
  private_subnet_ids  = aws_subnet.private[*].id

  # Cache Configuration
  cache_size_gb       = 100
  redis_instance_type = "cache.r6g.xlarge"
  enable_multi_az     = true

  # AI Features
  ai_features = {
    predictive_refresh  = true
    adaptive_ttl        = true
    anomaly_detection   = true
  }

  # Security
  encryption_at_rest  = true
  encryption_in_transit = true

  tags = {
    Project = "Cachee.ai"
    Team    = "Platform"
  }
}

output "cachee_endpoint" {
  value = module.cachee.endpoint_url
}
Deploy
terraform init
terraform plan
terraform apply

# Output:
# cachee_endpoint = "https://cachee-prod.internal.company.com"
5

Verify Installation

Test connectivity and verify all components are running.

Health Check
# Check Cachee.ai agent health
curl https://cachee-prod.internal.company.com/health

# Expected response:
{
  "status": "healthy",
  "version": "1.0.0",
  "components": {
    "redis": "connected",
    "ai_engine": "ready",
    "metrics": "collecting"
  },
  "uptime_seconds": 3600
}

# Test cache operation
cachee-cli test-connection --endpoint https://cachee-prod.internal.company.com

# Output:
✅ Connection successful
✅ Redis cluster healthy
✅ AI features enabled
✅ Ready to receive traffic
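
As a final sanity check from application code, you can exercise the SDK end to end with a trivial cached function. This is a minimal sketch reusing the decorator pattern from the quick-start examples; the endpoint and auth settings mirror the configuration shown earlier, and the timing print is just a rough way to confirm the second call is served from cache.

Python
import asyncio
import time

from cachee_sdk import Cachee

cachee = Cachee(
    endpoint="https://cachee-prod.internal.company.com",  # your endpoint from the previous steps
    region="us-east-1",
    auth_mode="iam",
)

@cachee.cache(key="smoke_test:{value}")
async def echo(value: str):
    await asyncio.sleep(0.2)  # stand-in for a slow backend call
    return value

async def main():
    await echo("hello")                      # first call: cache miss (~200 ms)
    start = time.perf_counter()
    await echo("hello")                      # second call: should hit the cache
    print(f"cached call took {(time.perf_counter() - start) * 1000:.1f} ms")

asyncio.run(main())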

Ready to Get Started?

Join 500+ enterprises using Cachee.ai to reduce costs and improve performance.