
E-commerce Shopping Cart Caching Strategies

December 22, 2025 • 7 min read • E-commerce

Shopping carts are accessed constantly—add item, view cart, update quantity, check out. Every interaction must be instant. Slow carts kill conversions. Here's how to build a caching strategy that keeps carts fast and reliable.

The Cart Caching Challenge

Carts are tricky to cache because they change constantly, must never lose items when a node fails or a session ends, have to stay consistent across devices, and need to reflect current inventory and prices rather than stale snapshots.

The solution is a hybrid approach: cache for speed, database for durability.

Cart Data Structure

Use Redis hashes for efficient cart operations:

// Cart structure in Redis
// Key: cart:{userId}
// Hash fields: product IDs
// Hash values: JSON with quantity, price snapshot, metadata

await redis.hset(`cart:${userId}`, productId, JSON.stringify({
    quantity: 2,
    priceAtAdd: 29.99,
    addedAt: Date.now(),
    variant: 'blue-xl'
}));

// Get entire cart in one operation
const cart = await redis.hgetall(`cart:${userId}`);

// Get cart item count
const itemCount = await redis.hlen(`cart:${userId}`);

Why hashes? Update one item without touching others. Get the entire cart in one round trip. Perfect for cart operations.
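
The remaining cart operations map onto the same hash commands. A minimal sketch of quantity updates and item removal, assuming the same client and key layout as above (updateQuantity and removeFromCart are illustrative helpers, not an existing API):

// Update one item's quantity without rewriting the rest of the cart
async function updateQuantity(userId, productId, quantity) {
    const raw = await redis.hget(`cart:${userId}`, productId);
    if (!raw) return null; // item not in the cart

    const item = JSON.parse(raw);
    item.quantity = quantity;
    await redis.hset(`cart:${userId}`, productId, JSON.stringify(item));
    return item;
}

// Remove a single item; HDEL leaves the other hash fields untouched
async function removeFromCart(userId, productId) {
    await redis.hdel(`cart:${userId}`, productId);
}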

Inventory Integration

Validate inventory on every cart view, but cache the result briefly:

async function getCartWithInventory(userId) {
    const cart = await redis.hgetall(`cart:${userId}`);

    // Check inventory (with short cache)
    const items = await Promise.all(
        Object.entries(cart).map(async ([productId, data]) => {
            const item = JSON.parse(data);

            // Get inventory (cached for 30 seconds)
            const inventory = await getInventoryCached(productId);

            return {
                productId,
                ...item,
                inStock: inventory.available >= item.quantity,
                currentPrice: inventory.price,
                priceChanged: inventory.price !== item.priceAtAdd
            };
        })
    );

    return items;
}

async function getInventoryCached(productId) {
    const cacheKey = `inventory:${productId}`;
    const cached = await redis.get(cacheKey);

    if (cached) {
        return JSON.parse(cached);
    }

    // Cache miss: read from the database and cache the row for 30 seconds
    const result = await db.query(
        'SELECT available, price FROM products WHERE id = $1',
        [productId]
    );
    const inventory = result.rows[0];

    await redis.set(cacheKey, JSON.stringify(inventory), 'EX', 30);

    return inventory;
}

Multi-Device Cart Sync

When users log in, merge the anonymous cart with their saved cart:

async function mergeCartsOnLogin(userId, anonymousCartId) {
    const anonCart = await redis.hgetall(`cart:anon:${anonymousCartId}`);
    const userCart = await redis.hgetall(`cart:${userId}`);

    // Merge: user cart items take priority, add new items from anon
    for (const [productId, data] of Object.entries(anonCart)) {
        if (!userCart[productId]) {
            await redis.hset(`cart:${userId}`, productId, data);
        }
    }

    // Delete anonymous cart
    await redis.del(`cart:anon:${anonymousCartId}`);

    // Trigger cart merge event for analytics
    await events.emit('cart:merged', { userId, itemsAdded: Object.keys(anonCart).length });
}
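
Whether the saved quantity wins or the two quantities add up is a product decision. If you want quantities to combine instead, the merge loop changes slightly; a hedged variation on the function above:

// Variant: sum quantities when the same product appears in both carts
for (const [productId, data] of Object.entries(anonCart)) {
    if (!userCart[productId]) {
        await redis.hset(`cart:${userId}`, productId, data);
        continue;
    }

    const existing = JSON.parse(userCart[productId]);
    const incoming = JSON.parse(data);
    existing.quantity += incoming.quantity;
    await redis.hset(`cart:${userId}`, productId, JSON.stringify(existing));
}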

Price Update Handling

During sales events, prices change, but the cart should always show current prices:

async function recalculateCartPrices(userId) {
    const cart = await redis.hgetall(`cart:${userId}`);
    const productIds = Object.keys(cart);

    // Batch fetch current prices
    const prices = await db.query(
        'SELECT id, price, sale_price FROM products WHERE id = ANY($1)',
        [productIds]
    );

    // Normalize ids to strings so they match the hash field keys returned by Redis
    const priceMap = new Map(prices.rows.map(p => [String(p.id), p.sale_price ?? p.price]));

    // Calculate totals
    let subtotal = 0;
    const items = [];

    for (const [productId, data] of Object.entries(cart)) {
        const item = JSON.parse(data);
        const currentPrice = priceMap.get(productId);

        items.push({
            productId,
            quantity: item.quantity,
            originalPrice: item.priceAtAdd,
            currentPrice,
            savings: (item.priceAtAdd - currentPrice) * item.quantity
        });

        subtotal += currentPrice * item.quantity;
    }

    return { items, subtotal, itemCount: items.length };
}

Cart Persistence Strategy

Balance speed with durability:

// Write to cache immediately, database async
async function addToCart(userId, productId, quantity) {
    const item = {
        quantity,
        priceAtAdd: await getCurrentPrice(productId),
        addedAt: Date.now()
    };

    // Immediate: Update cache
    await redis.hset(`cart:${userId}`, productId, JSON.stringify(item));

    // Background: persist to the database (fire-and-forget, so catch and log failures)
    setImmediate(() => {
        db.query(`
            INSERT INTO cart_items (user_id, product_id, quantity, added_at)
            VALUES ($1, $2, $3, NOW())
            ON CONFLICT (user_id, product_id)
            DO UPDATE SET quantity = $3
        `, [userId, productId, quantity]).catch((err) => {
            console.error('Cart persistence failed', { userId, productId, err });
        });
    });

    // Set cart expiration (7 days)
    await redis.expire(`cart:${userId}`, 7 * 24 * 60 * 60);
}
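
The database copy also covers the case where the Redis key has expired or the cache node was replaced. A minimal read-repair sketch, assuming the cart_items table above; because that table doesn't persist the price snapshot or variant, this version re-reads the current price with the getCurrentPrice helper used earlier:

// Rebuild the cart from the database when the cached copy is gone
async function getCart(userId) {
    let cart = await redis.hgetall(`cart:${userId}`);
    if (Object.keys(cart).length > 0) return cart;

    // Cache miss: fall back to the durable copy
    const result = await db.query(
        'SELECT product_id, quantity, added_at FROM cart_items WHERE user_id = $1',
        [userId]
    );

    cart = {};
    for (const row of result.rows) {
        const item = JSON.stringify({
            quantity: row.quantity,
            priceAtAdd: await getCurrentPrice(row.product_id), // snapshot not persisted in this sketch
            addedAt: row.added_at
        });
        cart[row.product_id] = item;
        await redis.hset(`cart:${userId}`, row.product_id, item);
    }

    if (result.rows.length > 0) {
        await redis.expire(`cart:${userId}`, 7 * 24 * 60 * 60);
    }

    return cart;
}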

Abandoned Cart Recovery

Track cart activity for recovery campaigns:

// On each cart interaction
async function trackCartActivity(userId) {
    await redis.zadd('cart:activity', Date.now(), userId);
}

// Find abandoned carts (no activity for 1 hour, not checked out)
async function findAbandonedCarts() {
    const oneHourAgo = Date.now() - 60 * 60 * 1000;

    const abandonedUserIds = await redis.zrangebyscore(
        'cart:activity',
        0,
        oneHourAgo
    );

    // Array.filter ignores async predicates (every Promise is truthy),
    // so resolve the cart sizes first and filter on the results
    const cartSizes = await Promise.all(
        abandonedUserIds.map((userId) => redis.hlen(`cart:${userId}`))
    );

    return abandonedUserIds.filter((_, i) => cartSizes[i] > 0);
}
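
The "not checked out" part needs one more hook: when an order completes, clear the user's activity entry and cart so finished purchases never show up as abandoned. A small sketch (onCheckoutComplete is an illustrative name):

// On successful checkout, drop the user from the activity set and clear the cart
async function onCheckoutComplete(userId) {
    await redis.zrem('cart:activity', userId);
    await redis.del(`cart:${userId}`);
}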

Faster carts, higher conversions

Cachee.ai powers sub-10ms cart operations for high-traffic e-commerce stores.


Related Reading

The Numbers That Matter

Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.

The compounding effect matters more than any single number. A 28-nanosecond L0 hit means your application spends almost zero time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.

When Caching Actually Helps

Caching isn't free. It introduces a consistency problem you didn't have before. Before adding any cache layer, the question to answer is whether your workload actually benefits from caching at all.

Caching helps when three conditions hold simultaneously. First, your reads dramatically outnumber your writes — typically a 10:1 ratio or higher. Second, the same keys get read repeatedly within a window where a cached value remains valid. Third, the cost of computing or fetching the underlying value is meaningfully higher than the cost of a cache lookup. Database queries that hit secondary indexes, RPC calls to slow upstream services, expensive computed aggregations, and rendered template fragments all qualify.

Caching hurts when those conditions don't hold. Write-heavy workloads suffer because every write invalidates a cache entry, multiplying your work. Workloads with poor key locality suffer because the cache wastes memory storing entries that never get reused. Workloads where the underlying fetch is already fast — well-indexed primary key lookups against a properly tuned database, for example — gain almost nothing from caching and inherit the consistency complexity for no reason.

The honest first step before any cache deployment is measuring your actual read/write ratio, key access distribution, and underlying fetch latency. If your read/write ratio is below 5:1 or your underlying database is already returning results in single-digit milliseconds, the engineering time is better spent elsewhere.
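
One low-effort way to get those numbers is to count reads and writes per key prefix for a day before committing to a cache. A rough sketch using Redis hash counters; the trackAccess helper and the access_stats key are illustrative, not an existing API:

// Call from the data access layer for a day, then compare the counters
async function trackAccess(keyPrefix, isWrite) {
    await redis.hincrby(`access_stats:${keyPrefix}`, isWrite ? 'writes' : 'reads', 1);
}

async function readWriteRatio(keyPrefix) {
    const stats = await redis.hgetall(`access_stats:${keyPrefix}`);
    const reads = Number(stats.reads || 0);
    const writes = Number(stats.writes || 0);
    return writes === 0 ? Infinity : reads / writes;
}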

Memory Efficiency Is The Hidden Cost Lever

Throughput numbers get the headlines but memory efficiency determines your monthly bill. A cache that stores the same hot data in less RAM lets you run a smaller instance class — and on AWS that's the difference between profitable and breakeven for a lot of services.

Redis stores each key as a Simple Dynamic String with 16 bytes of header overhead, plus dictEntry pointers in the main hashtable, plus TTL metadata. For 1KB values, the total per-entry footprint lands around 1100-1200 bytes once you account for hashtable load factor and allocator fragmentation. At a million keys, that's roughly 1.2 GB of resident memory for the dataset.

Cachee's L1 layer uses sharded DashMap entries with compact packing — a 64-bit key hash, value bytes, an 8-byte expiry timestamp, and a small frequency counter for the CacheeLFU admission filter. Per-entry overhead lands at roughly 40 bytes of structural data on top of the value itself. For the same million-key workload, that's about 13% smaller resident memory. On AWS ElastiCache pricing, that gap is the difference between needing a cache.r7g.large versus a cache.r7g.xlarge for borderline workloads.
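
A back-of-the-envelope version of that sizing, treating the per-entry figures quoted above as assumptions:

// Rough resident-memory estimate for N cached entries
function residentGB(keyCount, perEntryBytes) {
    return (keyCount * perEntryBytes) / 1e9;
}

// Figures from the paragraphs above, treated as assumptions
const redisGB  = residentGB(1_000_000, 1200);       // ~1.2 GB at the top of the 1.1-1.2 KB range
const cacheeGB = residentGB(1_000_000, 1024 + 40);  // ~1.06 GB: 1 KB value plus ~40 B structural overhead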

What This Actually Costs

Concrete pricing math beats hypotheticals. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on an AWS ElastiCache cache.r7g.xlarge primary plus a read replica — roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50-150 per month depending on access patterns.

Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.

Added up over twelve months, that's $3,600 to $4,500 per year on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. The bigger savings usually come from eliminating cross-AZ data transfer charges, which Redis-as-a-service architectures incur on every read that crosses an availability zone.