
GraphQL Caching: Solving the N+1 Problem

December 21, 2025 • 7 min read • GraphQL Performance

GraphQL's flexibility creates a notorious performance trap: the N+1 query problem. A single GraphQL query can trigger hundreds or thousands of database queries, turning a 50ms request into a 5-second nightmare. This guide shows you how to eliminate N+1 queries using intelligent caching and batching strategies.

Understanding the N+1 Problem

The N+1 problem occurs when you fetch a list of items (1 query), then fetch related data for each item (N queries). Consider this GraphQL query:

query GetAuthors {
  authors {
    id
    name
    books {
      id
      title
    }
  }
}
Without optimization, this generates:

- 1 query to fetch the list of authors
- N queries to fetch books, one per author

With 100 authors and 10ms per query, your response time balloons to 1,010ms. Add more nested fields and you quickly reach thousands of queries per request.
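
The culprit is the per-author books resolver hitting the database directly; a minimal sketch of the naive setup, assuming the same promise-based db.query helper used in the examples below:

const resolvers = {
    Query: {
        // 1 query for the author list
        authors: () => db.query('SELECT * FROM authors')
    },
    Author: {
        // Runs once per author returned above; this is the N
        books: (author) =>
            db.query('SELECT * FROM books WHERE author_id = ?', [author.id])
    }
};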

Solution 1: DataLoader Pattern

DataLoader batches and caches requests within a single GraphQL operation:

const DataLoader = require('dataloader');

// Create a batching loader for books
const bookLoader = new DataLoader(async (authorIds) => {
    // Single query for all author IDs
    const books = await db.query(`
        SELECT * FROM books
        WHERE author_id IN (?)
        ORDER BY author_id
    `, [authorIds]);

    // Group books by author_id. DataLoader requires one result per key,
    // returned in the same order as the input keys.
    const booksByAuthor = authorIds.map(id =>
        books.filter(book => book.author_id === id)
    );

    return booksByAuthor;
});
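
Every load call made in the same tick is coalesced into one batch, so the following issues a single SQL query:

// Inside an async function: both loads share one batched query
const [booksFor1, booksFor2] = await Promise.all([
    bookLoader.load(1),
    bookLoader.load(2)
]);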

// GraphQL resolver
const resolvers = {
    Author: {
        books: (author, args, context) => {
            // DataLoader automatically batches and caches
            return context.loaders.book.load(author.id);
        }
    }
};
With DataLoader, the same request needs:

- 1 query to fetch all authors
- 1 batched query to fetch books for every author

Two queries total, no matter how many authors are returned: roughly 20ms instead of 1,010ms at 10ms per query.

Solution 2: Field-Level Caching

Cache individual fields across requests using directives:

const { ApolloServer, gql } = require('apollo-server');
const responseCachePlugin = require('apollo-server-plugin-response-cache');
const { RedisCache } = require('apollo-server-cache-redis');

const typeDefs = gql`
  type Query {
    author(id: ID!): Author @cacheControl(maxAge: 300)
  }

  type Author {
    id: ID!
    name: String! @cacheControl(maxAge: 3600)
    books: [Book!]! @cacheControl(maxAge: 300)
  }

  type Book {
    id: ID!
    title: String!
    rating: Float @cacheControl(maxAge: 60)
  }
`;

const server = new ApolloServer({
    typeDefs,
    resolvers,
    // The plugin stores responses in the server's cache backend;
    // use Redis so entries are shared across instances
    cache: new RedisCache({
        host: 'localhost',
        port: 6379
    }),
    plugins: [responseCachePlugin()]
});

Automatic Cache Key Generation

// The response cache keys each entry automatically from the query
// document, its variables, and (optionally) a session ID:
//   sha256(operation + variables) -> cached response

// The whole response is stored with the shortest maxAge of any field
// it touches, so the most volatile field sets the TTL:
//   author { name, books { rating } } -> min(3600, 300, 60) = 60s
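
Apollo also exposes the computed policy as an HTTP Cache-Control header, so CDNs and browsers can participate (public scope assumed):

// Header emitted for the query above:
// Cache-Control: max-age=60, public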

Solution 3: Persistent Query Caching

Cache entire query responses with smart invalidation:

const crypto = require('crypto');
const { graphql, parse } = require('graphql');

// Note: normalizeQuery, parseDirectives, findTypes, and schema are
// elided; their implementations depend on your setup
class GraphQLCache {
    constructor(cache) {
        this.cache = cache;
    }

    async executeQuery(query, variables, context) {
        // Generate cache key from query + variables
        const cacheKey = this.generateKey(query, variables);

        // Try cache first
        const cached = await this.cache.get(cacheKey);
        if (cached) {
            return {
                data: cached,
                extensions: { cacheHit: true }
            };
        }

        // Execute query
        const result = await graphql({
            schema,
            source: query,
            variableValues: variables,
            contextValue: context
        });

        // Cache successful results only; TTL comes from @cacheControl
        // directives, tags from the types the query touches
        if (result.data && !result.errors) {
            const ttl = this.calculateTTL(query);
            const tags = this.extractTypes(query);

            await this.cache.set(cacheKey, result.data, ttl, { tags });
        }

        return result;
    }

    generateKey(query, variables) {
        // Normalize query and hash with variables
        const normalized = this.normalizeQuery(query);
        return crypto
            .createHash('sha256')
            .update(normalized + JSON.stringify(variables))
            .digest('hex');
    }

    calculateTTL(query) {
        // Extract @cacheControl directives
        const directives = this.parseDirectives(query);

        // Use minimum TTL from all fields
        const ttls = directives.map(d => d.maxAge);
        return Math.min(...ttls, 300); // Max 5 minutes
    }

    extractTypes(query) {
        // Parse query to find all accessed types
        // Used for cache invalidation
        const ast = parse(query);
        return this.findTypes(ast); // ['Author', 'Book']
    }
}
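
Usage is a thin wrapper around normal execution; a sketch assuming a cache backend exposing get and set(key, value, ttl, { tags }):

// Inside an async function, with cacheBackend defined elsewhere
const gqlCache = new GraphQLCache(cacheBackend);

const result = await gqlCache.executeQuery(
    'query GetAuthor($id: ID!) { author(id: $id) { name } }',
    { id: '123' },
    { /* per-request context */ }
);
console.log(result.extensions ? 'cache hit' : 'cache miss');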

Solution 4: Intelligent Prefetching

Analyze query patterns to prefetch likely-needed data:

class PrefetchingDataLoader extends DataLoader {
    constructor(batchFn, options) {
        super(batchFn, options);
        this.accessPatterns = new Map();
    }

    async load(key) {
        // Track access patterns
        this.recordAccess(key);

        // Prefetch related keys based on history
        const relatedKeys = this.predictRelatedKeys(key);
        if (relatedKeys.length > 0) {
            // Non-blocking prefetch
            this.loadMany(relatedKeys).catch(err =>
                console.error('Prefetch failed:', err)
            );
        }

        return super.load(key);
    }

    predictRelatedKeys(key) {
        // ML-powered prediction or simple pattern matching
        const pattern = this.accessPatterns.get(key);
        if (!pattern) return [];

        // If author:123 is accessed, books for that author
        // are accessed 85% of the time - prefetch them
        if (pattern.booksProbability > 0.7) {
            return [`books:author:${key}`];
        }

        return [];
    }

    recordAccess(key) {
        // Track per-key access counts; booksProbability is learned from
        // observed co-access (see the sketch after this class)
        const pattern = this.accessPatterns.get(key) || {
            accessCount: 0,
            booksProbability: 0
        };

        pattern.accessCount++;
        this.accessPatterns.set(key, pattern);
    }
}
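
One way to populate booksProbability is to record co-access whenever a books load follows an author load in the same request; a hypothetical sketch (not part of DataLoader's API):

class LearningPrefetchLoader extends PrefetchingDataLoader {
    // Call this when a books lookup follows an author lookup
    recordCoAccess(authorKey) {
        const pattern = this.accessPatterns.get(authorKey);
        if (!pattern || pattern.accessCount === 0) return;

        pattern.bookAccessCount = (pattern.bookAccessCount || 0) + 1;
        pattern.booksProbability = pattern.bookAccessCount / pattern.accessCount;
    }
}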

Solution 5: Automatic Persisted Queries (APQ)

Cache queries by hash to reduce payload size and enable aggressive caching:

// Client side: send a hash instead of the full query
const { ApolloClient, InMemoryCache, HttpLink } = require('@apollo/client');
const { createPersistedQueryLink } = require('@apollo/client/link/persisted-queries');
const { sha256 } = require('crypto-hash');

const httpLink = new HttpLink({ uri: '/graphql' });

const client = new ApolloClient({
    link: createPersistedQueryLink({ sha256 }).concat(httpLink),
    cache: new InMemoryCache()
});

// Server implementation
const { RedisCache } = require('apollo-server-cache-redis');

const server = new ApolloServer({
    typeDefs,
    resolvers,
    persistedQueries: {
        cache: new RedisCache({
            host: 'localhost',
            port: 6379
        }),
        ttl: 900 // 15 minutes
    }
});

// First request: client sends only the hash; if the server doesn't
// recognize it, the client retries with hash + full query
// Subsequent requests: hash only
// Saves bandwidth and enables query-level caching
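
On the wire, the hash travels in the request's extensions field; a simplified sketch of the protocol:

// Subsequent request body (no query text):
// {
//   "variables": { "id": "123" },
//   "extensions": {
//     "persistedQuery": {
//       "version": 1,
//       "sha256Hash": "<hex sha256 of the query document>"
//     }
//   }
// }
// Unknown hash? The server responds PERSISTED_QUERY_NOT_FOUND and the
// client retries once with the full query attached.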

Complete Example: Optimized GraphQL Server

const { ApolloServer } = require('apollo-server');
const responseCachePlugin = require('apollo-server-plugin-response-cache');
const DataLoader = require('dataloader');
const Redis = require('ioredis');

const redis = new Redis();

// Context factory with loaders
function createContext() {
    return {
        loaders: {
            author: new DataLoader(async (ids) => {
                const authors = await db.query(
                    'SELECT * FROM authors WHERE id IN (?)',
                    [ids]
                );
                return ids.map(id =>
                    authors.find(a => a.id === id)
                );
            }),

            books: new DataLoader(async (authorIds) => {
                const books = await db.query(
                    'SELECT * FROM books WHERE author_id IN (?)',
                    [authorIds]
                );
                return authorIds.map(id =>
                    books.filter(b => b.author_id === id)
                );
            })
        },
        redis
    };
}

const resolvers = {
    Query: {
        author: async (_, { id }, { redis, loaders }) => {
            // Try cache first
            const cached = await redis.get(`author:${id}`);
            if (cached) return JSON.parse(cached);

            // Use DataLoader
            const author = await loaders.author.load(id);

            // Cache for 1 hour
            await redis.setex(
                `author:${id}`,
                3600,
                JSON.stringify(author)
            );

            return author;
        }
    },

    Author: {
        books: (author, _, { loaders }) => {
            // DataLoader batches and caches
            return loaders.books.load(author.id);
        }
    }
};

const server = new ApolloServer({
    typeDefs,
    resolvers,
    context: createContext,
    plugins: [
        responseCachePlugin(),
        {
            // Simple timing plugin: logs each operation's duration
            requestDidStart() {
                const start = Date.now();
                return {
                    willSendResponse({ metrics }) {
                        metrics.duration = Date.now() - start;
                        console.log('Query time:', metrics.duration);
                    }
                };
            }
        }
    ]
});

Measuring Performance Improvements

// Before optimization
{
    "duration": 1247,
    "queries": 101,
    "cacheHits": 0,
    "cacheHitRate": 0
}

// After DataLoader + field caching
{
    "duration": 23,
    "queries": 2,
    "cacheHits": 0,
    "cacheHitRate": 0
}

// After warming cache
{
    "duration": 4,
    "queries": 0,
    "cacheHits": 2,
    "cacheHitRate": 1.0
}

// Overall improvement: ~312x faster (1247ms → 4ms)

Cache Invalidation Strategies

Invalidate cached GraphQL data when underlying data changes:

// Type-based invalidation
async function updateAuthor(id, data) {
    await db.update('authors', id, data);

    // Invalidate all cached queries involving Author type
    await cache.invalidateByTag('Author');

    // Or specific author
    await cache.del(`author:${id}`);
}

// Smart invalidation with dependency tracking
class SmartCache {
    async invalidateType(typeName) {
        // Find all cached queries that include this type
        // (KEYS is O(N) and blocks Redis; prefer SCAN in production)
        const pattern = `query:*:${typeName}:*`;
        const keys = await redis.keys(pattern);

        if (keys.length > 0) {
            await redis.del(...keys);
        }

        console.log(`Invalidated ${keys.length} queries for ${typeName}`);
    }
}
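
For invalidateByTag to work, tags have to be recorded at write time; one hypothetical scheme keeps a Redis set of cache keys per type:

// Hypothetical tag index: one Redis set per type, listing keys to purge
async function setWithTags(key, value, ttl, tags) {
    await redis.setex(key, ttl, JSON.stringify(value));
    for (const tag of tags) {
        await redis.sadd(`tag:${tag}`, key); // e.g. tag:Author -> {key1, key2}
    }
}

async function invalidateByTag(tag) {
    const keys = await redis.smembers(`tag:${tag}`);
    if (keys.length > 0) {
        await redis.del(...keys);
    }
    await redis.del(`tag:${tag}`);
}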

Best Practices Summary

- Create DataLoaders per request (in the context factory) so batching works and caches never leak across users
- Use DataLoader for every relationship resolver
- Annotate the schema with @cacheControl hints and let the most volatile field set the response TTL
- Add full-response caching for expensive, frequently repeated queries, keyed by normalized query + variables
- Enable automatic persisted queries to cut payload size
- Invalidate caches by type or tag on writes instead of waiting for TTLs to expire
- Track query counts and cache hit rates to verify each optimization

Conclusion

GraphQL's N+1 problem can cripple application performance, but the solution combines DataLoader batching, field-level caching, and intelligent query caching. These patterns reduce database queries by 98%+ and improve response times from seconds to milliseconds.

Start with DataLoader for all relationship resolvers, add field-level caching using @cacheControl directives, and implement full query caching for your most expensive operations. The result: fast, scalable GraphQL APIs that handle millions of requests with minimal infrastructure.

Automatic GraphQL Query Optimization

Cachee AI automatically detects and optimizes GraphQL N+1 patterns with ML-powered prefetching and intelligent field-level caching.
