Introduction

In this article, we'll explore ASP.NET Core's memory caching capabilities. ASP.NET Core Memory Caching (IMemoryCache) is a lightweight caching solution suitable for single-instance applications or local caching within distributed environments. It provides simple APIs for storing and retrieving data while supporting features like expiration policies, priority settings, and eviction callbacks.

Understanding how to properly configure and use memory caching can significantly improve your application's performance and responsiveness. However, improper usage can lead to memory pressure, performance degradation, and even application crashes. This guide will walk you through everything you need to know to use IMemoryCache effectively while avoiding common pitfalls.

What is Caching?

The journey from the moment a user sends a request to the moment the database returns data is a lengthy one (perhaps slightly exaggerated; in practice it typically ranges from tens to hundreds of milliseconds). And it is rarely just one user: multiple users may be making requests simultaneously, and even a single user might issue several similar requests within a short timeframe.

In such scenarios, completing the entire processing flow for every single request becomes wasteful. This is where caching proves invaluable.

Caching's Purpose: Store results from previous requests so that when identical requests arrive, cached results can be returned directly, eliminating redundant computations and database accesses.

The Bottom Line: Caching is a storage mechanism designed to improve performance and response times by avoiding repeated work.
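Stripped to its essence, the mechanism is a lookup table consulted before doing the work. A minimal, framework-free sketch (the ConcurrentDictionary stands in for a real cache, with no expiration or eviction):

```csharp
using System;
using System.Collections.Concurrent;

public static class NaiveCache
{
    // Results of previous "requests", keyed by their input
    private static readonly ConcurrentDictionary<int, long> _results = new();

    public static long ExpensiveSquare(int input)
    {
        // GetOrAdd runs the factory only when the key is missing (a cache miss);
        // later calls with the same input return the stored result directly
        return _results.GetOrAdd(input, key => (long)key * key);
    }
}
```

Everything IMemoryCache adds on top of this (expiration, priorities, size limits, eviction callbacks) exists to keep that lookup table from growing without bound.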

Cache Types in ASP.NET Core

ASP.NET Core provides three commonly used caching solutions, each with distinct use cases and boundaries:

1. Memory Cache (IMemoryCache)

Best For: Single-instance applications or local caching within distributed environments.

Characteristics:

  • Stores data in the application's local memory
  • Extremely fast access speeds (typically nanoseconds to microseconds)
  • No network overhead
  • Data isolated to single application instance
  • Lost when application restarts

2. Distributed Cache (IDistributedCache)

Best For: Shared caching in distributed environments.

Common Implementations:

  • Redis (most popular)
  • SQL Server
  • NCache
  • Azure Cache for Redis

Characteristics:

  • Shared across multiple application instances
  • Survives application restarts
  • Network latency involved (typically milliseconds)
  • Requires external infrastructure
  • Better for session state and cross-instance data sharing
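For contrast with IMemoryCache, the IDistributedCache abstraction is string- and byte-oriented, so values must be serialized on the way in and out. A sketch of typical usage (the Forecast type and fetch call are illustrative; in a real app the cache instance would come from DI after registering an implementation such as AddStackExchangeRedisCache):

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public record Forecast(string City, double TemperatureC);

public class WeatherService
{
    private readonly IDistributedCache _cache;

    public WeatherService(IDistributedCache cache) => _cache = cache;

    public async Task<Forecast?> GetForecastAsync(string city)
    {
        // IDistributedCache stores strings/bytes, so values are serialized
        var cached = await _cache.GetStringAsync($"forecast:{city}");
        if (cached is not null)
            return JsonSerializer.Deserialize<Forecast>(cached);

        var forecast = await FetchForecastAsync(city);
        await _cache.SetStringAsync(
            $"forecast:{city}",
            JsonSerializer.Serialize(forecast),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
            });
        return forecast;
    }

    private Task<Forecast> FetchForecastAsync(string city) =>
        Task.FromResult(new Forecast(city, 21.5)); // placeholder for a real source call
}
```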

3. Hybrid Cache

Best For: Scenarios requiring both speed and distributed sharing.

How It Works: Combines memory cache and distributed cache in a two-tier approach:

  1. First check memory cache (fastest)
  2. If miss, check distributed cache (slower but shared)
  3. If still miss, fetch from original source and populate both caches
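The three steps above can be sketched as a small read-through wrapper over both abstractions (names here are illustrative; note that since .NET 9 the Microsoft.Extensions.Caching.Hybrid package ships a built-in HybridCache implementing this pattern):

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;

public class TwoTierCache
{
    private readonly IMemoryCache _local;
    private readonly IDistributedCache _shared;

    public TwoTierCache(IMemoryCache local, IDistributedCache shared)
    {
        _local = local;
        _shared = shared;
    }

    public async Task<T> GetOrCreateAsync<T>(string key, Func<Task<T>> factory, TimeSpan ttl)
    {
        // 1. Memory cache first (fastest)
        if (_local.TryGetValue(key, out T? value) && value is not null)
            return value;

        // 2. On a miss, fall back to the distributed cache (shared, but slower)
        var json = await _shared.GetStringAsync(key);
        if (json is not null)
        {
            value = JsonSerializer.Deserialize<T>(json)!;
            _local.Set(key, value, ttl); // repopulate the local tier
            return value;
        }

        // 3. Still a miss: hit the source and populate both tiers
        value = await factory();
        await _shared.SetStringAsync(key, JsonSerializer.Serialize(value),
            new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = ttl });
        _local.Set(key, value, ttl);
        return value;
    }
}
```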

Characteristics:

  • Best of both worlds when configured correctly
  • More complex implementation
  • Requires careful cache invalidation strategies
  • Higher memory usage (data stored in two places)

Critical Decision: Choosing the appropriate caching solution for your specific scenario is crucial for optimal performance and resource utilization.

Understanding ASP.NET Core's IMemoryCache

ASP.NET Core's memory cache uses local memory for temporary data storage, making access speeds extremely fast—typically far quicker than network or database access (exact latency depends on data size and serialization overhead).

Advantages

  • Blazing Fast: In-memory access means nanosecond to microsecond latency
  • Simple API: Easy to implement with minimal configuration
  • No External Dependencies: Works out of the box without additional infrastructure
  • Rich Features: Supports expiration, priorities, callbacks, and size limits

Limitations

  • No Cross-Instance Sharing: Data exists only in the application instance where it was cached
  • Volatile Storage: All cached data is lost when the application restarts
  • Memory Pressure: Cached data consumes server memory resources
  • Scaling Challenges: Each instance maintains its own cache, potentially leading to inconsistency

Critical Considerations

When using memory caching, be aware of these important constraints:

Memory Resource Consumption: Data stored in local memory occupies server RAM. If cached data volume becomes too large or expiration policies are improperly configured, this can lead to:

  • Increased memory pressure
  • Frequent garbage collection cycles
  • Performance degradation
  • Potential out-of-memory exceptions

Security Warning: Never use unvalidated external input directly as cache keys. An attacker can vary the input to create an unbounded number of entries, consuming unpredictable amounts of memory (a denial-of-service vector) or enabling cache poisoning and accidental misuse.

Best Practices:

  • Set reasonable expiration times to limit cache growth
  • Implement size limits to prevent excessive memory consumption
  • Monitor cache hit rates and memory usage
  • Plan for cache warming strategies after application restarts
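For the last point, one common approach is an IHostedService that pre-populates hot entries at startup. A minimal sketch (the key name and loader are illustrative placeholders):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Hosting;

// Registered with builder.Services.AddHostedService<CacheWarmupService>()
public class CacheWarmupService : IHostedService
{
    private readonly IMemoryCache _cache;

    public CacheWarmupService(IMemoryCache cache) => _cache = cache;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // Pre-load entries that are expensive to compute and hit on most requests
        var countries = await LoadCountriesAsync(cancellationToken);
        _cache.Set("ref:countries", countries, TimeSpan.FromHours(12));
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    private Task<string[]> LoadCountriesAsync(CancellationToken ct) =>
        Task.FromResult(new[] { "NO", "SE", "DK" }); // placeholder for a database call
}
```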

Using IMemoryCache in ASP.NET Core

Implementing IMemoryCache in ASP.NET Core is straightforward. Let's walk through the complete setup and usage process.

Step 1: Register the Service

First, register the memory cache service in Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Add memory cache services
builder.Services.AddMemoryCache();

// Continue with other service configurations...
var app = builder.Build();

This single line adds all necessary services for in-memory caching to your application's dependency injection container.

Step 2: Inject and Use IMemoryCache

Wherever you need caching functionality, inject IMemoryCache through constructor injection:

public class MyService
{
    private readonly IMemoryCache _cache;
    
    public MyService(IMemoryCache cache)
    {
        _cache = cache;
    }
    
    public async Task<string> GetDataAsync(string key)
    {
        // Try to get data from cache
        if (_cache.TryGetValue(key, out string value))
        {
            return value; // Cache hit - return cached data
        }
        else
        {
            // Cache miss - fetch from database
            value = await FetchDataFromDatabaseAsync(key);
            
            // Store in cache with 5-minute expiration
            _cache.Set(key, value, TimeSpan.FromMinutes(5));
            
            return value;
        }
    }
    
    private Task<string> FetchDataFromDatabaseAsync(string key)
    {
        // Your database access logic here
        throw new NotImplementedException();
    }
}

Understanding the Try-Get-Set Pattern

The example above demonstrates the classic "try-get-set" caching pattern:

  1. Try: Attempt to retrieve data from cache using TryGetValue()
  2. Get: If cache hit (data exists), return it immediately
  3. Set: If cache miss (data doesn't exist), fetch from source and store in cache

This pattern ensures that expensive operations (like database queries) only execute when necessary.

Recommended Approach: GetOrCreateAsync

Beyond the manual try-get-set pattern, IMemoryCache provides the GetOrCreateAsync() method for more elegant implementation:

public async Task<string> GetDataAsync(string key)
{
    return await _cache.GetOrCreateAsync(key, async entry =>
    {
        // Configure cache entry
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
        
        // This only executes on cache miss
        return await FetchDataFromDatabaseAsync(key);
    });
}

Why This is Better:

  • Cleaner Code: Eliminates manual if-else logic
  • Single Call: Combines lookup, fetch, and store in one expression (note: it is not atomic; concurrent callers that miss at the same time may each run the factory)
  • Automatic Caching: Only executes the factory function on cache misses
  • Flexible Configuration: Easy to set expiration and other options per entry

Recommendation: Prefer GetOrCreateAsync() for most caching scenarios unless you need explicit control over the caching flow.

IMemoryCache Optimization Techniques

When using memory caching, several optimization techniques can help you manage cache more effectively and avoid common pitfalls.

Technique 1: Sliding Expiration Policy

Purpose: Extend cache item lifespan based on access patterns.

How It Works: Sliding expiration resets the expiration timer each time the cache item is accessed. This ensures frequently accessed data doesn't expire prematurely.

Use Case: Perfect for data that should remain cached as long as it's actively being used.

Example:

_cache.Set(key, value, new MemoryCacheEntryOptions
{
    SlidingExpiration = TimeSpan.FromMinutes(5) // Reset to 5 minutes on each access
});

Behavior:

  • Item accessed at T=0: Expires at T=5 minutes
  • Item accessed at T=3 minutes: Expires at T=8 minutes (reset)
  • Item accessed at T=7 minutes: Expires at T=12 minutes (reset again)
  • Item NOT accessed: Expires 5 minutes after its last access

Caution: Without an absolute expiration, frequently accessed items could theoretically remain cached indefinitely. Consider combining with absolute expiration for safety.

Technique 2: Absolute Expiration Policy

Purpose: Set maximum lifespan for cache items regardless of access frequency.

How It Works: Absolute expiration causes cache items to expire at a specific point in time, regardless of whether they've been accessed.

Use Case: Essential for data that must refresh periodically, such as configuration settings or time-sensitive information.

Example:

_cache.Set(key, value, new MemoryCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30) // Expires in 30 minutes
});

Behavior:

  • Item created at T=0: Always expires at T=30 minutes
  • Access at T=15 minutes: Still expires at T=30 minutes (no reset)
  • Access at T=29 minutes: Still expires at T=30 minutes

Technique 3: Combined Expiration Strategy

Best Practice: Combine sliding and absolute expiration for optimal results:

_cache.Set(key, value, new MemoryCacheEntryOptions
{
    SlidingExpiration = TimeSpan.FromMinutes(5),      // Reset on access
    AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1) // Max 1 hour lifetime
});

Benefits:

  • Frequently accessed data stays cached (sliding)
  • Stale data eventually refreshes (absolute)
  • Prevents indefinite caching of any item

Technique 4: Cache Size Limits

Purpose: Prevent cache from consuming excessive memory.

Configuration: Enable SizeLimit during service registration:

builder.Services.AddMemoryCache(options =>
{
    options.SizeLimit = 1024; // Total cache capacity (units defined by your business logic)
});

Important: Once SizeLimit is enabled, ALL cache entries must explicitly set their Size, otherwise runtime exceptions will occur.

Usage:

_cache.Set(key, value, new MemoryCacheEntryOptions
{
    Size = 1 // This cache item occupies 1 unit
});

Sizing Strategies:

  • Simple: Each item = 1 unit (count-based limiting)
  • Memory-based: Size = object size in KB or MB
  • Weighted: Give heavyweight items larger sizes so they consume more of the budget (eviction order under pressure is driven by Priority, not Size)
  • Custom: Business-specific sizing (e.g., estimated query cost)

Example with Memory-Based Sizing:

// Estimate size from the serialized length (round up so small items never get Size = 0)
var serializedSize = Encoding.UTF8.GetByteCount(JsonSerializer.Serialize(value));
var sizeInKB = Math.Max(1, serializedSize / 1024);

_cache.Set(key, value, new MemoryCacheEntryOptions
{
    Size = sizeInKB // Size in kilobytes
});

Technique 5: Cache Item Priority

Purpose: Ensure important data isn't removed prematurely during memory pressure.

Priority Levels:

  • CacheItemPriority.NeverRemove: Never evicted under memory pressure or capacity compaction; expiration policies still apply (use sparingly!)
  • CacheItemPriority.High: Remove only after lower priority items
  • CacheItemPriority.Normal: Default priority
  • CacheItemPriority.Low: Remove before normal priority items

Example:

_cache.Set(key, value, new MemoryCacheEntryOptions
{
    Size = 1,
    Priority = CacheItemPriority.NeverRemove // Set to never remove
});

Use Cases:

  • NeverRemove: Critical configuration data, essential reference data
  • High: Frequently accessed user data, session information
  • Normal: Standard cached query results
  • Low: Expensive-to-fetch but easily-recomputable data

Warning: Overuse of NeverRemove can defeat the purpose of size limits. Use judiciously.

Technique 6: Eviction Callbacks

Purpose: Execute custom logic when cache items are removed.

Use Cases:

  • Logging cache evictions for monitoring
  • Cleaning up related resources
  • Triggering cache refresh for dependent data
  • Auditing cache behavior

Example:

_cache.Set(key, value, new MemoryCacheEntryOptions
{
    PostEvictionCallbacks =
    {
        new PostEvictionCallbackRegistration
        {
            EvictionCallback = (k, v, reason, state) =>
            {
                Console.WriteLine($"Cache item {k} was removed, reason: {reason}");
                
                // Additional cleanup logic here
                // Could log to file, send metrics, refresh dependent caches, etc.
            }
        }
    }
});

Eviction Reasons (EvictionReason enum):

  • Removed: Explicitly removed via code
  • Replaced: Replaced by new value with same key
  • Expired: Expired based on expiration policy
  • TokenExpired: Associated change token expired
  • Capacity: Removed due to cache size limit

Technique 7: Data Compression (Advanced)

Purpose: Reduce memory footprint for large cached objects.

When to Use: Only suitable for storing large data objects where access performance requirements aren't critical.

Trade-off: Compression and decompression add CPU overhead. Sacrificing computation for memory savings may not be worthwhile for small or frequently accessed data.

Implementation:

using System.IO;
using System.IO.Compression;
using System.Text;

// Compress before caching
var compressedValue = Compress(value);
_cache.Set(key, compressedValue, new MemoryCacheEntryOptions
{
    Size = compressedValue.Length // Size based on compressed length
});

// Decompress after retrieval
var compressed = _cache.Get<byte[]>(key);
var decompressedValue = Decompress(compressed);

// Helper methods
private byte[] Compress(string value)
{
    var bytes = Encoding.UTF8.GetBytes(value);
    using var output = new MemoryStream();
    using (var compress = new GZipStream(output, CompressionMode.Compress))
    {
        compress.Write(bytes, 0, bytes.Length);
    }
    return output.ToArray();
}

private string Decompress(byte[] compressed)
{
    using var input = new MemoryStream(compressed);
    using var output = new MemoryStream();
    using (var decompress = new GZipStream(input, CompressionMode.Decompress))
    {
        decompress.CopyTo(output);
    }
    return Encoding.UTF8.GetString(output.ToArray());
}

Performance Considerations:

  • Compression Ratio: Typically 50-90% size reduction for text data
  • CPU Cost: Adds milliseconds to each cache operation
  • Best For: Large objects (>100KB), infrequently accessed data
  • Avoid For: Small objects, high-frequency access patterns

Common Pitfalls and How to Avoid Them

Pitfall 1: Cache Stampede (Thundering Herd)

Problem: When a popular cache item expires, multiple simultaneous requests all miss the cache and simultaneously hit the database.

Solution: Serialize cache population with a lock. Note that GetOrCreateAsync() alone does not prevent this: concurrent callers that miss at the same time can each run the factory. A SemaphoreSlim works (keep in mind that a single semaphore serializes population for all keys; per-key locks scale better):

private readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);

public async Task<string> GetDataAsync(string key)
{
    if (_cache.TryGetValue(key, out string value))
        return value;
    
    await _lock.WaitAsync();
    try
    {
        // Double-check after acquiring lock
        if (_cache.TryGetValue(key, out value))
            return value;
        
        value = await FetchDataFromDatabaseAsync(key);
        _cache.Set(key, value, TimeSpan.FromMinutes(5));
        return value;
    }
    finally
    {
        _lock.Release();
    }
}

Pitfall 2: Memory Leaks

Problem: Cache grows indefinitely without proper expiration, eventually causing out-of-memory exceptions.

Solution: Always set expiration policies and consider size limits:

// BAD: No expiration
_cache.Set(key, value); // Never expires!

// GOOD: With expiration
_cache.Set(key, value, TimeSpan.FromHours(1));

// BETTER: With size limits configured
builder.Services.AddMemoryCache(options =>
{
    options.SizeLimit = 10000;
});

Pitfall 3: Caching Sensitive Data

Problem: Accidentally caching user-specific or sensitive information, leading to data leakage between users.

Solution: Never cache user-specific data without proper key isolation:

// DANGEROUS: User ID not in cache key
_cache.Set("UserProfile", userProfile); // Shared across all users!

// SAFE: Include user ID in key
_cache.Set($"UserProfile:{userId}", userProfile);

// EVEN BETTER: Tie entries to a change token so a user's data can be invalidated on demand
_cache.Set($"UserProfile:{userId}", userProfile, new MemoryCacheEntryOptions
{
    // GetUserInvalidationToken is a custom helper returning an IChangeToken
    ExpirationTokens = { GetUserInvalidationToken(userId) }
});
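The GetUserInvalidationToken helper above is not a built-in API; one way to implement it is a CancellationChangeToken per user, so that all of a user's entries can be evicted in one call. A sketch under that assumption:

```csharp
using System.Collections.Concurrent;
using System.Threading;
using Microsoft.Extensions.Primitives;

public class UserCacheInvalidator
{
    // One CancellationTokenSource per user; cancelling it expires all linked entries
    private readonly ConcurrentDictionary<string, CancellationTokenSource> _sources = new();

    public IChangeToken GetUserInvalidationToken(string userId)
    {
        var cts = _sources.GetOrAdd(userId, _ => new CancellationTokenSource());
        return new CancellationChangeToken(cts.Token);
    }

    public void InvalidateUser(string userId)
    {
        // Remove and cancel: every cache entry holding this token is evicted
        if (_sources.TryRemove(userId, out var cts))
            cts.Cancel();
    }
}
```

Cancelling the token evicts every entry that registered it, with EvictionReason.TokenExpired.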

Pitfall 4: Over-Caching

Problem: Caching everything "just in case," leading to wasted memory and stale data issues.

Solution: Cache strategically based on actual usage patterns:

  • Cache expensive computations (database queries, API calls, complex calculations)
  • Don't cache cheap operations (simple property access, trivial computations)
  • Monitor cache hit rates and adjust accordingly
  • Implement cache metrics and alerting

Monitoring and Diagnostics

Effective cache management requires visibility into cache behavior. Consider implementing:

Cache Metrics to Track

  • Hit Rate: Percentage of requests served from cache
  • Miss Rate: Percentage requiring source fetch
  • Eviction Rate: How often items are removed before expiration
  • Memory Usage: Current cache memory consumption
  • Item Count: Number of items currently cached

Example: Simple Cache Statistics

public class CacheStatistics
{
    private int _hits = 0;
    private int _misses = 0;
    
    public void RecordHit() => Interlocked.Increment(ref _hits);
    public void RecordMiss() => Interlocked.Increment(ref _misses);
    
    public double HitRate => _hits + _misses > 0 
        ? (double)_hits / (_hits + _misses) 
        : 0;
}
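Wiring the counters into the try-get-set flow is then a one-line change on each branch. A sketch (CacheStatistics is repeated here so the snippet is self-contained; the data fetch is a placeholder):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CacheStatistics
{
    private int _hits, _misses;
    public void RecordHit() => Interlocked.Increment(ref _hits);
    public void RecordMiss() => Interlocked.Increment(ref _misses);
    public double HitRate => _hits + _misses > 0 ? (double)_hits / (_hits + _misses) : 0;
}

public class InstrumentedService
{
    private readonly IMemoryCache _cache;
    private readonly CacheStatistics _stats;

    public InstrumentedService(IMemoryCache cache, CacheStatistics stats)
    {
        _cache = cache;
        _stats = stats;
    }

    public async Task<string> GetDataAsync(string key)
    {
        if (_cache.TryGetValue(key, out string? value) && value is not null)
        {
            _stats.RecordHit();   // served from cache
            return value;
        }

        _stats.RecordMiss();      // had to go to the source
        value = await FetchDataFromDatabaseAsync(key);
        _cache.Set(key, value, TimeSpan.FromMinutes(5));
        return value;
    }

    private Task<string> FetchDataFromDatabaseAsync(string key) =>
        Task.FromResult($"value-for-{key}"); // placeholder for real data access
}
```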

Summary

ASP.NET Core memory caching is powerful but comes with limitations that must be understood and respected. Key takeaways:

Do's

  • ✅ Set reasonable expiration policies (both sliding and absolute)
  • ✅ Implement size limits to prevent memory exhaustion
  • ✅ Use GetOrCreateAsync() for cleaner, safer code
  • ✅ Monitor cache performance and adjust strategies
  • ✅ Include proper key namespacing to avoid collisions

Don'ts

  • ❌ Cache without expiration (leads to memory leaks)
  • ❌ Use external input directly as cache keys (security risk)
  • ❌ Cache sensitive user data without proper isolation
  • ❌ Over-cache trivial operations (wastes resources)
  • ❌ Ignore cache metrics and monitoring

When to Use Memory Cache

Ideal Scenarios:

  • Single-instance applications
  • Read-heavy workloads with infrequent writes
  • Expensive-to-compute, stable data
  • Session state for sticky sessions
  • Reference data that rarely changes

Consider Alternatives When:

  • Multiple application instances need shared cache
  • Data must survive application restarts
  • Cache size requirements exceed available memory
  • Strong consistency across instances is required

By following these guidelines and understanding both the capabilities and limitations of IMemoryCache, you can significantly improve your application's performance and responsiveness while avoiding common pitfalls that lead to production issues.


This guide was originally published on 2026-04-11.