ASP.NET Core Memory Cache in Practice: Complete Configuration Guide and Pitfall Avoidance
Introduction
In this article, we'll dive deep into ASP.NET Core's memory caching capabilities. ASP.NET Core memory cache (IMemoryCache) represents a lightweight caching solution suitable for single-instance applications or local caching within distributed environments. It provides simple APIs for storing and retrieving data while supporting expiration policies, priority settings, and other advanced features.
Understanding how to properly configure and use memory cache can significantly improve your application's performance and response times. Let's explore everything you need to know.
What Is Caching and Why Does It Matter
The journey from a user request to a database response can be lengthy; in practice it typically ranges from tens to hundreds of milliseconds. Now consider that multiple users are accessing your application concurrently, and even a single user might issue several similar requests within a short timeframe.
Executing the complete workflow for every single request becomes wasteful. This is where caching proves invaluable.
Caching serves as a storage mechanism that saves previous request results. When identical requests arrive subsequently, the system returns cached results directly, eliminating redundant calculations and database access overhead.
The ultimate purpose of caching: improve performance and response speed.
Cache Types in ASP.NET Core
ASP.NET Core provides three commonly used caching solutions, each with distinct use cases and boundaries:
1. Memory Cache (IMemoryCache)
Best for: Single-instance applications or local caching within distributed environments.
Characteristics:
- Stores data in local memory
- Extremely fast access speeds (typically much faster than network and database access)
- Data cannot be shared across multiple instances
- Data is lost upon application restart
- Suitable for temporary data or non-persistent information
2. Distributed Cache (IDistributedCache)
Best for: Shared caching in distributed environments.
Common Implementations:
- Redis
- SQL Server
- Other distributed cache providers
Characteristics:
- Data shared across multiple application instances
- Survives application restarts (depending on implementation)
- Slower than memory cache due to network and serialization overhead (often by an order of magnitude or more)
- Essential for scaled, multi-instance deployments
3. Hybrid Cache
Best for: Combining the benefits of both approaches.
How It Works:
- First checks memory cache
- If cache miss occurs, queries distributed cache
- Provides optimal balance between speed and data sharing
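The read-through flow above can be sketched with the two built-in abstractions. This is an illustrative helper, not a production implementation: `TwoLevelCache` is a hypothetical name, and .NET 9 ships a built-in `HybridCache` abstraction that implements this pattern for you.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical read-through helper: check local memory first,
// fall back to the distributed cache, and backfill on a hit.
public class TwoLevelCache
{
    private readonly IMemoryCache _memory;
    private readonly IDistributedCache _distributed;

    public TwoLevelCache(IMemoryCache memory, IDistributedCache distributed)
    {
        _memory = memory;
        _distributed = distributed;
    }

    public async Task<T?> GetAsync<T>(string key)
    {
        // 1. Fast path: local memory cache.
        if (_memory.TryGetValue(key, out T? local))
            return local;

        // 2. Slow path: distributed cache (network round trip).
        var bytes = await _distributed.GetAsync(key);
        if (bytes is null)
            return default; // miss in both layers

        var value = JsonSerializer.Deserialize<T>(bytes);

        // 3. Backfill the memory layer with a short local TTL so the
        //    next request on this instance skips the network entirely.
        _memory.Set(key, value, TimeSpan.FromMinutes(1));
        return value;
    }
}
```

Keeping the local TTL short bounds how long the two layers can disagree after another instance updates the distributed entry.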
Critical Insight: Each cache category has its own usage scenarios and applicable boundaries. Selecting the appropriate caching solution is crucial for application success.
Understanding IMemoryCache Limitations and Considerations
While ASP.NET Core's memory cache uses local memory for temporary data storage — providing access speeds typically far exceeding network and database operations (specific latency depends on data size and serialization overhead) — it comes with important limitations.
Key Limitations
Cannot Share Data Across Multiple Instances
Memory cache resides on individual servers. In multi-instance deployments, each instance maintains its own separate cache, potentially leading to inconsistency.
Data Loss on Application Restart
Since data lives in volatile memory, application restarts result in complete cache loss. This makes memory cache unsuitable for data requiring persistence.
Memory Resource Consumption
Storing data in local memory consumes server memory resources. Excessive cached data or improperly configured expiration policies can lead to:
- Increased memory pressure
- Frequent garbage collection
- Performance degradation
- Potential application instability
Important Usage Guidelines
When implementing memory cache, pay careful attention to these critical points:
⚠️ Never Use External Input as Cache Keys
Cache keys derived directly from external input have unbounded cardinality: a malicious user can send endless unique values, creating endless cache entries and exhausting server memory. This is a common vector for cache-abuse attacks.
✅ Set Reasonable Expiration Times
Always configure expiration policies to limit cache growth. Without expiration, caches can grow indefinitely, consuming all available memory.
✅ Limit Cache Size
Implement size limits to prevent excessive memory consumption. This protects your application from memory-related issues.
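The three guidelines can be combined in a small sketch. The prefix `product:` and the extension names are illustrative, and the SHA-256 hashing uses .NET 5+ APIs; hashing bounds key length, but you still need a SizeLimit to bound the number of entries.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using Microsoft.Extensions.Caching.Memory;

// Illustrative sketch: never use raw external input as the key.
// A hashed, prefixed key bounds key length; every entry gets an
// expiration and a Size so the cache cannot grow without limit.
public static class SafeCacheExtensions
{
    public static string ToSafeKey(string prefix, string externalInput)
    {
        // Hash the input so arbitrary user strings cannot inflate key memory.
        var hash = Convert.ToHexString(
            SHA256.HashData(Encoding.UTF8.GetBytes(externalInput)));
        return $"{prefix}:{hash}";
    }

    public static void SetBounded<T>(this IMemoryCache cache, string key, T value)
    {
        cache.Set(key, value, new MemoryCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10), // always expire
            Size = 1 // required when the cache is configured with a SizeLimit
        });
    }
}
```

Usage would look like `cache.SetBounded(SafeCacheExtensions.ToSafeKey("product:", userSuppliedId), product);`.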
Using IMemoryCache in ASP.NET Core
Implementing IMemoryCache in ASP.NET Core is straightforward. Let's walk through the complete process.
Step 1: Register the Service
In your Program.cs file, register the memory cache service:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddMemoryCache();

This single line adds IMemoryCache to your dependency injection container.
Step 2: Inject and Use IMemoryCache
In services or controllers where you need caching, inject IMemoryCache through constructor injection:
public class MyService
{
private readonly IMemoryCache _cache;
public MyService(IMemoryCache cache)
{
_cache = cache;
}
public async Task<string> GetDataAsync(string key)
{
if (_cache.TryGetValue(key, out string value))
{
return value; // Return data from cache
}
else
{
value = await FetchDataFromDatabaseAsync(key); // Fetch from database
_cache.Set(key, value, TimeSpan.FromMinutes(5)); // Cache for 5 minutes
return value;
}
}
}

In this example:
- First attempt to retrieve data from cache
- If cache hit occurs, return immediately
- If cache miss, fetch from database and store result in cache with 5-minute expiration
- Subsequent identical requests retrieve data directly from cache
Step 3: Use GetOrCreateAsync (Recommended Approach)
Beyond the manual try-get-set pattern shown above, IMemoryCache provides the GetOrCreateAsync method for more concise implementation. This approach is strongly recommended:
public async Task<string> GetDataAsync(string key)
{
return await _cache.GetOrCreateAsync(key, async entry =>
{
entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5); // Set expiration
return await FetchDataFromDatabaseAsync(key); // Fetch from database
});
}

Benefits of GetOrCreateAsync:
- More concise code
- Fewer ways to get the check-then-set sequence wrong (note: it is not atomic; concurrent callers that miss simultaneously can each run the factory)
- Cleaner separation of cache logic from business logic
- Easier to maintain and modify
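Because GetOrCreateAsync does not guarantee that the factory runs only once, a burst of simultaneous misses can still hit the database in parallel (a cache stampede). One common mitigation is a per-key lock; the sketch below is a minimal version, with the fetch delegate standing in for your data access.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// Sketch of stampede protection: serialize the expensive fetch per key
// with a SemaphoreSlim, and re-check the cache after acquiring the lock.
public class StampedeSafeCache
{
    private readonly IMemoryCache _cache;
    private readonly ConcurrentDictionary<string, SemaphoreSlim> _locks = new();

    public StampedeSafeCache(IMemoryCache cache) => _cache = cache;

    public async Task<string> GetDataAsync(string key, Func<string, Task<string>> fetch)
    {
        if (_cache.TryGetValue(key, out string? cached))
            return cached!;

        var gate = _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Re-check: another caller may have populated the entry while we waited.
            if (_cache.TryGetValue(key, out cached))
                return cached!;

            var value = await fetch(key);
            _cache.Set(key, value, TimeSpan.FromMinutes(5));
            return value;
        }
        finally
        {
            gate.Release();
            // Simplification: semaphores are never removed from _locks here;
            // a production version would need to clean up unused entries.
        }
    }
}
```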
IMemoryCache Optimization Techniques
When working with memory cache, several optimization techniques help manage cache more effectively.
1. Sliding Expiration Strategy
Purpose: Extend cache item lifecycle based on access patterns.
How It Works: Sliding expiration resets the expiration timer each time the cache item is accessed. This ensures frequently accessed data doesn't expire prematurely.
Implementation:
_cache.Set(key, value, new MemoryCacheEntryOptions
{
SlidingExpiration = TimeSpan.FromMinutes(5) // Reset to 5 minutes after each access
});

Best Use Cases:
- Data accessed irregularly but should remain cached while actively used
- Session-related information
- User-specific preferences
2. Absolute Expiration Strategy
Purpose: Set maximum lifecycle for cache items regardless of access frequency.
How It Works: Absolute expiration causes cache items to expire at a specific time point, whether accessed or not.
Implementation:
_cache.Set(key, value, new MemoryCacheEntryOptions
{
AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30) // Expire after 30 minutes
});

Best Use Cases:
- Time-sensitive data (prices, availability)
- Data that must refresh periodically
- Preventing indefinite cache retention
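The two strategies are not mutually exclusive. A common pattern is to combine them: sliding expiration keeps hot items alive, while an absolute cap guarantees the data is eventually refreshed no matter how often it is read.

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

// Combining both policies: the entry survives as long as it is accessed
// at least every 5 minutes, but is always evicted within 30 minutes.
public static class ExpirationExample
{
    public static void CacheWithBothPolicies(IMemoryCache cache, string key, string value)
    {
        cache.Set(key, value, new MemoryCacheEntryOptions
        {
            SlidingExpiration = TimeSpan.FromMinutes(5),               // idle timeout
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30) // hard upper bound
        });
    }
}
```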
3. Limit Total Cache Size
Purpose: Prevent cache from consuming excessive memory.
Implementation:
builder.Services.AddMemoryCache(options =>
{
options.SizeLimit = 1024; // Total cache capacity (units defined by your business logic)
});
_cache.Set(key, value, new MemoryCacheEntryOptions
{
Size = 1 // This cache item occupies 1 unit
});

Critical Note: After enabling SizeLimit, every cache entry you write must explicitly set Size; otherwise an InvalidOperationException is thrown at runtime.
Sizing Strategy:
- Define what one "unit" represents in your context (e.g., 1MB, 1000 objects, etc.)
- Assign appropriate sizes to different cache items based on their memory footprint
- Monitor cache size metrics in production
4. Set Cache Item Priority
Purpose: Ensure important data isn't removed prematurely during memory pressure.
Implementation:
_cache.Set(key, value, new MemoryCacheEntryOptions
{
Size = 1,
Priority = CacheItemPriority.NeverRemove // Set to never remove priority
});

Available Priorities:
- Low: Removed first when memory pressure occurs
- Normal: Default priority
- High: Removed after Low and Normal items
- NeverRemove: Never automatically removed (use sparingly)
5. Utilize Post-Eviction Callbacks
Purpose: Execute logic when cache items are removed, such as logging or resource cleanup.
Implementation:
_cache.Set(key, value, new MemoryCacheEntryOptions
{
PostEvictionCallbacks =
{
new PostEvictionCallbackRegistration
{
EvictionCallback = (k, v, reason, state) =>
{
Console.WriteLine($"Cache item {k} was removed, reason: {reason}");
// Additional cleanup logic here
}
}
}
});

Eviction Reasons:
- Removed: Explicitly removed by the application
- Replaced: Overwritten by a new value for the same key
- Expired: Expired based on the expiration policy
- TokenExpired: An associated change/cancellation token triggered
- Capacity: Evicted because the cache exceeded its size limit
6. Compress Cache Data (Advanced)
Purpose: Reduce memory footprint for large data objects.
Implementation:
var compressedValue = Compress(value); // Compress data
_cache.Set(key, compressedValue, new MemoryCacheEntryOptions
{
Size = compressedValue.Length // Set size to compressed length
});

When to Use:
- Storing large data objects
- Memory is more constrained than CPU
- Access performance requirements aren't extremely high
⚠️ Warning: Compression and decompression increase CPU overhead. Trading computation for memory savings isn't always worthwhile. Carefully evaluate your specific scenario before implementing compression.
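The Compress helper in the example above is left undefined; one possible shape for it, based on GZip from the standard library, is sketched below. The names and the choice of CompressionLevel.Fastest are illustrative; whether the CPU cost pays off depends on your data, so measure before adopting this.

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

// Sketch of GZip-based helpers for compressing string values before caching.
public static class CacheCompression
{
    public static byte[] Compress(string value)
    {
        var raw = Encoding.UTF8.GetBytes(value);
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
        {
            gzip.Write(raw, 0, raw.Length); // gzip must be disposed before reading output
        }
        return output.ToArray();
    }

    public static string Decompress(byte[] compressed)
    {
        using var input = new MemoryStream(compressed);
        using var gzip = new GZipStream(input, CompressionMode.Decompress);
        using var output = new MemoryStream();
        gzip.CopyTo(output);
        return Encoding.UTF8.GetString(output.ToArray());
    }
}
```

Note that compression only helps for values that are large and compressible (e.g. JSON or HTML fragments); small or already-compressed payloads can even grow.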
Best Practices Summary
To maximize the benefits of ASP.NET Core memory cache while avoiding common pitfalls:
Do's ✅
- Set appropriate expiration policies (sliding, absolute, or both)
- Limit cache size to prevent memory exhaustion
- Use meaningful cache keys (avoid user input)
- Monitor cache hit/miss ratios in production
- Implement cache warming for critical data
- Use priority levels strategically
Don'ts ❌
- Don't cache everything — be selective
- Don't use external input directly as cache keys
- Don't set extremely long or infinite expiration times
- Don't cache sensitive data without encryption
- Don't ignore memory pressure indicators
- Don't forget to handle cache misses gracefully
Conclusion
ASP.NET Core memory cache is a powerful tool, but it comes with important limitations that must be understood and respected.
Key Takeaways:
- Choose the right cache type for your scenario (memory, distributed, or hybrid)
- Configure expiration policies appropriately to balance freshness and performance
- Limit cache size to prevent memory exhaustion
- Monitor cache performance in production environments
- Understand the trade-offs between speed, consistency, and resource usage
When used correctly, memory cache can significantly improve application performance and response speed. However, improper usage can lead to memory issues, stale data, and unexpected behavior.
By following the guidelines and techniques outlined in this article, you'll be well-equipped to leverage ASP.NET Core memory cache effectively in your applications.
Remember: caching is not a silver bullet. It's one tool in your performance optimization toolkit, and like any tool, it works best when applied thoughtfully and appropriately to the problem at hand.
Happy caching!