Quick Overview:
This blog helps you understand the basics of in-memory caching in .NET Core applications. You will get a brief overview of what it is, how it works, and its advantages and disadvantages. In addition, an implementation walkthrough is provided to give you practical, real-world insight.

In-Memory Caching in .NET Core

Caching is a powerful mechanism that reduces latency and improves application performance. Multiple caching configurations are available for .NET applications, but one of the most popular and reliable is the in-memory caching strategy.

Here, we’ll cover the fundamentals of in-memory caching, including a brief overview of its pros, cons, and implementation.

What is In-Memory Caching?

Among the multiple caching strategies available for .NET Core applications, in-memory caching is the simplest to configure. However, it’s best suited to small- and medium-scale business solutions with minimal expected load and traffic.

The in-memory caching technique stores data in the application’s local memory, which is usually the memory of the web server itself. This lets the application check the cache for data and serve it immediately to the requesting process.

Mainly, in-memory caching is used with single-server deployments. If your .NET application is deployed on multi-server infrastructure, you need to implement sticky sessions to use this caching strategy, since each server holds its own independent cache.

The Working of In-Memory Caching in a Software

The in-memory cache sits between the application and the database and works in the following manner.

Step 1: The application receives a user request that requires data, such as a SQL query, an ACID transaction, or a computation.

Step 2: The application first checks the in-memory cache for the data required by the request.

Step 3: If the data is available in the cache (a cache hit), it is returned directly. Otherwise (a cache miss), the application fetches the data from the database, stores it in the cache, and provides it to the required process.
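The three steps above describe the cache-aside pattern. Below is a minimal sketch of how that flow might look in a service class; the `ProductService` name, the cache-key format, and the `GetProductFromDbAsync` helper are illustrative assumptions, not part of the article’s own code.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class Product { }

public class ProductService
{
    private readonly IMemoryCache _cache;

    public ProductService(IMemoryCache cache) => _cache = cache;

    public async Task<Product> GetProductAsync(int id)
    {
        // Step 2: check the in-memory cache first.
        if (_cache.TryGetValue($"product:{id}", out Product cached))
            return cached;                        // Step 3a: cache hit

        // Step 3b: cache miss - fetch from the database, store, and return.
        var product = await GetProductFromDbAsync(id);
        _cache.Set($"product:{id}", product, TimeSpan.FromMinutes(5));
        return product;
    }

    private Task<Product> GetProductFromDbAsync(int id) =>
        Task.FromResult(new Product());           // placeholder for a real query
}
```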

The Pros and Cons of In-Memory Caching in .NET Applications

Similar to other development technologies and components, in-memory caching also has its own set of pros and cons.

Pros of in-memory caching

1: Quick Data Retrieval

In-memory caching is well known for its exceptional speed. It serves data immediately when the application receives a request. If the data is not present, it fetches it from the data store on the spot, caches it, and delivers it with minimal delay.

2: Better User Experience

End-users prefer applications with faster responses. Caching optimizes app performance, which is a primary contributor to increased retention and conversions. Customers also enjoy a smoother experience while navigating and completing their tasks.

3: Optimized Dotnet App Performance

High performance is one of the primary objectives when developing a dotnet application, and in-memory caching helps achieve it. It lets the app instantly fetch repeatedly requested data and reduces the latency of round trips to the database.

4: Time and Cost Saving

When performance increases, user requests take less time to execute. Resource costs are also reduced, as serving from cache uses less processing power and I/O than repeated direct communication with the data store.

Cons of in-memory caching

1: Not Suitable for Large Applications

The in-memory cache is only suitable for small to medium-scale dotnet applications. It shares memory with the application process itself, so high traffic or load can reverse its benefits.

2: Can be Costly in Some Scenarios

In use cases where the cache has to hold data for a longer period or serve a highly scalable application, the in-memory strategy can prove expensive. It will require additional resources and server upgrades, impacting your overall project budget.

Pre-requisites To Understand In-Memory Caching Implementation

Before we begin the implementation, you should understand the following cache entry options for better clarity.

1: Absolute Expiration: Defines a fixed point in time after which the cache entry is removed, regardless of how often it is accessed.

2: Priority: Determines the order in which entries are evicted when the cache needs to free memory.

3: Expiration Tokens: Change-token instances that can expire a cache entry on demand, for example when the underlying data changes.

4: Size: As the name suggests, defines the size of the cache entry, which counts against the cache’s overall size limit.

5: Sliding Expiration: Defines how long an inactive (unaccessed) entry is retained in the cache. Each access resets this timer; however, an entry can never outlive its absolute expiration, so the two are typically combined to stop a frequently accessed entry from being renewed indefinitely.
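The entry options above map directly onto properties of `MemoryCacheEntryOptions`. A quick illustration follows; the key, value, and size limit are made-up example values.

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

// A cache with a size limit, so the Size option below takes effect.
var cache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 100 });

var options = new MemoryCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1), // hard upper bound on lifetime
    SlidingExpiration = TimeSpan.FromMinutes(10),            // evicted after 10 minutes of inactivity
    Priority = CacheItemPriority.Low,                        // evicted earlier under memory pressure
    Size = 1                                                 // counts against the cache's SizeLimit
};

cache.Set("plate:34", "Istanbul", options);
```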


The Procedure To Configure In-Memory Caching in Dotnet Core Apps

To enable in-memory caching in a .NET application, follow the procedure below after creating the project in Visual Studio or any other IDE of your choice.

Step 1: Go to the “Startup.cs” file, and under the “ConfigureServices” method, add “services.AddMemoryCache()”.

public void ConfigureServices(IServiceCollection services)
{
	services.AddMemoryCache();
	services.TryAddSingleton<IActionContextAccessor, ActionContextAccessor>();
}
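If your project targets .NET 6 or later with the minimal hosting model, there is no “Startup.cs”; the same registration goes into “Program.cs” instead, sketched here with the default template names:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Registers IMemoryCache with the dependency injection container.
builder.Services.AddMemoryCache();
builder.Services.AddControllersWithViews();

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();
```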

Step 2: Now, open the controller and inject the “IMemoryCache” interface through the constructor.

public class HomeController : Controller
{
	private readonly IMemoryCache _memoryCache;

	public HomeController(IMemoryCache memoryCache)
	{
		_memoryCache = memoryCache;
	}
}

At this point, the in-memory cache is registered and injected into your application. Next, you can configure different entry options, such as sliding expiration, priority, and more.

Step 3: Set the different in-memory cache methods as below.

Method #1: Data Storage in Cache

First, the cache expiry options are configured as an object of “MemoryCacheEntryOptions”, defining the expirations and the priority.

private void SetValueToCache(string city, int plateCode)
{
	var cacheExpiryOptions = new MemoryCacheEntryOptions
	{
		AbsoluteExpiration = DateTime.Now.AddHours(1),
		SlidingExpiration = TimeSpan.FromMinutes(10),
		Priority = CacheItemPriority.High
	};
	_memoryCache.Set(plateCode, city, cacheExpiryOptions);
}

Method #2: Data Fetching from Cache

Now, when a request arrives, the app looks into the cache first. If the data is available, it is returned.

private string GetValueFromCache(int plateCode)
{
	var value = string.Empty;
	_memoryCache.TryGetValue(plateCode, out value);
	return value;
}

Method #3: Data Removal from Cache

Lastly, to delete data from the cache before it expires on its own, use the following code.

private void RemoveFromCache(int plateCode)
{
	_memoryCache.Remove(plateCode);
}
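As an aside, the store-and-fetch methods above can often be collapsed into a single call: “IMemoryCache” also offers “GetOrCreate”, which performs the lookup, runs a factory delegate on a miss, and stores the result. A sketch, where “LoadCityFromDatabase” is a hypothetical data-access helper:

```csharp
private string GetCity(int plateCode)
{
    return _memoryCache.GetOrCreate(plateCode, entry =>
    {
        entry.SlidingExpiration = TimeSpan.FromMinutes(10); // timer resets on each access
        return LoadCityFromDatabase(plateCode);             // hypothetical helper, run only on a miss
    });
}
```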

Wrapping Up

The in-memory caching strategy is quite popular among .NET developers. It helps improve performance and optimize the digital experience. However, it should only be used for small and medium-sized .NET Core applications. To implement it, you only need to register “services.AddMemoryCache()” and inject “IMemoryCache”. Once in-memory caching is configured, your application will be noticeably faster than before.

Parag Mehta

Verified Expert in Software & Web App Engineering

Parag Mehta, the CEO and Founder of Positiwise Software Pvt Ltd, has extensive knowledge of the development niche. He implements custom strategies to craft highly appealing and robust applications for his clients, and supports employees in growing and excelling at their tasks. He is a consistent learner and always delivers best-in-quality solutions, accelerating productivity.
