Shorten your cache keys, please

In a recent engagement, I looked into a customer whose memory usage was abnormally high. Among other findings, one was not very well known. But as they say, small details can make a big difference – and the small detail in today’s post is the cache key.

Let’s put it to the test. This is what I asked Copilot to scaffold, with some small adjustments from me. The test adds 10,000 items to the cache, then reads each entry 10 times. One run uses a very short key prefix, and one uses a long (but not unreasonably long) one.

using System;
using System.Runtime.Caching;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

namespace MemoryCacheBenchmarkDemo
{
    // Use MemoryDiagnoser to capture memory allocation metrics during benchmarking.
    [MemoryDiagnoser]
    public class MemoryCacheBenchmark
    {
        private const int Iterations = 10_000;
        private const int ReadIterations = 10;

        [Benchmark(Description = "MemoryCache with Short Keys")]
        public void ShortKeysBenchmark()
        {
            using (var cache = new MemoryCache("ShortKeysCache"))
            {
                const string Prefix = "K";
                // Insertion phase using short keys (e.g., "K0", "K1", ...)
                for (int i = 0; i < Iterations; i++)
                {
                    string key = Prefix + i;
                    cache.Add(key, i, DateTimeOffset.UtcNow.AddMinutes(5));
                }

                // Retrieval phase for short keys.
                
                for (int j = 0; j < Iterations; j++)
                {
                    int sum = 0;
                    for (int i = 0; i < ReadIterations; i++)
                    {
                        // Rebuild the key on every read, as real lookup code would.
                        string key = Prefix + j;
                        if (cache.Get(key) is int value)
                        {
                            sum += value;
                        }
                    }
                    // Use the result to prevent dead code elimination.
                    if (sum != j * ReadIterations)
                    {
                        throw new Exception("Unexpected sum for short keys.");
                    }
                }
            }
        }

        [Benchmark(Description = "MemoryCache with Long Keys")]
        public void LongKeysBenchmark()
        {
            using (var cache = new MemoryCache("LongKeysCache"))
            {
                const string Prefix = "ThisIsAVeryLongCacheKeyPrefix_WhichAddsExtraCharacters_IsThisLongEnoughIAmNotSure";
                // Insertion phase using long keys.
                // Example: "ThisIsAVeryLongCacheKeyPrefix_WhichAddsExtraCharacters_IsThisLongEnoughIAmNotSure0", etc.
                for (int i = 0; i < Iterations; i++)
                {
                    string key = Prefix + i;
                    cache.Add(key, i, DateTimeOffset.UtcNow.AddMinutes(5));
                }

                // Retrieval phase for long keys.
                for (int j = 0; j < Iterations; j++)
                {
                    int sum = 0;
                    for (int i = 0; i < ReadIterations; i++)
                    {
                        // Rebuild the key on every read, as real lookup code would.
                        string key = Prefix + j;
                        if (cache.Get(key) is int value)
                        {
                            sum += value;
                        }
                    }
                    // Use the result to prevent dead code elimination.
                    if (sum != j * ReadIterations)
                    {
                        throw new Exception("Unexpected sum for long keys.");
                    }
                }
            }
        }
    }

    public class Program
    {
        public static void Main(string[] args)
        {
            // Executes all benchmarks in MemoryCacheBenchmark.
            BenchmarkRunner.Run<MemoryCacheBenchmark>();
        }
    }
}

And the difference: not only are the short keys faster, you also end up with considerably fewer allocations (and therefore less garbage collection).
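A back-of-the-envelope calculation shows where the extra allocations come from. This is a rough sketch, not BenchmarkDotNet output: the ~26-byte per-string overhead is an approximation for a 64-bit runtime, and the sample keys are just examples shaped like the ones in the benchmark above.

```csharp
using System;

public static class KeySizeEstimate
{
    // Very rough size of a .NET string on a 64-bit runtime: object header,
    // method table pointer, and length field (~26 bytes combined, an
    // approximation) plus 2 bytes per UTF-16 character.
    public static long EstimateStringBytes(string s) => 26 + 2L * s.Length;

    public static void Main()
    {
        const int items = 10_000;
        // Sample keys shaped like the ones in the benchmark above.
        string shortKey = "K12345";
        string longKey = "ThisIsAVeryLongCacheKeyPrefix_WhichAddsExtraCharacters_IsThisLongEnoughIAmNotSure12345";

        long shortTotal = items * EstimateStringBytes(shortKey);
        long longTotal = items * EstimateStringBytes(longKey);

        Console.WriteLine($"~{shortTotal / 1024} KB for short keys");
        Console.WriteLine($"~{longTotal / 1024} KB for long keys");
    }
}
```

Every one of those key strings lives on the heap for as long as the cache entry does, and every lookup that concatenates a prefix allocates another one – so the per-key difference is paid on each insert and each read.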

The only requirement for a cache key is that it’s unique. It might make sense to build it in a way that, if you ever have to look into memory dumps (I hope you never do, but it’s a painful yet fun experience), you know which cache entry it is. For example, in Commerce we have this prefix for the order cache key:

EP:EC:OS:

And that’s it. EP is shorthand for EPiServer (so we know it’s ours), EC is for eCommerce, and OS is for Order System. (I know, it’s been that way for a very long time for historical reasons, and nobody has bothered to change it.)
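In code, a key like that could be built with a helper along these lines. The "EP:EC:OS:" prefix is the real one mentioned above, but `BuildOrderCacheKey` is an illustrative name I made up, not an actual Commerce API:

```csharp
using System;

public static class CacheKeyDemo
{
    // Hypothetical helper: a short, namespaced key that is still
    // recognizable when you browse a memory dump. The "EP:EC:OS:" prefix
    // is the Commerce one from the post; the method name is made up.
    public static string BuildOrderCacheKey(int orderGroupId)
        => "EP:EC:OS:" + orderGroupId;

    public static void Main()
    {
        Console.WriteLine(BuildOrderCacheKey(42)); // prints "EP:EC:OS:42"
    }
}
```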

So next time you add some caching to your class, make sure to use the shortest cache key possible. It’s not micro-optimization. If you know it’s better, why not?
