Today when I was tracking down some changes, I came across this commit comment
The bugfix for COM-xxxx seems to make the importing
of metafields too fast and causing too many events raised,
potentially flood the event system. This avoids raising the events
unnecessarily.
Who wrote this? Me, almost 5 years ago. I had a chuckle at my younger self being brutally honest (I was the one who fixed the original bug, so I was responsible for making it "too fast"). Because it was sending too many events in a short time, the event system was overwhelmed and some other events (like cache invalidation of other parts) did not get through.
Well, the bug fix makes it even faster by reducing the number of events raised. In the end, everyone was happy.
Moral of the story:
The system can only be as fast as its slowest component. Keep that in mind when you decide where to put your effort. In most cases, the slowest part is where you should spend your time.
It is important to validate any optimization end-to-end. Faster is not always better if some other part cannot keep up when this part becomes unexpectedly faster.
You can't predict everything; something will happen in the most unexpected way. Move fast, break things, learn from it, fix it. Rinse and repeat.
As string is the most common data type in an application, nvarchar and its sibling varchar are probably the most common column types in your database. (We almost always use nvarchar, because nchar is meant for fixed-length columns, which we don't have.) The difference is that nvarchar stores Unicode data with UTF-16/UCS-2 encoding, while varchar stores non-Unicode data in a single-byte code page (or UTF-8, if you use a UTF-8 collation on SQL Server 2019 and later).
Starting with SQL Server 2012 (11.x), when a Supplementary Character (SC) enabled collation is used, these data types store the full range of Unicode character data and use the UTF-16 character encoding. If a non-SC collation is specified, then these data types store only the subset of character data supported by the UCS-2 character encoding.
But varchar can be harmful in a way that you don't expect. Let's assume we have this simple table with two columns (forgive the naming, I can't come up with better names).
Each column will be filled with the same random, almost-unique values. We will add a non-clustered index on each of these columns, and as we know, those indexes should make queries on these values very efficient.
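A minimal sketch of what such a table and its indexes might look like (the table name dbo.[Demo] and the column varcharColumn come from the query that follows; the nvarchar column name, the lengths, and the index names are my assumptions):
CREATE TABLE dbo.[Demo] (
    Id int IDENTITY PRIMARY KEY CLUSTERED,
    varcharColumn varchar(100),    -- non-Unicode column
    nvarcharColumn nvarchar(100)   -- Unicode column
)

CREATE NONCLUSTERED INDEX IX_Demo_varcharColumn ON dbo.[Demo] (varcharColumn)
CREATE NONCLUSTERED INDEX IX_Demo_nvarcharColumn ON dbo.[Demo] (nvarcharColumn)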
Let's try with our varchar column first. It should work pretty well, right? Nope!
SELECT *
FROM dbo.[Demo]
where varcharColumn = N'0002T9'
Instead of a highly efficient index seek, it does an index scan on the entire table. This is of course not what you want.
But why? Good question. You might have noticed that I used N'0002T9', which is of nvarchar type – which is what .NET would pass to your query if your parameter is of type string. If you look closer at the execution plan, you'll see that SQL Server has to do a CONVERT_IMPLICIT on each row of this column, effectively invalidating the index.
If we pass '0002T9' without the N prefix, though, it works as it should. This can cause confusion, as the query works fine during development but is much slower once deployed.
To see the difference, we can run the queries side by side. Note that this is for a very simple table with 130k rows. If you have a few million rows or more, the difference will be even bigger.
What about the reverse? If we have the data as nvarchar(100) but the parameter is passed as varchar? SQL Server can handle that with ease. It simply converts the parameter to nvarchar and does an index seek, as it should.
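For illustration, a query like this (the nvarchar column name is my assumption, as above) still gets an index seek:
SELECT *
FROM dbo.[Demo]
where nvarcharColumn = '0002T9'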
So, moral of the story? Unless you have strong reasons to use varchar (or char), stick with nvarchar (or nchar) to avoid complications with data type conversion, which can, and will, hurt your database performance.
Don't get me wrong, the execution plan is one of the best tools at your disposal if you want to optimize a SQL query. No, it is the must-have tool. It is not the only tool you will need, but if you had to pick only one, pick it.
But it is important to know that the execution plan can be misleading. It is very useful for seeing where the bottleneck is within a statement. It is not as useful when you need to compare two statements.
Let’s compare these two queries that I am working to optimize
SELECT OG.OrderGroupId
FROM OrderGroup OG
INNER JOIN OrderGroup_PurchaseOrder PO ON OG.OrderGroupId = PO.ObjectId
WHERE 1 = 1
AND OG.Status IN (SELECT Item FROM ecf_splitlist('Cancelled'))
ORDER BY OG.OrderGroupId DESC
OFFSET 0 ROWS
FETCH NEXT 50 ROWS ONLY
versus
SELECT OG.OrderGroupId
FROM OrderGroup OG
INNER JOIN OrderGroup_PurchaseOrder PO ON OG.OrderGroupId = PO.ObjectId
WHERE 1 = 1
AND OG.Status IN ('Cancelled')
ORDER BY OG.OrderGroupId DESC
OFFSET 0 ROWS
FETCH NEXT 50 ROWS ONLY
These are 99% similar, except for the statement OG.Status IN ..., with and without calling the split function.
If you look at the execution plan only, it seems the former is much faster than the latter: it accounts for only 14% of the relative cost, while the latter accounts for 86%. Based on those figures alone, we might think the first one is about 6 times faster than the second.
Except it is not. If we turn on the IO statistics, it is a very different story
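(If you want to follow along, IO statistics can be turned on for the session before running the two queries.)
SET STATISTICS IO ON
-- run both queries, then compare the logical reads reported in the Messages tab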
The first query has significantly more IO operations than the second
The first has slightly more logical reads on OrderGroup and OrderGroup_PurchaseOrder, but significantly more on a temp table (which is used inside the ecf_splitlist function).
The moral of the story? The execution plan is helpful, but not for comparing one query with another. In most cases, IO statistics are much more useful.
Let's take a break from memory allocation and do some optimization on another aspect that is just as important (if not more important) – the database.
We all know that database queries play an essential part in any serious app. It's almost a given that if you want your app to perform well, your database queries must also perform well. And for them to perform well, you need things like proper design (normalization, references, etc.), properly written queries, and proper indexes. In this post, we will explore how an index can improve query performance, and how we can do it better.
Let’s start with a fairly simple table design
CREATE TABLE [dbo].[UniqueCoupon](
[Id] [int] identity primary key clustered,
[PromotionId] [int] NOT NULL,
[Code] [nvarchar](10) NOT NULL,
[ExpiredOn] [datetime] NULL,
[Redeemed] [bit] NULL
) ON [PRIMARY]
Nothing extraordinary here, pretty common if you ask me. Now, for testing purposes, let's insert 1,000,000 rows into it:
INSERT INTO dbo.[UniqueCoupon] (PromotionId, Code)
SELECT
FLOOR(RAND()*(100)+1),
SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 7)
GO 1000000
We need to query data by the code, so let's create a user-defined table type:
CREATE TYPE CouponTable AS TABLE (
Code NVARCHAR(10));
Time to run some queries against the data; let's go with this:
SELECT Id, PromotionId, Code, ExpiredOn, Redeemed FROM dbo.UniqueCoupon
WHERE PromotionId = @PromotionId AND Code in (SELECT Code FROM @Data)
This is the complete query, as we need some sample data first:
declare @data CouponTable
insert into @data
select top 10 code from dbo.UniqueCoupon
where promotionid = 36
SELECT Id, PromotionId, Code, ExpiredOn, Redeemed FROM dbo.UniqueCoupon
WHERE PromotionId = 36 AND Code in (SELECT Code FROM @Data)
As we have learned that the execution plan is not a good way to compare performance, let's use the statistics, our trusted friends:
set statistics io on
set statistics time on
And this is how it performs with our default setup (i.e. no index):
If you are somewhat experienced with SQL Server, you might guess that it would not be exactly happy – obviously, an index is needed. As we query on PromotionId, it makes sense to add an index on it, and SQL Server even suggests one for you.
If we just blindly add the index suggested by SQL Server
If we look at that index, there's something not very optimal about it – we query by both PromotionId and Code, so it does not really make sense to have Code only as an included column. How about having the index key on both PromotionId and Code?
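As a sketch, the suggested index versus the improved one would look roughly like this (the index names, and exactly which columns end up as included, are my approximations based on the description above):
-- Roughly what SQL Server suggests: key on PromotionId only, the other selected columns included
CREATE NONCLUSTERED INDEX IX_UniqueCoupon_PromotionId
ON dbo.UniqueCoupon (PromotionId)
INCLUDE (Code, ExpiredOn, Redeemed)

-- Better for this query: Code moved into the key, so the filter on Code can seek as well
CREATE NONCLUSTERED INDEX IX_UniqueCoupon_PromotionId_Code
ON dbo.UniqueCoupon (PromotionId, Code)
INCLUDE (ExpiredOn, Redeemed)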
Yet we can make it better! From 53 to 30 logical reads might not sound a lot, but if you have thousands of queries every hour, it will be fairly significant.
Prepare yourself for a pleasant surprise – when we eventually applied the change on an actual database, the improvement was staggering, much more than what we had hoped for. The queries that used to run for 24 hours in total every day now take less than 10 minutes (yes, you read that right, 10 minutes).
At this point you could certainly be happy and move on. But can we do better, for the sake of curiosity? Yes, we can.
SQL Server is rather smart: it knows we are selecting the other columns as well, so it includes them in the suggested index to avoid a key lookup. Let's see what happens if we remove those included columns.
So it was indeed worse: a key lookup is performed for every row (SQL Server uses the index to track down the rows, then reads the other columns from the clustered index).
There are two ways to get rid of those key lookups – include the columns in the index itself, or, more dramatically, make the index clustered. As the data is accessed by PromotionId and Code, that makes perfect sense.
It is a common belief that the identity column should be the clustered index – it is unique, and it is not null. However, that only makes sense if it is the most heavily accessed column. In this case, Id only serves as an identity column; it does not need to be the clustered index (although being unique means it will still get a non-clustered index of its own).
ALTER TABLE [dbo].[UniqueCoupon] DROP CONSTRAINT [PK__UniqueCo__3214EC0744C2FF38] WITH ( ONLINE = OFF )
GO
ALTER TABLE [dbo].[UniqueCoupon] ADD PRIMARY KEY NONCLUSTERED
(
[Id] ASC
)
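The clustered index on the two columns we actually query by can then be created along these lines (the index name is mine):
CREATE CLUSTERED INDEX CIX_UniqueCoupon_PromotionId_Code
ON dbo.UniqueCoupon (PromotionId, Code)
-- the earlier non-clustered index on (PromotionId, Code) becomes redundant and can be dropped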
Does this bring a dramatic performance change? Unlikely – my tests show no improvement in the statistics. However, there is one important effect: we significantly reduced the total size of the indexes on the table. (data incoming)
Moral of the story
Indexes are crucial.
You can almost always do better than the auto suggested indexes.
“In 99% of the cases, premature optimization is the root of all evil”
This quote is usually attributed to Donald Knuth, who is usually regarded as the “father of the analysis of algorithms”. His actual quote is a bit different:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%.
If you have read my posts, you know that I always ask you to measure your application before diving into optimization. But that's not the whole story. Without profiling, your optimization effort might be futile. Still, there are things you can "optimize" right away without any profiling – because they are easy to do, they make your code simpler and easier to follow, and you can be certain they are faster.
Let’s see if you can spot the potential problematic piece of code from this snippet
public Something GetData()
{
    var market = list.FirstOrDefault(x => x.MarketId == GetCurrentMarket().MarketId);
    //do some stuff with market
}
If you are writing similar code, don't be discouraged. It's easy to overlook the problem – when you call FirstOrDefault, you actually iterate over the list until you find the first matching element, and for each and every element checked, GetCurrentMarket() will be called.
We can't be sure when we will find the matching element – it might be the first element, the last one, anywhere in between, or it might not exist at all. On average, GetCurrentMarket() will be called half as many times as the size of the list.
We don't know whether GetCurrentMarket is a very lightweight implementation, or whether list is a very small set, but we do know that if this code is in a very hot path, the cost can be (very) significant. These are the allocations made by said GetCurrentMarket:
This is a custom implementation of IMarketService – the default implementation is much more lightweight and should not be of concern. Of course, fewer calls are always better – no matter how quick something is.
In this specific example, a single call to get the current market, stored in a local variable and used for the scope of the entire method, should be enough. You don't need profiling to make such an "optimization" (and as we saw, profiling only confirms our suspicion).
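A minimal sketch of that fix, using the same assumed names as the snippet above:
public Something GetData()
{
    // Resolve the current market once, instead of on every iteration of FirstOrDefault
    var currentMarketId = GetCurrentMarket().MarketId;
    var market = list.FirstOrDefault(x => x.MarketId == currentMarketId);
    //do some stuff with market
}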
Moral of the story
For optimization, less is almost always more.
You definitely should profile before spending any considerable amount of time optimizing your code. But there are things that can be optimized right away, almost automatically. Make them your habit.
Earlier we started a new series about performance optimization, here: Performance optimization – the hardcore series – part 1 – Quan Mai's blog (vimvq1987.com). There are a ton of places where things can go wrong. A seasoned developer can, from experience, avoid some obvious performance errors. But as we will soon learn, a small thing can make a huge impact if it is called repeatedly, and a big thing might be OK to use as long as it is called only once.
Let's take this example – what would you think about this snippet? CategoryIds is a list of strings converted from ContentReference:
if (CategoryIds.Any(x => new ContentReference(x).ToReferenceWithoutVersion() == contentLink))
{
//do stuff
}
If this is in a "cool" path that runs a few hundred times a day, you will be fine. It's not "elegant", but it works, and maybe you can get away with it. However, if it is in a hot path that is executed every time a visitor views a product page on your website, it can create a huge problem.
And can you guess what it is?
new ContentReference(string) is fairly lightweight, but if it is called a lot, this is what happens. These are the allocations from the constructor alone, within only 220 seconds of the trace:
A lot of allocations that could have been avoided if CategoryIds were just an IEnumerable<ContentReference> instead of an IEnumerable<string>.
For comparison, this is how much 10,000 and 1,000,000 new ContentReference instances would allocate:
Things are similar if you use .ToReferenceWithoutVersion() to compare to another ContentReference (although to a lesser extent, as ToReferenceWithoutVersion returns the same ContentReference if the WorkId is 0, and it uses cloning instead of new). The correct way to compare two instances of ContentReference without caring about versions is to compare them with ContentReferenceComparer.IgnoreVersion.
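Based on that, a sketch of the improved check might look like the following – assuming CategoryIds is changed to hold ContentReference instances and that ContentReferenceComparer.IgnoreVersion can be used as an IEqualityComparer<ContentReference> (treat that as an assumption and check the exact API on your version):
// Assumption: CategoryIds is IEnumerable<ContentReference> and IgnoreVersion is an
// IEqualityComparer<ContentReference>; no new ContentReference is allocated per check
if (CategoryIds.Contains(contentLink, ContentReferenceComparer.IgnoreVersion))
{
    //do stuff
}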
Moral of the story
It is not only what you do, but also how you do it
Small things can make big impacts, don’t guess, measure!
Hi again everybody. New day – new thing to write about. Today we will talk about memory allocation and the effect it has on your website performance. With .NET, memory allocations are usually overlooked because the CLR handles them for you. Except for the rare cases where you deal with unmanaged resources and have to be conscious about releasing them yourself, it's usually a fire-and-forget approach.
Truth is, it is more complicated than that. The more objects you create, the more memory you need, and the more time the CLR needs to clean up after you. You might have written code that executes blazingly fast in your benchmarks, yet in reality your website might still struggle to perform well in the long run – and that's because of garbage collection. Occasional GC is not a concern – it's the nature of the .NET CLR – but frequent GC, especially Gen 2 GC, is definitely something you should look into and fix, because it negatively affects your website performance.
The follow-up question – how do you fix that?
Of course, the first step is always measuring the memory allocations of your app. Locally you can use something like JetBrains dotMemory to profile your website, but that has a big caveat – you can't really mimic the actual traffic to your website. Sure, it is very helpful for profiling something like a scheduled job, but it is less than optimal for seeing how your website performs in reality. To do that, we need another tool, and I've found nothing better than the Application Insights Profiler trace on Azure. It samples your website periodically, capturing ETL (event trace log) files in 220-second sessions. (Note: depending on your .NET version, you might download a .diagsession or a .netperf.zip file from Application Insights, but they are essentially the same inside – a zipped .ETL.) Those files are extremely informative; they contain a whole load of information, which might be overwhelming if you're new, but take small steps and you'll get there.
Note that once extracted, an ETL file can be very big – often in the 1GB-or-more range. PerfView has to go through all of that event log, so it's extremely memory hungry as well, especially if you open multiple ETL files at once. My PerfView kept crashing on my 16GB RAM machine (I had several Visual Studio instances open), and that was solved when I switched to 32GB RAM.
The first step is to confirm the allocation problem with GCStats (this is one of the extreme cases, but it does happen):
There are two main things to look into – Total Allocs, i.e. the total size of objects allocated, and the time spent in garbage collection. They are naturally closely related, but not always: total allocation might not be high while time spent in GC is, for example in the case of large object allocations (we will talk about that in a later post). For the purpose of memory allocation analysis, this is where you should look:
What you find in there, might surprise you. And that’s the purpose of this series, point out possible unexpected allocations that are easy – or fairly easy – to fix.
In this first post, we will talk about a somewhat popular feature – Injected<T>.
We all know that in Optimizely Content/Commerce, the preferred way of dependency injection is constructor injection, i.e. if your class has a dependency on a certain type, that dependency should be declared as a parameter of the constructor. That's nice and all, but it's not always possible. For example, you might have a static class (used for extension methods), so no constructor is available. Or, in some rare cases, you can't add a new parameter to the constructor because that would be a breaking change.
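For context, the two styles side by side – IMyService, PriceCalculator and PriceExtensions are hypothetical names for illustration:
// Preferred: the dependency is declared in the constructor
public class PriceCalculator
{
    private readonly IMyService _myService;

    public PriceCalculator(IMyService myService)
    {
        _myService = myService;
    }
}

// Fallback, e.g. in a static extension class where no constructor is available
public static class PriceExtensions
{
    private static Injected<IMyService> _myService;
}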
Adding Injected<T> as a hidden dependency in your class at least works, so can you just forget about it?
Not quite!
This is how the use of Injected<T> results in allocations of StructureMap objects – yes, every time you call Injected<T>.Service, the whole dependency tree must be built again.
And that's not everything: during that process, other objects need to be created as well. You can right-click on a path and select "Include item". The allocations below are for anything that was created by module episerver.framework episerver.framework!EPiServer.ServiceLocation.Injected`1[System.__Canon].get_Service(), i.e. all object allocations related to Injected<T>.
You can expand further to see which Injected<T>s have the most allocations, and therefore are the ones that should be fixed first.
How can one fix an Injected<T> then? The best fix is to make it a constructor dependency, but that might not always be possible. An alternative fix is to use ServiceLocator.GetInstance, and to make that field static if possible. That way you won't have to call Injected<T>.Service every time you need the instance.
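A minimal sketch of that alternative – the PriceExtensions class and IMyService are hypothetical, and it assumes EPiServer's ServiceLocator.Current:
using System;
using EPiServer.ServiceLocation;

public static class PriceExtensions
{
    // Resolved lazily, once, and then reused – instead of Injected<IMyService>.Service
    // rebuilding the dependency tree on every call
    private static readonly Lazy<IMyService> _myService =
        new Lazy<IMyService>(() => ServiceLocator.Current.GetInstance<IMyService>());

    private static IMyService MyService => _myService.Value;
}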
There are cases that you indeed need a new instance every time, then the fix might be more complicated, and you might want to check if you need the whole dependency tree, or just a data object.
Moral of the story
Performance can’t be guessed, it must be measured
Injected<T> is not your good friend. You can use it if you have no other choice, but definitely avoid it in hot paths.
It’s not a secret, I love optimizing things. In a sense, I am both an Optimizer (literally) and an optimizer. And today we will be back to basic – optimizing a tricky SQL query.
The query in question is this particular stored procedure, ecf_CatalogNode_GetAllChildNodes, which is used to get all child nodes of specific nodes. It is used, in turn, to find all entries that are direct or indirect children of specific nodes. Why, you might ask? Because when you change the URL segment of a node, you want to make sure that all entries under that node have their indexed objects refreshed.
Let's take a look at this stored procedure; this is how it looks:
CREATE PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
@catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
WITH all_node_relations AS
(
SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
WHERE ParentNodeId > 0
UNION
SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
),
hierarchy AS
(
SELECT
n.CatalogNodeId,
'|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM @catalogNodeIds n
UNION ALL
SELECT
children.ChildNodeId AS CatalogNodeId,
parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM hierarchy parent
JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
)
SELECT CatalogNodeId FROM hierarchy
END
I previously wrote about the relations between entities in the Commerce catalog, here: Commerce relation(ship), a story – Quan Mai's blog (vimvq1987.com). Relations between nodes can be a bit complicated – a node can have one true parent defined in the CatalogNode table, and then other "linked" parents in CatalogNodeRelation. So to find all children – and grandchildren – of a node, you need to look at both.
Getting the children of a node from CatalogNode or CatalogNodeRelation is simple, but things become more complicated when you have to get grandchildren, then great-grandchildren, and so on and so forth. For that, a CTE needs to be used recursively. But then a problem arises – there is a chance, small but real, that the data was added in an incorrect way, so a circular reference is possible, i.e. A is a parent of B, which is a parent of C, which is itself a parent of A. To stop the SP from running forever, a check needs to be added to make sure any circular reference is cut short.
This brings back memory as the first ever support case I worked on at Optimizely (then Episerver) was with a circular reference. The site would crash whenever someone visited the catalog management in Commerce Manager. That was around June, 2012 (feeling old now?). My “boss” at that time involuntarily volunteered me for the case. See what you made me do, boss.
Now that you can grasp the basics of what the SP does, let's get back to the original problem: it is slow to run, especially with a big catalog and a complex node structure. As always, to optimize anything you need to find the bottleneck first – time to fire up SQL Server Management Studio and turn on the Actual Execution Plan.
I decided to go with 66, the "root" catalog node. This query yields around 18k rows.
Mind you, this is on my machine with a pretty powerful CPU (AMD Ryzen 7 5800X, 8 cores/16 threads) and a very fast NVMe PCIe SSD (Western Digital Black SN850 2TB). If this were executed on an Azure SQL database, for example, a timeout is almost guaranteed. So execution times should only be compared relative to each other.
If we look at the execution plan, it is quite obvious where the bottleneck is. A scan on the CatalogNode table is heavy (it reads 79M rows in that operation). As suggested by Anders in Timeout when deleting CatalogNodes from a large catalog (optimizely.com), adding a non-clustered index on the ParentNodeId column improves it quite a lot. And indeed it does: the execution time is reduced to 5 seconds.
And the number of rows read on CatalogNode is reduced to just 17k.
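For reference, the suggested index is a one-liner (the index name is mine):
CREATE NONCLUSTERED INDEX IX_CatalogNode_ParentNodeId
ON dbo.CatalogNode (ParentNodeId)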
This is of course a very nice improvement. But the customer reported that it is not enough and the SP is still timing out, i.e. further optimization is needed.
Naturally, the next step would be to see if we can skip the circular reference check. It was added as a safety measure to avoid bad data. It should not really be there, as the check should be performed at data modification time. But it is there for historical reasons and we can't just change it, not trivially. So let's try it out of curiosity.
The modified query looks like this (basically we just commented out any code related to CyclePrevention):
ALTER PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
@catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
WITH all_node_relations AS
(
SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
WHERE ParentNodeId > 0
UNION
SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
),
hierarchy AS
(
SELECT
n.CatalogNodeId
--, '|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM @catalogNodeIds n
UNION ALL
SELECT
children.ChildNodeId AS CatalogNodeId
--, parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM hierarchy parent
JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
--WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
)
SELECT CatalogNodeId FROM hierarchy
END
And the improvement is quite impressive (more than I expected): the query completes almost instantly (in less than 1s), and the reads on CatalogNodeRelation are significantly reduced.
A word of warning here: execution plans can't simply be compared as-is. If I run the two versions side by side, the comparison is quite misleading.
Even though the top one (without the circular reference check) is much faster than the original (the bottom one), SQL Server estimates that the first is slower (almost 2x slower than the second). So the execution plan should be used to see what is being done and where the likely bottleneck is inside a query; it should not be used as a comparison between queries. In most cases, comparing statistics using set statistics io on is the better way.
If not for the fact that we would be changing the behavior of the stored procedure, I would be happy with this approach. The chance of running into a circular reference is small, but it is not zero. As we said, we could in theory guard the relations during insert/update, but that would be too big a change to start with. This is one of the constraints of working at the framework level – we have to step carefully so as not to break anything. A breaking change is bad, but data corruption is simply unacceptable. I spent a few hours (probably more than I should have) trying to optimize the circular reference check, but found no better solution.
The next approach would be, as you can guess, to get rid of the Clustered Index Scan happening on the CatalogNodeRelation table. The solution is quite simple: a non-clustered index on ParentNodeId should be enough.
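Again, a one-line change (the index name is mine):
CREATE NONCLUSTERED INDEX IX_CatalogNodeRelation_ParentNodeId
ON dbo.CatalogNodeRelation (ParentNodeId)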
Great success. The performance is comparable with the “non circular reference check” approach.
Adding an index is also a non-breaking change (albeit one that can, in rare cases, cause a performance regression, as in A curious case of SQL execution plan – Quan Mai's blog (vimvq1987.com) – but in this case the values of ParentNodeId are most likely well distributed), which makes it the safer approach here.
That is all for today. Hopefully you learned a thing or two about optimizing queries in your daily work.
From time to time, I have to dig into some customers’ profiler traces to figure out why their site is slow (yeah, if you follow me, you’d know that’s kind of my main job). There are multiple issues that can eat your website performance for breakfast, from loading too much content, to unmaintained database indexes. While my blog does not cover everything, I think you can get a good grasp of what mistakes to avoid.
The library we'll be investigating today is MaxMind.Db. To be honest, I've never used it myself, but it seems to be a very popular choice for geo-locating visitors. It's usually used by Optimizely sites for that purpose, via visitor groups (which is how it came onto my radar).
For several sites that use it, it seems to be stuck in this stack more often than not:
It's natural to think that CreateActivator is doing something heavy here (evidently the LambdaCompiler.Compile part). A peek at the decompiled code shows that yes, it is heavy. I'm not quite sure I can post the decompiled code here without violating any agreement (I have, in fact, accepted no agreement at this point), but it's quite straightforward: TypeActivatorCreator uses reflection to get the constructors of the Type passed to it, sees if there is any constructor decorated with the MaxMind.Db.Constructor attribute, prepares some parameters, and creates a LambdaExpression that creates an instance of that Type using the found constructor (which is a good thing, because a compiled expression performs much better than a plain reflection call).
(I'm using MaxMind.Db 2.0.0, for the record.)
The code is straightforward, but it is also slow – as any code involving reflection and lambda compilation would be. The essential step is to cache the result. This is actually a very good place for a cache: the number of types is fixed at runtime (except for the rare edge cases where you dynamically create new types), so you won't have to worry about cache invalidation, and the cache would significantly improve the performance of the code above.
And TypeActivatorCreator does have a cache for it. It is a simple ConcurrentDictionary<Type, TypeActivator>, which returns a TypeActivator if the Type has been requested before, or creates one and caches it if it hasn't. As I said, this is a very good place to add a cache.
There is a cache for that, which is good. However, the very important tidbit here is that the dictionary is not static. That means the cache only works if the class is registered as a singleton (by itself, or via another class up the dependency chain), meaning only one instance is created and shared between threads (which is why the ConcurrentDictionary part is important).
Except it's not.
When I looked at a memory dump collected for a customer that is using MaxMind.Db, this is what I got:
0:000> !dumpheap -stat -type TypeAcivatorCreator
Statistics:
              MT Count TotalSize Class Name
00007ffa920f67e0     1        24 MaxMind.Db.TypeAcivatorCreator+<>c
00007ffa920f6500   147      3528 MaxMind.Db.TypeAcivatorCreator
Total 148 objects
So there were 147 instances of TypeAcivatorCreator. Note that this is only the number of instances alive at that moment; there might have been other instances that were already collected by the CLR.
Now it's clear why it has been performing badly. For seemingly every request, a new instance of TypeActivatorCreator is created, and therefore its internal cache is simply empty (it has just been created, after all). So each request goes through the expensive CreateActivator path, and performance suffers.
The obvious fix here is to make the dictionary static, or to make the TypeActivatorCreator class a singleton. I don't have the full source code of MaxMind.Db to determine which is better, but I'd lean toward the former.
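A minimal sketch of the static-dictionary idea – this is not MaxMind.Db's actual code, just an illustration of the pattern, with TypeActivator and the creation logic as stand-ins:
using System;
using System.Collections.Concurrent;

// Stand-in for the library's activator (a compiled factory in the real code)
internal delegate object TypeActivator(params object[] args);

internal class TypeActivatorCreator
{
    // Static: shared by every instance of TypeActivatorCreator, so the expensive
    // reflection + lambda compilation happens at most once per Type per process
    private static readonly ConcurrentDictionary<Type, TypeActivator> _activators =
        new ConcurrentDictionary<Type, TypeActivator>();

    internal TypeActivator GetActivator(Type type) =>
        _activators.GetOrAdd(type, CreateActivator);

    private static TypeActivator CreateActivator(Type type)
    {
        // The real library reflects over constructors and compiles a LambdaExpression;
        // a trivial placeholder is used here
        return args => Activator.CreateInstance(type, args);
    }
}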
Moral of the story:
Caching is very, very important, especially when you are dealing with reflection and lambda compilation
You can get it 99% right, but the remaining 1% can still destroy performance.
Update:
I reached out to the MaxMind.Db maintainers regarding this issue on November 9th, 2021.
About 6h later they replied with this
I was at first confused, then somewhat disappointed. It is a small fix that would improve overall performance for everyone, rather than relying on (or expecting) customers to do what the documentation says. But well, let's just say we have different opinions.
But because of its power and flexibility, it can be complicated to get right. People usually stop at making the query work. Performance is usually an afterthought, as it only becomes visible in the production environment, when there are enough requests to bring your database to its knees.
Let me be very clear about it: during my years of helping customers with performance issues (and you can guess, that is a lot of customers), order search is one of the most common – if not the most common – causes of database spikes.
As your commerce database is brought to its knees, your entire website performance suffers. Your response time suffers. Your visitors are unhappy and that makes your business suffer.
But what is so bad about order search?
Order search allows you to find orders by almost any criteria, and to do that, you often join with different tables in the database. Search for orders with specific line items? Join with the LineItem table on the CatalogEntryId column. Search for orders with a specific shipping method? Join with the Shipment table on the ShippingMethodId column, and so on. The SqlWhereClause and SqlMetaWhereClause properties of OrderSearchParameters are extremely flexible, and that is both a cure and a curse.
Let's examine the first example in closer detail. The query is easy to write. But did you know that there is no index on the CatalogEntryId column? That means every such order search ends up as a full table scan of LineItem.
There are two pieces of bad news in that: your LineItem table usually has many rows already, which makes that scan slow and resource intensive; and as it is an ever-growing table, the situation only gets worse over time.
That is only a start, and a simple one, because it can be resolved by adding an index on CatalogEntryId (see the sketch below). But there are more complicated cases where adding an index simply can't solve the problem – because there is no good one. For example, if you search for orders by a custom field of type bit: bit is essentially the worst type when it comes to index-ability, so your indexes will be much less effective than you want them to be, and a full table scan will likely be used anyway.
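For that simple case, the fix is along these lines (the index name is mine; verify it against your own schema and workload before adding it):
CREATE NONCLUSTERED INDEX IX_LineItem_CatalogEntryId
ON dbo.LineItem (CatalogEntryId)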
In short:
Order search is flexible and powerful. But “with great power comes great responsibility”. Think about what you join on in your SqlWhereClause and SqlMetaWhereClause statements, whether your query is covered by an index, and whether adding an index would make sense in your case (I have a few guidelines for a good index here: https://vimvq1987.com/index-or-no-index-thats-the-question/). Or whether you can limit the number of orders you search for.