“In 99% of the cases, premature optimization is the root of all evil”
This quote is usually attributed to Donald Knuth, widely regarded as the “father of the analysis of algorithms”. His actual quote is a bit different:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%.
If you have read my posts, you know that I always ask you to measure your application before diving into optimization. But that’s not the whole story. Without profiling, your optimization effort might be futile. However, there are things you can “optimize” right away without any profiling – because they are easy to do, they make your code simpler and easier to follow, and you can be certain they are faster.
Let’s see if you can spot the potentially problematic piece of code in this snippet:
public Something GetData()
{
    var market = list.FirstOrDefault(x => x.MarketId == GetCurrentMarket().MarketId);
    //do some stuff
}
If you are writing similar code, don’t be discouraged. It’s easy to overlook the problem – when you call FirstOrDefault, you actually iterate over the list until you find the first matching element. And for each and every iteration, GetCurrentMarket() will be called.
Because we can’t be sure where the matching element is – it might be the first element, the last, anywhere in between, or it might not exist at all – on average GetCurrentMarket will be called half as many times as the size of the list.
We don’t know if GetCurrentMarket is a very lightweight implementation, or if list is a very small set, but we do know that if this is in a very hot path, the cost can be (very) significant. These are the allocations made by said GetCurrentMarket:
This is a custom implementation of IMarketService – the default implementation is much more lightweight and should not be of concern. Of course, fewer calls are always better – no matter how quick something is.
In this specific example, simply calling GetCurrentMarket once and storing the result in a local variable used for the scope of the entire method should be enough. You don’t need profiling to make such an “optimization” (and as we proved, profiling only confirms our suspicion).
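A minimal sketch of that fix, reusing the names from the snippet above and assuming GetCurrentMarket() returns the same market for the duration of the call:
public Something GetData()
{
    // Resolve the current market once, outside the lambda, instead of once per list iteration
    var currentMarketId = GetCurrentMarket().MarketId;
    var market = list.FirstOrDefault(x => x.MarketId == currentMarketId);
    //do some stuff
}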
Moral of the story
For optimization, less is almost always more.
You definitely should profile before spending any considerable amount of time optimizing your code. But some things can be “optimized” without any profiling at all. Make them your habit.
Making espressos and espresso-based drinks at home is not only about the joy of a hobby, it is also an economical way of drinking high-quality coffee. Let’s talk about it.
An espresso at a cafe costs around 30kr, while a big latte costs around 45kr.
If you and your partner each drink two cups a day, that’s between 120kr and 180kr per day.
Assuming you drink 300 days a year, that’s around 36.000kr to 54.000kr per year for coffee.
Now, what if you make espressos at home?
Each double shot of espresso needs about 18gr of coffee, but we have to account for waste (for example when you dial in a new coffee), so let’s be conservative and assume that 1kg of coffee makes around 45 shots.
A good 1kg of coffee costs between 250kr and 400kr (specialty grade – and that is usually much better than what you are served in a normal cafe). So it’s about 5.5 to 8.9kr of coffee per drink.
A big latte needs around 250ml of milk (including waste), so 1.5l of milk makes 6 lattes. 1.5l of Arla standard 3% milk costs 17.9kr (as we always buy at Willys), so it’s 3kr per drink for milk.
Of course you need electricity to heat up the machine. My machine, an E61, uses around 0.6-0.7 kWh per day for 4 lattes. Electricity prices have gone up a bit; we are quite lucky to only pay a fixed price of 1.3kr/kWh, but even if you pay a bit more, say 1.5kr/kWh, it’s about 1kr per day for the machine.
And you need other things for cleaning and maintenance – you need a water softener. I use the Lelit 70l water softener which costs around 110kr each, and I change it every 2 months, which means almost 2kr/day. I also need Puly Caff for cleaning the machine and other things, but after 2 years I haven’t gone through one 900gr bottle yet (around 150kr), so that cost is very minimal.
Basically, it’s 22-36kr per day for coffee, 12kr per day for milk, 1kr per day for electricity, and 2kr per day for cleaning – around 37-51kr per day for 4 lattes.
Now that you have coffee at home you will drink more often – let’s say 365 days per year, because you also have friends coming over – that’s 12.410kr to 18.615kr per year.
Even with some fancy machines and equipment to start with, you would break even in about a year. That includes things like fancy cups, a WDT tool, a scale, etc.
Cost            | Buying               | Making
Machine cost    | N/A                  | 10.000kr – ∞
Per drink       | 30-45kr              | 8.5-12.5kr
Drinks per year | 4x per day, 300 days | 4x per day, 365 days
Cost for coffee | 36.000-54.000kr      | 12.410-18.615kr
Some might argue that making espressos also costs time, but going out for coffee does too – you need to walk down the street (assuming you have a cafe right around the corner) and wait for your coffee, and you need to factor in the time to put your outdoor clothes on and off.
Not to mention that the relaxing feeling of brewing espressos is priceless.
Of course, those numbers only apply if you drink coffee frequently. Things will change if you drink less, or more, or without milk.
Earlier we started a new series about performance optimization, here: Performance optimization – the hardcore series – part 1 – Quan Mai’s blog (vimvq1987.com). There are tons of places where things can go wrong. A seasoned developer can, from experience, avoid some obvious performance errors. But as we will soon learn, a small thing can make a huge impact if it is called repeatedly, and a big thing might be OK to use as long as it is called once.
Let’s take this example – what do you think about this snippet? CategoryIds is a list of strings converted from ContentReference:
if (CategoryIds.Any(x => new ContentReference(x).ToReferenceWithoutVersion() == contentLink))
{
//do stuff
}
If this is in a “cool” path that runs a few hundred times a day, you will be fine. It’s not “elegant”, but it works, and maybe you can get away with it. However, if it is in a hot path that is executed every time a visitor visits a product page on your website, it can create a huge problem.
And can you guess what it is?
new ContentReference(string) is fairly lightweight, but if it is called a lot, this is what happens. These are the allocations from the constructor alone, within only 220 seconds of the trace:
A lot of allocations which could have been avoided if CategoryIds were just an IEnumerable<ContentReference> instead of an IEnumerable<string>.
For comparison, this is how much 10.000 and 1.000.000 new ContentReference calls would allocate:
Things are similar if you use .ToReferenceWithoutVersion() to compare to another ContentReference (although to a lesser extent, as ToReferenceWithoutVersion returns the same ContentReference if the WorkId is 0, and it uses cloning instead of new). The correct way to compare two instances of ContentReference without caring about versions is to compare them using ContentReferenceComparer.IgnoreVersion, or CompareToIgnoreWorkID.
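A minimal sketch of that idea, assuming the category links can be parsed once and cached (the CategoryLinks property and BelongsTo method are hypothetical names). Parsing each id a single time avoids allocating a new ContentReference on every check, and CompareToIgnoreWorkID compares two references while ignoring the version:
private IReadOnlyList<ContentReference> _categoryLinks;

// Parse the string ids once; ideally CategoryIds would be stored as ContentReference to begin with
private IReadOnlyList<ContentReference> CategoryLinks =>
    _categoryLinks ??= CategoryIds.Select(x => new ContentReference(x)).ToList();

public bool BelongsTo(ContentReference contentLink)
{
    return CategoryLinks.Any(x => x.CompareToIgnoreWorkID(contentLink));
}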
Moral of the story
It is not only what you do, but also how you do it
Small things can make big impacts, don’t guess, measure!
Hi again everybody. New day – new thing to write about. Today we will talk about memory allocation and the effect it has on your website’s performance. With .NET, memory allocations are usually overlooked because the CLR handles them for you. Except in the rare cases where you handle unmanaged resources and have to be conscious about releasing those resources yourself, it’s usually a fire-and-forget approach.
The truth is, it’s more complicated than that. The more objects you create, the more memory you need, and the more time the CLR needs to clean up after you. While your code might execute blazing fast in your benchmarks, in reality your website might still struggle to perform well in the long run – and that’s because of garbage collection. Occasional GC is not a concern – it’s the nature of the .NET CLR – but frequent GC, especially Gen 2 GC, is definitely something you should look into and fix, because it negatively affects your website’s performance.
The follow-up question: how do you fix that?
Of course, the first step is always measuring the memory allocations of your app. Locally you can use something like JetBrains dotMemory to profile your website, but that has a big caveat – you can’t really mimic the actual traffic to your website. Sure, it is very helpful for profiling something like a scheduled job, but it is less than optimal for seeing how your website performs in reality. To do that, we need another tool, and I’ve found nothing better than Application Insights Profiler traces on Azure. It samples your website periodically, taking ETL (event trace log) files in 220-second chunks. (Note: depending on your .NET version, you might download a .diagsession or a .netperf.zip file from Application Insights, but they are essentially the same inside – a zipped .ETL.) Those files are extremely informative; they contain a whole load of information which might be overwhelming if you’re new, but take small steps and you’ll get there.
Note that once extracted, an ETL file can be very big – often in the 1GB or more range. PerfView has to go through all of that event log, so it’s extremely memory hungry as well, especially if you open multiple ETL files at once. My PerfView kept crashing on a 16GB RAM machine (I had several Visual Studio instances open); that was solved when I switched to 32GB RAM.
The first step is to confirm the allocation problems with GCStats (this is one of the extreme ones, but it does happen)
Two main things to look into – Total Allocs, i.e. the total size of objects allocated, and the time spent in garbage collection. They are naturally closely related, but not always. Total allocation might not be high while time spent in GC is – in the case of large object allocations (we will talk about that in a later post). For the purpose of memory allocation analysis, this is where you should look:
What you find in there might surprise you. And that’s the purpose of this series: to point out possible unexpected allocations that are easy – or fairly easy – to fix.
In this first post, we will talk about a somewhat popular feature – Injected<T>.
We all know that in Optimizely Content/Commerce, the preferred way of dependency injection is constructor injection. I.e. if your class has a dependency on a certain type, that dependency should be declared as a parameter of the constructor. That’s nice and all, but not always possible. For example, you might have a static class (used for extension methods) so no constructor is available. Or in some rare cases, you can’t add a new parameter to the constructor because it would be a breaking change.
Adding Injected<T> as a hidden dependency in your class at least works – so can you just forget about it?
Not quite!
This is how the use of Injected<T> results in allocations of StructureMap objects – yes, every time you call Injected<T>.Service, the whole dependency tree must be built again.
And that’s not everything. During that process, other objects need to be created as well. You can right click on a path and select “Include item”. The allocations below are for anything that was created by `module episerver.framework episerver.framework!EPiServer.ServiceLocation.Injected1[System.__Canon].get_Service()`, i.e. all object allocations related to Injected<T>:
You can expand further to see which Injected<T>s have the most allocations, and therefore are the ones that should be fixed.
How can one fix an Injected<T> then? The best fix is to make it a constructor dependency, but that might not always be possible. An alternative fix is to use ServiceLocator.GetInstance, and to store the result in a static variable if possible. That way you won’t have to call Injected<T>.Service every time you need the instance.
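A rough sketch of those two fixes – constructor injection where possible, and a cached static instance where a constructor is not available (IMarketService and the class names here are only illustrative examples, and the static cache assumes the resolved service is effectively a singleton):
// Preferred: declare the dependency as a constructor parameter
public class PriceCalculator
{
    private readonly IMarketService _marketService;

    public PriceCalculator(IMarketService marketService)
    {
        _marketService = marketService;
    }
}

// When no constructor is available (e.g. a static extension class), resolve once and cache
// the instance, instead of calling Injected<T>.Service on every use
public static class MarketExtensions
{
    private static readonly Lazy<IMarketService> _marketService =
        new Lazy<IMarketService>(() => ServiceLocator.Current.GetInstance<IMarketService>());

    public static bool IsKnownMarket(this MarketId marketId)
    {
        // The service is resolved only once, on first use
        return _marketService.Value.GetMarket(marketId) != null;
    }
}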
There are cases where you indeed need a new instance every time; then the fix might be more complicated, and you might want to check whether you need the whole dependency tree or just a data object.
Moral of the story
Performance can’t be guessed, it must be measured
Injected<T> is not your good friend. You can use it if you have no other choice, but definitely avoid it in hot paths.
With the Find-backed IEntrySearchService from the previous post, we can now put SearchProvider to rest. There are, however, parts of the framework that still rely on SearchManager, which expects a configured, working SearchProvider – the Full search index job and the Incremental search index job are two examples. To make sure we don’t break the system, we might want to give SearchManager something to chew on: a do-nothing SearchProvider, that is!
And we need a DoNothingSearchProvider
public class DoNothingSearchProvider : SearchProvider
{
public override string QueryBuilderType => GetType().ToString();
public override void Close(string applicationName, string scope) { }
public override void Commit(string applicationName) { }
public override void Index(string applicationName, string scope, ISearchDocument document) { }
public override int Remove(string applicationName, string scope, string key, string value)
{ return 42; }
public override void RemoveAll(string applicationName, string scope)
{
}
public override ISearchResults Search(string applicationName, ISearchCriteria criteria)
{
return new SearchResults(new SearchDocuments(), new CatalogEntrySearchCriteria());
}
}
And a DoNothingIndexBuilder
public class DoNothingIndexBuilder : ISearchIndexBuilder
{
public SearchManager Manager { get; set; }
public IndexBuilder Indexer { get; set; }
public event SearchIndexHandler SearchIndexMessage;
public void BuildIndex(bool rebuild) { }
public bool UpdateIndex(IEnumerable<int> itemIds) { return true; }
}
What remains is simply to register them in your appsettings.json.
If you have been using Find, you might be surprised to learn that the CSR UI uses the SearchProvider internally. This is a bit unfortunate because you are likely using Find, and that creates unnecessary complexity. For starters, you need to configure a SearchProvider, then you need to index the entries, separately from the Find index. If you install EPiServer.CloudPlatform.Commerce, it will set up the DXPLucenceSearchProvider for you, which is basically a wrapper around LuceneSearchProvider to let it work on DXP (i.e. Azure storage). But even with that, you still have to index your entries. You can use FindSearchProvider, but that just creates another problem – it uses a different index than Find, so you double your index count, and you still have to make sure your content is indexed. Is there a better way – to use the content Find has already indexed?
Yes, there is
Searches for entries in CSR are done by IEntrySearchService, whose default implementation uses the configured SearchProvider internally. Fortunately for us, as with most things in Commerce, we can create our own implementation and inject it. That comes with a caveat – IEntrySearchService is marked with a BETA remark, so prepare for breaking changes without prior notice. However, it has not changed much since its inception (funny thing: when I checked its history, I was the one who created it 6 years ago, in 2017. Feeling old now), and if it does change, it would be quite easy to adapt.
IEntrySearchService is a simple interface with just one method.
It is a bit weird that it returns an IEnumerable<int> (what was I thinking?), but it was likely created as scaffolding around SearchManager.Search, which returns an IEnumerable<int>, and never updated. Anyway, an implementation using Find would look something like this:
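The original implementation is not included here, so the following is only a rough, simplified sketch of the idea – the IEntrySearchService method name and parameters are assumed for illustration (the real interface may differ), and the FilterPriceAvailableForCurrency part mentioned below is left out:
public class FindEntrySearchService : IEntrySearchService
{
    private readonly IClient _findClient;

    public FindEntrySearchService(IClient findClient)
    {
        _findClient = findClient;
    }

    // Assumed signature: search entries by keyword and return their content ids
    public IEnumerable<int> Search(string keywords)
    {
        return _findClient.Search<EntryContentBase>()
            .For(keywords)
            .Take(100)
            .GetContentResult()
            .Select(x => x.ContentLink.ID)
            .ToList();
    }
}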
Note that I am not an expert on Find, especially on NestedFilterExpression, so my FilterPriceAvailableForCurrency might be wrong. Feel free to correct it; the code is not copyrighted and is provided as-is.
As always, you need to register this implementation for IEntrySearchService. You can add it anywhere you like, as long as it’s after .AddCommerce().
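For example (a minimal sketch, using the hypothetical FindEntrySearchService name from the sketch above):
services.AddTransient<IEntrySearchService, FindEntrySearchService>();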
If you are looking for an espresso machine in the $3000 range (or around €2500 if you are in the EU – this is one of the wins for Europeans), you will most likely end up with a battle between these three. They are probably the most popular options in this price range, and rightly so. The prices are fairly comparable, with the Profitec Pro 700 being the cheapest in the US (by around $200) and the Lelit Bianca v3 the cheapest in the EU (also by around €200). I did quite intensive research on the topic, and finally came to a conclusion (spoiler alert at the end of this post).
If you didn’t know already, Profitec is a subsidiary of ECM. The Pro 700 is still made in Milan, Italy, but it shares a lot of its design with the ECM Synchronika – basically two siblings except for some cosmetic differences. I would expect them to perform very similarly. For easier comparison, I will compare the Bianca and the Synchronika. Let’s go through the pros and cons of each, and hopefully it will help you come to a decision.
On paper they are very similar espresso machines. Both are E61, dual boiler machines that target home enthusiasts.
Build quality
This is no contest. The Synchronika is a clear winner, the Pro 700 comes second, and the Bianca comes last. It is not only that the Synchronika has better fit and finish, it also has a clean internal layout which is like an engineer’s dream. Whole Latte Love has several deep-dive videos on that, and it means that if you ever need to service your Synchronika yourself, you will easily know where to go and what to check or change.
The Bianca’s fit and finish is a notch below, and its internals are pretty cramped – more on that below.
To be very clear, the Bianca’s build quality is definitely more than decent, and it would last you a very long time with proper care. The cramped inside has two reasons – its smaller size and its extra features.
Size and look
Of all three, the Lelit Bianca is the smallest, and it is the only one that comes with wood (walnut) knobs and wand finishes by default, while the others come with hard black plastic. It is only 29cm wide and 40cm deep. Both the ECM and the Profitec are noticeably larger, with the former being 33.5cm wide and 49cm deep, and the latter 34cm and 47cm, respectively.
While looks are definitely subjective, make sure each of these machines can fit into your coffee station, whether that is under your cupboard or elsewhere. One of the biggest selling points of the Bianca is the movable water tank – you can put it at the back, left or right. All three can be plumbed in so you can put your days of refilling water behind you, but sometimes plumbing is not an option, and being able to move the tank is a huge plus. That was my biggest complaint about the Lelit Elizabeth, and here it is solved easily.
Start up time
If you have a very stable schedule every day, start-up time might not be a concern – you can use a smart power plug and schedule it to turn on your coffee machine at a fixed time every day. But let’s be very clear here: all these machines take a significantly long time to be fully heated. Not only do they have to heat up both boilers to temperature and let them stabilize, they also need to heat up the E61 group head via the thermosiphon (Barista Hustle explains it in great detail here: EM 3.04 How the E61 Thermosyphon Works – Barista Hustle – basically, hot water flows through the head to heat it up). The E61 group is very heavy – like 4kg heavy – so it’s important to get it hot, so the water does not lose too much temperature during brewing.
The Synchronika takes significantly longer to heat up. In Kaffeemacher’s test, it takes a whopping 35 minutes before it can pull 5 shots in a row without failing (i.e. without missing the targeted temperature).
That is double what the Bianca v3 needs.
That means you can start pulling shots 16 minutes earlier on the Bianca. That’s impressive. If you want to brew lighter roasts which need higher temperatures, say 96°C, unofficial and unscientific tests suggest the Bianca is ready in an even shorter time (12 minutes), based on the indication on the PID. It’s not breaking any records, but for an E61, that’s nothing to be sniffed at.
Temperature stability
This is an interesting test. Thanks to Kaffeemacher we have measurements from both machines, and it’s a tie.
The ECM Synchronika has more stability during the shot, i.e. with a 25s shot, the temperature between 5s and 25s stays closer to a straight line (albeit hotter toward the end). Shot after shot, however, it tends to run under temperature after being idle for some time.
The Lelit Bianca has more stability between shots. The temperature within a shot fluctuates a bit, but does not rise as much as with the Synchronika. You can, however, adjust the PID with settings like the temperature offset to get even better stability, especially after your machine has been idle for a while.
Features
Bianca, hands down.
The Bianca comes with flow control by default. The ECM Synchronika and Profitec Pro 700 can be retrofitted with the E61 flow control package, which costs around $200 more, plus installation. As most people have commented, the Bianca’s flow control feels more natural and nicer to use. That is of course subjective, but it is not too surprising: the Bianca’s flow control has ~200 degrees of travel from fully open to fully closed, while the E61 flow control has ~720 degrees. The latter allows somewhat finer tuning, but it is less intuitive to use.
The Bianca can pre-infuse even with the water tank, while the ECM and Profitec need to be plumbed in to pre-infuse (using line pressure). The Bianca v3 also has low-flow settings which make pre-infusion even more flexible. You can pre-infuse in any way you like.
Lelit is also known for making their Lelit Control Center (LCC) settings available to end users, so you can fine-tune your machine even further – most notably the temperature offset (between the boiler and the targeted temperature at the group head), so you can dial in the brew temperature you want.
Conclusion
When I bought my Lelit Elizabeth, I thought of the Bianca as something I wanted but couldn’t get, and that if I upgraded, I would pick it. After two years, when I finally decided to upgrade, for some reason I skipped the Bianca. I almost decided to go with the Synchronika, but slowly and steadily the Bianca won me back. And I will soon be one of its owners.
With that said, you can’t go wrong with any of these options. They are the most popular options in their price range, and there’s a reason for that – they are that good.
It’s not a secret: I love optimizing things. In a sense, I am both an Optimizer (literally) and an optimizer. And today we will go back to basics – optimizing a tricky SQL query.
The query in question is the stored procedure ecf_CatalogNode_GetAllChildNodes, which is used to get all child nodes of specific nodes. Among other things, it is used to find all entries that are direct or indirect children of specific nodes. Why, you might ask? Because when you change the URL segment of a node, you want to make sure that all entries under that node have their indexed objects refreshed.
Let’s take a look at this stored procedure; this is how it looks:
CREATE PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
@catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
WITH all_node_relations AS
(
SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
WHERE ParentNodeId > 0
UNION
SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
),
hierarchy AS
(
SELECT
n.CatalogNodeId,
'|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM @catalogNodeIds n
UNION ALL
SELECT
children.ChildNodeId AS CatalogNodeId,
parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM hierarchy parent
JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
)
SELECT CatalogNodeId FROM hierarchy
END
I previously wrote about the relations between entities in the Commerce catalog here: Commerce relation(ship), a story – Quan Mai’s blog (vimvq1987.com). Relations between nodes can be a bit complicated – a node has one true parent defined in the CatalogNode table, and then other “linked” nodes in CatalogNodeRelation. So to find all children – and grandchildren – of a node, you need to look at both.
Getting the children of a node from CatalogNode or CatalogNodeRelation is simple, but things become more complicated when you have to get grandchildren, then great-grandchildren, and so on. For that, a CTE needs to be used recursively. But then a problem arises – there is a chance, small, but real, that the data was added in an incorrect way, so a circular reference is possible, i.e. A is a parent of B, which is a parent of C, which in turn is a parent of A. To stop the SP from running forever, a check needs to be added to cut any circular reference short.
This brings back memories, as the first ever support case I worked on at Optimizely (then Episerver) involved a circular reference. The site would crash whenever someone visited the catalog management in Commerce Manager. That was around June 2012 (feeling old now?). My “boss” at the time involuntarily volunteered me for the case. See what you made me do, boss.
Now that you grasp the basics of what the SP does, let’s get back to the original problem: it is slow, especially with a big catalog and a complex node structure. As always, to optimize anything you need to find the bottleneck – time to fire up SQL Server Management Studio and turn on the Actual Execution Plan.
I decided to go with 66, the “root” catalog node. This query yields around 18k rows.
Mind you, this is on my machine with a pretty powerful CPU (AMD Ryzen 7 5800X, 8 cores 16 threads) and a very fast NVMe PCIe SSD (Western Digital Black SN850 2TB). If this were executed on an Azure SQL database, for example, a timeout is almost guaranteed. So execution times should only be compared relative to each other.
If we look at the execution plan, it is quite obvious where the bottleneck is. A scan on the CatalogNode table is heavy (it read 79M rows in that operation). As suggested by Anders in Timeout when deleting CatalogNodes from a large catalog (optimizely.com), adding a non-clustered index on the ParentNodeId column improves it quite a lot. And indeed it does: the execution time is reduced to 5 seconds.
And the number of rows read on CatalogNode is reduced to just 17k.
This is of course a very nice improvement. But the customer reported that it is not enough and the SP is still timing out, i.e. further optimization is needed.
Naturally, the next step is to see if we can skip the circular reference check. It was added as a safety measure against bad data. It arguably should not be there – the check should be performed at data modification time – but it is there for historical reasons and we can’t just change it, not trivially. So let’s try it out of curiosity.
The modified query looks like this (basically just commenting out any code related to CyclePrevention):
ALTER PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
@catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
WITH all_node_relations AS
(
SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
WHERE ParentNodeId > 0
UNION
SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
),
hierarchy AS
(
SELECT
n.CatalogNodeId
--, '|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM @catalogNodeIds n
UNION ALL
SELECT
children.ChildNodeId AS CatalogNodeId
--, parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM hierarchy parent
JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
--WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
)
SELECT CatalogNodeId FROM hierarchy
END
And the improvement is quite impressive (more than I expected): the query completes almost instantly (less than 1s), and the reads on CatalogNodeRelation are significantly reduced.
A word of warning here: execution plans can’t simply be compared as-is. If I run the two versions side by side, the comparison is quite misleading.
Even though the top one (without the circular reference check) is much faster than the original (the bottom one), SQL Server estimates that the first is slower (almost 2x slower than the second). So the execution plan should be used to see what has been done and what is likely the bottleneck inside a query; it should not be used as a comparison between queries. In most cases, comparing statistics using SET STATISTICS IO ON is the better way to compare.
If it weren’t for the fact that we are changing the behavior of the stored procedure, I would be happy with this approach. The chance of running into a circular reference is small, but it is not zero. As we said, we could in theory guard the relations during insert/update, but that would be too big a change to start with. This is one of the constraints of working at framework level – we have to step carefully so as not to break anything. A breaking change is bad, but data corruption is simply unacceptable. I spent a few hours (probably more than I should have) trying to optimize the circular reference check, but found no better solution.
The next approach is, as you can guess, to get rid of the Clustered Index Scan on the CatalogNodeRelation table. The solution is quite simple: a non-clustered index on ParentNodeId should be enough.
Great success. The performance is comparable with the “no circular reference check” approach.
And adding an index is a non-breaking change (it can cause a performance regression in some cases, like in A curious case of SQL execution plan – Quan Mai’s blog (vimvq1987.com), but that is rare, and in this case the values of ParentNodeId are most likely well distributed).
That is all for today. Hopefully you learned a thing or two about optimizing queries in your daily work.
I was asked this question: we have about 3TB of assets, is there any way to clean them up?
These days storage is cheap, but still not free. And big storage means you need space for backups, and with that, bandwidth and time.
Is there a way to clean up things you no longer need?
Yes!
Optimizely Content already has a scheduled job named Remove Abandoned BLOBs, but that job only removes blobs that have no content associated, i.e. the content was deleted by IContentRepository.Delete but the blob was left behind. The job uses the log to find out which contents were deleted, then finds those blobs.
How about the assets that still have content associated with them, but are not used anywhere? Time to get your hands dirty!
Due to the nature of this task, it is best to make it a scheduled job.
All of the assets are children under the global asset root. By iterating over them, we can check whether each of them is being used by any other content. If not, we add it to a list for later deletion. Before deleting the content, we find the associated blob so we can delete that as well. Easy, right?
To get the content recursively, we use this little piece of code:
public virtual IEnumerable<T> GetAssetRecursive<T>(ContentReference parentLink, CultureInfo defaultCulture) where T : MediaData
{
foreach (var folder in LoadChildrenBatched<ContentFolder>(parentLink, defaultCulture))
{
foreach (var entry in GetAssetRecursive<T>(folder.ContentLink, defaultCulture))
{
yield return entry;
}
}
foreach (var entry in LoadChildrenBatched<T>(parentLink, defaultCulture))
{
yield return entry;
}
}
private IEnumerable<T> LoadChildrenBatched<T>(ContentReference parentLink, CultureInfo defaultCulture) where T : IContent
{
var start = 0;
while (!_isStopped)
{
var batch = _contentRepository.GetChildren<T>(parentLink, defaultCulture, start, 50);
if (!batch.Any())
{
yield break;
}
foreach (var content in batch)
{
// Skip linked content to avoid including it multiple times when traversing the tree
if (!parentLink.CompareToIgnoreWorkID(content.ParentLink))
{
continue;
}
yield return content;
}
start += 50;
}
}
And we start from SiteDefinition.Current.GlobalAssetsRoot, using IContentRepository.GetReferencesToContent to see if each asset is used by any content (both CMS and Catalog). If not, we add it to a list. Later, we use IPermanentLinkMapper to see if it has an associated blob, and delete that as well:
foreach (var asset in GetAssetRecursive<MediaData>(SiteDefinition.Current.GlobalAssetsRoot, CultureInfo.InvariantCulture))
{
totalAsset++;
if (!_contentRepository.GetReferencesToContent(asset.ContentLink, false).Any())
{
toDelete.Add(asset.ContentLink.ToReferenceWithoutVersion());
}
if (toDelete.Count >= 50)
{
var maps = _permanentLinkMapper.Find(toDelete);
foreach (var map in maps)
{
deletedAsset++;
_contentRepository.Delete(map.ContentReference, true, EPiServer.Security.AccessLevel.NoAccess);
var container = Blob.GetContainerIdentifier(map.Guid);
//Probably redundant, can just delete directly
var blob = _blobFactory.GetBlob(container);
if (blob != null)
{
_blobFactory.Delete(container);
}
OnStatusChanged($"Deleting asset with id {map.ContentReference}");
}
toDelete.Clear();
}
}
We need one more round of deletion after the foreach loop to clean up the leftovers (and to handle the case where we have fewer than 50 abandoned assets in total).
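That final round is just the same batch-delete block again, applied to whatever is still left in toDelete once the iteration finishes (here deleting the blob directly, as the comment above suggests):
var maps = _permanentLinkMapper.Find(toDelete);
foreach (var map in maps)
{
    deletedAsset++;
    _contentRepository.Delete(map.ContentReference, true, EPiServer.Security.AccessLevel.NoAccess);
    var container = Blob.GetContainerIdentifier(map.Guid);
    _blobFactory.Delete(container);
    OnStatusChanged($"Deleting asset with id {map.ContentReference}");
}
toDelete.Clear();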
And we’re done!
Testing this job is simple – upload a few assets to your CMS, do not use them anywhere, then run the job. It should delete those assets.
Things to improve: we might want to make sure that only assets created more than a certain number of days ago are deleted. That allows editors to upload assets for later use without having to use them immediately.
Carts are meant to be validated. Prices change, customers add more quantity than allowed, promotions expire, stock runs out, etc. All kinds of things mean the items in a cart need validation to make sure they are up to date and ready to be converted into an order.
However, there are cases when you don’t want your carts to be validated. The most common case is, of course, the wish list – a special cart that customers add items to just to keep track of them. You certainly don’t want to touch it. Another example is a quote – when you offer a specific item at a specific price to a customer, and you don’t want it to be automatically changed back to the public price, which differs from the quoted one.
By default, when a cart is validated, these things are done:
Remove items that are no longer available (either deleted or end of line)
Update prices of the items to the latest applicable prices, or remove items that have no prices
Update quantities of the items (to comply with the settings or the in-stock quantity), or remove items that are out of stock
Apply promotions
Update quantities again (as promotions can do things like adding free items to the cart)
There are two ways you can stop carts from being validated; let’s see what we can do.
The “Wish list names” route
With OrderOptions you can exempt carts with certain wishlist names from validation, using WishListCartNames. By default it’s only “Wishlist”, but you can set several names using a comma separator, like this:
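A minimal sketch, assuming WishListCartNames is the comma-separated string described above and that the OrderOptions instance is resolved from the service container (“Quote” is just an example name):
var orderOptions = ServiceLocator.Current.GetInstance<OrderOptions>();
orderOptions.WishListCartNames = "Wishlist,Quote";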
However, there is a caveat: with this approach, carts with those names will not be shown in Order Management. (You can change that if you want, but it is not an easy or quick task.)
The OrderValidationService route
The validation of carts (or rather, of order types in general) is done by OrderValidationService, and that class is meant to be extended if necessary. Here is how you would skip validating carts named “Quote”, using OrderValidationService:
public class CustomOrderValidationService : OrderValidationService
{
public CustomOrderValidationService(ILineItemValidator lineItemValidator, IPlacedPriceProcessor placedPriceProcessor, IPromotionEngine promotionEngine, IInventoryProcessor inventoryProcessor, OrderOptions orderOptions) : base(lineItemValidator, placedPriceProcessor, promotionEngine, inventoryProcessor, orderOptions)
{
}
public override IDictionary<ILineItem, IList<ValidationIssue>> ValidateOrder(IOrderGroup orderGroup)
{
if (orderGroup.Name.Equals("Quote", System.StringComparison.OrdinalIgnoreCase))
{
return new Dictionary<ILineItem, IList<ValidationIssue>>();
}
return base.ValidateOrder(orderGroup);
}
}
And as OrderValidationService is registered via the ServiceConfiguration attribute, you can register your implementation like this:
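The registration snippet itself is not included here; with the standard service collection, one way to override the default registration would be along these lines (a sketch – pick a lifetime matching the default registration):
services.AddTransient<OrderValidationService, CustomOrderValidationService>();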
One caveat though: changes to OrderValidationService apply to the entire website, so make sure they are actually the ones you want site-wide, not just in specific places.