Performance optimization – the hardcore series – part 1

Hi again everybody. New day – new thing to write about. Today we will talk about memory allocation, and the effect it has on your website performance. With .NET, memory allocations are usually overlooked because the CLR handles that for you. Except in the rare cases where you need to handle unmanaged resources, and have to be conscious about releasing those resources yourself, it's usually a fire-and-forget approach.

The truth is, it is more complicated than that. The more objects you create, the more memory you need, and the more time the CLR needs to clean up after you. While you might have written code that executes blazing fast in your benchmarks, in reality your website might still struggle to perform well in the long run – and that's because of garbage collection. Occasional GC is not a concern – it's the nature of the .NET CLR – but frequent GC, especially Gen 2 GC, is definitely something you should look into and fix, because it negatively affects your website performance.

The follow-up question is – how do you fix that?

Of course, the first step is always measuring the memory allocations of your app. Locally you can use something like JetBrains dotMemory to profile your website, but that has a big caveat – you can't really mimic the actual traffic to your website. Sure, it is very helpful for profiling something like a scheduled job, but it is less than optimal for seeing how your website performs in reality. To do that, we need another tool, and I've found nothing better than Application Insights Profiler traces on Azure. It will sample your website periodically, capturing ETL (event trace log) files over 220 seconds. (Note: depending on your .NET version, you might download a .diagsession or a .netperf.zip file from Application Insights, but they are essentially the same inside – zipped .ETL.) Those files are extremely informative; they contain a whole load of information, which might be overwhelming if you're new, but take small steps and you'll get there.

To open an ETL file, the common tool is PerfView (microsoft/perfview: PerfView is a CPU and memory performance-analysis tool (github.com)). Yes, it has a certain 2000s look, like other analysis tools (remember WinDbg?), but it is fast, efficient, and gets the job done.

Note that, once extracted, ETL files can be very big – often in the 1GB-or-more range. PerfView has to go through all of that event log, so it's extremely memory hungry as well, especially if you open multiple ETL files at once. My PerfView kept crashing when I had a 16GB RAM machine (I had several Visual Studio instances open), and that was solved when I switched to 32GB RAM.

The first step is to confirm the allocation problems with GCStats (this is one of the extreme ones, but it does happen)

There are two main things to look into – Total Allocs, i.e. the total size of objects allocated, and the time spent in garbage collection. They are naturally closely related, but not always – total allocation might not be high while GC time is, for example with large object allocations (we will talk about that in a later post). Then, for the purpose of memory allocation analysis, this is where you should look.

What you find in there might surprise you. And that's the purpose of this series: to point out possible unexpected allocations that are easy – or fairly easy – to fix.

In this first post, we will talk about a somewhat popular feature – Injected<T>.

We all know that in Optimizely Content/Commerce, the preferred way of dependency injection is constructor injection. I.e. if your class has a dependency on a certain type, that dependency should be declared as a parameter of the constructor. That's nice and all, but not always possible. For example, you might have a static class (used for extension methods), so no constructor is available. Or, in some rare cases, you can't add a new parameter to the constructor because it would be a breaking change.

Adding Injected<T> as a hidden dependency in your class at least works, so can you just forget about it?

Not quite!

This is how the use of Injected<T> results in allocations of StructureMap objects – yes, every time you call Injected<T>.Service, the whole dependency tree must be built again.

And that's not everything: during that process, other objects need to be created as well. You can right click on a path and select "Include item". The allocations below are for anything that was created by module episerver.framework episerver.framework!EPiServer.ServiceLocation.Injected`1[System.__Canon].get_Service(), i.e. all object allocations related to Injected<T>.

You can expand further to see which Injected<T>s have the most allocations, and therefore are the ones that should be fixed.

How can one fix an Injected<T> then? The best fix is to make it a constructor dependency, but that might not always be possible. An alternative fix is to use ServiceLocator.GetInstance, and to make that field static if possible, as shown below. That way you won't have to call Injected<T>.Service every time you need the instance.
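Here is a minimal sketch of both options. IPromotionLoader and its method are hypothetical names, used purely for illustration; the point is that the dependency is resolved once instead of on every call.

    // Hypothetical dependency, used only to illustrate the pattern
    public interface IPromotionLoader
    {
        IEnumerable<string> GetActivePromotionNames();
    }

    // Preferred: constructor injection – the container resolves the dependency for you
    public class PromotionBannerViewModelFactory
    {
        private readonly IPromotionLoader _promotionLoader;

        public PromotionBannerViewModelFactory(IPromotionLoader promotionLoader)
        {
            _promotionLoader = promotionLoader;
        }
    }

    // Fallback for static classes: resolve once via ServiceLocator and cache the instance,
    // instead of building the dependency tree on every Injected<T>.Service call
    public static class PromotionHelper
    {
        private static readonly Lazy<IPromotionLoader> _promotionLoader =
            new Lazy<IPromotionLoader>(() => ServiceLocator.Current.GetInstance<IPromotionLoader>());

        public static IEnumerable<string> GetActivePromotionNames()
            => _promotionLoader.Value.GetActivePromotionNames();
    }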

There are cases where you indeed need a new instance every time; then the fix might be more complicated, and you might want to check whether you need the whole dependency tree, or just a data object.

Moral of the story

  • Performance can’t be guessed, it must be measured
  • Injected<T> is not your friend. You can use it if you have no other choice, but definitely avoid it in hot paths.

The do nothing SearchProvider

With the Find-backed IEntrySearchService from the previous post, we can now put SearchProvider to rest. There are, however, parts of the framework that still rely on SearchManager, and it expects a configured, working SearchProvider. The Full search index job and the Incremental search index job are two examples. To make sure we don't break the system, we might want to give SearchManager something to chew on. A do-nothing SearchProvider, that is!

And we need a DoNothingSearchProvider

    public class DoNothingSearchProvider : SearchProvider
    {
        public override string QueryBuilderType => GetType().ToString();

        public override void Close(string applicationName, string scope) { }
        public override void Commit(string applicationName) { }
        public override void Index(string applicationName, string scope, ISearchDocument document) { }
        public override int Remove(string applicationName, string scope, string key, string value)
        { return 42; }

        public override void RemoveAll(string applicationName, string scope)
        {
        }
        public override ISearchResults Search(string applicationName, ISearchCriteria criteria)
        {
            return new SearchResults(new SearchDocuments(), new CatalogEntrySearchCriteria());
        }
    }

    

And a DoNothingIndexBuilder

    public class DoNothingIndexBuilder : ISearchIndexBuilder
    {
        public SearchManager Manager { get; set; }
        public IndexBuilder Indexer { get; set; }

        public event SearchIndexHandler SearchIndexMessage;

        public void BuildIndex(bool rebuild) { }
        public bool UpdateIndex(IEnumerable<int> itemIds) { return true; }
    }

What remains is simply to register them in your appsettings.json:

    "SearchOptions": {
      "DefaultSearchProvider": "DoNothingSearchProvider",
      "MaxHitsForSearchResults": 1000,
      "IndexerBasePath": "[appDataPath]/Quicksilver/SearchIndex",
      "IndexerConnectionString": "",
      "SearchProviders": [
        {
          "Name": "DoNothingSearchProvider",
          "Type": "EPiServer.Reference.Commerce.Site.Infrastructure.Indexing.DoNothingSearchProvider, EPiServer.Reference.Commerce.Site",
          "Parameters": {
            "queryBuilderType": "EPiServer.Reference.Commerce.Site.Infrastructure.Indexing.DoNothingSearchProvider, EPiServer.Reference.Commerce.Site",
            "storage": "[appDataPath]/Quicksilver/SearchIndex",
            "simulateFaceting": "true"
          }
        }
      ],
      "Indexers": [
        {
          "Name": "catalog",
          "Type": "EPiServer.Reference.Commerce.Site.Infrastructure.Indexing.DoNothingIndexBuilder, EPiServer.Reference.Commerce.Site"
        }
      ]
    },

And that’s it.

Use Find for CSR UI

If you have been using Find, you might be surprised to learn that the CSR UI uses the SearchProvider internally. This is a bit unfortunate, because if you are using Find, that creates unnecessary complexity. For starters, you need to configure a SearchProvider, and then you need to index the entries, separately from the Find index. If you install EPiServer.CloudPlatform.Commerce, it will set up the DXPLucenceSearchProvider for you, which is basically a wrapper of the LuceneSearchProvider that lets it work on DXP (i.e. Azure storage). But even with that, you still have to index your entries. You can use FindSearchProvider, but that actually just creates another problem – it uses a different index than Find, so you double your index count, yet you still have to make sure your content is indexed. Is there a better way – to use the existing Find-indexed content?

Yes, there is

Searches for entries in CSR are done by IEntrySearchService, whose default implementation uses the configured SearchProvider internally. Fortunately for us, as with most things in Commerce, we can create our own implementation and inject it. Now, there is a caveat – IEntrySearchService is marked with a BETA remark, so be prepared for breaking changes without prior notice. However, it has not changed much since its inception (funny thing: when I checked its history, I was the one who created it 6 years ago, in 2017 – feeling old now), and if it changes, it would be quite easy to adapt to such changes.

IEntrySearchService is a simple interface with just one method:


IEnumerable<int> Search(string keyword, MarketId marketId, Currency currency, string siteId);

It is a bit weird to return an IEnumerable<int> (what was I thinking?), but it was likely created as scaffolding around SearchManager.Search, which returns an IEnumerable<int>, and was not updated later. Anyway, an implementation using Find should look like this:

    public class FindEntrySearchService : IEntrySearchService
    {
        private EPiServer.Find.IClient _searchClient;

        public FindEntrySearchService(EPiServer.Find.IClient searchClient) => _searchClient = searchClient;

        public IEnumerable<int> Search(string keyword, MarketId marketId, Currency currency, string siteId)
        {
            return _searchClient.Search<EntryContentBase>()
                 .For(keyword)
                 .Filter(x => x.MatchMarketId(marketId))
                 .Filter(x => x.SiteId().Match(siteId))
                 .Filter(x => FilterPriceAvailableForCurrency<IPricing>(y => y.Prices(), currency))
                 .GetResult()
                 .Select(x => x.ContentLink.ID);
        }

        public FilterExpression<Price> FilterPriceAvailableForCurrency<T>(Expression<Func<T, IEnumerable<Price>>> prices, Currency currency)
        {
            var currencyCode = currency != null ? currency.CurrencyCode : string.Empty;

            return new NestedFilterExpression<T, Price>(prices, price => price.UnitPrice.Currency.CurrencyCode.Match(currencyCode), _searchClient.Conventions);
        }
    }

Note that I am not an expert on Find, especially on NestedFilterExpression, so my FilterPriceAvailableForCurrency might be wrong. Feel free to correct it, the code is not copyrighted and is provided as-is.

As always, you need to register this implementation for IEntrySearchService. You can add it anywhere you like as long as it’s after .AddCommerce.

_services.AddSingleton<IEntrySearchService, FindEntrySearchService>();

Optimizing an interesting query

It's not a secret: I love optimizing things. In a sense, I am both an Optimizer (literally) and an optimizer. And today we will go back to basics – optimizing a tricky SQL query.

The query in question is this particular stored procedure, ecf_CatalogNode_GetAllChildNodes, which is used to get all child nodes of specific nodes. It is used, among other things, to find all entries that are direct or indirect children of specific nodes. Why, you might ask? Because when you change the URL segment of a node, you want to make sure that all entries under that node have their indexed objects refreshed.

Let's take a look at this stored procedure – this is what it looks like:

CREATE PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
    @catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
    WITH all_node_relations AS 
    (
        SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
        WHERE ParentNodeId > 0
        UNION
        SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
    ),
    hierarchy AS
    (
        SELECT 
            n.CatalogNodeId,
            '|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
        FROM @catalogNodeIds n
        UNION ALL
        SELECT
            children.ChildNodeId AS CatalogNodeId,
            parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
        FROM hierarchy parent
        JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
        WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
    )
    SELECT CatalogNodeId FROM hierarchy
END

I previously wrote about the relations between entities in the Commerce catalog, here: Commerce relation(ship), a story – Quan Mai's blog (vimvq1987.com). Relations between nodes can be a bit complicated – a node can have one true parent defined in the CatalogNode table, and then other "linked" nodes in CatalogNodeRelation. So to find all children – and grandchildren – of a node, you need to get data from both.

Getting the children of a node from CatalogNode or CatalogNodeRelation is simple, but things become more complicated when you have to get grandchildren, then great-grandchildren, and so on, and so forth. For that, a CTE needs to be used in a recursive way. But then a problem arises – there is a chance, small, but still there, that the data was added in an incorrect way, so a circular reference is possible. I.e. A is a parent of B, which is a parent of C, which is itself a parent of A. To stop the SP from running forever, a check needs to be added to make sure any circular reference is cut short.

This brings back memories, as the first-ever support case I worked on at Optimizely (then Episerver) was about a circular reference. The site would crash whenever someone visited the catalog management in Commerce Manager. That was around June 2012 (feeling old now?). My "boss" at the time involuntarily volunteered me for the case. See what you made me do, boss.

Now that you can grasp the basics of what the SP does, let's get back to the original problem: it's slow to run, especially with a big catalog and a complex node structure. As always, to optimize anything you need to find the bottleneck – time to fire up SQL Server Management Studio and turn on the Actual Execution Plan.

I decided to go with 66, the "root" catalog node. This query yields around 18k rows:

declare @Nodes udttCatalogNodeList 

insert into @Nodes (CatalogNodeId) select 66

exec ecf_CatalogNode_GetAllChildNodes @Nodes

and takes around 18 seconds to execute.

Mind you, this is on my machine with a pretty powerful CPU (AMD Ryzen 7 5800X, 8 cores / 16 threads) and a very fast NVMe PCIe SSD (Western Digital Black SN850 2TB). If this were executed on an Azure SQL database, for example, a timeout is almost guaranteed. So execution times should only be compared relative to each other.

If we look at the execution plan, it is quite obvious where the bottleneck is: a scan on the CatalogNode table is heavy (it read 79M rows in that operation). As suggested by Anders in Timeout when deleting CatalogNodes from a large catalog (optimizely.com), adding a non clustered index on the ParentNodeId column would improve it quite a lot. And indeed it does: the execution time is reduced to 5 seconds.
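For reference, the index could look roughly like this (the index name is mine; adjust it to your own naming convention):

    -- Hypothetical index name; lets the recursive join seek on ParentNodeId
    -- instead of scanning the whole CatalogNode table
    CREATE NONCLUSTERED INDEX IX_CatalogNode_ParentNodeId
        ON dbo.CatalogNode (ParentNodeId)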

And the number of rows read on CatalogNode is reduced to just 17k.

This is of course a very nice improvement. But the customer reported that it is not enough and the SP still times out, i.e. further optimization is needed.

Naturally, the next step would be to see if we can skip the circular reference check. It was added as a safety measure against bad data. Ideally it should not be needed there, as the check should be performed at data modification time. But it is there for historical reasons and we can't just remove it, not trivially. So let's try it out of curiosity.

The modified query looks like this (basically we just commented out any code related to CyclePrevention):

ALTER PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
    @catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
    WITH all_node_relations AS 
    (
        SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
        WHERE ParentNodeId > 0
        UNION
        SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
    ),
    hierarchy AS
    (
        SELECT 
            n.CatalogNodeId
			--, '|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
        FROM @catalogNodeIds n
        UNION ALL
        SELECT
            children.ChildNodeId AS CatalogNodeId
			--, parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
        FROM hierarchy parent
        JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
        --WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
    )
    SELECT CatalogNodeId FROM hierarchy
END

And the improvement is quite impressive (more than I expected): the query completes almost instantly (less than 1s). The reads on CatalogNodeRelation are significantly reduced.

A word of warning here: execution plans can't simply be compared as-is. If I run the two versions side by side, I get a quite misleading comparison.

Even though the top one (without the circular reference check) is much faster than the original (the bottom one), SQL Server estimates that the first is slower (almost 2x slower than the second). So the execution plan should be used to see what has been done and what is likely the bottleneck inside a query; it should not be used as a comparison between queries. In most cases, comparing statistics using SET STATISTICS IO ON is the best way to compare.
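For example, wrapping the earlier test script so SQL Server reports the actual work done on each run:

    -- Report logical reads and CPU time for each run, so the two versions can be
    -- compared by actual work done rather than by estimated cost
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    declare @Nodes udttCatalogNodeList
    insert into @Nodes (CatalogNodeId) select 66
    exec ecf_CatalogNode_GetAllChildNodes @Nodes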

If not for the fact that we would be changing the behavior of the stored procedure, I would be happy with this approach. The chance of running into a circular reference is small, but it is not zero. As we said, we could in theory guard the relations during insert/update, but that would be too big a change to start with. This is one of the constraints of working at framework level – we have to step carefully so as not to break anything. A breaking change is bad, but data corruption is simply unacceptable. I spent a few hours (probably more than I should have) trying to optimize the circular reference check, but no better solution was found.

The next approach would be – as you can guess – to get rid of the Clustered Index Scan happening on the CatalogNodeRelation table. The solution is quite simple: a non clustered index on ParentNodeId should be enough.
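Something along these lines (again, the index name is mine, and including ChildNodeId as a covering column is a judgment call):

    -- Hypothetical index name; lets the recursive join seek on CatalogNodeRelation
    -- and covers the ChildNodeId column the query needs
    CREATE NONCLUSTERED INDEX IX_CatalogNodeRelation_ParentNodeId
        ON dbo.CatalogNodeRelation (ParentNodeId)
        INCLUDE (ChildNodeId)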

Great success. The performance is comparable with the "no circular reference check" approach.

Adding an index is a non-breaking change (albeit in some cases it can cause a performance regression, like in A curious case of SQL execution plan – Quan Mai's blog (vimvq1987.com), but that is rare; also, in this case the cardinality of ParentNodeId is most likely quite well distributed).

That is all for today. Hopefully you have learned a thing or two about optimizing queries in your daily work.

Delete orphaned assets

I was asked this question: we have about 3TB of assets – is there any way to clean that up?

These days storage is cheap, but still not free. And big storage means you need space for backups, and with that, bandwidth and time.

Is there a way to clean up things you no longer need?

Yes!

Optimizely Content already has a scheduled job named Remove Abandoned BLOBs, but this job only removes blobs that have no content associated with them, i.e. the content was deleted via IContentRepository.Delete but the blob was left behind. The job uses the log to find out which content was deleted, then finds those blobs.

What about the assets that still have content associated with them, but are not used anywhere? Time to get your hands dirty!

Due to the nature of this task, it is best to make it a scheduled job.

All of the assets are children of the global asset root. By iterating over them, we can check whether each of them is being used by any other content. If not, we add it to a list for later deletion. Before deleting the content, we find the blob and delete it as well. Easy, right?

To get the content recursively we use this little piece of code

        public virtual IEnumerable<T> GetAssetRecursive<T>(ContentReference parentLink, CultureInfo defaultCulture) where T : MediaData
        {
            foreach (var folder in LoadChildrenBatched<ContentFolder>(parentLink, defaultCulture))
            {
                foreach (var entry in GetAssetRecursive<T>(folder.ContentLink, defaultCulture))
                {
                    yield return entry;
                }
            }

            foreach (var entry in LoadChildrenBatched<T>(parentLink, defaultCulture))
            {
                yield return entry;
            }
        }

        private IEnumerable<T> LoadChildrenBatched<T>(ContentReference parentLink, CultureInfo defaultCulture) where T : IContent
        {
            var start = 0;

            while (!_isStopped)
            {
                var batch = _contentRepository.GetChildren<T>(parentLink, defaultCulture, start, 50);
                if (!batch.Any())
                {
                    yield break;
                }
                foreach (var content in batch)
                {
                    // Only process direct children of parentLink, to avoid handling linked content multiple times when traversing the tree
                    if (!parentLink.CompareToIgnoreWorkID(content.ParentLink))
                    {
                        continue;
                    }

                    yield return content;
                }
                start += 50;
            }
        }

We start from SiteDefinition.Current.GlobalAssetsRoot, and use IContentRepository.GetReferencesToContent to see if the asset is used in any content (both CMS and Catalog). If not, we add it to a list. Later, we use IPermanentLinkMapper to see if it has any blob associated with it, and delete that as well.

            foreach (var asset in GetAssetRecursive<MediaData>(SiteDefinition.Current.GlobalAssetsRoot, CultureInfo.InvariantCulture))
            {
                totalAsset++;
                if (!_contentRepository.GetReferencesToContent(asset.ContentLink, false).Any())
                {
                    toDelete.Add(asset.ContentLink.ToReferenceWithoutVersion());
                }

                if (toDelete.Count % 50 == 0)
                {
                    var maps = _permanentLinkMapper.Find(toDelete);
                    foreach (var map in maps)
                    {
                        deletedAsset++;
                        _contentRepository.Delete(map.ContentReference, true, EPiServer.Security.AccessLevel.NoAccess);
                        var container = Blob.GetContainerIdentifier(map.Guid);
                        //Probably redundant – we could just delete directly
                        var blob = _blobFactory.GetBlob(container);
                        if (blob != null)
                        {
                            _blobFactory.Delete(container);
                        }
                        OnStatusChanged($"Deleting asset with id {map.ContentReference}");
                    }
                    toDelete.Clear();
                }
            }

We need another round of deletion after the loop, to clean up the leftovers (or to handle the case where we have fewer than 50 abandoned assets in total).
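A rough sketch of that final pass, reusing the same fields and counters as in the loop above:

            // Final pass: whatever is still in toDelete after the loop (fewer than 50 items)
            if (toDelete.Count > 0)
            {
                var maps = _permanentLinkMapper.Find(toDelete);
                foreach (var map in maps)
                {
                    deletedAsset++;
                    _contentRepository.Delete(map.ContentReference, true, EPiServer.Security.AccessLevel.NoAccess);
                    var container = Blob.GetContainerIdentifier(map.Guid);
                    _blobFactory.Delete(container);
                    OnStatusChanged($"Deleting asset with id {map.ContentReference}");
                }
                toDelete.Clear();
            }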

And we’re done!

Testing this job is simple – upload a few assets to your CMS, do not use them anywhere, then run the job. It should delete those assets.

Things to improve: we might want to make sure that only assets created more than a certain number of days ago are deleted. This allows editors to upload assets for later use without having to use them immediately. A possible guard is sketched below.
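Something like this, placed at the top of the foreach loop shown earlier – assuming the asset type implements IChangeTrackable (so Created is available) and using a 30-day grace period as an example:

                // Skip assets created recently so editors can upload ahead of time
                // (assumption: 30 days of grace; adjust to taste, and check whether
                // Created on your site is stored as UTC or local time)
                if (asset is IChangeTrackable changeTrackable &&
                    changeTrackable.Created > DateTime.UtcNow.AddDays(-30))
                {
                    continue;
                }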

The code has been open sourced at vimvq1987/DeleteUnusedAssets: Delete unused assets from an Optimizely/Episerver site (github.com) , and I have uploaded a nuget package to Packages (optimizely.com) to be reviewed.

Skip validating carts

Carts are meant to be validated. Prices change, customers add more quantity than allowed, promotions expire, stock runs out, etc. All kinds of things mean that the items in carts need validation to make sure they are up to date and ready to be converted to an order.

However, there are cases when you don't want your carts to be validated. The most common case is, of course, the wish list – a special cart that customers can add items to, just to keep track of them. You certainly don't want to touch it. Another example is a quote – when you offer a specific item at a specific price to a customer, and you don't want it to be automatically changed to the public price, which differs from the agreed price.

By default, when a cart is validated, these things will be done:

  • Remove items that are no longer available (either deleted or end of line)
  • Update the prices of the items to the latest applicable prices, or remove items that have no prices
  • Update the quantity of the items (to comply with the settings or the in-stock quantity), or remove items that are out of stock
  • Apply promotions
  • Update quantities again (as promotions can do things like adding free items to the cart)

There are two ways you can prevent carts from being validated. Let's see what we can do.

The “Wish list names” route

With OrderOptions you can exempt carts with certain names from validation, using WishListCartNames. By default it only contains "Wishlist", but you can set several names using a comma separator, like this:

orderOptions.WishListCartNames = "Wishlist,Quote";

However, there is a caveat with this approach: carts with those names will not be shown in Order Management. (If you want, you can change that, but it is not an easy or quick change.)

The OrderValidationService route

The validation of carts (or rather, of order types in general) is done by OrderValidationService, and that class is meant to be extended if necessary. Here is how you would skip validating carts with the name "Quote", using OrderValidationService:

    public class CustomOrderValidationService : OrderValidationService
    {
        public CustomOrderValidationService(ILineItemValidator lineItemValidator, IPlacedPriceProcessor placedPriceProcessor, IPromotionEngine promotionEngine, IInventoryProcessor inventoryProcessor, OrderOptions orderOptions) : base(lineItemValidator, placedPriceProcessor, promotionEngine, inventoryProcessor, orderOptions)
        {
        }

        public override IDictionary<ILineItem, IList<ValidationIssue>> ValidateOrder(IOrderGroup orderGroup)
        {
            if (orderGroup.Name.Equals("Quote", System.StringComparison.OrdinalIgnoreCase))
            {
                return new Dictionary<ILineItem, IList<ValidationIssue>>();
            }
            return base.ValidateOrder(orderGroup);
        }
    }

And as OrderValidationService is registered via the ServiceConfiguration attribute, you can register yours like this:

services.AddTransient<OrderValidationService, CustomOrderValidationService>();

One caveat though: making changes to OrderValidationService means those changes apply to the entire website, so make sure the changes are actually the ones you want site-wide, not just in specific places.

Index only Catalog content

If you are using Find to index your content, you have likely used the Find Indexing job – which indexes everything in one go. Today I stumbled upon this question – A way to run indexing job for Commerce only | Optimizely Develope – and it is a good one: if you have a lot of content on the CMS side, and it doesn't change that often, if at all, you certainly don't want to waste time and resources reindexing it again. Is there a way to index only catalog content?

Yes, there is. It is a bit of a hacky solution, but it can certainly work. But first, let's dive into how the Find indexing job does its work. It relies on IIndexingJobService, which itself relies on ContentIndexer to do the job. In turn, ContentIndexer uses a list of IReindexInformation to know which content to index, and in which languages. Here's what it looks like:

    public interface IReindexInformation
    {
        /// <summary>
        /// Content links to be reindexed.
        /// </summary>
        IEnumerable<ReindexTarget> ReindexTargets { get; }

        /// <summary>
        /// Gets the root to index.
        /// </summary>
        ContentReference Root { get; }
    }

It has one Root, and multiple ReindexTargets, each of which contains:

    public class ReindexTarget
    {
        /// <summary>
        /// The content references.
        /// </summary>
        public IEnumerable<ContentReference> ContentLinks { get; set; }

        /// <summary>
        /// The languages the collection of <see cref="ContentReference"/> are enabled on.
        /// </summary>
        public IEnumerable<CultureInfo> Languages { get; set; }

        /// <summary>
        /// The site that the collection of <see cref="ContentReference"/> appears on
        /// or <c>null</c> if unknown.
        /// </summary>
        public SiteDefinition SiteDefinition { get; set; }
    }

As you might have guessed, Commerce has its own IReindexInformation to index catalog content. If only we could use just that one to run our job. This is how our "hack" begins.

The interface IContentIndexer has no member to control the IReindexInformation list, but the default implementation ContentIndexer does. We set it to the only one we need, like so:

        List<IReindexInformation> targets;
        var contentIndexer = _contentIndexer as ContentIndexer;
        if (contentIndexer != null)
        {
            targets = contentIndexer.ReindexInformation.ToList();
            var commerceReIndexInformation = targets.FirstOrDefault(x => x.GetType() == typeof(CommerceReIndexInformation));
            contentIndexer.ReindexInformation = new List<IReindexInformation>() { commerceReIndexInformation };
            _indexingJobService.Start(OnStatusChanged);

            contentIndexer.ReindexInformation = targets;
        }

A note: you will still see the "Indexing Global assets and other data" message, because the IIndexingJobService implementation goes through all SiteDefinitions regardless and shows that message, but the internal ContentIndexer will skip the work if the SiteDefinition passed to it does not match the SiteDefinition in the IReindexInformation (and for CommerceReIndexInformation that is SiteDefinition.Empty).

As I mentioned in the beginning, this is a bit of a hacky solution, as you have to cast IContentIndexer to its concrete implementation. The proper solution would be to implement IContentIndexer yourself. Given that's not a trivial job, I'll leave it at that.

Loading the contacts/organizations, the right way

If you have been using Business Foundation, you most likely know about a limitation – you can only load the first 1000 objects using the GetXXX methods. For example, by using CustomerContext.Current.GetOrganizations(), you can load the first 1000 organizations. In theory, you can get more objects by changing the value of MaxObjectsList. However, changing that has consequences: it affects all types of objects, including contacts, organizations, and your custom objects. Also, loading too much in one go is almost never a good idea.

Is there a better way?

Yes, of course – which is why we have this blog post

There is a "hidden" method on the base class of Business Foundation – BusinessManager – that takes paging parameters:

public static EntityObject[] List(string metaClassName, FilterElement[] filters, SortingElement[] sorting, int? start, int? count)

You will need to convert the results to the type you want. Note that all Business Foundation objects inherit from EntityObject. So if you want to get contacts with paging, it would look like this:

                var contacts = BusinessManager.List(ContactEntity.ClassName, new FilterElement[0], new SortingElement[] { new SortingElement(sortField, sortType) }, startIndex, recordsToRetrieve)
                .OfType<CustomerContact>();

Let’s go through the parameters one by one.

  • The first thing you need is the class name of your objects. For contacts, you can use ContactEntity.ClassName as shown above. For organizations, OrganizationEntity.ClassName.
  • Next is the filter. As you are trying to load all objects, you can just pass in an empty (but not null) instance – new FilterElement[0].
  • Third is how you want to sort. If you pass an empty array, the default sorting is used. If you want to sort by Name, for example, set sortField to Name and sortType to one of SortingElementType (Asc or Desc).
  • The fourth and fifth ones are what we are looking for: they are simply paging parameters – at which position to start and how many objects to get. Combine this with a simple while loop, as in the sketch below, and you can get all of your Business Foundation objects.
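A minimal paging loop might look like this – the page size of 1000 is arbitrary, and an empty sorting array is passed to get the default sort:

    var allContacts = new List<CustomerContact>();
    var startIndex = 0;
    const int pageSize = 1000;

    while (true)
    {
        // Load one page of contacts, starting at startIndex
        var page = BusinessManager.List(
                ContactEntity.ClassName,
                new FilterElement[0],
                new SortingElement[0],
                startIndex,
                pageSize)
            .OfType<CustomerContact>()
            .ToList();

        if (page.Count == 0)
        {
            break;
        }

        allContacts.AddRange(page);
        startIndex += pageSize;
    }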

And that’s about it, my friends.

What about caching?

Caching a list is always tricky – you have to keep track of each item in the list to make sure you invalidate the list cache if one of the items changes (is updated or removed). For the purpose of just loading all contacts/organizations, it is probably better to skip caching, for simplicity.

Delete property no longer available in code

Recently I stumbled upon this question: Removing a property that no longer exists in the code (optimizely.com). It's a valid (and even good) question. It is easy to add a new property to your catalog content type – you simply add a new property to the model, build, and start the site. However, the opposite is not easy – in Commerce 14, at least.

A property of a strongly typed content type is actually mapped to and backed by a MetaField in the MetaDataPlus system (unless you specifically tell it not to be, by using the IgnoreMetaDataPlusSynchronization attribute). When you add a new property to your content type, build, and start your site, your content type is scanned and metafields are created if necessary. However, if you delete a property from your content type, the scanner will just leave the metafield there. There are a few reasons for that. Firstly, it allows loosely typed content types, i.e. content types with no, or only a few, properties defined. If you have used some kind of external PIM, you'll understand why that is important. Lastly, because a property can be mapped to a metafield with a different name, the scanner might have trouble figuring out which metafield to delete. All in all, keeping the metafields is the sensible (if not the right) choice.

Then what do you do if you want to delete the property and also clean up the metafield? With Commerce 13 and earlier, you could detach a MetaField from its MetaClass(es), then delete it using Commerce Manager. With the death of CM in Commerce 14, what is your option?

By using code, of course. There are a few APIs – namely MetaField and MetaClass – that can be used for that purpose. Note that there are two sets of MetaField and MetaClass classes, and only the ones in the Mediachase.MetaDataPlus.Configurator namespace are what we want (the others are for Business Foundation).

Enough chit chat – this is the code you would need to run:

        private void DeleteMetaField(string metafieldName)
        {
            var metaField = MetaField.Load(CatalogContext.MetaDataContext, metafieldName);
            if (metaField == null)
            {
                return;
            }
            foreach (int metaClassId in metaField.OwnerMetaClassIdList)
            {
                var metaClass = MetaClass.Load(CatalogContext.MetaDataContext, metaClassId);
                if (metaClass != null)
                {
                    metaClass.DeleteField(metafieldName);
                }
            }
            MetaField.Delete(CatalogContext.MetaDataContext, metaField.Id);
        }

It is pretty straightforward: we load the MetaField by its name; if it is not null, we remove it from all MetaClasses that are using it, then finally delete it.

At the beginning of this post we mentioned strongly typed content types, but note that the order system also uses the same metaclass/metafield system, so this code can be used for order metafields as well.

This piece of code can be used in an admin-privileged controller to delete metafields on demand, until Commerce 14 lets you do it through a proper UI.
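For example, something along these lines – the controller name, route, and role are all mine; swap in whatever guards your admin endpoints:

    [Authorize(Roles = "CmsAdmins")] // hypothetical role – use the policy/role that protects your admin endpoints
    public class MetaFieldMaintenanceController : Controller
    {
        // DeleteMetaField is the private helper shown above, assumed to live in this controller
        [HttpPost("/metafield-maintenance/delete")]
        public IActionResult Delete(string metaFieldName)
        {
            if (string.IsNullOrWhiteSpace(metaFieldName))
            {
                return BadRequest("A metafield name is required");
            }

            DeleteMetaField(metaFieldName);
            return Ok($"Metafield {metaFieldName} deleted (if it existed)");
        }
    }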

A curious case of cookie threading issue – part 2

A while back, I wrote this post, A curious case of cookie threading issue – Quan Mai's blog (vimvq1987.com), as an example of how to use WinDbg to diagnose a race condition (which results in an infinite loop). Naturally, that issue needed to be fixed. A starting point for fixing a race condition is to put locks around whatever changes the data. But it's not that simple. Too much locking would definitely hurt your application's performance, and increase the chance of running into another nasty problem (namely, deadlock). So the art of locking is "lagom" – a Swedish word for "just right": not too much, not too little.

ReaderWriterLockSlim comes to the rescue. You only need an exclusive lock when writing, not when reading. So this for reading:

            readerWriterLockSlim.EnterReadLock();
            try
            {
                return httpRequest.Cookies[CookieKey];
            }
            finally
            {
                readerWriterLockSlim.ExitReadLock();
            }

And this for writing

            readerWriterLockSlim.EnterWriteLock();
            try
            {
                httpRequest.Cookies.Remove(CookieKey);               
            }
            finally
            {
                readerWriterLockSlim.ExitWriteLock();
            }

Would be enough, right?

No. The issue runs deeper than that.

If we go back to the stacktrace of the issue

0:039> !clrstack
OS Thread Id: 0x1e34 (39)
        Child SP               IP Call Site
000000740273cff0 00007ff9297d9f59 System.Collections.Generic.HashSet`1[[System.__Canon, mscorlib]].Remove(System.__Canon)
000000740273d080 00007ff92a8eb4e3 System.Web.HttpCookieCollection.Get(System.String)
000000740273d0c0 00007ff9314de98d 

and look closer. Something is not right. If you did not spot the issue – that's fine, I missed it too. The problem is that we have a HashSet.Remove inside a HttpCookieCollection.Get. Uh oh.

If we look at the source code of HttpCookieCollection, inside Get it calls EnsureKeyValidated:

if (cookie != null) {
                EnsureKeyValidated(name, cookie.Value);
            }

referencesource/HttpCookieCollection.cs at master · microsoft/referencesource · GitHub

which itself calls to

            _keysAwaitingValidation.Remove(key);

referencesource/HttpCookieCollection.cs at master · microsoft/referencesource · GitHub

_keysAwaitingValidation is a HashSet<string>. That explains the Remove we saw. And that explains why ReaderWriterLockSlim is not enough – changes are made within the supposedly read-only action as well.

The only valid solution here is to lock both read and write actions on HttpCookieCollection. However, as HttpCookieCollection is per request, our lock object should be per request as well (we certainly do not want every thread to be blocked whenever we read a cookie on any request).
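A sketch of that idea – stash the lock object in HttpContext.Items (which is per request), and use a small static lock only to guard its creation. The key and method names are mine:

    private const string CookieLockKey = "__cookieCollectionLock"; // hypothetical key name
    private static readonly object _lockCreationLock = new object();

    private static object GetCookieLock(HttpContext httpContext)
    {
        // Items is per request, so the lock object stored there is per request as well
        var cookieLock = httpContext.Items[CookieLockKey];
        if (cookieLock == null)
        {
            lock (_lockCreationLock)
            {
                cookieLock = httpContext.Items[CookieLockKey];
                if (cookieLock == null)
                {
                    cookieLock = new object();
                    httpContext.Items[CookieLockKey] = cookieLock;
                }
            }
        }
        return cookieLock;
    }

    private HttpCookie GetCookie(HttpContext httpContext)
    {
        // Both reads and writes go through the same per-request lock,
        // because Get can mutate the collection internally
        lock (GetCookieLock(httpContext))
        {
            return httpContext.Request.Cookies[CookieKey];
        }
    }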

Moral of the story:

  • Look closer. There might always be something underneath. Close to the truth is still not the truth.
  • Never assume that a Get method is thread-safe. The implementation can do plenty of unexpected things under the hood.
  • You might ask how this could happen at all, as HttpCookieCollection is per request. It turned out that there was code using ThreadPool.QueueUserWorkItem to queue tasks which shared the same HttpContext object. As we learned, that's a recipe for disaster. Think twice (or thrice) before sharing an object that is not thread-safe between threads.