Control the thousand separator for Money in Episerver Commerce

If you are selling goods in multiple markets that share the same currency but use different languages, such as the Eurozone, you might notice that while everything looks quite good, the thousand separator can be off from time to time: it is always the same and does not change to match the language, so sometimes it’s correct, sometimes it’s not.

Let’s take a step back to see how the thousand separator should be shown.

In the United States, this character is a comma (,). In Germany, it is a period (.). Thus one thousand and twenty-five is displayed as 1,025 in the United States and 1.025 in Germany. In Sweden, the thousands separator is a space.

https://docs.microsoft.com/en-us/globalization/locale/number-formatting

You might ask why the problem happens with Episerver Commerce. In Commerce, each currency has an attached NumberFormatInfo which lets the framework know how to format the currency. During startup, the system loops through the available CultureInfo instances and assigns each one’s NumberFormat to its currency.

The problem is that there might be multiple CultureInfo instances that can handle the same currency. For example EUR, which is used across the Eurozone, can be handled by multiple (20?) cultures. However, the first matching CultureInfo to handle the format of the currency will be used. In most cases it will be br-FR (because the CultureInfo instances are sorted by name, and this is the first one in the list to handle EUR).

br-FR does not use a period as the thousand separator, but a whitespace. That’s why even if your language is de-DE, an amount in EUR will not be formatted as 1.234,45 but as 1 234,45.
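If you are curious, you can see this “first culture wins” selection for yourself with plain .NET. The following console sketch is not the actual Commerce startup code, just an illustration of the idea: take all specific cultures, keep those whose region uses EUR, sort them by name and look at the first one and its group separator.

    using System;
    using System.Globalization;
    using System.Linq;

    class FirstEuroCultureDemo
    {
        static void Main()
        {
            // Find the first specific culture (sorted by name) whose region uses EUR.
            var firstEuroCulture = CultureInfo.GetCultures(CultureTypes.SpecificCultures)
                .Where(c => new RegionInfo(c.Name).ISOCurrencySymbol == "EUR")
                .OrderBy(c => c.Name, StringComparer.OrdinalIgnoreCase)
                .First();

            // br-FR in my case – the exact result depends on the cultures your OS/.NET version knows about.
            Console.WriteLine(firstEuroCulture.Name);

            // The group (thousand) separator is a whitespace character, not a period.
            Console.WriteLine($"'{firstEuroCulture.NumberFormat.NumberGroupSeparator}'");
        }
    }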

How to fix that problem?

Luckily, we can set the NumberFormatInfo attached to each currency. If you are only selling in Germany, you can make sure that EUR is always formatted in the German style by adding this to one of your initialization modules:

var culture = CultureInfo.GetCultureInfo("de-DE");
Currency.SetFormat("EUR", culture.NumberFormat);
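To make it concrete, here is a minimal sketch of such an initialization module. The module name and the module dependency are my own assumptions; adjust them to your solution.

    using System.Globalization;
    using EPiServer.Framework;
    using EPiServer.Framework.Initialization;
    using Mediachase.Commerce;

    [InitializableModule]
    [ModuleDependency(typeof(EPiServer.Commerce.Initialization.InitializationModule))]
    public class CurrencyFormatInitialization : IInitializableModule
    {
        public void Initialize(InitializationEngine context)
        {
            // Always format EUR the German way, regardless of which culture
            // happened to be matched to it during startup.
            var culture = CultureInfo.GetCultureInfo("de-DE");
            Currency.SetFormat("EUR", culture.NumberFormat);
        }

        public void Uninitialize(InitializationEngine context)
        {
        }
    }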

But if you have multiple languages for one currency, this will simply not work (because it’s static, so it will affect all customers). Your only option is to avoid Money.ToString() and use Money.ToString(IFormatProvider) instead, for example:

money.ToString(CultureInfo.CurrentUICulture);

Assuming CultureInfo.CurrentUICulture is set to the correct one.

This, however, does not solve the problem for merchandisers using Commerce Manager. They might have to work with orders from multiple markets, and if your site is selling goods in Europe, chances are that merchandisers will see prices without the correct thousand separator. Most places in Commerce Manager use Money.ToString(), and there is a reason for that: it’s too risky to use Money.ToString(CultureInfo.CurrentUICulture), because if a merchandiser uses English, he or she would likely see money formatted with “$” instead of “€”, and that is a much bigger problem in itself.
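If you really need a middle ground, one option – a sketch of mine, not a Commerce API – is to borrow the separators from the visitor’s culture but keep the currency’s own symbol. This assumes Currency exposes its NumberFormatInfo through the Format property; FormatMoney is a hypothetical helper.

    using System.Globalization;
    using Mediachase.Commerce;

    public static class MoneyFormatting
    {
        // Hypothetical helper: use the UI culture's separators, but the currency's own symbol.
        public static string FormatMoney(Money money, CultureInfo uiCulture)
        {
            // Clone so we do not mutate the shared culture instance.
            var format = (NumberFormatInfo)uiCulture.NumberFormat.Clone();
            format.CurrencySymbol = money.Currency.Format.CurrencySymbol;
            return money.Amount.ToString("C", format);
        }
    }

With de-DE as the UI culture, a EUR amount then comes out as 1.234,45 € rather than as $1,234.45.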

Moral of the story: localization is hard, and sometimes a compromise is needed.

Fixing ASP.NET Membership performance – part 1

Even though it is not the best identity management system in the .NET world, the ASP.NET Membership provider is still fairly widely used, especially for systems that have been running for quite a long time with a significant number of users: migrating to a better system like AspNetIdentity does not come cheap. However, being built in the early days of ASP.NET means the Membership provider has numerous significant limitations: besides the “architecture” problems, it also has limited performance. Depending on who you ask, the ultimate “maximum” number of customers the ASP.NET Membership provider can handle ranges from 30,000 to 750,000. That does not sound great. Today, if you start a new project, you are probably better off with AspNetIdentity or some other solution, but if your website is using the ASP.NET Membership provider and there is currently no plan to migrate, then read on.

The one I will be using for this blog post has around 950,000 registered users, and the site is doing great – but that was achieved by some very fine-grained performance tuning, and a very high-end Azure subscription.

A performance overview 

I have been using the ASP.NET Membership provider for years, but I have never looked into it from a performance perspective (even though I have done some very nasty digging into its table structure). Now that I have the chance, I realize how bad it is.

It is fairly common to see in the aspnet_* tables that the indexes have ApplicationId as the first column. It does not take a database master to know that this is a very ineffective way to create an index – in most cases you only have one ApplicationId in your website, making those indexes useless when you want to, for example, query by UserId. This is a rookie mistake – a newbie tends to make the order of columns in the index the same as they appear in the table, thinking that SQL Server will just do magic to exchange the order for the best performance. That is not how SQL Server – or RDBMS systems in general – work.

It is OK to be a newbie or to misunderstand some concepts. I had the very same misconception once, and learned my lessons. However, it should not be OK for a framework to make that mistake, and never correct it.

That is the beginning of much bigger problems. Because of the ineffective order of columns, the built-in indexes are almost useless. That makes queries which should be very fast become unnecessarily slow, wasting resources and increasing your site’s average response time. This is of course bad news. But the good news is that it’s at the database level, so we can change it for the better. If it were at the application level, our chances of doing that would be close to none.

Missing indexes

If you use Membership.GetUserNameByEmail on your website a lot, you might notice that it is … slow. It leads to this query:

        SELECT  u.UserName
        FROM    dbo.aspnet_Applications a, dbo.aspnet_Users u, dbo.aspnet_Membership m
        WHERE   LOWER(@ApplicationName) = a.LoweredApplicationName AND
                u.ApplicationId = a.ApplicationId    AND
                u.UserId = m.UserId AND
                LOWER(@Email) = m.LoweredEmail

Let’s just ignore the style for now (INNER JOIN would be a much more popular choice), and look into what is actually done here. It joins 3 tables by their keys. The join with aspnet_Applications is fairly simple, because you usually have just one application. The join between aspnet_Users and aspnet_Membership is also simple, because both of them have an index on UserId – clustered on aspnet_Users and non-clustered on aspnet_Membership.

The last one is actually problematic. The clustered index on aspnet_Membership actually looks like this

CREATE CLUSTERED INDEX [aspnet_Membership_index]
    ON [dbo].[aspnet_Membership]([ApplicationId] ASC, [LoweredEmail] ASC);

Uh oh. Even though it contains LoweredEmail, it’s the worst possible kind of index. By putting the least distinctive column first, it defeats the purpose of the index completely. Every request to get a user name by email address has to perform a full table scan (oops!)

This is the last thing you want to see in an execution plan, especially with a fairly big table.

It should have been just

CREATE CLUSTERED INDEX [aspnet_Membership_index]
    ON [dbo].[aspnet_Membership]([LoweredEmail] ASC);

which helps SQL Server to use the optimal execution plan

If you look at the Azure SQL Database recommendations, it suggests creating a non-clustered index on LoweredEmail. That is not technically incorrect, and it still helps. However, keep in mind that each non-clustered index has to “duplicate” the clustered index key in order to identify the rows, so keeping the useless clustered index actually increases waste and slows down performance (even if just a little, because you have to perform more reads to get the same data). However, if your database is currently performing badly, adding a non-clustered index is a much quicker and safer option. The change to the clustered index should be done with caution, at a low-traffic time.

Testing the stored procedure on the database above, without any additional index:

Table 'aspnet_Membership'. Scan count 9, logical reads 20101, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Applications'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Users'. Scan count 0, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row affected)

 SQL Server Execution Times:
   CPU time = 237 ms,  elapsed time = 182 ms.

With the new non-clustered index:


(1 row affected)
Table 'aspnet_Applications'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Users'. Scan count 0, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Membership'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row affected)

 SQL Server Execution Times:
   CPU time = 15 ms,  elapsed time = 89 ms.

With the new clustered index:

(1 row affected)
Table 'aspnet_Applications'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Users'. Scan count 0, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Membership'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row affected)

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 89 ms.

Don’t we have a clear winner?

Speed up catalog routing if you have multiple children under catalog

A normal catalog structure looks like this: you have a few high-level categories under the catalog, then each high-level category has a few lower-level categories under it, then each lower-level category has its children, and so on and so forth until you reach the leaves – catalog entries.

However, it is not uncommon to have a large number of children (categories and entries) directly under the catalog. Even though that is not something you should do, it happens.

But that is not without drawbacks. You might notice it is slow to route to a product. It might not be visible to the naked eye, but if you use a decent profiler (I personally recommend dotTrace), it can be fairly obvious that your site is not routing optimally.

Why?

To route to a specific catalog content, for example http://commerceref/en/fashion/mens/mens-shirts/p-39101253/, the default router has to figure out which content is mapped to each URL segment. So with the default registration, where the catalog root is the routing root, we start with the catalog, which maps to the first part of the route (fashion). How does it figure out which content to route to for the next part (mens)?

Until recently, what it did was to call GetChildren on the catalog ContentReference. Now you can see the problem. Even with a cached result, that is still too much – GetChildren with a big number of children is definitely expensive.

We noticed this behavior thanks to Erik Norberg. An improvement has been made in Commerce 12.10 to make sure that even with a large number of children directly under the catalog, the router performs adequately.

If you can’t upgrade to 12.10 or later (you should!), there is a workaround that improves the performance. By adding your own implementation of HierarchicalCatalogPartialRouter, you can override how the children content is fetched – by using a more lightweight method (GetBySegment):

    public class CustomHierarchicalCatalogPartialRouter : HierarchicalCatalogPartialRouter
    {
        private readonly IContentLoader _contentLoader;

        public CustomHierarchicalCatalogPartialRouter(Func<ContentReference> routeStartingPoint, CatalogContentBase commerceRoot, bool enableOutgoingSeoUri) : base(routeStartingPoint, commerceRoot, enableOutgoingSeoUri)
        {
            // This constructor does not receive an IContentLoader, so resolve one here
            _contentLoader = ServiceLocator.Current.GetInstance<IContentLoader>();
        }

        public CustomHierarchicalCatalogPartialRouter(Func<ContentReference> routeStartingPoint, CatalogContentBase commerceRoot, bool supportSeoUri, IContentLoader contentLoader, IRoutingSegmentLoader routingSegmentLoader, IContentVersionRepository contentVersionRepository, IUrlSegmentRouter urlSegmentRouter, IContentLanguageSettingsHandler contentLanguageSettingsHandler, ServiceAccessor<HttpContextBase> httpContextAccessor) : base(routeStartingPoint, commerceRoot, supportSeoUri, contentLoader, routingSegmentLoader, contentVersionRepository, urlSegmentRouter, contentLanguageSettingsHandler, httpContextAccessor)
        {
            _contentLoader = contentLoader;
        }

        protected override CatalogContentBase FindNextContentInSegmentPair(CatalogContentBase catalogContent, SegmentPair segmentPair, SegmentContext segmentContext, CultureInfo cultureInfo)
        {
            return _contentLoader.GetBySegment(catalogContent.ContentLink, segmentPair.Next, cultureInfo) as CatalogContentBase;
        }
    }

And then, instead of using CatalogRouteHelper.MapDefaultHierarchialRouter, you register your own router directly:

// startingPoint and enableOutgoingSeoUri are the same values you would pass to MapDefaultHierarchialRouter
var referenceConverter = ServiceLocator.Current.GetInstance<ReferenceConverter>();
var contentLoader = ServiceLocator.Current.GetInstance<IContentLoader>();
var commerceRootContent = contentLoader.Get<CatalogContentBase>(referenceConverter.GetRootLink());
routes.RegisterPartialRouter(new CustomHierarchicalCatalogPartialRouter(startingPoint, commerceRootContent, enableOutgoingSeoUri));

(ServiceLocator is just to make it easier to understand the code. You should do this in an IInitializableModule, so use context.Locate.Advanced instead – see the sketch below.)
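For reference, here is a hedged sketch of what that initialization module could look like. The module name, the starting point and the enableOutgoingSeoUri value are placeholders – adjust them to your site.

    using System.Web.Routing;
    using EPiServer;
    using EPiServer.Commerce.Catalog.ContentTypes;
    using EPiServer.Framework;
    using EPiServer.Framework.Initialization;
    using EPiServer.Web;
    using EPiServer.Web.Routing;
    using Mediachase.Commerce.Catalog;

    [InitializableModule]
    [ModuleDependency(typeof(EPiServer.Commerce.Initialization.InitializationModule))]
    public class CustomCatalogRouterInitialization : IInitializableModule
    {
        public void Initialize(InitializationEngine context)
        {
            var referenceConverter = context.Locate.Advanced.GetInstance<ReferenceConverter>();
            var contentLoader = context.Locate.Advanced.GetInstance<IContentLoader>();
            var commerceRootContent = contentLoader.Get<CatalogContentBase>(referenceConverter.GetRootLink());

            // Route from the site's start page, without outgoing SEO uris (adjust to your needs).
            RouteTable.Routes.RegisterPartialRouter(
                new CustomHierarchicalCatalogPartialRouter(
                    () => SiteDefinition.Current.StartPage,
                    commerceRootContent,
                    enableOutgoingSeoUri: false));
        }

        public void Uninitialize(InitializationEngine context)
        {
        }
    }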

This is applicable from 9.2.0 and newer versions. 

Moral of the story:

  • Catalog structure can play a big role when it comes to performance.
  • You should do profiling whenever you can
  • We do that too, and we make sure to include improvements in later versions, so keeping your website up to date is a good way to tune performance.

Refactoring Commerce catalog code, a story

It is not a secret that I am a fan of refactoring. Clean, shorter, simpler code is always better. It’s always a pleasure to delete some code while keeping all the functionality: less code means fewer possible bugs, and fewer places to change when you have to change something.

However, while refactoring can bring a lot of enjoyment to the one who actually does it, it’s very hard to share the experience: in most cases it’s very specific, and the problem itself is not that interesting to the outside world. This story is an exception, because it might be helpful/useful for other Commerce developers.

Continue reading “Refactoring Commerce catalog code, a story”

Commerce batching performance – part 2: Loading prices and inventories

UPDATE: When I looked into it, I realized that I had a lazily loaded collection of entry codes, so each test had to spend time resolving the entry code(s) from the content links. That actually costs quite a lot of time, and therefore caused the performance tests to return incorrect results. That has been corrected and the results are now updated.

In the previous post we talked about how loading orders in batches can actually improve your website performance, and we came to the conclusion that 1000–3000 orders per batch probably yields the best performance.

But orders are not the only thing you need to load on your website. A more common scenario is to load prices and inventories for products. If you are displaying a product listing page, it’s quite common to load prices and inventories for all products on that page. How should they be loaded?
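To give a flavor of what batching looks like here, below is a minimal sketch of loading prices for a whole listing page in one call instead of product by product. It assumes you already have the entry codes at hand; the exact CatalogKey constructor can differ between Commerce versions, and IInventoryService has a similar batched QueryByEntry call for inventories.

    using System.Collections.Generic;
    using System.Linq;
    using Mediachase.Commerce.Catalog;
    using Mediachase.Commerce.Pricing;

    public class ListingPriceLoader
    {
        private readonly IPriceService _priceService;

        public ListingPriceLoader(IPriceService priceService)
        {
            _priceService = priceService;
        }

        // One price service call for the whole page of products, grouped by entry code.
        public IDictionary<string, List<IPriceValue>> LoadPrices(IEnumerable<string> entryCodes)
        {
            var catalogKeys = entryCodes.Select(code => new CatalogKey(code));
            return _priceService.GetCatalogEntryPrices(catalogKeys)
                                .GroupBy(p => p.CatalogKey.CatalogEntryCode)
                                .ToDictionary(g => g.Key, g => g.ToList());
        }
    }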

Continue reading “Commerce batching performance – part 2: Loading prices and inventories”

Commerce batching performance – part 1: Loading orders

One of the best practices for better performance – not just with Episerver Commerce – is to batch your calls to load data. In theory, if you want to load a lot of data, either extreme is problematic: if you load each record one by one, the overhead of opening the connection and retrieving the data is too much; but if you load all of them at once, you will likely end up with either a timeout exception at the database end, or an out-of-memory exception in your application. The better way is, of course, to load them in smaller batches: 10, 20, or 50 records at a time, repeated until the end.
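As a sketch of what “loading in batches” means in code – the LoadOrders call in the usage comment is hypothetical, plug in whatever API you use to load a set of orders by their ids:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class BatchLoader
    {
        // Load items in batches of batchSize, using whatever loader you pass in.
        public static IEnumerable<TResult> LoadInBatches<TId, TResult>(
            IReadOnlyList<TId> ids,
            int batchSize,
            Func<IEnumerable<TId>, IEnumerable<TResult>> loadBatch)
        {
            for (var i = 0; i < ids.Count; i += batchSize)
            {
                foreach (var result in loadBatch(ids.Skip(i).Take(batchSize)))
                {
                    yield return result;
                }
            }
        }
    }

    // Usage, with a hypothetical LoadOrders(ids) call:
    // var orders = BatchLoader.LoadInBatches(orderIds, 50, ids => LoadOrders(ids)).ToList();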

That is the theory, but is it really better in practice? And if it is, which batch size works best? As they say, reality is the golden test for theory, so let’s find out.

Continue reading “Commerce batching performance – part 1: Loading orders”

Speed up your catalog indexing performance – part 2

Almost two years ago I wrote part 1 here: https://vimvq1987.com/speed-catalog-entries-indexing/ on how to speed up your catalog indexing performance. If you have a fairly big catalog with frequent changes, it might take longer than necessary to build the index incrementally. (Rebuilding the index, on the other hand, just deletes everything and rebuilds from scratch, so it is not affected by the long queue in ApplicationLog.) I have seen cases where rebuilding the entire index is actually faster than waiting for it to build incrementally.

The tip in the previous blog post should work very well if you are using anything lower than Commerce 11.6, but that is no longer the case!

Continue reading “Speed up your catalog indexing performance – part 2”

A curious case of memory dump diagnostic: How Stackify can cause troubles to your site

It’s been a while since my last blog post, and this time it will be fun time with WinDbg. One of the DXC customers has been having a problem with instances of their site hanging and restarting. We at Episerver take the availability and performance of DXC sites very seriously, so I was asked to help diagnose the problem. It’s something I’d like to do and to learn anyway, so the game was on.

The gold standard for looking into these kinds of problems is still the tried and trusted WinDbg. With the help of the Managed Services Engineer, I was able to access several memory dumps from the site. Most of them were taken when the site stalled, which is usually the most useful moment because it should contain the most critical piece of information about why the stall happened.

Continue reading “A curious case of memory dump diagnostic: How Stackify can cause troubles to your site”

Episerver Commerce: A problem-solution approach is now draft complete

Six months ago I announced that I was working on a second book on Episerver Commerce. Unlike the first one, where I tried to provide a systematic approach to the framework, this book focuses on bite-size recipes, each one a solution to a real-world problem.

It’s one problem solved, at a time.

The world is big and the possibilities are endless; there might be more than one solution to a problem, but I tried to give the best one to my knowledge and experience. I hope that by talking about it, I also give information about how the APIs work, what to use, what should be avoided, what can be better, what the best practices are…, so you can develop solutions to your own problems, or even come up with better solutions than mine.

If you manage to do that, don’t forget to share your solutions as a blog post on the World forums, to help the ones who come after you!

The cover – I spent a day on it. At least it’s colorful, right?
People asked me: why food? Well, because it’s a recipe book.

And now I’m proud to announce that the book is draft complete, which means I’m happy with the content of the book. I will not stop there – new recipes and/or improvements will be added from time to time – but if you want to buy the book, this is the time.

As always, writing a book is not an easy task – even with a smaller book like this. I would not have done it without the help and support from the community. I would like to thank the buyers who have bought the book, even from the very early days. I also want to thank the people on World who gave me ideas and suggestions for recipes.

Not all news is good news, however. Considering the time and effort I have put into the book, and the new pricing policy of Leanpub (they recently increased their cut from 10% + 50c to 20%), I will increase the asking price of the book to $24.99, from $19.99. If you bought the book, you own the book and there is no extra cost for you. But if you are buying the book now, you will have to spend more money…

But …

… a Commerce book announcement is incomplete without some discount.

Now for something special: the book is dedicated to my daughter, who was born today! In honor of me becoming a father – and to welcome my baby girl to the world – I’m offering 50% off for today: https://leanpub.com/epicommercerecipes/c/HelloWorld which is valid until the end of July 31st, GMT time.

I hope you find something useful from my book, and I welcome all new recipe suggestions and improvements!

Getting all non published variations

I got a question from a colleague today: a customer has multiple languages (8 of them), and they need to make sure all variants are published in all languages. That is of course a reasonable request, but there is no built-in feature for such a requirement. The good news is that it can be done with ease. If you want to try this as practice, go ahead – I think it’s a good exercise for your Episerver Commerce-fu skills.

To do this task, we need the snippet to traverse the catalog from here https://leanpub.com/epicommercerecipes/read_sample
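The full snippet is in the sample above; below is a simplified, hedged sketch of the idea. It assumes the variants sit under catalog nodes, and treats IContentVersionRepository.LoadPublished returning null as “no published version in that language”.

    using System.Collections.Generic;
    using EPiServer;
    using EPiServer.Commerce.Catalog.ContentTypes;
    using EPiServer.Core;

    public class UnpublishedVariantFinder
    {
        private readonly IContentLoader _contentLoader;
        private readonly IContentVersionRepository _versionRepository;

        public UnpublishedVariantFinder(IContentLoader contentLoader, IContentVersionRepository versionRepository)
        {
            _contentLoader = contentLoader;
            _versionRepository = versionRepository;
        }

        // Returns (variant, language) pairs that have no published version.
        public IEnumerable<(ContentReference Link, string Language)> FindMissing(
            ContentReference parentLink, IReadOnlyCollection<string> languages)
        {
            foreach (var child in _contentLoader.GetChildren<CatalogContentBase>(parentLink))
            {
                if (child is VariationContent variant)
                {
                    foreach (var language in languages)
                    {
                        if (_versionRepository.LoadPublished(variant.ContentLink, language) == null)
                        {
                            yield return (variant.ContentLink, language);
                        }
                    }
                }
                else if (child is NodeContent || child is CatalogContent)
                {
                    // Recurse into catalogs and nodes to reach the variants below them.
                    foreach (var missing in FindMissing(child.ContentLink, languages))
                    {
                        yield return missing;
                    }
                }
            }
        }
    }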

Continue reading “Getting all non published variations”