Pension fund in Sweden, an overview

In Sweden, like in many other Western countries, each working person has an individual pension fund. This is vastly different from some other countries like Vietnam, where the pension fund is shared, not only between all working people, but also between other purposes (maternity leave, sick leave…). Like many other things in Sweden, the pension fund is transparent to you, and you can manage parts of it, to some level. If you do things “correctly”, it might make a sizable impact on your pension when you retire.

A three-part pension scheme

The pension in Sweden consists of three parts. If you have been in Sweden for more than a year, you can always check https://www.minpension.se/ to see how much money you have in your pension fund (your information is only “added” in your first November here).

General pension (Swedish: Allmän pension)

Every working person in Sweden receives this pension, as contributed by their employer. Every year, 18.5% of your pensionable income, up to a limit, is contributed to this part of your pension. The limit is set at 7.5 IBA – Income Base Amount (Swedish: inkomstbasbelopp). This is adjusted by the Swedish government every year, and in 2019, 1 IBA = 64.400 SEK, meaning your upper limit this year is 483.000 SEK. If you are making more than that in pensionable income (good for you!), then your general pension contribution is still capped at 89.355 SEK.

The general pension itself consists of two parts:

The income pension (Inkomstpension)

which is 16% of your pensionable income. You can see this part (i.e. how much money you have), but you can’t manage it. The state invests the money the way it sees fit, though you can guess that it goes into some low-risk, low-return bonds.

The premium pension (premiepension)

which is 2.5% of your pensionable income. You can actually manage this at https://www.pensionsmyndigheten.se/

By default, your premium pension is put into AP7 Såfa, which is actually a very good fund. It has a very low fee (only 0.06 – 0.1%/year) and a good rate of return. It is also an adaptive fund, which means it invests more in bonds (which are “safer”, but return less than stocks) as you age. When you are younger than 55, 100% of your money is put into stocks; the share of stocks then shrinks as you age: at 65 (your expected retirement age) it’s 67% stocks and 33% bonds, and at 75 it’s 33% stocks and 67% bonds.
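To make the glide path concrete, here is a rough sketch in code. The linear interpolation between the three points above is my own assumption – AP7 decides the actual allocation curve – so treat this as an illustration only.

    // Rough model of the AP7 Såfa stock/bond split by age, assuming (!) linear
    // interpolation between the published points: 100/0 at 55, 67/33 at 65, 33/67 at 75.
    static (double stocks, double bonds) Allocation(int age)
    {
        double stocks;
        if (age <= 55) stocks = 1.00;
        else if (age <= 65) stocks = 1.00 - (age - 55) * (1.00 - 0.67) / 10.0;
        else if (age <= 75) stocks = 0.67 - (age - 65) * (0.67 - 0.33) / 10.0;
        else stocks = 0.33;
        return (stocks, 1 - stocks);
    }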

Pensionsmyndigheten is great, because not only can you manage your fund there, it also has a wide range of funds for you to choose from, at superbly discounted fees. The discount is small on index funds (which already have cheap management fees by themselves), usually 0.16%/year compared to the original 0.2%/year, but it is very significant for actively managed funds.

Most of them are discounted by more than 1%. For example, Skandia Time Global has a management fee of 1.4%/year, and on top of that you have to pay a transaction fee (Transaktionskostnad) of 0.25%, so in total your fee is 1.65%/year. At Pensionsmyndigheten you only pay 0.39%, which means you get a 1.26% gain per year for free! As the biggest fund manager in Sweden, they have the leverage to negotiate with other fund managers to cut down the fees, and that is really good for you.

You can leave your money as-is; I think the default choice is very decent. But you are free to make your own bets to potentially make more money.

The only bad thing about Pensionsmyndigheten is that you can’t put more money into their funds, even if you want to. That low fee is just so great.

Occupational pension (Tjänstepension)

Most, but not all, employers in Sweden pay occupational pension for their employees. In case your employer doesn’t – like mine – you will likely have to pay it yourself.

Most companies follow the ITP scheme, which in short is:

  • ITP1, for people born in 1979 or later, which is
    • 4.5% of your salary, up to 7.5 IBA/year – so basically up to 40.250 SEK/month in 2019.
    • 30% of the part of your salary above 7.5 IBA/year.
  • ITP2, for people born in 1978 or earlier
    • 10% of your pensionable income, up to 7.5 IBA
    • 65% of your pensionable income, from 7.5 to 20 IBA
    • 32.5% of your pensionable income, from 20 to 30 IBA

The numbers are updated here https://www.collectum.se/sv/Privat/ITP/ITP-1-och-ITP-2/

So if your salary is higher than 40.250 SEK/month, your occupational pension contribution increases very fast.

If you get less than, or equal to, 40.250 SEK/month, then your total pension contribution is 23% of your pensionable income.

If you get, for example, 50.000 SEK/month, then your total pension contribution is as follows:

7446.25 SEK from the general pension (18.5% of your income, which is capped at 40.250 SEK/month)

1811.25 + 2925 SEK from the occupational pension (4.5% of the part up to 40.250 SEK, plus 30% of the 9750 SEK above it)

Total: 12182.50 SEK ≈ 24.4% of your salary
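A minimal sketch of that calculation in code, using the 2019 numbers above (the helper name is my own, and the thresholds change every year):

    // 2019 pension contribution for a given monthly salary, following the
    // simplified general pension + ITP1 rules described above.
    static double MonthlyPensionContribution(double monthlySalary)
    {
        const double cap = 7.5 * 64400 / 12.0; // 7.5 IBA/year = 40250 SEK/month in 2019

        var general = 0.185 * Math.Min(monthlySalary, cap);      // general pension, capped
        var itp1Below = 0.045 * Math.Min(monthlySalary, cap);    // ITP1, below the cap
        var itp1Above = 0.30 * Math.Max(monthlySalary - cap, 0); // ITP1, above the cap

        return general + itp1Below + itp1Above;
    }

    // MonthlyPensionContribution(50000) = 7446.25 + 1811.25 + 2925 = 12182.50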

So if you are a high-income person, this part can be very significant in your pension fund, and by default you don’t get a choice as good as with the premium pension, so you should pay close attention to it.

Just like the premium pension, you can of course manage this pension, even to a bigger extent, as you can choose the provider yourself. In many cases, however, the choice of providers can be limited, depending on your employer’s contract. Notable providers in the market are:

  • Most banks, including the big four (SEB, Swedbank, Nordea & Handelsbanken) and smaller ones (Skandia, Länsförsäkringar)…
  • Avanza
  • Nordnet
  • SPP
  • Söderberg & Partners

As always, it’s best to talk with your company’s HR and/or your fund manager to see which options you have. I’d always recommend choosing the one with the lowest fees (less than 0.3%/year) and the biggest fund portfolio. Note that the provider might have a more limited fund portfolio for pensions, so make sure to check that.

With the occupational pension you have two choices, or actually three:

  • Funds (Fondförsäkring)
  • Traditional insurance (Traditionell försäkring)
  • A mix between those two.

When I started paying my occupational pension, I went with traditional insurance because I was clueless about the other options. In the end, I moved my pension into funds. Why? Lower cost with higher return.

Skandia Traditionell försäkring: 0.55% management fee, with an average return of 8.4%/year in the last 4 years (2014–2018), down to 5% from May 1st, 2019.

A US index fund: 0.2% management fee, with an average return of 18%/year in the last 5 years.

It’s not even close! Assume I put 5000 SEK/month into my occupational pension (which is not the actual number), the traditional insurance returns 5.3%/year, and the index fund returns 9.5%/year – both after fees. After 30 years:

  • Traditional insurance gives me 4.419.370 SEK
  • The US index fund gives me 10.245.650 SEK
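The math behind those numbers is just compound interest on monthly deposits. A minimal sketch, assuming monthly compounding (which roughly reproduces the figures above):

    // Future value of a fixed monthly deposit, compounded monthly.
    static double FutureValue(double monthlyDeposit, double annualReturn, int years)
    {
        var monthlyRate = annualReturn / 12.0;
        var months = years * 12;
        return monthlyDeposit * (Math.Pow(1 + monthlyRate, months) - 1) / monthlyRate;
    }

    // FutureValue(5000, 0.053, 30) ≈ 4.4 million SEK (traditional insurance)
    // FutureValue(5000, 0.095, 30) ≈ 10.2 million SEK (index fund)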

The choice was easy!

What if you don’t know which funds to invest in? Nordnet, for example, provides about 1400 funds in their portfolio. Which ones to choose? A general recommendation is to put your money into an index fund – a type of fund that passively follows an index, like the Standard & Poor’s 500. The common characteristics of index funds are that they are very low cost (commonly a 0.2%/year fee) and that they yield the market return, which is, in most cases, very good. The average return of the S&P 500 index since its inception has been about 9.7%/year.

A well-known piece of advice is to have a three-part portfolio: a US index fund (following the S&P 500 index), a global market index fund, and a total market bond fund. It’s up to you, your age, your risk tolerance… to decide how many percent you want to invest in each. If you ask me, I’d say invest in a US index fund (most banks have one) and forget about it. If you are older, you might want to put part of your money into bonds, and if you are younger, you might want to bet by putting more money into actively managed funds – which might have higher returns than index funds, but with higher fees. In the long term, index funds should still be the majority of your investment portfolio. Avoid funds with high fees. Avoid paying people to adjust funds for you – the less they take, the more you make.

Private pension

This is your own pension contribution. However, since 1/1/2016 this is no longer a “tax deductible” contribution, so very few people still contribute to this part. I personally don’t, so I will skip writing about it.

Listing permissions per user/group

This week I came across this question on the Episerver World forum https://world.episerver.com/forum/developer-forum/Episerver-Commerce/Thread-Container/2019/5/get-rolepermission-data/ , and while it is not Commerce-related, it is quite interesting to solve. Perhaps this short post will help the original poster, as well as future visitors.

As in the thread, I replied with the first piece needed to solve the puzzle:


You can use PermissionTypeRepository to get the registered PermissionTypes, then PermissionRepository to figure out which groups/users have a specific permission 

If you want to list the permissions granted to a specific role or user, it is just a simple reversal using a dictionary:

    // Map each role/user name to the set of permission types granted to it.
    var rolePermissionMap = new Dictionary<string, HashSet<PermissionType>>(StringComparer.OrdinalIgnoreCase);
    var permissionTypes = _permissionTypeRepository.List();
    foreach (var permissionType in permissionTypes)
    {
        // All users/groups/virtual roles this permission type is granted to.
        var securityEntities = _permissionRepository.GetPermissions(permissionType);
        foreach (var securityEntity in securityEntities)
        {
            if (rolePermissionMap.TryGetValue(securityEntity.Name, out var permissions))
            {
                permissions.Add(permissionType);
            }
            else
            {
                rolePermissionMap[securityEntity.Name] = new HashSet<PermissionType> { permissionType };
            }
        }
    }

As suggested above, we use PermissionTypeRepository to list the registered PermissionType(s), and then for each PermissionType we use PermissionRepository to get the list of SecurityEntity(s) it is granted to. A SecurityEntity can be a user, a group, or a virtual role, and is identified by its name. For the purpose of demonstration, we only use names: for each SecurityEntity granted a permission, we check if it is already in our dictionary; if yes, we add the permission to its set, otherwise we add a new entry.

Simple, eh?

Unless you are assigning/un-assigning permissions a lot, it is probably a good idea to keep this dictionary in cache for some time, because it is not exactly cheap to build.
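For example, a minimal sketch using System.Runtime.Caching – the cache key, the 10-minute lifetime, and the BuildRolePermissionMap helper (the loop above) are my own choices:

    using System.Runtime.Caching;

    private const string CacheKey = "RolePermissionMap";

    public Dictionary<string, HashSet<PermissionType>> GetRolePermissionMap()
    {
        // Serve the cached map if we built it recently.
        if (MemoryCache.Default.Get(CacheKey) is Dictionary<string, HashSet<PermissionType>> cached)
        {
            return cached;
        }

        var map = BuildRolePermissionMap(); // hypothetical helper wrapping the loop shown above
        MemoryCache.Default.Set(CacheKey, map, DateTimeOffset.UtcNow.AddMinutes(10));
        return map;
    }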

A super short review of Dragon Quest XI: Echoes of an Elusive Age

DQXI is the most recent game I completed, and I don’t complete a lot of games. I was interested in the game because it was hugely anticipated, but the reviews put me off a little bit. Don’t get me wrong, the reviews are very positive – or at least most of them are – but they are not at the level I wanted. The final push was a very good discount on the physical version, so I was like “What the hell”. Surprise: I was hooked.

The good

Very good “anime” style. The game looks almost as good as its pre-rendered cutscenes.

The game is very well rounded and very enjoyable. I ran into almost no bugs – at least none noticeable – during my ~100 hours with the game.

Some plot twists that make the story interesting.

Good characters and side stories. Even though they are not up to the level of The Witcher 3, they are still good enough to enjoy.

The crafting system (Fun-sized forge) is actually quite fun to work with.

The bad

The combat lacks the depth usually seen in turn-based combat. That is not exactly a bad thing for me, but if you are looking for a challenge, the combat system in Persona 5 is far more engaging.

A traditional, quite predictable main story.

The ugly

The background music is repetitive and can be annoying at times.

The improvements in the Switch version are not coming to the PS4.

A super short review of XBox One X

I have been a PS4 fan (if I can call myself that) since the day I bought the original PS4 to play Dragon Age: Inquisition. At that point, the PS4 was clearly the better choice than the Xbox One: smaller, without a separate power adapter, 50% more powerful (1.84 TFLOPS vs 1.23 TFLOPS), simpler policies for sharing/reselling games, etc.

I even liked the Xbox One X when it was announced – it checks almost all the boxes, except for, well, games and price – so I gave it a pass, especially when I got a PS4 Pro from a colleague at a very good price. This generation, the PS4 has won for many reasons, one of them being an excellent line of exclusives.

Why not all three of them? Why choose side?

And I remained faithful until the day Elgiganten – a big electronics retailer in Sweden – sold the Xbox One X in a Fallout 76 bundle at an unbeatable price (cheaper than the Xbox One X alone, so the game actually reduced the price of the bundle!). I told myself: why not, if I don’t like it I can just return it. (It turned out that the game is digital, so I couldn’t return it. But I liked the console in the end, so I decided to keep it. I find myself playing Apex Legends every night now, and I’m happy with it.)

I won’t play Fallout 76, but the bundle was cheaper than the Xbox One X alone, thanks to it.

The good

A marvel of engineering. It’s significantly more powerful than my PS4 Pro, yet it is the same size and even quieter.

Incredible value if you factor in Game Pass: a bunch of good games at a very low price, especially when you use the promotions that happen all the time. I spent about 30 SEK (less than 4 USD) for 3 months of it.

The controller battery lasts me weeks, and it takes only 1 minute or so to replace the battery.

A new generation of gamers!

Games that are optimized for X run exceptionally well. Forza Horizon 4, Red Dead Redemption 2, just to name a few.

Xbox and Xbox 360 games backward compatibility.

UHD Blu-ray player.

The bad

The UI is a mess. I complained about how the HBO UI is a mess, but I think the Xbox One UI is on par in terms of terribleness.

The ugly

The Blu-ray player refuses to play my UHD Blu-rays 9 out of 10 times.

I will have to re-buy many games to take advantage of native 4K.

IContentLoader.Get(contentLink) is considered harmful for catalog content.

A while ago I wrote about how you should be aware of IContentLoader.GetChildren<T>(contentLink) here. However, that is only half of the story.

IContentLoader.Get<T>(contentLink) is also considered harmful. Not in the sense that it causes damage to your site (we would never, ever let that happen), nor that it is slow (not unless you abuse it), but because it can behave very unexpectedly.

As you might already know, catalog content fully supports language versions: a catalog might have multiple languages enabled, and each and every catalog item in that catalog (node/category and entry) will be available in those languages. However, those languages are not equal – (only) one is the master language. What’s the difference then?

One very important characteristic is how the master language affects properties. Properties decorated with the [CultureSpecific] attribute can be different in each language, and can therefore be edited in each language. Properties without the [CultureSpecific] attribute are the same in all languages, and can only be edited in the master language. In the Catalog UI, if you switch to a non-master language, those properties are grayed out, indicating they can’t be edited.
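For example, a hypothetical product type (the type and property names are mine; the attributes are the usual Episerver ones):

    using EPiServer.Commerce.Catalog.ContentTypes;
    using EPiServer.Commerce.Catalog.DataAnnotations;
    using EPiServer.DataAnnotations;

    [CatalogContentType(DisplayName = "My product")]
    public class MyProduct : ProductContent
    {
        // Can differ per language; editable in every language version.
        [CultureSpecific]
        public virtual string MarketingText { get; set; }

        // Same in all languages; editable only in the master language.
        public virtual string ModelNumber { get; set; }
    }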

Now, why is IContentLoader.Get<T>(contentLink) considered harmful? Because you don’t supply a CultureInfo to let it know which version you want, it relies on the current preferred language to load the content. And if you have a catalog whose master language is different from the current preferred language, you are loading a non-master language version. If you then try to edit a non-[CultureSpecific] property and save it, the changes will not be saved, without error or warning.

It then becomes very confusing, because it sometimes works (when someone has set the current preferred language to match the catalog master language) and sometimes it doesn’t.

That can cost you hours, if not days, of figuring out what is wrong with your code.

The same thing applies to IContentLoader.TryGet<T>(contentLink).

The solution? Always use the overload that takes a CultureInfo or a LoaderOptions parameter, even if you just want to read the content. That creates a “good” habit and lets you quickly spot code that might be problematic.

Use this to load the master language version, if you wish to update a non-[CultureSpecific] property:

 new LoaderOptions() { LanguageLoaderOption.MasterLanguage() }
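For example, with an injected IContentLoader (EntryContentBase here is just one possible type to load):

    // Load the master language version explicitly, so that saving
    // non-CultureSpecific properties behaves predictably.
    var content = _contentLoader.Get<EntryContentBase>(contentLink,
        new LoaderOptions() { LanguageLoaderOption.MasterLanguage() });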

Later versions of Commerce will log a warning if you try to save a non-master language version with one or more changed non-[CultureSpecific] properties.

Control the thousand separator for Money in Episerver Commerce

If you are selling goods in multiple markets that share a currency but have different languages, such as the Eurozone, you might notice that while everything looks quite good, the thousands separator might be off from time to time: it is always the same and does not change to match the language, so sometimes it’s correct, and sometimes it’s not.

Let’s take a step back to see how the thousands separator should be shown:

In the United States, this character is a comma (,). In Germany, it is a period (.). Thus one thousand and twenty-five is displayed as 1,025 in the United States and 1.025 in Germany. In Sweden, the thousands separator is a space.

https://docs.microsoft.com/en-us/globalization/locale/number-formatting
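You can see this with plain .NET formatting (using System.Globalization; note that sv-SE uses a non-breaking space as the group separator):

    var number = 1025m;
    Console.WriteLine(number.ToString("N0", CultureInfo.GetCultureInfo("en-US"))); // 1,025
    Console.WriteLine(number.ToString("N0", CultureInfo.GetCultureInfo("de-DE"))); // 1.025
    Console.WriteLine(number.ToString("N0", CultureInfo.GetCultureInfo("sv-SE"))); // 1 025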

You might ask why this problem happens with Episerver Commerce. In Commerce, each currency has an attached NumberFormatInfo which lets the framework know how to format the currency. During startup, the system loops through the available CultureInfo(s) and assigns each one’s NumberFormat to the currency it handles.

The problem is that there might be multiple CultureInfo(s) that can handle the same currency. For example, EUR, which is used across the Eurozone, can be handled by multiple (20?) cultures. However, the first matching CultureInfo to handle the currency is the one used for its format. In most cases, that will be br-FR (because the CultureInfo(s) are sorted by name, and this CultureInfo is the first in the list to handle EUR).

br-FR does not use a dot as the thousands separator, but a whitespace. That’s why, even if your language is de-DE, an amount in EUR will not be formatted as 1.234,45 but as 1 234,45.

How to fix that problem?

Luckily, we can set the NumberFormatInfo attached to each currency. If you are only selling in Germany, you can make sure that EUR is always formatted in the German style by adding this to one of your initialization modules:

var culture = CultureInfo.GetCultureInfo("de-DE");
Currency.SetFormat("EUR", culture.NumberFormat);

But if you have multiple languages for one currency, this will simply not work (because it’s static, it will affect all customers). Your only option is to avoid Money.ToString() and use Money.ToString(IFormatProvider) instead, for example:

money.ToString(CultureInfo.CurrentUICulture);

Assuming CultureInfo.CurrentUICulture is set to the correct one.

This, however, does not solve the problem for merchandisers using Commerce Manager. They might have to work with orders from multiple markets, and if your site is selling goods in Europe, chances are that merchandisers will see prices without the correct thousands separator. Most places in Commerce Manager use Money.ToString(), and there is a reason for that: it’s too risky to use Money.ToString(CultureInfo.CurrentUICulture), because if a merchandiser uses English, he or she would likely see money formatted with “$” instead of “€”, and that is a much bigger problem in itself.

Moral of the story: localization is hard, and sometimes a compromise is needed.

Fixing ASP.NET Membership performance – part 1

Even though it is not the best identity management system in the .NET world, the ASP.NET Membership provider is still fairly widely used, especially by systems that have been running for quite a long time with a significant number of users: migrating to a better system like AspNetIdentity does not come cheap. However, being built in the early days of ASP.NET means the Membership provider has numerous significant limitations: besides the “architecture” problems, it also has limited performance. Depending on whom you ask, the ultimate “maximum” number of customers that the ASP.NET Membership provider can handle ranges from 30.000 to 750.000. That does not sound great. If you start a new project today, you are probably better off with AspNetIdentity or some other solution, but if your website is using the ASP.NET Membership provider and there is currently no plan to migrate, then read on.

The one I will be using for this blog post has around 950.000 registered users, and the site is doing great – but that was achieved with some very fine-grained performance tuning, and a very high-end Azure subscription.

A performance overview 

I have been using the ASP.NET Membership provider for years, but I had never looked into it from a performance perspective (even though I have done some very nasty digging into its table structure). Now that I have the chance, I realize how bad it is.

It is fairly common in the aspnet_* tables that indexes have ApplicationId as the first column. It does not take a database master to know that this is a very ineffective way to create an index – in most cases, you only have one ApplicationId in your website, making those indexes useless when you want to, for example, query by UserId. This is a rookie mistake – a newbie tends to order the columns in the index the same way they appear in the table, thinking that SQL Server will just do magic to exchange the order for the best performance. That is not how SQL Server – or RDBMS systems in general – work.

It is OK to be a newbie or to misunderstand some concepts. I had the very same misconception once, and learned my lesson. However, it should not be OK for a framework to make that mistake, and never correct it.

That is the beginning of much bigger problems. Because of the ineffective column order, the built-in indexes are almost useless. That makes queries which should be very fast unnecessarily slow, wasting resources and increasing your site’s average response time. That is of course bad news. The good news is that it’s at the database level, so we can change it for the better. If it were at the application level, our chances of doing that would be close to none.

Missing indexes

If you use Membership.GetUserNameByEmail on your website a lot, you might notice that it is … slow. It leads to this query:

        SELECT  u.UserName
        FROM    dbo.aspnet_Applications a, dbo.aspnet_Users u, dbo.aspnet_Membership m
        WHERE   LOWER(@ApplicationName) = a.LoweredApplicationName AND
                u.ApplicationId = a.ApplicationId    AND
                u.UserId = m.UserId AND
                LOWER(@Email) = m.LoweredEmail

Let’s just ignore the style for now (INNER JOIN would be a much more popular choice) and look into what is actually done here. It joins 3 tables by their keys. The join with aspnet_Applications is fairly simple, because you usually have just one application. The join between aspnet_Users and aspnet_Membership is also simple, because both of them have an index on UserId – clustered on aspnet_Users and non-clustered on aspnet_Membership.

The last condition is the problematic one. The clustered index on aspnet_Membership actually looks like this:

CREATE CLUSTERED INDEX [aspnet_Membership_index]
    ON [dbo].[aspnet_Membership]([ApplicationId] ASC, [LoweredEmail] ASC);

Uh oh. Even though this index contains LoweredEmail, it’s the worst possible kind of index. By putting the least distinctive column first, it defeats the purpose of the index completely. Every request to get a user name by email address has to perform a full table scan (oops!).

This is the last thing you want to see in an execution plan, especially with a fairly big table.

It should have been just

CREATE CLUSTERED INDEX [aspnet_Membership_index]
    ON [dbo].[aspnet_Membership]([LoweredEmail] ASC);

which helps SQL Server to use the optimal execution plan

If you look into the Azure SQL Database recommendations, it suggests creating a non-clustered index on LoweredEmail. That is not technically incorrect, and it does help. However, keep in mind that each non-clustered index has to “duplicate” the clustered index key to identify the rows, so keeping the useless clustered index actually increases waste and slows down performance (if only a little, because you have to perform more reads to get the same data). Still, if your database is currently performing badly, adding a non-clustered index is the much quicker and safer option. The change to the clustered index should be done with caution, at a low-traffic time.

I tested the stored procedure on the database above, first without any additional index:

Table 'aspnet_Membership'. Scan count 9, logical reads 20101, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Applications'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Users'. Scan count 0, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row affected)

 SQL Server Execution Times:
   CPU time = 237 ms,  elapsed time = 182 ms.

With the new non-clustered index:


(1 row affected)
Table 'aspnet_Applications'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Users'. Scan count 0, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Membership'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row affected)

 SQL Server Execution Times:
   CPU time = 15 ms,  elapsed time = 89 ms.

With the new clustered index:

(1 row affected)
Table 'aspnet_Applications'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Users'. Scan count 0, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'aspnet_Membership'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row affected)

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 89 ms.

Don’t we have a clear winner?

Speed up catalog routing if you have multiple children under catalog

A normal catalog structure is like this: you have a few high-level categories under the catalog, each high-level category has a few lower-level categories under it, and each lower-level category has its own children, and so on and so forth, until you reach the leaves – the catalog entries.

However, it is not uncommon to have multiple children (categories and entries) directly under the catalog. Even though that is not something you should do, it happens.

But that is not without drawbacks. You might notice it is slow to route to a product. It might not be visible to the naked eye, but if you use a decent profiler (I personally recommend dotTrace), it can be fairly obvious that your site is not routing optimally.

Why?

To route to a specific catalog content, for example http://commerceref/en/fashion/mens/mens-shirts/p-39101253/, the default router has to figure out which content is mapped to each URL segment. With the default registration, where the catalog root is the routing root, we start with the catalog, which maps to the first part of the route (fashion). How does it figure out which content to route for the next part (mens)?

Until recently, what it did was call GetChildren on the catalog ContentReference. Now you can see the problem: even with a cached result, that is still too much – GetChildren with a big number of children is definitely expensive.

We noticed this behavior thanks to Erik Norberg. An improvement has been made in Commerce 12.10 to make sure that even with a large number of children directly under the catalog, the router performs adequately.

If you can’t upgrade to 12.10 or later (you should!), there is a workaround that improves the performance. By adding your own implementation of HierarchicalCatalogPartialRouter, you can override how the children content is fetched – by using a more lightweight method (GetBySegment):

    public class CustomHierarchicalCatalogPartialRouter : HierarchicalCatalogPartialRouter
    {
        private readonly IContentLoader _contentLoader;

        public CustomHierarchicalCatalogPartialRouter(Func<ContentReference> routeStartingPoint, CatalogContentBase commerceRoot, bool enableOutgoingSeoUri) : base(routeStartingPoint, commerceRoot, enableOutgoingSeoUri)
        {
            // The base class does not expose its content loader, so resolve one here
            // to make sure _contentLoader is never null with this constructor.
            _contentLoader = ServiceLocator.Current.GetInstance<IContentLoader>();
        }

        public CustomHierarchicalCatalogPartialRouter(Func<ContentReference> routeStartingPoint, CatalogContentBase commerceRoot, bool supportSeoUri, IContentLoader contentLoader, IRoutingSegmentLoader routingSegmentLoader, IContentVersionRepository contentVersionRepository, IUrlSegmentRouter urlSegmentRouter, IContentLanguageSettingsHandler contentLanguageSettingsHandler, ServiceAccessor<HttpContextBase> httpContextAccessor) : base(routeStartingPoint, commerceRoot, supportSeoUri, contentLoader, routingSegmentLoader, contentVersionRepository, urlSegmentRouter, contentLanguageSettingsHandler, httpContextAccessor)
        {
            _contentLoader = contentLoader;
        }

        protected override CatalogContentBase FindNextContentInSegmentPair(CatalogContentBase catalogContent, SegmentPair segmentPair, SegmentContext segmentContext, CultureInfo cultureInfo)
        {
            // Load only the child matching the next segment, instead of loading all children.
            return _contentLoader.GetBySegment(catalogContent.ContentLink, segmentPair.Next, cultureInfo) as CatalogContentBase;
        }
    }

And then, instead of using CatalogRouteHelper.MapDefaultHierarchialRouter, you register your router directly:

    var referenceConverter = ServiceLocator.Current.GetInstance<ReferenceConverter>();
    var contentLoader = ServiceLocator.Current.GetInstance<IContentLoader>();
    var commerceRootContent = contentLoader.Get<CatalogContentBase>(referenceConverter.GetRootLink());
    routes.RegisterPartialRouter(new CustomHierarchicalCatalogPartialRouter(startingPoint, commerceRootContent, enableOutgoingSeoUri));

(ServiceLocator is just to make the code easier to understand. You should do this in an IInitializationModule, so use context.Locate.Advanced instead.)

This is applicable from 9.2.0 and newer versions. 

Moral of the story:

  • Catalog structure can play a big role when it comes to performance.
  • You should do profiling whenever you can.
  • We do that too, and we make sure to include improvements in later versions, so keeping your website up to date is a good way to tune performance.

Refactoring Commerce catalog code, a story

It is not a secret that I am a fan of refactoring. Clean, shorter, simpler code is always better. It’s always a pleasure to delete some code while keeping all functionality: less code means fewer possible bugs, and fewer places to change when you have to change something.

However, while refactoring can bring a lot of joy to the one who actually does it, it’s very hard to share the experience: in most cases it’s very specific, and the problem itself is not that interesting to the outside world. This story is an exception, because it might be helpful/useful for other Commerce developers.
