Cheap coffee stuff from China – a review

A word of warning: buying stuff from China means long shipping times, and you will have almost no support or customer service (including warranty) ever. If things go wrong during transport – and very cheap items are often not trackable once they leave China – your order is potentially lost for good.

Timemore Black Mirror Basic Plus – $40/400kr

This is the most “luxurious” item I bought, and I think it’s well worth it. It’s well built, it’s fast, it’s accurate. Maybe it’s not as good as an Acaia – I have never been able to justify spending $200 on a scale – but I’d say it’s more than enough. It does not really matter if your espresso is off by a few tenths of a gram.

My rating: Buy!

Spouted cup – 70kr

Once upon a time I pulled a double shot for me and my wife. I used the spouted portafilter to divide the espresso into 2 cups, nice and easy. But that poses 2 problems: first, I lose the fantastic view of a bottomless portafilter extraction. Second, the spouted portafilter is a PITA to clean properly. So I tried a different option – a spouted cup.

You can see in the photo above a cup with spouts that make it easier to divide the espresso. It works, much better than a normal cup. But it is also thinner and loses heat much faster.

We have now switched to 2x18gr shots every time I make coffee, so this cup just sits around idle, as it should.

My rating: Buy, if no better option is available to you.

Bottomless portafilter and balanced tamper

The balanced tamper is meant to fix uneven tamping – with a traditional tamper, you might tilt it a bit, i.e. it is not completely level, and that can result in an uneven tamp. With the balanced tamper, the outer plate sits on the rim of the portafilter while the base does the tamping. Because of that, you are guaranteed a level tamp every time.

The balanced tamper is very nice and I liked it a lot. But it has a design flaw of its own – coffee grounds get in between the base and the outer plate. You will have to clean it as often as daily.

My rating:

Bottomless portafilter: Skip. Save up and buy a nicer one.

Balanced tamper: Buy, if you can keep up with the cleaning.

No name coffee scale – 20kr (96% off)

I am happy with my Timemore, but I hate moving it back and forth between the grinder and the espresso machine, so I bought another one just for weighing coffee beans – because of a crazy deal on Temu. It is a copy of the Black Mirror, but smaller. The scale is quite flimsy and not intuitive to use – you have to hold down the power button for a few seconds to turn it on. It is fairly accurate, but slow to respond, and despite being touted as a coffee scale, there is no silicone pad to protect it from heat.

For 20kr – because I got the 96% first-order discount on Temu – it’s OK. No way I would buy this otherwise. Certainly not at the “full” 250kr price.

My rating: Avoid. Buy Timemore.

Coffee bean dosing bowl – 65kr

When you get into single dosing, a dosing bowl is a must – it is nice to pour beans into it and then pour them into the grinder. I ordered one, but it arrived broken (who could have thought china would not survive the shocks of ~10.000km of travel without a lot of wrapping?).

The bowl looks good in photos and seems practical – in the end, China is known for its china, so what could go wrong? Well, it’s well made, but with one design flaw: as the nose is very low, beans jump out of the bowl when you pour them into it. Not many, but 1 bean out of the bowl is 1 bean too many. The bowl was meant for tea (tea leaves are not as jumpy as coffee beans).

If you compare the design of this bowl

With the equivalent from Loveramics:

You can clearly see the difference. Loveramics obviously thought about the issue, and their design is meant to fix it! I’m ordering the Loveramics ones, although they are much more expensive!

My rating: Avoid. Buy Loveramics.

WDT – 100kr

You can see this WDT tool in some of the photos above – I actually bought it for much less, but the price you can get now is closer to 100kr. It is as simple as some long, thin needles attached to a base. Surprisingly, it works well to distribute the coffee grounds. This is one thing you should own, and because it is so simple, you can’t go wrong. It is one thing you can buy from AliExpress without much thinking.

My rating: Buy!

Dosing cup – 70kr

When I decided to try single dose on my Eureka Mignon Specialita, I bought two things: the hopper and the dosing cup.

The dosing cup allows you to grind into it, maybe give it a few shakes, then pour the grounds into the portafilter. It is easier to use in cases where you can’t hold a portafilter, and the shakes are roughly equivalent to using WDT (though some people still use WDT after that), so it has some value. However, this dosing cup has marks inside which let coffee grounds stick. You will eventually have to clean it daily to avoid build-up.

Once you get hold of a Niche Zero dosing cup, you immediately notice the difference in craftsmanship and finish. It is much better built, and it is entirely smooth inside. It’s an unfair comparison – the NZ dosing cup is $39.99 without shipping – but as you only need to buy it once, maybe save up for that if you need a dosing cup.

Single Dosing Hopper – 200kr

The idea is that by slamming the cover, you force the remaining coffee grounds out of the burrs. It is pretty well made and fits my Specialita well (it was advertised to fit any Eureka Mignon grinder). However, it has a bad plastic smell – not very strong, but definitely there – which made me question whether it is food safe. It works, but I hate the smell. The main problem is that the Eureka Mignon Specialita is not designed to be a single-dosing grinder, so while this works to some extent, the workflow is not smooth or intuitive.

Closing thoughts

So my advice when it comes to ordering cheap coffee stuff from China (or from Amazon.se with Chinese sellers) is… don’t. If you have to, stay with established brands like Timemore. The others are cheap for a reason – don’t expect them to feel nice or perform well.

Performance optimization – the hardcore series – part 4

Let’s take a break from memory allocation and do some optimization on another aspect, one just as important (if not more so) – the database.

We all know that database queries play an essential part in any serious app. It’s almost a given that if you want your app to perform well, your database queries must also perform well. And for them to perform well, you need things like proper design (normalization, references etc.), properly written queries, and proper indexes. In this post, we will explore how an index can improve query performance, and how can we do it better.

Let’s start with a fairly simple table design

CREATE TABLE [dbo].[UniqueCoupon](
	[Id] [int] identity primary key clustered, 
	[PromotionId] [int] NOT NULL,
	[Code] [nvarchar](10) NOT NULL,
	[ExpiredOn] [datetime] NULL,
	[Redeemed] [bit] NULL
) ON [PRIMARY]

Nothing extraordinary here; pretty common, if you ask me. Now, for testing purposes, let’s insert 1.000.000 rows into it

INSERT INTO dbo.[UniqueCoupon] (PromotionId, Code)
SELECT
    FLOOR(RAND()*(100)+1),
    SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 7)
GO 1000000
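As a side note, `GO 1000000` repeats a single-row insert a million times and can take quite a while. A set-based sketch that inserts everything in one statement (note that `RAND()` is evaluated only once per statement, hence `CHECKSUM(NEWID())` for per-row randomness):

```sql
-- Insert 1,000,000 rows in one go
INSERT INTO dbo.UniqueCoupon (PromotionId, Code)
SELECT TOP (1000000)
    ABS(CHECKSUM(NEWID())) % 100 + 1,               -- PromotionId between 1 and 100
    SUBSTRING(CONVERT(varchar(255), NEWID()), 0, 7) -- short pseudo-random code
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;                       -- cross join just to generate enough rows
```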

We need to query data by the code, so let’s create a user-defined table type

CREATE TYPE CouponTable AS TABLE (
    Code NVARCHAR(10));

Time to run some queries against the data; let’s go with this

SELECT Id, PromotionId, Code, ExpiredOn, Redeemed FROM dbo.UniqueCoupon
WHERE PromotionId = @PromotionId AND Code IN (SELECT Code FROM @Data)

Here is the complete query, as we need some sample data

declare @data CouponTable
insert into @data
select top 10 code from dbo.UniqueCoupon
where promotionid = 36

SELECT Id, PromotionId, Code, ExpiredOn, Redeemed FROM dbo.UniqueCoupon
WHERE PromotionId = 36 AND Code in (SELECT Code FROM @data)

As we have learned that the execution plan is not a good way to compare performance, let’s use statistics, our trusted friend

set statistics io on
set statistics time on

And this is what we get with the default setup (i.e. no index)

(10 rows affected)
Table '#AEDEED61'. Scan count 1, logical reads 1, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'UniqueCoupon'. Scan count 9, logical reads 7070, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

If you are somewhat experienced with SQL Server, you might guess it would not be exactly happy – obviously, an index is needed. As we query on PromotionId, it makes sense to add an index on it, and SQL Server does suggest one for you

If we just blindly add the index suggested by SQL Server
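The suggested index typically looks something like this – a nonclustered index on PromotionId with every other selected column as an included column (the auto-generated name will differ; this is a sketch of what SQL Server usually proposes):

```sql
CREATE NONCLUSTERED INDEX [IX_UniqueCoupon_PromotionId]
ON [dbo].[UniqueCoupon] ([PromotionId])
INCLUDE ([Code], [ExpiredOn], [Redeemed]);
```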

(10 rows affected)
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'UniqueCoupon'. Scan count 1, logical reads 53, physical reads 0, page server reads 0, read-ahead reads 5, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table '#A7AA9B2B'. Scan count 1, logical reads 1, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

But can we do better?

If we look at the index, there’s something not quite optimal about it – we query by both PromotionId and Code, so it does not really make sense to have Code only as an included column. How about an index on both PromotionId and Code?
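A sketch of such an index, keeping the other selected columns as included columns to avoid key lookups (index name is mine):

```sql
CREATE NONCLUSTERED INDEX [IX_UniqueCoupon_PromotionId_Code]
ON [dbo].[UniqueCoupon] ([PromotionId], [Code])
INCLUDE ([ExpiredOn], [Redeemed]);
```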

(10 rows affected)
Table 'UniqueCoupon'. Scan count 10, logical reads 30, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table '#A1F9F38F'. Scan count 1, logical reads 1, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

Yet we can make it better! From 53 to 30 logical reads might not sound like much, but if you have thousands of queries every hour, it adds up to something fairly significant.

Prepare yourself for a pleasant surprise – when we eventually applied the change to an actual database, the improvement was staggering, much more than we hoped for. Queries that used to run for 24 hours in total every day now take less than 10 minutes (yes, you read that right: 10 minutes).

At this point you could certainly be happy and move on. But can we do better, for the sake of curiosity? Yes we can.

SQL Server is rather smart: it knows we are selecting the other columns as well, so those are included in the suggested index to avoid a key lookup. Let’s see how it performs if we remove them
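The stripped-down version would look like this (again, the name is mine):

```sql
CREATE NONCLUSTERED INDEX [IX_UniqueCoupon_PromotionId_Code_NoInclude]
ON [dbo].[UniqueCoupon] ([PromotionId], [Code]);
```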

(10 rows affected)
Table 'UniqueCoupon'. Scan count 10, logical reads 60, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table '#B1996E94'. Scan count 1, logical reads 1, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

So it was indeed worse – a key lookup is performed for every row (SQL Server uses the index to track down the rows, then reads the other columns from the clustered index)

There are two ways to get rid of those key lookups – include the columns in the index itself, or, more dramatically, make the index clustered. As the data is accessed by PromotionId and Code, that makes perfect sense.

It is a common belief that the identity column should be the clustered index – it is unique, it is not null. However, that only makes sense if it is the most heavily accessed column. In this case, Id only serves as an identity column; it does not need to be the clustered index (although being unique means it will still have a nonclustered index).

ALTER TABLE [dbo].[UniqueCoupon] DROP CONSTRAINT [PK__UniqueCo__3214EC0744C2FF38] WITH ( ONLINE = OFF )
GO

ALTER TABLE [dbo].[UniqueCoupon] ADD PRIMARY KEY NONCLUSTERED 
(
	[Id] ASC
)
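With the primary key now nonclustered, the clustered index can be created on the two columns we actually query by (index name assumed):

```sql
CREATE CLUSTERED INDEX [IX_UniqueCoupon_PromotionId_Code]
ON [dbo].[UniqueCoupon] ([PromotionId], [Code]);
```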

Does this bring a dramatic performance change? Unlikely – my tests show no improvement in the statistics. However, there is one critical impact: we significantly reduced the size of the indexes on the table. (data incoming)

Moral of the story

  • Indexes are crucial.
  • You can almost always do better than the auto-suggested indexes.
  • Real-world testing is the only true validation.

Building a better wish list – part 1

If you have been using Optimizely Customized Commerce, you probably know that, by default, the wish list is just a cart with a special name. Can you guess the name? Surprise, surprise, it’s “Wishlist”. It’s been there since forever, from the early days of Mediachase, and then carried over to the new serializable cart. I had been “fine” with it – i.e. I accepted the approach without thinking about it. But very recently I realized there are several problems with it.

Why is it not a very good idea?

First of all, it shares the same table as normal carts. To search for abandoned carts, you have to skip the carts named “Wishlist”. As there are only a few cart names and they are not evenly distributed, you will have a hard time filtering carts by name.

But there is more. As most customers are using the serializable cart mode now, ever-growing wish lists pose another problem: each operation on the wish list – adding or removing an item – results in a big write to the SerializableCart table. If you have just a few items it might be fine, but a simple test on Commerce shows that with only 9 items in a wish list, the Data column is already more than 2700 characters. And wish lists are meant to be kept forever – they will only grow in size.

My “Saved for later” on Amazon – the closest thing to a wish list. Imagine having that in Optimizely Customized Commerce.

As wish lists are carts, they have to be stored in the same format, even though a lot of the fields are redundant/unnecessary.

The biggest benefit of the default wish list implementation – and I think it trumps all the disadvantages we have listed – is that it’s built-in. You can start using it with almost no additional effort: get a cart with the predefined name and you are good to go. Building a different wish list definitely costs time and resources, a luxury not everyone can afford.

Because of that, I have started building a wish list service in my free time. I plan to make it open source when the time is right, but we’ll see about that.

Moral of the story

  • It is critical to take a step back from time to time and think about what you have done. Things might make less sense when you see them from a different perspective.
  • You can almost always do better.

Cleaning/maintaining routine for espresso machines

“There is no such thing as a too-clean espresso machine.” That is my favorite quote when it comes to cleaning espresso machines and equipment. When you use your machine, coffee grounds and oil build up, and they can, and will, affect the taste of your espresso. Why spend thousands of USD on fancy machines, and a few dozen for each bag of specialty coffee, without getting the best out of them?

Proper cleaning and maintenance also helps prolong your machine’s life and increases your enjoyment of using it.

For every machine

Keep your group head clean after each use. There are several ways of doing that, and you can use whichever combination you like best

  • Draw some hot water through the group head with an empty portafilter to flush away any remaining debris.
  • Use a paper filter or a puck screen. This prevents coffee grounds from sticking to the group head.
  • Wipe the group head with a wet cloth (preferably microfiber) after each shot.
  • Use the fancy tool from Espazzola to clean it up.

You will also need to backflush – i.e. use a blind basket, a basket without holes so water can’t escape. The water flows back into the machine and out through the OPV (over-pressure valve), bringing with it any coffee grounds and oil inside the group head. Each type of group head needs a different backflush schedule – more on that later.

For milk wand

  • Purge the wand before each use.
  • Wipe the wand right after frothing – immediately if yours is not a no-burn wand, i.e. it gets very hot to the touch. Otherwise the milk will bake on and be very hard to remove.
  • Purge the wand as soon as possible after each froth.
  • If your milk wand has a removable tip, remove it once a month to check for blockages.

For equipment

If you are using a bottomless portafilter, either wipe it or rinse it under running water after each use to remove any stuck coffee grounds. One quick way to check whether the basket is clean is to wipe it with a paper towel. If it comes out clean, you are good. If it comes out black, you need to clean a bit more.

If you are using a normal portafilter with a spout, pop the basket out and clean both it and the portafilter (if you have never done this, you might be unpleasantly surprised – yuck!). This is also one of the reasons I prefer the bottomless one.

Every week, soak the equipment that has been in contact with coffee grounds in a detergent that can dissolve coffee oil. I recommend Puly Caff as it’s effective, safe (it’s NSF-certified), and cheap to use. Add 10gr of Puly Caff to 1 liter of hot water, stir well, soak your equipment for 15 minutes, then clean and rinse thoroughly.

For integrated/saturated group heads

These group heads can be backflushed as many times as needed.

  • Once every week, use 3-4gr of Puly Caff in a blind basket, and pull a few shots until the Puly Caff is dissolved, then a few more until the water in the blind basket runs “clean”. Remove the blind basket and pull a few more shots without the portafilter locked in.
  • Every 3 months, or sooner, open the shower screen and clean it. (Tip: make sure the group head has cooled down and is completely comfortable to touch – it can retain heat for a long time.)
  • Change your gasket every year if it is rubber (as it degrades with heat), or every other year if it is silicone. That is just a guideline – check whether it has hardened and lost its elasticity.

For E61 group head

The E61 group head needs lubrication with food-grade silicone grease, and backflushing with Puly Caff washes that away, so you need to be conservative about it. Instead:

  • Backflush with water only after the final shot of the day.
  • Backflush with Puly Caff every other month, then grease your lever. If you do not, your lever will be squeaky, feel tight to open/close, and wear much faster.
  • Open your shower screen every week and clean it. Use a spoon to gently remove it – a hardened rubber tool is even better, to avoid scratches.
  • Change your gasket every year if it is rubber (as it degrades with heat), or every other year if it is silicone. That is just a guideline – check whether it has hardened and lost its elasticity.

Descaling

Limescale is the #1 enemy of espresso machines, especially dual-boiler ones with a steam boiler – as the water boils, it leaves the minerals behind, the TDS of the water increases, and the chance of limescale build-up gets higher.

  • If your water is relatively soft, always use a water softener and change it when it is used up.
  • If your tap water is very hard, you might need other options instead of using it directly. You might have to use distilled water with added minerals (distilled water does not taste good, and it can also be harmful to the electronic components in the boilers – certain sensors rely on the ions in the water to work properly).
  • Draw 200ml of water from the hot water tap to increase the water exchange, and use it to heat your cup. Don’t draw too much, as that can expose the heating element to air and fry it. This ensures your steam boiler gets fresh water every day, avoiding a high mineral concentration.
  • Descale according to the manufacturer’s guidelines. NOTE: be more cautious if one or both of your boilers are brass, as descaling chemicals can harm them.

Routines

Each use

Draw some water from the steam boiler if you have a dual boiler

Clean group head and portafilter

Wipe and purge milk wand

Wipe splashes of coffee (from channeling) or milk (from frothing) if any

Every day

For E61: backflush with water only after last pull of the day

Weekly (or every 3 days, depending on your usage)

Soak portafilter, basket etc. in pulycaff solution, and clean them thoroughly

Clean the drip tray

For saturated group head: backflush with pulycaff

For E61: remove and clean shower head

Every other week

Clean the water tank with some dish soap, rinse it thoroughly

Every other month

For E61: backflush with pulycaff, then lubricate the lever

Every 3 months

For saturated group head: remove and clean the shower head

For E61 with flow control: lubricate the O-rings of the flow control

Every year

Check gaskets and replace them if they have become hard

Remove the cover and check the internals for any signs of leaks

Every other year

Consider descaling if necessary

Performance optimization – the hardcore series – part 3

“In 99% of the cases, premature optimization is the root of all evil”

This quote is usually attributed to Donald Knuth, generally regarded as the “father of the analysis of algorithms”. His actual quote is a bit different:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%.

If you have read my posts, you know that I always ask you to measure your application before diving into optimization. But that’s not the whole story. Without profiling, your optimization effort might be futile. Still, there are things you can “optimize” right away without any profiling – because they are easy to do, they make your code simpler and easier to follow, and you can be certain they are faster.

Let’s see if you can spot the potentially problematic piece of code in this snippet

public Something GetData()
{
    var market = list.FirstOrDefault(x => x.MarketId == GetCurrentMarket().MarketId);
    if (market != null)
    {
        //do some stuffs
    }
}

If you are writing similar code, don’t be discouraged. The problem is easy to overlook – when you call FirstOrDefault, you actually iterate over the list until you find the first matching element. And for each and every element checked, GetCurrentMarket() is called.

Because we can’t be sure when we will find the matching element – it might be the first element, or the last, or not exist at all, or anywhere in between – on average, GetCurrentMarket will be called half the size of the list times.

We don’t know whether GetCurrentMarket is a very lightweight implementation, or whether list is a very small set, but we do know that if this is in a very hot path, the cost can be (very) significant. These are the allocations made by said GetCurrentMarket

This is a custom implementation of IMarketService – the default implementation is much more lightweight and should not be a concern. Of course, fewer calls are always better – no matter how quick something is.

In this specific example, a single call to get the current market, stored in a local variable and used for the scope of the entire method, should be enough. You don’t need profiling to make such an “optimization” (and as we proved, profiling only confirms our suspicion).
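A minimal sketch of that fix, using the names from the snippet above:

```csharp
public Something GetData()
{
    // Call GetCurrentMarket() once, outside the lambda,
    // instead of once per element checked by FirstOrDefault
    var currentMarketId = GetCurrentMarket().MarketId;
    var market = list.FirstOrDefault(x => x.MarketId == currentMarketId);
    if (market != null)
    {
        //do some stuffs
    }
}
```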

Moral of the story

  • For optimization, less is almost always more.
  • You definitely should profile before spending any considerable amount of time optimizing your code. But there are things that can be optimized almost automatically – make them your habit.

The economy of making espressos at home

Making espressos and espresso-based drinks at home is not only about the joy of a hobby; it is also an economical way of drinking high-quality coffee. Let’s talk about it.

An espresso at a cafe costs around 30kr, while a big latte costs around 45kr.

If you and your partner each drink twice a day, it would cost between 120kr and 180kr per day.

Assuming you drink 300 days a year, each year that’s between 36.000kr and 54.000kr for coffee 😮

Now if you are making espressos at home.

Each double-shot espresso needs about 18gr of coffee, but we also have to consider waste and throwaways (for example, when you dial in a new coffee), so let’s be conservative and assume 1kg of coffee makes around 45 shots.

A good 1kg of coffee costs between 250kr and 400kr (specialty grade – usually much better than what you are served in a normal cafe). So it’s about 5.5kr to 8.9kr of coffee per drink.

A big latte needs around 250ml of milk (including waste and throwaways), so each 1.5l of milk makes 6 lattes. A 1.5l of Arla standard 3% milk costs 17.9kr (we always buy at Willys), so it’s about 3kr per drink for milk.

Of course, you need electricity to heat up the machine. My machine, an E61, uses around 0.6-0.7 kWh per day for 4 lattes. Electricity prices have gone up a bit; we are quite lucky to pay a fixed price of 1.3kr/kWh, but let’s say you pay a bit more, 1.5kr/kWh – that’s about 1kr per day for the machine.

And you need other things for cleaning and maintenance, like a water softener. I use the Lelit 70l water softener, which costs around 110kr each, and I change it every 2 months – almost 2kr/day. I also need Puly Caff for cleaning the machine and other things, but after 2 years I haven’t gone through one 900gr bottle yet (around 150kr), so that cost is minimal.

Basically, it’s 22-36kr per day for coffee, 12kr for milk, 1kr for electricity, and 2kr for cleaning – around 37-51kr per day for 4 lattes.

Now that you have coffee at home, you will drink more often – let’s say 365 days per year, because you also have friends coming over – that’s 12,410kr to 18,615kr per year.

Even with some fancy machines and equipment to start with, you would break even in about one year. That includes things like fancy cups, a WDT tool, a scale etc.

Cost             Buying                  Making
Machine cost     N/A                     10.000kr – ∞
Per drink        30-45kr                 8.5-12.5kr
Drinks per year  4x per day, 300 days    4x per day, 365 days
Cost for coffee  36.000-54.000kr         12.410-18.615kr
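The arithmetic above can be sanity-checked in a few lines (prices taken from the text; the per-day total comes out at roughly 37-50kr, matching the 37-51kr range above up to rounding):

```python
# Cafe: 4 drinks/day (2 each for two people), 300 days/year
cafe_year_low = 30 * 4 * 300    # cheapest: espressos at 30kr
cafe_year_high = 45 * 4 * 300   # priciest: big lattes at 45kr

# Home: coffee cost per double shot, from 250-400kr per kg at ~45 shots/kg
coffee_low, coffee_high = 250 / 45, 400 / 45
milk = 17.9 / 6                 # 1.5l of milk makes ~6 lattes
elec_per_day = 1.0              # ~0.7 kWh at ~1.5kr/kWh
cleaning_per_day = 2.0          # water softener + detergent

home_day_low = 4 * (coffee_low + milk) + elec_per_day + cleaning_per_day
home_day_high = 4 * (coffee_high + milk) + elec_per_day + cleaning_per_day

print(cafe_year_low, cafe_year_high)              # 36000 54000
print(round(home_day_low), round(home_day_high))  # 37 50
```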

Some might argue that making espressos also costs time, but at a cafe you need to walk down the street (assuming you have a cafe right around the corner) and wait for your coffee. Also factor in the time to put on and take off your clothes.

Not to mention that the relaxing feeling of brewing espressos is priceless.

Of course, those numbers only apply if you drink coffee frequently. Things change if you drink less, or more, or without milk.

Performance optimization – the hardcore series – part 2

Earlier we started a new series about performance optimization: Performance optimization – the hardcore series – part 1 – Quan Mai’s blog (vimvq1987.com). There are tons of places where things can go wrong. A seasoned developer can, from experience, avoid some obvious performance errors. But as we will soon learn, a small thing can make a huge impact if it is called repeatedly, and a big thing might be OK to use as long as it is called only once.

Let’s take this example – what do you think about this snippet? CategoryIds is a list of strings converted from ContentReference

if (CategoryIds.Any(x => new ContentReference(x).ToReferenceWithoutVersion() == contentLink))
{
    //do stuff
}

If this is in any “cool” path that runs a few hundred times a day, you will be fine. It’s not “elegant”, but it works, and maybe you can get away with it. However, if it is in a hot path that is executed every time a visitor views a product page on your website, it can create a huge problem.

And can you guess what it is?

new ContentReference(string) is fairly lightweight, but if it is called a lot, this is what happens. These are the allocations from the constructor alone, within only 220 seconds of the trace

A lot of allocations which could have been avoided if CategoryIds were just an IEnumerable&lt;ContentReference&gt; instead of IEnumerable&lt;string&gt;

For comparison, this is how much 10.000 and 1.000.000 new ContentReference calls would allocate

Things are similar if you use .ToReferenceWithoutVersion() to compare against another ContentReference (although to a lesser extent, as ToReferenceWithoutVersion returns the same ContentReference if the WorkId is 0, and it uses cloning instead of new). The correct way to compare two instances of ContentReference while ignoring versions is to compare with ContentReferenceComparer.IgnoreVersion.
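For illustration, a sketch of both suggestions combined (assuming CategoryIds becomes IEnumerable&lt;ContentReference&gt; as suggested above, and that ContentReferenceComparer.IgnoreVersion is used as the IEqualityComparer):

```csharp
// CategoryIds is now IEnumerable<ContentReference> – no per-item construction,
// and versions are ignored by the comparer instead of ToReferenceWithoutVersion()
if (CategoryIds.Any(x => ContentReferenceComparer.IgnoreVersion.Equals(x, contentLink)))
{
    //do stuff
}
```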

Moral of the story

  • It is not only what you do, but also how you do it
  • Small things can make big impacts, don’t guess, measure!

Performance optimization – the hardcore series – part 1

Hi again, everybody. New day, new thing to write about. Today we will talk about memory allocation and the effect it has on your website’s performance. With .NET, memory allocations are usually overlooked because the CLR handles them for you. Except in rare cases where you handle unmanaged resources and have to be conscious about releasing them yourself, it’s usually a fire-and-forget approach.

The truth is, it is more complicated than that. The more objects you create, the more memory you need, and the more time the CLR needs to clean up after you. You might have written code that executes blazing fast in your benchmarks, yet in reality your website might still struggle to perform well in the long run – because of garbage collection. Occasional GC is not a concern – it’s the nature of the .NET CLR – but frequent GC, especially Gen 2 GC, is definitely something you should look into and fix, because it directly affects your website’s performance.

The follow-up question – how do you fix that?

Of course, the first step is always measuring the memory allocations of your app. Locally you can use something like JetBrains dotMemory to profile your website, but that has a big caveat – you can't really mimic the actual traffic to your website. Sure, it is very helpful for profiling something like a scheduled job, but it is less than optimal for seeing how your website performs in reality. To do that, we need another tool, and I've found nothing better than Application Insights Profiler traces on Azure. It samples your website periodically, taking ETL (event trace log) files over 220-second sessions. (Note: depending on your .NET version, you might download a .diagsession or a .netperf.zip file from Application Insights, but they are essentially the same inside – zipped ETL.) Those files are extremely informative; they contain a whole load of information, which might be overwhelming if you're new, but take small steps and you'll get there.

To open an ETL file, the common tool is PerfView (microsoft/perfview: PerfView is a CPU and memory performance-analysis tool (github.com)). Yes, it has a certain year-2000 look, like other analysis tools (remember WinDbg?), but it is fast, efficient, and gets the job done.

Note that once extracted, ETL files can be very big – often in the 1GB-or-more range. PerfView has to go through all of that event log, so it's extremely memory hungry as well, especially if you open multiple ETL files at once. My PerfView kept crashing on my 16GB RAM machine (I had several Visual Studio instances open), and that was solved when I switched to 32GB RAM.

The first step is to confirm the allocation problems with GCStats (this is one of the extreme ones, but it does happen)

Two main things to look into – Total Allocs, i.e. the total size of objects allocated, and the time spent in garbage collection. They are usually closely related, but not always: total allocation might not be high while GC time is – for example with large object allocations (we will talk about that in a later post). Then, for the purpose of memory allocation analysis, this is where you should look

What you find in there, might surprise you. And that’s the purpose of this series, point out possible unexpected allocations that are easy – or fairly easy – to fix.

In this first post, we will talk about a somewhat popular feature – Injected<T>.

We all know that in Optimizely Content/Commerce, the preferred way of dependency injection is constructor injection. I.e. if your class has a dependency on a certain type, that dependency should be declared as a parameter of the constructor. That's nice and all, but not always possible. For example, you might have a static class (used for extension methods), so no constructor is available. Or in some rare cases, you can't add a new parameter to the constructor because it would be a breaking change.
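As a refresher, constructor injection looks like this (ArticleService is a made-up class for illustration; IContentLoader is a standard Content Cloud service):

```csharp
// A minimal sketch of constructor injection in an Optimizely site.
public class ArticleService
{
    private readonly IContentLoader _contentLoader;

    // The container resolves IContentLoader once, when it constructs this class.
    public ArticleService(IContentLoader contentLoader)
    {
        _contentLoader = contentLoader;
    }

    public PageData LoadArticle(ContentReference link)
        => _contentLoader.Get<PageData>(link);
}
```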

Adding Injected<T> as a hidden dependency in your class at least works – so can you forget about it?

Not quite!

This is how the use of Injected<T> results in allocations of StructureMap objects – yes, every time you call Injected<T>.Service, the whole dependency tree must be built again.

And that’s not everything, during that process, other objects need to be created as well. You can right click on a path and select “Include item”. The allocations below are for anything that were created by `module episerver.framework episerver.framework!EPiServer.ServiceLocation.Injected1[System.__Canon].get_Service() i.e. all object allocations, related to Injected<T>

You can expand further to see which Injected<T>s have the most allocations, and therefore are the ones that should be fixed.

How can one fix an Injected<T> then? The best fix is to make it a constructor dependency, but that might not always be possible. An alternative fix is to use ServiceLocator.GetInstance, and make that variable static if possible. That way you won't have to call Injected<T>.Service every time you need the instance.

There are cases where you do need a new instance every time; then the fix might be more complicated, and you might want to check whether you need the whole dependency tree, or just a data object.
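A sketch of that static-variable alternative (IPriceService stands in for whatever service you are resolving; only do this for services that are safe to cache, i.e. singletons or stateless services):

```csharp
public static class PricingExtensions
{
    // Before: Injected<IPriceService>.Service rebuilds the dependency tree on every access.
    // private static Injected<IPriceService> _priceService;

    // After: resolve once, lazily and thread-safely, and reuse the instance.
    private static readonly Lazy<IPriceService> _priceService =
        new Lazy<IPriceService>(() => ServiceLocator.Current.GetInstance<IPriceService>());

    public static IPriceService PriceService => _priceService.Value;
}
```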

Moral of the story

  • Performance can’t be guessed, it must be measured
  • Injected<T> is not your friend. You can use it if you have no other choice, but definitely avoid it in hot paths.

The do nothing SearchProvider

With the Find-backed IEntrySearchService from the previous post, we can now put SearchProvider to rest. There are, however, parts of the framework that still rely on SearchManager, which expects a configured, working SearchProvider – the Full search index job and the Incremental search index job are two examples. To make sure we don't break the system, we might want to give SearchManager something to chew on. A do-nothing SearchProvider, that is!

And we need a DoNothingSearchProvider

    public class DoNothingSearchProvider : SearchProvider
    {
        public override string QueryBuilderType => GetType().ToString();

        public override void Close(string applicationName, string scope) { }
        public override void Commit(string applicationName) { }
        public override void Index(string applicationName, string scope, ISearchDocument document) { }
        public override int Remove(string applicationName, string scope, string key, string value)
        { return 42; } // nothing is actually removed; any value will do here

        public override void RemoveAll(string applicationName, string scope)
        {
        }
        public override ISearchResults Search(string applicationName, ISearchCriteria criteria)
        {
            return new SearchResults(new SearchDocuments(), new CatalogEntrySearchCriteria());
        }
    }

And a DoNothingIndexBuilder

    public class DoNothingIndexBuilder : ISearchIndexBuilder
    {
        public SearchManager Manager { get; set; }
        public IndexBuilder Indexer { get; set; }

        public event SearchIndexHandler SearchIndexMessage;

        public void BuildIndex(bool rebuild) { }
        public bool UpdateIndex(IEnumerable<int> itemIds) { return true; }
    }

What remains is simply to register them in your appsettings.json

    "SearchOptions": {
      "DefaultSearchProvider": "DoNothingSearchProvider",
      "MaxHitsForSearchResults": 1000,
      "IndexerBasePath": "[appDataPath]/Quicksilver/SearchIndex",
      "IndexerConnectionString": "",
      "SearchProviders": [
        {
          "Name": "DoNothingSearchProvider",
          "Type": "EPiServer.Reference.Commerce.Site.Infrastructure.Indexing.DoNothingSearchProvider, EPiServer.Reference.Commerce.Site",
          "Parameters": {
            "queryBuilderType": "EPiServer.Reference.Commerce.Site.Infrastructure.Indexing.DoNothingSearchProvider, EPiServer.Reference.Commerce.Site",
            "storage": "[appDataPath]/Quicksilver/SearchIndex",
            "simulateFaceting": "true"
          }
        }
      ],
      "Indexers": [
        {
          "Name": "catalog",
          "Type": "EPiServer.Reference.Commerce.Site.Infrastructure.Indexing.DoNothingIndexBuilder, EPiServer.Reference.Commerce.Site"
        }
      ]
    },

And that’s it.

Use Find for CSR UI

If you have been using Find, you might be surprised to learn that the CSR UI uses the SearchProvider internally. This is a bit unfortunate: you likely are using Find, and that creates unnecessary complexity. For starters, you need to configure a SearchProvider, and then you need to index the entries, separately from the Find index. If you install EPiServer.CloudPlatform.Commerce, it will set up the DXPLuceneSearchProvider for you, which is basically a wrapper of LuceneSearchProvider that lets it work on DXP (i.e. Azure storage). But even with that, you have to index your entries anyway. You can use FindSearchProvider, but that just creates another problem – it uses a different index than Find, so you double your index count, yet you still have to make sure your content gets indexed. Is there a better way – to use the content already indexed by Find?

Yes, there is

Searches for entries in CSR are done by IEntrySearchService, whose default implementation uses the configured SearchProvider internally. Fortunately for us, as with most things in Commerce, we can create our own implementation and inject it. Now, that comes with a caveat – IEntrySearchService is marked with a BETA remark, so prepare for breaking changes without prior notice. However, it has not changed much since its inception (funny thing: when I checked its history, I was the one who created it 6 years ago, in 2017 – feeling old now), and if it does change, it would be quite easy to adapt.

IEntrySearchService is a simple interface with just one method:


    IEnumerable<int> Search(string keyword, MarketId marketId, Currency currency, string siteId);

It is a bit weird to return an IEnumerable<int> (what was I thinking?), but it was likely created as scaffolding over SearchManager.Search, which returns an IEnumerable<int>, and was never updated. Anyway, an implementation using Find could look like this:

    public class FindEntrySearchService : IEntrySearchService
    {
        private readonly EPiServer.Find.IClient _searchClient;

        public FindEntrySearchService(EPiServer.Find.IClient searchClient) => _searchClient = searchClient;

        public IEnumerable<int> Search(string keyword, MarketId marketId, Currency currency, string siteId)
        {
            return _searchClient.Search<EntryContentBase>()
                 .For(keyword)
                 .Filter(x => x.MatchMarketId(marketId))
                 .Filter(x => x.SiteId().Match(siteId))
                 .Filter(x => FilterPriceAvailableForCurrency<IPricing>(y => y.Prices(), currency))
                 .GetResult()
                 .Select(x => x.ContentLink.ID);
        }

        public FilterExpression<Price> FilterPriceAvailableForCurrency<T>(Expression<Func<T, IEnumerable<Price>>> prices, Currency currency)
        {
            var currencyCode = currency != null ? currency.CurrencyCode : string.Empty;

            return new NestedFilterExpression<T, Price>(prices, price => price.UnitPrice.Currency.CurrencyCode.Match(currencyCode), _searchClient.Conventions);
        }
    }

Note that I am not an expert on Find, especially on NestedFilterExpression, so my FilterPriceAvailableForCurrency might be wrong. Feel free to correct it, the code is not copyrighted and is provided as-is.

As always, you need to register this implementation for IEntrySearchService. You can add the registration anywhere you like, as long as it runs after .AddCommerce.

services.AddSingleton<IEntrySearchService, FindEntrySearchService>();
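In context, the registration could sit in Startup.ConfigureServices (a sketch assuming the standard Startup pattern of a Commerce 14 site; the important part is only that it runs after AddCommerce so it overrides the default implementation):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddCommerce();

    // Registered after AddCommerce so it replaces the default
    // SearchProvider-backed IEntrySearchService with the Find-backed one.
    services.AddSingleton<IEntrySearchService, FindEntrySearchService>();
}
```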