Storing 100.000 prices per SKU – part 1

One of the questions I have received from time to time is how to store a lot of prices per SKU in Optimizely (B2C) Commerce Cloud. While this is usually a perfect candidate for Optimizely B2B Commerce, there are many customers invested in B2C who want to make the best out of it. Is it possible?

It’s important to understand the pricing system of Optimizely Commerce (which is written in detail in my book – shameless plug). But in short:

  • There are two price systems: IPriceService and IPriceDetailService
  • One handles prices in batch – i.e. all prices of a SKU at once (IPriceService), and one handles individual prices (IPriceDetailService) – see the sketch after this list
  • Both are cached in the latest versions (caching for IPriceDetailService was added in a late 13.x version)
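
For a rough idea of how differently the two systems are used, here is a hedged sketch (the method names are from the Commerce pricing APIs, but the exact overloads can vary between versions, so treat this as illustrative rather than definitive):

var priceService = ServiceLocator.Current.GetInstance<IPriceService>();
var priceDetailService = ServiceLocator.Current.GetInstance<IPriceDetailService>();
var referenceConverter = ServiceLocator.Current.GetInstance<ReferenceConverter>();

// IPriceService is batch oriented: to change one price you load *all* prices of the SKU,
// modify the collection, and save the whole collection back.
var catalogKey = new CatalogKey("my-sku");
var allPrices = priceService.GetCatalogEntryPrices(catalogKey).ToList();
// ... modify one entry in allPrices ...
priceService.SetCatalogEntryPrices(catalogKey, allPrices);

// IPriceDetailService works on individual prices, each with its own id.
var entryLink = referenceConverter.GetContentLink("my-sku");
var somePrices = priceDetailService.List(entryLink, 0, 100, out var totalCount);
// ... modify a single IPriceDetailValue and save just that one ...
priceDetailService.Save(new[] { somePrices.First() });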

With that in mind, it would be very problematic to use IPriceService with such a high number of prices per SKU, because each time you save a price, you save a lot of prices at once (the same goes for loading prices). This is how the default IPriceService implementation saves the prices of a SKU:

create procedure dbo.ecf_Pricing_SetCatalogEntryPrices
    @CatalogKeys udttCatalogKey readonly,
    @PriceValues udttCatalogEntryPrice readonly
as
begin
    begin try
        declare @initialTranCount int = @@TRANCOUNT
        if @initialTranCount = 0 begin transaction

        delete pv
        from @CatalogKeys ck
        join dbo.PriceGroup pg on ck.CatalogEntryCode = pg.CatalogEntryCode
        join dbo.PriceValue pv on pg.PriceGroupId = pv.PriceGroupId

        merge into dbo.PriceGroup tgt
        using (select distinct CatalogEntryCode, MarketId, CurrencyCode, PriceTypeId, PriceCode from @PriceValues) src
        on (    tgt.CatalogEntryCode = src.CatalogEntryCode
            and tgt.MarketId = src.MarketId
            and tgt.CurrencyCode = src.CurrencyCode
            and tgt.PriceTypeId = src.PriceTypeId
            and tgt.PriceCode = src.PriceCode)
        when matched then update set Modified = GETUTCDATE()
        when not matched then insert (Created, Modified, CatalogEntryCode, MarketId, CurrencyCode, PriceTypeId, PriceCode)
            values (GETUTCDATE(), GETUTCDATE(), src.CatalogEntryCode, src.MarketId, src.CurrencyCode, src.PriceTypeId, src.PriceCode);

        insert into dbo.PriceValue (PriceGroupId, ValidFrom, ValidUntil, MinQuantity, MaxQuantity, UnitPrice)
        select pg.PriceGroupId, src.ValidFrom, src.ValidUntil, src.MinQuantity, src.MaxQuantity, src.UnitPrice
        from @PriceValues src
        left outer join PriceGroup pg
            on  src.CatalogEntryCode = pg.CatalogEntryCode
            and src.MarketId = pg.MarketId
            and src.CurrencyCode = pg.CurrencyCode
            and src.PriceTypeId = pg.PriceTypeId
            and src.PriceCode = pg.PriceCode

        delete tgt
        from dbo.PriceGroup tgt
        join @CatalogKeys ck on tgt.CatalogEntryCode = ck.CatalogEntryCode
        left join dbo.PriceValue pv on pv.PriceGroupId = tgt.PriceGroupId
        where pv.PriceGroupId is null

        if @initialTranCount = 0 commit transaction
    end try
    begin catch
        declare @msg nvarchar(4000), @severity int, @state int
        select @msg = ERROR_MESSAGE(), @severity = ERROR_SEVERITY(), @state = ERROR_STATE()
        if @initialTranCount = 0 rollback transaction
        raiserror(@msg, @severity, @state)
    end catch
end

If you have experience with SQL (which you probably should), you will see that it first deletes the rows in PriceValue whose CatalogEntryCode matches the catalog keys passed in, then merges the price groups, then deletes the leftover rows. To make matters worse, the IPriceService system stores data in three tables: PriceValue, PriceGroup and PriceType. Imagine doing that with a few dozen thousand rows.

Even if you change just one price, all prices of that specific SKU will be touched. That would be fine if you had, say, ten prices, but if you have ten thousand prices, it’s a huge waste.

Not just that. To save one price, you still need to load all prices of that specific SKU. That’s two layers of waste: the read operations at the database layer, and then the application layer, where a lot of price objects have to be constructed, and a DataTable has to be recreated to send all the data back to the database for the expensive operation above.

And wait, there’s more: prices saved via IPriceService need to be synchronized to IPriceDetailService (you can disable this, however). The prices that were changed (which is, all of them) then need to be replicated to another table.

So in short, IPriceService was not designed to handle many prices per SKU. If you have fewer than a few hundred prices per SKU (on average), it’s fine. But if you have more than 1,000 prices per SKU, it’s time to look at other options.

Fix Crunchyroll app crashing on play on Android

For quite a while, the Crunchyroll app on my OnePlus 8 Pro has just been crashing. It opens OK, but every time I click play on anything, it crashes: it freezes for a few seconds, and then a pop-up appears asking me to force close it.

I have tried:

  • Removing and reinstalling the app
  • Clearing the app cache
  • Uninstalling/reinstalling the Android WebView

None of it helped. Meanwhile, Crunchyroll works fine on my iPhone 11, so it’s not their service. There was also no update that could have triggered it.

Then I noticed my HBO Max app had started doing the same, so it might have something to do with my phone. When I was almost ready to give up, I decided to try the final option – wiping the phone’s cache partition (I would not go as far as a factory reset, too much work!).

It worked!

These are the steps for my phone, but yours should be similar:

1. Turn off your device.
2. Press and hold the Volume down and Power buttons at the same time.
3. When your phone logo appears, release the Power button, but keep holding the Volume down button.
4. At the Password screen, enter the password or PIN that you use to unlock your device. When done, tap Ok.
5. Tap English.
6. Tap Wipe data and cache.
7. Tap Wipe cache.
8. Tap Yes to continue.
9. Tap Reboot to restart the phone.

How to get the thumbnail preview for Neptune 2(s)

This is a feature that is only available in Cura. To make it easier to select which file to print on the Elegoo Neptune 2 (and 2S), you can save your gcode files in TFT format, so the slicer inserts a thumbnail into the gcode and your printer can display it.

Open the Marketplace via the button on the top right of Cura, which will let you find the MKS WiFi Plugin.

Accept the license to install the plugin, then restart Cura for it to take effect. You will then need to activate it by selecting Menu => Settings => Printer => Manage Printer.

Then select MKS WiFi Plugin to activate it.

Switch to the Preview settings to turn on the preview.

If you are using Elegoo Cura, the MKS WiFi Plugin is bundled by default, but there is virtually no reason to use Elegoo Cura. It is based on Cura 4.8, which is very outdated (released in November 2020). The only reason to install Elegoo Cura is to copy the start/end G-code and settings for your Neptune (it is still not supported by Cura out of the box), and that’s it.

Go download the latest version of Cura at Ultimaker Cura: Powerful, easy-to-use 3D printing software.

An even simpler alternative, without the MKS plugin, is to use a post-processing script: Menu => Extensions => Post Processing => Modify G-Code.

Choose Add a script, then select Create Thumbnail. By default, the thumbnail size is 32×32, which is way too small; I select 128×128 instead.

Now you will have a small icon next to the Slice button. Clicking on it opens the Post Processing Plugin window. Note that you can see how many scripts you have added (for me it’s only 1).

Slice as usual and copy your gcode files to the microSD card. Next time you select something to print, you will be able to see a preview of it.

Left is sliced with Cura, right is sliced with Super Slicer

RedirectToAction is dead, long live RedirectToContent

In .NET 4.8/CMS 11.x and earlier, this was a very common way to redirect to an action:

return RedirectToAction("Index", new { node = contentLink });

which will redirect the user to

public TResult Index(CheckoutPage currentPage);

and the currentPage parameter will be set to the content you specified by the contentLink.

However, in .NET 5, once redirected, currentPage will be null. This is due to how .NET 5 handles the routing. The correct way is to use this:

return RedirectToContent(currentPage.ContentLink, "Index");

There is an action you can use – RedirectToContent. Note that the parameter order is reversed – you pass in the content link to the content first, then the name of the action.
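
For context, a minimal sketch of how this could look in a CMS 12 page controller (the controller and page type names here are illustrative, not from the original post):

public class CheckoutPageController : PageController<CheckoutPage>
{
    public ActionResult Index(CheckoutPage currentPage)
    {
        // currentPage is resolved from the content link carried by the redirect
        return View(currentPage);
    }

    [HttpPost]
    public ActionResult PlaceOrder(CheckoutPage currentPage)
    {
        // ... process the order ...
        // content link first, action name second
        return RedirectToContent(currentPage.ContentLink, "Index");
    }
}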

And that’s how it’s done in .NET 5/CMS 12.

Where to store big collection data

No, I do not mean that big, big data (terabytes in size or more). I mean a big collection, like when you have a List<string> with more than a few hundred items. Where to store it?

Naturally, you would want to store that data as a property of a content item. It’s convenient and it just works, so you definitely can. But the actual question is: should you?

It’s as simple as this

public virtual IList<string> MyBigProperty { get; set; }

But under the hood, it’s more than just … that. Let’s ignore the UI for a moment (rendering such a long list is bad UX no matter how you look at it, but you can simply skip rendering that property with the appropriate attributes), and focus on the backend aspects of it.
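
As a side note, one way to do that – assuming the standard data annotation support – is to hide the property from the edit UI with ScaffoldColumn:

// hide the property from the editing UI so editors never try to render/edit the huge list
[ScaffoldColumn(false)]
public virtual IList<string> MyBigProperty { get; set; }

That only hides the symptom, though; the backend cost described below remains.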

List<T> properties are serialized as one long string and saved to the database at once. If you have a big property in your content, this will happen every time you load your content:

  • The data must be read from the database, then transferred over the network
  • The data must be parsed to create an array (the underlying data structure of List<T>). The original string is then tossed away.
  • Now you have a big array that you might not even use every time. It’s just there, taking up your precious LOH space (as did the original string).

The same thing happens when you actually save that property:

  • The data must be serialized into a string; the List<T> is then tossed away
  • The data then must be transferred over the network
  • The data is then saved to the database. Even though it is a very long string and you changed maybe 10 characters, it is completely rewritten. Due to its size, multiple page writes might be needed.

As you can see, this creates a lot of waste, especially if you rarely use that property. To make matters worse, due to the size of the property, it takes up space in the LOH (large object heap).

And imagine if you have such properties on each and every one of your content items. The waste is multiplied, and your site is now at risk of frequent Gen 2 garbage collections. Nobody likes visiting a website that freezes (if not crashes) once every 30 minutes.

Then where to store such big collection data?

The obvious answer is … somewhere else. Without other inputs, it’s hard to give you concrete suggestions, but how about a normalized custom table? You have the content reference as the key, and another column holds each value of the list (a rough sketch follows). Just an idea. Then you only load the data when you absolutely need it. More work, yes, but it’s the better way to do it.
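
A very rough sketch of that idea, assuming a hypothetical table dbo.BigListValue(ContentId int, Value nvarchar(400)) and plain ADO.NET (use whatever data access you prefer):

using System.Collections.Generic;
using EPiServer.Core;
using Microsoft.Data.SqlClient;

public class BigListRepository
{
    private readonly string _connectionString;

    public BigListRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Load the values for one content item, only when you actually need them
    public IList<string> Load(ContentReference contentLink)
    {
        var values = new List<string>();
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand(
            "select Value from dbo.BigListValue where ContentId = @contentId", connection);
        command.Parameters.AddWithValue("@contentId", contentLink.ID);
        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            values.Add(reader.GetString(0));
        }
        return values;
    }
}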

Just a reminder that whatever you do, just stay away from DDS – Dynamic Data Store. It’s the worst option of all. Just, don’t 🙂

Tales of Arise – super quick review


The good

The character designs are pretty good. While a lot of JRPG character designs are basically fan service, to the point that they are ridiculous to look at (looking at you, Xenoblade Chronicles 2), the characters in Tales of Arise are pretty great (save for some small parts, like, um, uh, the back of Kisara).

The game looks good – it is not breaking any records, but it is gorgeous in its own right.

The bad

Tales has always been a budget series, and while Tales of Arise might have a bigger budget than previous entries, it is still not an AAA production, and that shows in places. You see pretty repetitive enemies from place to place – a different color and a new name, and you’re done. It is also clear that the game has more loading screens than it should, even in the current-gen (Xbox Series X and PS5) versions. This is somewhat understandable as they share the same design with the previous-gen versions, but it was quite sad to see the limitations.

The voice acting is hit or miss, i.e. inconsistent. Some actors sound great and convincing; some, not so much.

Repetitive enemy designs. New area = almost the same enemies with different colors.

Performance in boss battles – especially after Dohalim joins – suffers badly. This was not covered by the Digital Foundry analysis here, but trust me, I’ve seen it with my own eyes (and suffered from it).

The Ugly

It is one of the worst games, if not the worst, when it comes to stability. And I played The Witcher 3: Wild Hunt at release on PS4, and XCOM 2 on both PS4 and Xbox Series X, so that says something. I played (or rather, am playing) the game on Xbox Series X, and it crashes in almost every session. It does not work well with Quick Resume (i.e. it should be disabled).

Mini annoyances

There are upgraded versions of equipment, but if a character is equipping the base version, it can’t be upgraded! You have to unequip it first, then craft the upgrade. Why?

Getting the codes of variants being deleted

I got the question here: Content Events and Service API | Optimizely Developer Community. CHKN contacted Optimizely developer support, which is the good/right thing to do. However, the answer could be beneficial to a wider audience, hence this blog post.

If you are developing a new Optimizely Commerce Cloud site now, you should be using the latest version (CMS 12/Commerce 14), or at least Commerce 13.31 if you have to stay with .NET 4.8. You can then use IContentEvents to listen to any events that might be fired from Service API.

However, if you are using older versions, you might be limited to the lower-level, non-content events exposed through EventContext. That works, but with a catch: there is no EntryDeleting event in EventContext. At this point I’m not entirely sure why – probably just an oversight. But it’s not impossible to work around the issue.

As I suggested in the post, EntryUpdating is a blanket event – every change to the entries goes through it. The sender is a CatalogEntryDto, which should contain information about the entries being deleted.

        private void Instance_EntryUpdating(object sender, Mediachase.Commerce.Catalog.Events.EntryEventArgs e)
        {
            var dto = sender as CatalogEntryDto;
            if (dto != null)
            {
                var rows = dto.CatalogEntry.Where(x => x.RowState == DataRowState.Deleted);
                var deletingCodes = rows.Select(x => (string)x["Code", DataRowVersion.Original]);
                //do stuff with the codes being deleted.
            }
        }

And then to listen to the event in one of your IInitializableModule implementations:

EventContext.Instance.EntryUpdating += Instance_EntryUpdating;

However, there is a caveat: since EntryUpdating happens before the entry deletion, it is possible that the change does not go through (i.e. the changes are rolled back). It is unlikely, but it’s a possibility. You can accept that, or you can:

  • Store the id and code in a “deleting entries” dictionary
  • Listen to the EntryDeleted event and match the EntryId from the DeletedEntryEventArgs parameter to get the deleted code, and continue from there (a sketch follows below).
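
A rough sketch of that two-step approach (this is my own illustration, not framework code – the CatalogEntryId column name and the EntryDeleted wiring on EventContext are assumptions, so verify them against your version):

// using System.Collections.Concurrent; using System.Data; using System.Linq;
private readonly ConcurrentDictionary<int, string> _deletingCodes = new ConcurrentDictionary<int, string>();

private void Instance_EntryUpdating(object sender, Mediachase.Commerce.Catalog.Events.EntryEventArgs e)
{
    if (sender is CatalogEntryDto dto)
    {
        // remember the codes of the rows about to be deleted, keyed by entry id
        foreach (var row in dto.CatalogEntry.Where(x => x.RowState == DataRowState.Deleted))
        {
            var entryId = (int)row["CatalogEntryId", DataRowVersion.Original];
            var code = (string)row["Code", DataRowVersion.Original];
            _deletingCodes[entryId] = code;
        }
    }
}

private void Instance_EntryDeleted(object sender, Mediachase.Commerce.Catalog.Events.DeletedEntryEventArgs e)
{
    // only act once the deletion actually went through
    if (_deletingCodes.TryRemove(e.EntryId, out var deletedCode))
    {
        // do stuff with deletedCode
    }
}

Wire the second handler up next to the first one in your initialization module, e.g. EventContext.Instance.EntryDeleted += Instance_EntryDeleted;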

Pro Optimizely Commerce Cloud can now be preordered

If you want to start your week with a purchase, then I have (hopefully) good news for you: my book Pro Optimizely Commerce Cloud can now be preordered (with immediate access to the content): https://leanpub.com/prooptimizelycommercecloud

It took me a week due to a technical issue with Leanpub, but in the end it can now be purchased.

I planned to finish this book in August. As always, I overestimated my ability to work at night and underestimated my procrastination power. I made some progress, but I am far from completing the book.

My solution? Well, I will let money speak. By letting you (my dear readers) preorder the book, I make a commitment to finish it. The book is now about 50% complete, and I aim to make it content complete before the end of this year.

If you own the previous version (even if you got it for free), there will be a coupon with 60% off coming your way – please check your mailbox!

The book is dedicated to my children, Emmy and Erik. While I can’t say they are very supportive of me writing the book (quite the contrary), they are my biggest motivation to make more money (cough, cough), and they put the biggest smiles on my face.

I would like to thank many of my colleagues, former and current, who helped and supported me writing the first book, which was the beginning of everything.

I would like to thank the Optimizely community for the inspiration to write this book.

And of course, I would like to thank you, for buying and reading it.

And I hope it’ll help you.

The hidden gotcha with IObjectInstanceCache

It’s not a secret that caching is one of the most important factors, if not the most important factor, for website performance. Yes, caching is great, and if you are using Optimizely Content/Commerce Cloud, you should be using ISynchronizedObjectInstanceCache to cache your objects whenever possible.

But caching is not easy. Or rather, the cache invalidation is not easy.

To ensure you have an effective caching strategy, it’s important to set up cache dependencies. In principle, there are two types of dependencies:

  • Master keys. These control an entire cache “segment”. For example, you could have one master key for the prices. If you need to invalidate the entire price cache, just remove the master key and you’re done.
  • Dependency keys. These tell the cache system that your cache item depends on this or that object. If this or that object is invalidated, your cache item is invalidated automatically. This is particularly useful if you do not control this or that object.

ISynchronizedObjectInstanceCache allows you to control the cache dependencies through CacheEvictionPolicy. There are a few ways to construct an instance of CacheEvictionPolicy: from whether the cache expiration is absolute (i.e. it is invalidated after a fixed amount of time) or sliding (i.e. its expiration is renewed each time it is accessed), to whether your cache item depends on one or more master keys and/or one or more dependency keys, like this:

        /// <summary>
        /// Initializes a new instance of the <see cref="CacheEvictionPolicy"/> class.
        /// </summary>
        /// <param name="cacheKeys">The dependencies to other cached items, idetified by their keys.</param>
        public CacheEvictionPolicy(IEnumerable<string> cacheKeys)

        /// <summary>
        /// Initializes a new instance of the <see cref="CacheEvictionPolicy"/> class.
        /// </summary>
        /// <param name="cacheKeys">The dependencies to other cached items, idetified by their keys.</param>
        /// <param name="masterKeys">The master keys that we depend upon.</param>
        public CacheEvictionPolicy(IEnumerable<string> cacheKeys, IEnumerable<string> masterKeys)

The constructors that take master keys and dependency keys look pretty much the same, but there is an important difference/caveat: if a dependency key does not already exist in the cache, the cache item you are inserting will be invalidated (i.e. removed from the cache) immediately. (For master keys, the framework automatically adds an empty object for you if none exists.)
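
A minimal sketch of the difference, using the two constructors quoted above (the cache keys and the cached object are just illustrative):

var cache = ServiceLocator.Current.GetInstance<ISynchronizedObjectInstanceCache>();
var priceObject = new object(); // stand-in for whatever you are caching

// Master key variant: safe. If "site:prices" is not in the cache yet,
// the framework inserts an empty object for it, so your item survives.
cache.Insert("price:SKU-1", priceObject,
    new CacheEvictionPolicy(Enumerable.Empty<string>(), new[] { "site:prices" }));

// Dependency key variant: if "price:base" was never added to the cache,
// "price:SKU-1" is evicted immediately and every read silently falls back to the database.
cache.Insert("price:SKU-1", priceObject,
    new CacheEvictionPolicy(new[] { "price:base" }));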

That can be an unpleasant surprise – everything seems to work fine, no errors whatsoever. But if you look closely, your code is hitting the database more than it should, and your website performance is silently suffering (sometimes not so silently).

This is an easy mistake to make – I made it once myself (albeit long ago) in an important catalog content cache path, and I have seen some very experienced developers make the same mistake as well (at this point you might wonder if the API itself is to blame).

Takeaway:

  • Make sure you are using the right constructor when you construct an instance of CacheEvictionPolicy. Are you sure the cache keys you are going to depend on actually exist?

In newer versions of the CMS, a warning is written to the log if a cache item is invalidated immediately; however, it can easily be missed unless you are actively looking for it.

Note that this behavior is the same with ISynchronizedObjectInstanceCache as it extends IObjectInstanceCache.

Lelit Elizabeth v3 – 2-month impressions (super short review)

First of all, if you are looking to buy the Lelit Elizabeth, check out the detailed review from David Corbey. It is probably the best review of the machine. He also wrote very good information about setting up and maintaining the machine, so check it out.

Out of the box, the Elizabeth looks much better than in photos. I must admit, the photos do not do it justice – it looks rather dull in them. While it is certainly not the best looking espresso machine (to my taste at least, I think the ones with the E61 group look more compelling), it definitely looks good.

Next to the machine it is going to replace

Coming from the popular Sage (Breville) Barista Pro, there are a few things that impressed/surprised me (of course this is an unfair comparison – when you spend 3x more money on a new machine plus a grinder, you would definitely not want something just “marginally better”):

  • The actual stainless steel construction is very, very nice. It feels much more solid than the fake stainless steel look of the Barista Pro. The machine is well made – probably not the best built machine around, but you know it will last you a long time.
  • It takes a really long time to heat up, about 20 minutes. Coming from the Barista Pro, that was a bummer at first, because with the Pro you could pull your first shot almost instantly – the machine is “ready” after 3 seconds. But that is a lie. The Barista Pro lets you pull shots right away, but at the cost of temperature stability. I learned the hard way that most shots on the Barista Pro were severely under temperature, resulting in an extremely sour taste, and it was very hard to adjust – you have about 5 levels of temperature, ranging from 90°C to 98°C, so about 2°C each. Whether the machine can actually reach the desired temperature is another question. I only realized this once I switched to the Elizabeth: the portafilter is actually hot (and very uncomfortable to touch on the steel part). On the Barista Pro, it was only lukewarm, even if I pulled a few empty shots beforehand. Furthermore, with the Elizabeth I can set the brewing water to whatever temperature I like (or to half a degree if I switch to F instead of C), so I can comfortably brew light, medium or dark roasts the way they are meant to be brewed.

The workflow is now so, so much easier and smoother than what I had with the Sage Barista Pro.

Buying accessories is also easier and cheaper now – I can easily find branded, quality accessories at reasonable prices. They might still be expensive, but I feel the price is justified for the quality.

But no machine is perfect, and neither is the Elizabeth; there are a few downsides.

The biggest one, to me at least, is that the water tank is pretty hard to refill, because I put my machine under the kitchen cabinet, so there is very little space left above it. I need to either move it out or use a gooseneck kettle to refill it. I went with the latter approach and it works quite well.

Another bummer is the lack of a real tamper in the package. Leaving out the milk jug is somewhat acceptable, but the tamper? I bought a nice one from Motta (I wish I had chosen the 58.55mm version instead), but I wish they included one by default. Of course, this is an easy one to fix.

In the end, I’m happy with my Elizabeth, and I feel happy and excited to use it every day. My only regret is that I didn’t step up to the Bianca – I’ve heard great things about it. But it exceeded my budget at that point by a large margin, so I’ll have to wait.

Meanwhile, the Lelit Elizabeth will serve me well for a long, long time.