One of the best practices for better performance – not just with Episerver Commerce – is to batch your calls when loading data. In theory, if you want to load a lot of data, both extremes are problematic: if you load the records one by one, the overhead of opening the connection and retrieving the data for each call adds up quickly. But if you load everything at once, you will likely end up with either a timeout exception on the database end or an out-of-memory exception in your application. The better way is, of course, to load in smaller batches – say 10, 20, or 50 records at a time – and repeat until you reach the end.
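As a minimal sketch of that pattern – assuming a hypothetical `LoadRecords(offset, count)` data-access method and `Record`/`Process` placeholders standing in for whatever your storage layer actually provides – the batched approach looks like this:

```csharp
// Sketch of batched loading. LoadRecords, Record and Process are
// hypothetical placeholders for your actual data-access layer.
const int batchSize = 50;
int offset = 0;
while (true)
{
    // Load at most batchSize records per round trip, keeping both the
    // per-call overhead and the memory footprint bounded.
    IList<Record> batch = LoadRecords(offset, batchSize);
    if (batch.Count == 0)
    {
        break; // no more records to load
    }

    Process(batch); // per-batch processing
    offset += batch.Count;
}
```

The loop terminates when a round trip returns no records, so it works even when the total count is not known up front.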

That is the theory, but is it really better in practice? And if it is, which batch size works best? As they say, practice is the ultimate test of theory, so let's find out.

The code to load the orders is quite straightforward:

    var options = new OrderSearchOptions { Namespace = "Mediachase.Commerce.Orders" };
    options.Classes.Add(OrderContext.PurchaseOrderClassType);
    options.RecordsToRetrieve = 50; // the batch size – the value we will vary

    // The first call also returns the total number of matching orders.
    int totalCount;
    var orders = OrderContext.Current.FindPurchaseOrders(new OrderSearchParameters(), options, out totalCount);
    int startingRecord = orders.Length;
    int actualOrderCount = orders.Length;

    // Keep paging until we have walked through all the orders.
    while (startingRecord < totalCount)
    {
        options.StartingRecord = startingRecord;
        orders = OrderContext.Current.FindPurchaseOrders(new OrderSearchParameters(), options);
        actualOrderCount += orders.Length;
        startingRecord += orders.Length;
    }

You might ask why I am still using OrderContext instead of the new, fancy, shiny abstraction API – the reason is that the new API does not yet have a way to search for orders. We are working on that, but this is what we have for now. It works, at least.

The only part to change is the value of RecordsToRetrieve – we run the same code with different values to put our theory to the test.
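One simple way to do that is to extract the loop above into a method that takes the batch size as a parameter, then time each run with `Stopwatch`. This harness is just a sketch of the approach; `LoadAllOrders` is assumed to be the code above wrapped in a method that sets `options.RecordsToRetrieve = batchSize` and returns the number of orders loaded:

```csharp
// Sketch: time the loading loop for several candidate batch sizes.
// LoadAllOrders(batchSize) is the loop above, extracted into a method
// that sets options.RecordsToRetrieve = batchSize.
foreach (var batchSize in new[] { 1, 10, 20, 50, 100, 1000, 3000 })
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    int loaded = LoadAllOrders(batchSize);
    stopwatch.Stop();
    Console.WriteLine($"Batch of {batchSize}: loaded {loaded} orders in {stopwatch.ElapsedMilliseconds} ms");
}
```

Running each size only once (as below) is enough to see the trend, but for more trustworthy numbers you would want several runs per size with a warm-up pass.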

Testing with 30,000 orders. Due to "reasons", those are fairly simple orders with 2 line items, 1 shipment, and 1 payment each. In real-world scenarios you might have more complicated orders carrying more information, which might affect the batch size that works best for you.

  The test 'Load orders, batch of 1' ran 1 time(s) successfully. Took: 1347788 ms. Average: 1347788 per run

  The test 'Load orders, batch of 10' ran 1 time(s) successfully. Took: 131846 ms. Average: 131846 per run

  The test 'Load orders, batch of 20' ran 1 time(s) successfully. Took: 72880 ms. Average: 72880 per run

  The test 'Load orders, batch of 50' ran 1 time(s) successfully. Took: 41870 ms. Average: 41870 per run

  The test 'Load orders, batch of 100' ran 1 time(s) successfully. Took: 26347 ms. Average: 26347 per run

  The test 'Load orders, batch of 1000' ran 1 time(s) successfully. Took: 30751 ms. Average: 30751 per run

  The test 'Load orders, batch of 3000' ran 1 time(s) successfully. Took: 18022 ms. Average: 18022 per run

  The test 'Load orders, batch of 10000' ran 1 time(s) successfully. Took: 1663883 ms. Average: 1663883 per run

  The test 'Load orders, batch of 15000' ran 1 time(s) successfully. Took: 3212072 ms. Average: 3212072 per run

  The test 'Load orders, batch of 30000' ran 1 time(s) successfully. Took: 2097531 ms. Average: 2097531 per run

So it is clear – loading orders one by one is bad: about 10 times slower than loading 10 orders at once. As you increase the batch size, loading gets faster still, peaking at around 3,000 orders per batch. It is also true that when you push the batch size too far, your code is not only dramatically slower, but risks failing to complete at all.

There is no one-size-fits-all, but if you ever need to load a lot of orders, then loading in batches of 1,000–3,000 orders seems to yield the best performance. Your mileage may vary, but probably not by much!
