If you have been using Optimizely Customized Commerce, you probably know that, by default, the wish list is just a cart with a special name. Can you guess the name? Surprise, surprise, it's "Wishlist". It has been there since forever, from the early days of Mediachase, and was then carried over to the new serializable cart. I had been "fine" with it – i.e. I accepted the approach without questioning it. It was not until very recently that I realized there are several problems with it.
Why is it not such a good idea?
First of all, it shares the same table as normal carts. To search for abandoned carts, you have to skip the carts named "Wishlist". There are only a few distinct cart names and they are not evenly distributed, so you will have a hard time filtering carts by name.
But there is more. As most customers are using the serializable cart mode now, ever-growing wishlists pose another problem: each operation on the wishlist – adding or removing an item – results in a big write to the SerializableCart table. If you have just a few items it might be fine, but a simple test on Commerce shows that with only 9 items in the wishlist, the Data column is already more than 2,700 characters. And wishlists are meant to be kept forever – they will only grow in size.
My "Saved for later" list on Amazon – which is the closest thing to a "wish list". Imagine having that on Optimizely Customized Commerce.
As wishlists are carts, they have to follow the same format, even though much of it is redundant or unnecessary for a wish list.
The biggest benefit of the default wishlist implementation – and I think it trumps all the disadvantages listed above – is that it's built in. You can start using it with almost no additional effort: get a cart with the predefined name and you are good to go. Building a different wish list definitely costs time and resources, a luxury not everyone can afford.
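To illustrate just how little effort the built-in approach takes, here is a minimal sketch using the IOrderRepository.LoadOrCreateCart extension from EPiServer.Commerce.Order, with the special cart name mentioned above (double-check the exact casing your solution uses):

public class WishListLoader
{
    private readonly IOrderRepository _orderRepository;

    public WishListLoader(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }

    // Loads the wish list cart for a contact, creating it if it does not exist yet.
    // "Wishlist" is the special cart name mentioned above - verify the exact casing in your solution.
    public ICart GetWishList(Guid contactId)
    {
        return _orderRepository.LoadOrCreateCart<ICart>(contactId, "Wishlist");
    }
}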
Because building a proper wish list from scratch is such an investment, I have started building a wish list service in my free time. I plan to make it open source when the time is right, but we'll see about that.
Moral of the story
It is critical to take a step back from time to time and think about what you have done. Things might make less sense when you see them from a different perspective.
If you are looking for an espresso machine in the $3,000 range (or around €2,500 if you are in the EU – one of the wins for Europeans), you will most likely end up choosing between these three. They are probably the most popular options in this price range, and rightly so. The prices are fairly comparable, with the Profitec Pro 700 being the cheapest in the US (by around $200) and the Lelit Bianca v3 the cheapest in the EU (also by around €200). I did quite intensive research on the topic and finally came to a conclusion (spoiler alert: at the end of this post).
If you didn't know already, Profitec is a subsidiary of ECM. The Pro 700 is still made in Milan, Italy, but it shares a lot of its design with the ECM Synchronika – basically two siblings except for some cosmetic differences – so I would expect them to perform very similarly. For easier comparison, I will compare the Bianca and the Synchronika. Let's go through the pros and cons of each, and hopefully it will help you come to a decision.
On paper they are very similar espresso machines: all three are E61, dual-boiler machines aimed at home enthusiasts.
Build quality
This is no contest: the Synchronika is the clear winner, the Pro 700 comes second, and the Bianca comes last. It is not only that the Synchronika has better fit and finish; it has a clean internal layout that is like an engineer's dream. Whole Latte Love has several deep-dive videos on it, which means that if you ever need to service your Synchronika yourself, you will easily know where to go and what to check or replace.
The Bianca's fit and finish is a step below, and its internals are pretty cramped – more on that below.
To be very clear, the Bianca's build quality is still more than decent, and it will last you a very long time with proper care. The cramped interior has two causes: the smaller size and the larger feature set.
Size and look
Of the three, the Lelit Bianca is the smallest, and it is the only one that comes with wood (walnut) knobs and wand handles by default, while the others come with hard black plastic. It is only 29 cm wide and 40 cm deep. Both the ECM and the Profitec are noticeably larger: the former is 33.5 cm wide and 49 cm deep, the latter 34 cm and 47 cm respectively.
Looks are definitely subjective, but make sure whichever machine you pick fits your coffee station, whether that is under a cupboard or elsewhere. One of the biggest selling points of the Bianca is the movable water tank: you can put it at the back, on the left, or on the right. All three can be plumbed in so you can put your days of refilling water behind you, but sometimes plumbing is not an option, and being able to move the tank is a huge plus. It was my biggest complaint about the Lelit Elizabeth, and here it is solved easily.
Start up time
If you have a very stable daily schedule, start-up time might not concern you – you can use a smart plug and schedule it to turn your coffee machine on at a fixed time every day. But let's be very clear here: all of these machines take a long time to be fully heated. Not only do they have to heat both boilers to temperature and let them stabilize, they also need to heat the E61 group head via the thermosiphon (Barista Hustle explains it in great detail in EM 3.04 How the E61 Thermosyphon Works – Barista Hustle – but basically, hot water flows through the group head to heat it up). The E61 group is very heavy – around 4 kg – so it is important to get it hot so the water does not lose too much temperature during brewing.
The Synchronika takes significantly longer to heat up. In Kaffeemacher's test, it takes a whopping 35 minutes before it can pull 5 shots without failing (i.e. without falling short of the target temperature).
That is double what the Bianca v3 needs.
That means you can start pulling shots about 16 minutes earlier on the Bianca. That's impressive. If you want to brew lighter roasts that need higher temperatures, say 96°C, unofficial and unscientific tests suggest the Bianca is ready in even less time (12 minutes), based on the PID readout. It's not breaking any records, but for an E61 machine, that's nothing to be sniffed at.
Temperature stability
This is an interesting one. Thanks to Kaffeemacher we have measurements for both machines, and it's a tie.
The ECM Synchronika is more stable during the shot, i.e. with a 25s shot, the temperature between 5s and 25s stays closer to a straight line (albeit rising toward the end). Shot after shot, however, it tends to run under temperature after the machine has been idle for a while.
The Lelit Bianca is more stable between shots. The temperature within a shot fluctuates a bit, but does not climb as much as on the Synchronika. You can also adjust the PID, with settings such as the temperature offset, to get even better stability, especially after the machine has been idle for a while.
Features
Bianca, hands down.
The Bianca comes with flow control by default. The ECM Synchronika and the Profitec Pro 700 can be retrofitted with the E61 flow control kit, which costs around $200 extra, plus installation. As most people have commented, the Bianca's flow control feels more natural and nicer to use. That is of course subjective, but it is not too surprising: the main difference is that the Bianca's flow control paddle has about 200 degrees of travel from fully open to fully closed, while the E61 flow control knob has about 720 degrees. The latter allows somewhat finer adjustment, but it is less intuitive to use.
The Bianca can pre-infuse even when running from the water tank, while the ECM and the Profitec need to be plumbed in to pre-infuse (using line pressure). The Bianca v3 also has low-flow settings, which make pre-infusion even more flexible – you can pre-infuse in any way you like.
Lelit is also known for making its Lelit Control Center (LCC) settings available to end users, so you can fine-tune your machine even further – most notably the temperature offset (between the boiler and the target temperature at the group head), which lets you dial in exactly the brew temperature you want.
Conclusion
When I bought my Lelit Elizabeth, I thought of the Bianca as something I wanted but couldn't get, and told myself that if I ever upgraded, I would pick it. After two years, when I finally decided to upgrade, for some reason I skipped the Bianca at first. I almost went with the Synchronika, but slowly and steadily the Bianca won me back – and I will soon be one of its owners.
With that said, you can't go wrong with any of these options. These three are the most popular machines in their price range, and there's a reason for that – they are that good.
It's no secret that I love optimizing things. In a sense, I am both an Optimizer (literally) and an optimizer. And today we will go back to basics – optimizing a tricky SQL query.
The query in question is the stored procedure ecf_CatalogNode_GetAllChildNodes, which is used to get all child nodes of specific nodes. It is used internally to find all entries that are direct or indirect children of specific nodes. Why, you might ask? Because when you change the URL segment of a node, you want to make sure that all entries under that node have their indexed objects refreshed.
Let's take a look at the stored procedure. This is what it looks like:
CREATE PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
@catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
WITH all_node_relations AS
(
SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
WHERE ParentNodeId > 0
UNION
SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
),
hierarchy AS
(
SELECT
n.CatalogNodeId,
'|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM @catalogNodeIds n
UNION ALL
SELECT
children.ChildNodeId AS CatalogNodeId,
parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM hierarchy parent
JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
)
SELECT CatalogNodeId FROM hierarchy
END
I previously wrote about the relations between entities in the Commerce catalog here: Commerce relation(ship), a story – Quan Mai's blog (vimvq1987.com). Relations between nodes can be a bit complicated – a node has one true parent defined in the CatalogNode table, and can have other "linked" parents in CatalogNodeRelation. So to find all children – and grandchildren – of a node, you need to read from both.
Getting the children of a node from CatalogNode or CatalogNodeRelation is simple, but things become more complicated when you have to get grandchildren, then great-grandchildren, and so on. For that, a CTE needs to be used recursively. But then a problem arises: there is a chance – small, but real – that the data was added in an incorrect way, so a circular reference is possible, i.e. A is a parent of B, which is a parent of C, which is itself a parent of A. To stop the SP from running forever, a check needs to be added to make sure any circular reference is cut short.
This brings back memories, as the first ever support case I worked on at Optimizely (then Episerver) involved a circular reference. The site would crash whenever someone visited catalog management in Commerce Manager. That was around June 2012 (feeling old now?). My "boss" at the time involuntarily volunteered me for the case. See what you made me do, boss.
Now that you grasp the basics of what the SP does, let's get back to the original problem: it is slow, especially with a big catalog and a complex node structure. As always, to optimize anything you first need to find the bottleneck – time to fire up SQL Server Management Studio and turn on the Actual Execution Plan.
I decided to go with node 66, the "root" catalog node. This query yields around 18k rows.
Mind you, this is on my machine with a pretty powerful CPU (AMD Ryzen 7 5800X, 8 cores / 16 threads) and a very fast NVMe PCIe SSD (Western Digital Black SN850 2TB). If this were executed on an Azure SQL database, for example, a timeout would be almost guaranteed. So the execution times should only be compared relative to each other.
If we look at the execution plan, it is quite obvious where the bottleneck is: a scan on the CatalogNode table is heavy (79M rows read by that operator). As suggested by Anders in Timeout when deleting CatalogNodes from a large catalog (optimizely.com), adding a non-clustered index on the ParentNodeId column improves things quite a lot. And indeed it does: the execution time is reduced to 5 seconds.
And the number of rows read from CatalogNode drops to just 17k.
This is of course a very nice improvement, but the customer reported that it was not enough and the SP was still timing out, i.e. further optimization was needed.
Naturally, the next step is to see if we can skip the circular reference check. It was added as a safety measure against bad data. Arguably it should not be there at all, since the check should be performed at data modification time, but it is there for historical reasons and we can't just change that – not trivially. So let's try removing it, out of curiosity.
The modified query looks like this (basically, just comment out any code related to CyclePrevention):
ALTER PROCEDURE [dbo].[ecf_CatalogNode_GetAllChildNodes]
@catalogNodeIds udttCatalogNodeList readonly
AS
BEGIN
WITH all_node_relations AS
(
SELECT ParentNodeId, CatalogNodeId AS ChildNodeId FROM CatalogNode
WHERE ParentNodeId > 0
UNION
SELECT ParentNodeId, ChildNodeId FROM CatalogNodeRelation
),
hierarchy AS
(
SELECT
n.CatalogNodeId
--, '|' + CAST(n.CatalogNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM @catalogNodeIds n
UNION ALL
SELECT
children.ChildNodeId AS CatalogNodeId
--, parent.CyclePrevention + CAST(children.ChildNodeId AS nvarchar(4000)) + '|' AS CyclePrevention
FROM hierarchy parent
JOIN all_node_relations children ON parent.CatalogNodeId = children.ParentNodeId
--WHERE CHARINDEX('|' + CAST(children.ChildNodeId AS nvarchar(4000)) + '|', parent.CyclePrevention) = 0
)
SELECT CatalogNodeId FROM hierarchy
END
And the improvement is quite impressive (more than I expected): the query completes almost instantly (less than 1s), and the reads on CatalogNodeRelation are significantly reduced.
A word of warning here: execution plans can't simply be compared as-is. If I run the two versions side by side, the comparison is quite misleading.
Even though the top one (without the circular reference check) is much faster than the original (the bottom one), SQL Server estimates that the first is slower (almost 2x slower than the second). So the execution plan should be used to see what is being done and where the likely bottleneck inside a query is; it should not be used to compare queries against each other. In most cases, comparing statistics with SET STATISTICS IO ON is the better way to compare.
If not for the fact that we would be changing the behavior of the stored procedure, I would be happy with this approach. The chance of running into a circular reference is small, but it is not zero. As said, we could in theory guard the relations during insert/update, but that would be too big a change to start with. This is one of the constraints of working at framework level – we have to step carefully so as not to break anything. A breaking change is bad, but data corruption is simply unacceptable. I spent a few hours (probably more than I should have) trying to optimize the circular reference check itself, but found no better solution.
The next approach is – as you might guess – to get rid of the Clustered Index Scan on the CatalogNodeRelation table. The solution is quite simple: a non-clustered index on ParentNodeId should be enough.
Great success – the performance is comparable to the "no circular reference check" approach.
And adding an index is a non-breaking change. (It can, in some cases, cause a performance regression, as in A curious case of SQL execution plan – Quan Mai's blog (vimvq1987.com), but that is rare; also, in this case the values of ParentNodeId are most likely well distributed.)
That is all for today. Hopefully you learned a thing or two about optimizing queries in your daily work.
I was asked this question: we have about 3TB of assets – is there any way to clean them up?
These days storage is cheap, but it is still not free. And a lot of storage means you need space for backups, and with that, bandwidth and time.
Is there a way to clean up things you no longer need?
Yes!
Optimizely Content Cloud already has a scheduled job named Remove Abandoned BLOBs, but this job only removes blobs that have no content associated with them, i.e. the content was deleted via IContentRepository.Delete but the blob was left behind. The job uses the log to find out which content was deleted, then finds those blobs.
What about the assets that still have content associated with them but are not used anywhere? Time to get your hands dirty!
Due to the nature of this task, it is best to make it a scheduled job.
All of the assets are children of the global asset root. By iterating over them, we can check whether each of them is used by any other content. If not, we add it to a list for later deletion. Before deleting the content, we find the associated blob and delete that as well. Easy, right?
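Since this will be a scheduled job, here is a minimal skeleton (the class name and GUID are just illustrative) that the snippets below plug into – note the IsStoppable/Stop() pair, which drives the _isStopped flag used later:

[ScheduledPlugIn(DisplayName = "Delete unused assets", GUID = "9B4BB917-7A9A-4BDA-B2A8-3B1C8C1E7E11")]
public class DeleteUnusedAssetsJob : ScheduledJobBase
{
    private readonly IContentRepository _contentRepository;
    private readonly IPermanentLinkMapper _permanentLinkMapper;
    private readonly IBlobFactory _blobFactory;
    private bool _isStopped;

    public DeleteUnusedAssetsJob(
        IContentRepository contentRepository,
        IPermanentLinkMapper permanentLinkMapper,
        IBlobFactory blobFactory)
    {
        _contentRepository = contentRepository;
        _permanentLinkMapper = permanentLinkMapper;
        _blobFactory = blobFactory;
        IsStoppable = true;
    }

    // Lets the job react to the Stop button in admin mode.
    public override void Stop()
    {
        _isStopped = true;
    }

    public override string Execute()
    {
        // The traversal and deletion code shown below goes here.
        return "Done";
    }
}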
To get the content recursively, we use this little piece of code:
public virtual IEnumerable<T> GetAssetRecursive<T>(ContentReference parentLink, CultureInfo defaultCulture) where T : MediaData
{
foreach (var folder in LoadChildrenBatched<ContentFolder>(parentLink, defaultCulture))
{
foreach (var entry in GetAssetRecursive<T>(folder.ContentLink, defaultCulture))
{
yield return entry;
}
}
foreach (var entry in LoadChildrenBatched<T>(parentLink, defaultCulture))
{
yield return entry;
}
}
private IEnumerable<T> LoadChildrenBatched<T>(ContentReference parentLink, CultureInfo defaultCulture) where T : IContent
{
var start = 0;
while (!_isStopped)
{
var batch = _contentRepository.GetChildren<T>(parentLink, defaultCulture, start, 50);
if (!batch.Any())
{
yield break;
}
foreach (var content in batch)
{
// Skip content that is linked from another folder, so we don't process it multiple times during traversal
if (!parentLink.CompareToIgnoreWorkID(content.ParentLink))
{
continue;
}
yield return content;
}
start += 50;
}
}
We start from SiteDefinition.Current.GlobalAssetsRoot and use IContentRepository.GetReferencesToContent to see whether each asset is used by any content (both CMS and Catalog). If not, we add it to a list. Later, we use IPermanentLinkMapper to see if it has an associated blob, and delete that as well:
foreach (var asset in GetAssetRecursive<MediaData>(SiteDefinition.Current.GlobalAssetsRoot, CultureInfo.InvariantCulture))
{
totalAsset++;
if (!_contentRepository.GetReferencesToContent(asset.ContentLink, false).Any())
{
toDelete.Add(asset.ContentLink.ToReferenceWithoutVersion());
}
if (toDelete.Count % 50 == 0)
{
var maps = _permanentLinkMapper.Find(toDelete);
foreach (var map in maps)
{
deletedAsset++;
_contentRepository.Delete(map.ContentReference, true, EPiServer.Security.AccessLevel.NoAccess);
var container = Blob.GetContainerIdentifier(map.Guid);
// Probably redundant - we could just call Delete on the container directly
var blob = _blobFactory.GetBlob(container);
if (blob != null)
{
_blobFactory.Delete(container);
}
OnStatusChanged($"Deleting asset with id {map.ContentReference}");
}
toDelete.Clear();
}
}
We need another round of deletion after the loop to clean up the leftovers (or to handle the case where we have fewer than 50 abandoned assets in total).
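A sketch of that final pass, reusing the same body as the loop above (in a real job you would probably extract it into a shared method):

// Process whatever is still in the list after the traversal has finished.
var remainingMaps = _permanentLinkMapper.Find(toDelete);
foreach (var map in remainingMaps)
{
    deletedAsset++;
    _contentRepository.Delete(map.ContentReference, true, EPiServer.Security.AccessLevel.NoAccess);
    var container = Blob.GetContainerIdentifier(map.Guid);
    _blobFactory.Delete(container);
    OnStatusChanged($"Deleting asset with id {map.ContentReference}");
}
toDelete.Clear();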
And we’re done!
Testing this job is simple – upload a few assets to your CMS, do not use them anywhere, then run the job. It should delete those assets.
Things to improve: we might want to make sure that only assets created more than a certain number of days ago are deleted. This allows editors to upload assets for later use without having to reference them immediately.
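A possible guard for that, placed inside the foreach loop above (the 30-day cutoff is arbitrary, and it assumes the media type exposes IChangeTrackable, which MediaData does):

// Skip recently created assets so editors can upload files ahead of time.
if (asset is IChangeTrackable trackable && trackable.Created > DateTime.UtcNow.AddDays(-30))
{
    continue;
}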
If you are using Find to index your content, you have likely used the Find Indexing job – which indexes everything in one go. Today I stumbled upon this question – A way to run indexing job for Commerce only | Optimizely Developer Community – and it is a good one: if you have a lot of content on the CMS side that doesn't change often, if at all, you certainly don't want to waste time and resources reindexing it. Is there a way to index just the catalog content?
Yes, there is. It is a bit of a hacky solution, but it can certainly work. First, let's dive into how the Find indexing job works. It relies on IIndexingJobService, which itself relies on ContentIndexer to do the job. In turn, ContentIndexer uses a list of IReindexInformation to know which content to index, and in which languages. Here's what it looks like:
public interface IReindexInformation
{
/// <summary>
/// Content links to be reindexed.
/// </summary>
IEnumerable<ReindexTarget> ReindexTargets { get; }
/// <summary>
/// Gets the root to index.
/// </summary>
ContentReference Root { get; }
}
It has one Root and multiple ReindexTargets, each of which contains:
public class ReindexTarget
{
/// <summary>
/// The content references.
/// </summary>
public IEnumerable<ContentReference> ContentLinks { get; set; }
/// <summary>
/// The languages the collection of <see cref="ContentReference"/> are enabled on.
/// </summary>
public IEnumerable<CultureInfo> Languages { get; set; }
/// <summary>
/// The site that the collection of <see cref="ContentReference"/> appears on
/// or <c>null</c> if unknown.
/// </summary>
public SiteDefinition SiteDefinition { get; set; }
}
As you might have guessed, Commerce has its own IReindexInformation implementation to index catalog content. If only we could use just that one to run our job – and this is where our "hack" begins.
The IContentIndexer interface has no way to control the list of IReindexInformation, but the default implementation ContentIndexer does. We set it to the only one we need, like this:
List<IReindexInformation> targets;
var contentIndexer = _contentIndexer as ContentIndexer;
if (contentIndexer != null)
{
targets = contentIndexer.ReindexInformation.ToList();
var commerceReIndexInformation = targets.FirstOrDefault(x => x.GetType() == typeof(CommerceReIndexInformation));
contentIndexer.ReindexInformation = new List<IReindexInformation>() { commerceReIndexInformation };
_indexingJobService.Start(OnStatusChanged);
contentIndexer.ReindexInformation = targets;
}
One note: you will still see the "Indexing Global assets and other data" message, because the IIndexingJobService implementation goes through all SiteDefinitions regardless and shows that message, but the internal ContentIndexer will skip the work if the SiteDefinition passed to it does not match the SiteDefinition in the IReindexInformation (and for CommerceReIndexInformation that is SiteDefinition.Empty).
As I mentioned at the beginning, this is a bit of a hacky solution, as you have to cast IContentIndexer to its concrete implementation. The proper solution would be to implement IContentIndexer yourself. Given that that is not a trivial job, I'll leave it at that.
If you have been using Business Foundation, you most likely know about a limitation – you can only load the first 1000 objects using the GetXXX methods. For example, CustomerContext.Current.GetOrganizations() loads the first 1000 organizations. In theory you can get more objects by changing the value of MaxObjectsList, but that has consequences: it affects all types of objects, including contacts, organizations, and your custom objects. Also, loading too much in one go is almost never a good idea.
Is there a better way?
Yes, of course – which is why we have this blog post.
There is a "hidden" method on the Business Foundation base class – BusinessManager – that takes paging parameters.
You will need to convert the results to the type you want. Note that all Business Foundation objects inherit from EntityObject. So if you want to get contacts with paging, it would look like this:
var contacts = BusinessManager.List(ContactEntity.ClassName, new FilterElement[0], new SortingElement[] { new SortingElement(sortField, sortType) }, startIndex, recordsToRetrieve)
.OfType<CustomerContact>();
Let’s go through the parameters one by one.
The first one you need is the class name of your objects. For contacts, you can use ContactEntity.ClassName as shown above; for organizations, OrganizationEntity.ClassName.
The next one is the filter. As you are trying to load all objects, you can just pass an empty (but not null) array – new FilterElement[0].
The third one is how you want the results sorted. If you pass an empty array, the default sort order is used. If you want to sort by Name, for example, set sortField to Name and sortType to one of the SortingElementType values (Asc or Desc).
The fourth and fifth ones are what we are looking for: they are simply paging parameters – which position to start from, and how many objects to get. Combine this with a simple while loop, as sketched below, and you can fetch all of your Business Foundation objects.
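A sketch of such a loop, fetching every contact in pages of 1,000 (the sort field "ContactId" is just an example – any indexed field will do):

var allContacts = new List<CustomerContact>();
var startIndex = 0;
const int pageSize = 1000;
while (true)
{
    // Ask Business Foundation for the next page of contacts.
    var batch = BusinessManager.List(
            ContactEntity.ClassName,
            new FilterElement[0],
            new SortingElement[] { new SortingElement("ContactId", SortingElementType.Asc) },
            startIndex,
            pageSize)
        .OfType<CustomerContact>()
        .ToList();

    if (batch.Count == 0)
    {
        break;
    }

    allContacts.AddRange(batch);
    startIndex += pageSize;
}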
And that’s about it, my friends.
What about caching?
Caching a list is always tricky – you have to keep track of each item in the list to make sure you invalidate the list cache if one of the items changes (is updated or removed). For the purpose of simply loading all contacts/organizations, it is probably better to skip caching, for simplicity.
Recently I stumbled upon this question: Removing a property that no longer exists in the code (optimizely.com). It's a valid (and even good) question. It is easy to add a new property to your catalog content type – you simply add the property to the model, build, and start the site. The opposite, however, is not easy – in Commerce 14 at least.
A property of a strongly typed content type is actually mapped to and backed by a MetaField in the MetaDataPlus system (unless, of course, you specifically tell it not to by using the IgnoreMetaDataPlusSynchronization attribute). When you add a new property to your content type, build, and start your site, your content type is scanned and metafields are created if necessary. However, if you delete a property from your content type, the scanner just leaves the metafield there. There are a few reasons for that. Firstly, it allows loosely typed content types, i.e. content types with no properties, or only a few, defined in code. If you have used some kind of external PIM, you'll understand why that is important. Secondly, because a property can be mapped to a metafield with a different name, the scanner might have trouble figuring out which metafield to delete. All in all, keeping the metafields is the sensible (if not the right) choice.
So what do you do if you want to delete the property and also clean up the metafield? With Commerce 13 and earlier, you could detach a MetaField from its MetaClass(es) and then delete it using Commerce Manager. With the demise of Commerce Manager in Commerce 14, what are your options?
Code, of course. There are a few APIs – namely MetaField and MetaClass – that can be used for this purpose. Note that there are two sets of MetaField and MetaClass classes, and only the ones in the Mediachase.MetaDataPlus.Configurator namespace are what we want (the others are for Business Foundation).
Enough chit-chat; this is the code you would need to run:
private void DeleteMetaField(string metafieldName)
{
var metaField = MetaField.Load(CatalogContext.MetaDataContext, metafieldName);
if (metaField == null)
{
return;
}
foreach (int metaClassId in metaField.OwnerMetaClassIdList)
{
var metaClass = MetaClass.Load(CatalogContext.MetaDataContext, metaClassId);
if (metaClass != null)
{
metaClass.DeleteField(metafieldName);
}
}
MetaField.Delete(CatalogContext.MetaDataContext, metaField.Id);
}
It is pretty straightforward: we load the MetaField by its name; if it is not null, we remove it from every MetaClass that uses it, and then finally delete it.
At the beginning of this post we mentioned strongly typed content types, but note that the order system uses the same metaclass/metafield system, so this code can be used there as well.
This piece of code can be put into an admin-privileged controller to delete metafields on demand – at least until Commerce 14 lets you do it through a proper UI.
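As a rough illustration (the controller name, action, and role are hypothetical), such a controller could look like this, with DeleteMetaField being the helper above:

[Authorize(Roles = "CmsAdmins")]
public class MetaFieldMaintenanceController : Controller
{
    [HttpPost]
    public ActionResult DeleteField(string metafieldName)
    {
        // Calls the DeleteMetaField helper shown above.
        DeleteMetaField(metafieldName);
        return Ok($"Deleted metafield {metafieldName} if it existed.");
    }
}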
One of the questions I receive from time to time is how to store a lot of prices per SKU in Optimizely (B2C) Commerce Cloud. While this is usually a perfect candidate for Optimizely B2B Commerce, there are many customers invested in B2C who want to make the best of it. Is it possible?
It's important to understand the pricing system of Optimizely Commerce (which is written about in detail in my book – shameless plug). In short:
There are two price systems, IPriceService and IPriceDetailService
One handles prices in batches – i.e. all prices of a SKU at once (IPriceService) – and one handles individual prices (IPriceDetailService)
Both are cached in the latest versions (the cache for IPriceDetailService was added in a late 13.x version)
With that in mind, it would be very problematic to use IPriceService with such a high number of prices per SKU, because each time you save a price, you save all the prices of that SKU at once (and the same goes for loading). This is how the default IPriceService implementation saves the prices of a SKU:
create procedure dbo.ecf_Pricing_SetCatalogEntryPrices
@CatalogKeys udttCatalogKey readonly,
@PriceValues udttCatalogEntryPrice readonly
as
begin
begin try
declare @initialTranCount int = @@TRANCOUNT
if @initialTranCount = 0 begin transaction
delete pv
from @CatalogKeys ck
join dbo.PriceGroup pg on ck.CatalogEntryCode = pg.CatalogEntryCode
join dbo.PriceValue pv on pg.PriceGroupId = pv.PriceGroupId
merge into dbo.PriceGroup tgt
using (select distinct CatalogEntryCode, MarketId, CurrencyCode, PriceTypeId, PriceCode from @PriceValues) src
on ( tgt.CatalogEntryCode = src.CatalogEntryCode
and tgt.MarketId = src.MarketId
and tgt.CurrencyCode = src.CurrencyCode
and tgt.PriceTypeId = src.PriceTypeId
and tgt.PriceCode = src.PriceCode)
when matched then update set Modified = GETUTCDATE()
when not matched then insert (Created, Modified, CatalogEntryCode, MarketId, CurrencyCode, PriceTypeId, PriceCode)
values (GETUTCDATE(), GETUTCDATE(), src.CatalogEntryCode, src.MarketId, src.CurrencyCode, src.PriceTypeId, src.PriceCode);
insert into dbo.PriceValue (PriceGroupId, ValidFrom, ValidUntil, MinQuantity, MaxQuantity, UnitPrice)
select pg.PriceGroupId, src.ValidFrom, src.ValidUntil, src.MinQuantity, src.MaxQuantity, src.UnitPrice
from @PriceValues src
left outer join PriceGroup pg
on src.CatalogEntryCode = pg.CatalogEntryCode
and src.MarketId = pg.MarketId
and src.CurrencyCode = pg.CurrencyCode
and src.PriceTypeId = pg.PriceTypeId
and src.PriceCode = pg.PriceCode
delete tgt
from dbo.PriceGroup tgt
join @CatalogKeys ck on tgt.CatalogEntryCode = ck.CatalogEntryCode
left join dbo.PriceValue pv on pv.PriceGroupId = tgt.PriceGroupId
where pv.PriceGroupId is null
if @initialTranCount = 0 commit transaction
end try
begin catch
declare @msg nvarchar(4000), @severity int, @state int
select @msg = ERROR_MESSAGE(), @severity = ERROR_SEVERITY(), @state = ERROR_STATE()
if @initialTranCount = 0 rollback transaction
raiserror(@msg, @severity, @state)
end catch
end
If you have experience with SQL (which you probably should), you will see that it first deletes the rows in PriceValue whose CatalogEntryCode matches the supplied catalog keys, then performs a merge, then deletes the leftover rows. To make matters worse, the IPriceService system stores its data in three tables: PriceValue, PriceGroup and PriceType. Imagine doing that with a few tens of thousands of rows.
Even if you change just one price, all prices of that SKU are touched. That would be fine if you had, say, ten prices, but if you have ten thousand prices, it is a huge waste.
Not just that: to save one price, you still need to load all prices of that SKU. That is two layers of waste: the read operations at the database layer, and then, at the application layer, a lot of price objects need to be constructed, and a datatable has to be recreated to send all the data back to the database for the expensive operation above.
And wait, there's more: the prices saved via IPriceService need to be synchronized to IPriceDetailService (although you can disable this), so prices that were changed (which is all of them) are replicated to yet another table.
In short, IPriceService was not designed to handle many prices per SKU. If you have fewer than a few hundred prices per SKU (on average), it's fine. But if you have more than 1000 prices per SKU, it's time to look at other options.
This is a feature that is only available in Cura. To make it easier to select which file to print on the Elegoo Neptune 2 (and 2S), you can save your gcode files in TFT format, so the slicer inserts a thumbnail into the gcode and your printer can display it.
Open the Marketplace with the button at the top right of Cura, which allows you to find the MKS WiFi Plugin.
Accept the license to install the plugin, then restart Cura for it to take effect. You will then need to activate it by selecting Menu => Settings => Printer => Manage Printers, then selecting MKS WiFi Plugin to activate it.
Switch to the Preview settings to turn on the preview.
If you are using Elegoo Cura, the MKS WiFi Plugin is bundled by default, but there is virtually no reason to use Elegoo Cura: it is based on Cura 4.8, which is very outdated (released in November 2020). The only reason to install Elegoo Cura is to copy the start/end gcode and settings for your Neptune (which is still not supported by Cura out of the box), and that's it.
An even simpler alternative, without the MKS plugin, is to use a post-processing script: Menu => Extensions => Post Processing => Modify G-Code.
Choose Add a script, then select Create Thumbnail. By default the thumbnail size is 32×32, which is way too small; I select 128×128 instead.
Now you will have a small icon next to the Slice button. Clicking on it opens the Post Processing Plugin window. Note that you can see how many scripts you have added (for me it's only 1).
Slice as usual and copy your gcode files to the microSD card. Next time you select something to print, you will be able to see a preview of it.
Left is sliced with Cura, right is sliced with Super Slicer
No, I do not mean big data that is actually big (terabytes or more in size). I mean a big collection, like when you have a List<string> with more than a few hundred items. Where should you store it?
Naturally, you would want to store that data as a property on a content item. It's convenient and it just works, so you definitely can. But the real question is: should you?
It’s as simple as this
public virtual IList<string> MyBigProperty { get; set; }
But under the hood, it's more than just … that. Let's ignore the UI for a moment (rendering such a long list is bad UX no matter how you look at it, but you can simply skip rendering that property with the appropriate attributes), and focus on the backend aspects.
List<T> properties are serialized as one long string and saved to the database in one go. If you have a big property on your content, this happens every time you load that content:
The data must be read from the database, then transferred over the network
The data must be parsed to create an array (the underlying data structure of List<T>); the original string is then tossed away
Now you have a big array that you might not even use every time; it just sits there, taking up your precious LOH (as did the original string)
The same thing happens when you actually save that property:
The data must be serialized to a string; the List<T> is then tossed away
The data must then be transferred over the network
The data is then saved to the database. Even though it is a very long string and you changed maybe 10 characters, it is completely rewritten, and due to its size, multiple page writes might be needed
As you can see, this creates a lot of waste, especially if you rarely use that property. To make matters worse, because of their size these objects take up space on the LOH (large object heap).
Now imagine having such properties on each and every piece of your content. The waste is multiplied, and your site is at risk of frequent Gen 2 garbage collections. Nobody likes visiting a website that freezes (if not crashes) every 30 minutes.
So where should you store such big collection data?
The obvious answer is … somewhere else. Without more input it's hard to give concrete suggestions, but how about a normalized custom table? Use the content reference as the key, and store one value of the list per row – just an idea. Then you only load the data when you absolutely need it. More work, yes, but it's the better way to do it.
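As a very rough sketch of that idea (the table and column names are hypothetical, and this uses plain Microsoft.Data.SqlClient), the lookup could be a simple parameterized query against a table with one row per value, executed only when you actually need the data:

// Hypothetical table: dbo.MyBigPropertyValue (ContentId int, Value nvarchar(max)).
public IList<string> LoadValues(ContentReference contentLink, string connectionString)
{
    var result = new List<string>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT [Value] FROM dbo.MyBigPropertyValue WHERE ContentId = @contentId", connection))
    {
        command.Parameters.AddWithValue("@contentId", contentLink.ID);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                result.Add(reader.GetString(0));
            }
        }
    }
    return result;
}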
Just a reminder: whatever you do, stay away from the DDS – Dynamic Data Store. It's the worst option of all. Just don't 🙂