.NET developers have been transitioning from synchronous APIs to asynchronous ones. That shift was boosted a lot by the async/await keywords in C# 5.0, but we are now in a dangerous middle ground: there are about as many synchronous APIs as there are async ones. Mixing them requires the ability to call async APIs from a synchronous context, and vice versa. Calling synchronous APIs from an async context is simple – you can fire up a task and let it do the work. Calling async APIs from a sync context is much more complicated, and that is where AsyncHelper comes into play.
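As a quick illustration of the easy direction, here is a minimal sketch – the ReportService class and the blocking LoadReportSync method are made up for the example:

using System;
using System.Threading;
using System.Threading.Tasks;

public class ReportService
{
    // Hypothetical blocking API that we do not control
    private static string LoadReportSync(int id)
    {
        Thread.Sleep(1000); // simulate slow, synchronous work
        return $"report {id}";
    }

    // Calling the synchronous API from an async context: offload it to the thread pool
    public static async Task<string> LoadReportAsync(int id)
    {
        return await Task.Run(() => LoadReportSync(id));
    }
}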
AsyncHelper is a commonly used way to run async code in a synchronous context. It is a simple helper class with two methods to run async APIs:
using System;
using System.Globalization;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncHelper
{
    // The task factory uses the default scheduler, detached from any SynchronizationContext
    private static readonly TaskFactory _myTaskFactory = new TaskFactory(CancellationToken.None,
        TaskCreationOptions.None, TaskContinuationOptions.None, TaskScheduler.Default);

    public static TResult RunSync<TResult>(Func<Task<TResult>> func)
    {
        var cultureUi = CultureInfo.CurrentUICulture;
        var culture = CultureInfo.CurrentCulture;
        return _myTaskFactory.StartNew(() =>
        {
            // Flow the caller's culture onto the thread pool thread
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = cultureUi;
            return func();
        }).Unwrap().GetAwaiter().GetResult();
    }

    public static void RunSync(Func<Task> func)
    {
        var cultureUi = CultureInfo.CurrentUICulture;
        var culture = CultureInfo.CurrentCulture;
        _myTaskFactory.StartNew(() =>
        {
            // Flow the caller's culture onto the thread pool thread
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = cultureUi;
            return func();
        }).Unwrap().GetAwaiter().GetResult();
    }
}
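For context, a typical call site looks something like this – a synchronous entry point blocking on an async API. This particular example is made up for illustration:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class Program
{
    public static void Main()
    {
        // A synchronous entry point blocking on an async HTTP call via AsyncHelper
        var length = AsyncHelper.RunSync(() => GetPageLengthAsync("https://example.com"));
        Console.WriteLine(length);
    }

    private static async Task<int> GetPageLengthAsync(string url)
    {
        using var client = new HttpClient();
        var content = await client.GetStringAsync(url);
        return content.Length;
    }
}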
There are slight variants of it, with and without setting CurrentCulture and CurrentUICulture, but the main part is always the same: spawn a new Task to run the async delegate, then block and get the result using Unwrap().GetAwaiter().GetResult().
One of the reasons it is so popular is that people think it was written by Microsoft, so it must be safe to use. That is not quite true: the class was introduced as an internal class in AspNetIdentity (AspNetIdentity/src/Microsoft.AspNet.Identity.Core/AsyncHelper.cs at main · aspnet/AspNetIdentity (github.com)). That means Microsoft teams can use it when they think it is the right choice; it is not the default recommendation for running async tasks in a synchronous context.
Unfortunately, I have seen my fair share of threads stuck in an AsyncHelper.RunSync stack trace, most likely victims of a deadlock.
Async/sync is a complex topic, and even experienced developers make mistakes. There is no simple way to just run async code in a sync context, and AsyncHelper is certainly not one. It is a simple, convenient way, but it is not guaranteed to be the correct thing for your use case. I see it as a shortcut that solves some problems but creates bigger ones down the road.
Just because you can doesn't mean you should. That applies to AsyncHelper perfectly.
Recently I helped to chase down a ghost (and you might be surprised to know that I, for the most part, spend hours being a ghostbuster – it can be fun, sometimes). A customer reported a weird issue: a visitor goes to their website, has everything correct in the cart, including the discount, only to have the discount disappear when they check out. That would be a fairly easy thing to debug and fix if not for the fact that the problem is random in nature. It only happens once in a while, but on average about daily. It could not be reproduced locally, or reproduced consistently on production, so any fix would be based on guesswork.
After a lot of dry code reading and then log reading, it turned out that the missing discount was really a problem of a missing cache entry. Once in a while, the cache that contains the promotion list comes back empty, with the result that no discount is applied to the order.
But why?
After a few guesses, it eventually came to me that the problem was with caching in a Dictionary – more specifically, campaigns are loaded and cached in a Dictionary, using IMarket as the key. That would be fine, and highly efficient, if not for the fact that the default implementation of IMarket is not suitable as a Dictionary key. It does not implement IEquatable<T>, nor does it override Equals and GetHashCode, which means the only case in which two IMarket instances are equal is when they are the very same instance. Even if their properties are all equal in value, the instances will not be considered equal.
Here is a short program that demonstrates the problem. You can expect it to write "False" to the console.
using System;
using System.Collections.Generic;

public class Program
{
    private static Dictionary<AClass, int> dict = new Dictionary<AClass, int>();

    public static void Main()
    {
        dict.Add(new AClass("abc", 1), 1);
        dict.Add(new AClass("xyz", 2), 2);
        // Prints "False": a new instance with the same values is not considered equal
        Console.WriteLine(dict.ContainsKey(new AClass("abc", 1)));
    }
}

public class AClass
{
    public AClass(string a, int b)
    {
        AString = a;
        AnInt = b;
    }

    public string AString { get; set; }
    public int AnInt { get; set; }
}
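For comparison, here is a minimal sketch – not the actual fix that was shipped, which is described below – of how AClass could opt into value equality by implementing IEquatable<T> and overriding Equals and GetHashCode, so the ContainsKey call above would return True:

using System;

public class AClass : IEquatable<AClass>
{
    public AClass(string a, int b)
    {
        AString = a;
        AnInt = b;
    }

    public string AString { get; }
    public int AnInt { get; }

    public bool Equals(AClass other) =>
        other != null && AString == other.AString && AnInt == other.AnInt;

    public override bool Equals(object obj) => Equals(obj as AClass);

    // Equal keys must produce the same hash code for Dictionary to work
    public override int GetHashCode() => HashCode.Combine(AString, AnInt);
}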
The question then arises: if the key is not matched and an empty list of campaigns is returned, why does this only happen sometimes? The answer is that the IMarket instances themselves are cached, by default for 5 minutes. For the problem to occur, the campaigns cache must be loaded into memory just before the cache of IMarket instances expires (at which point new instances are created). Once the new IMarket instances are loaded, the campaigns cache must then be accessed again before it expires itself (default 30 seconds). The timing needs to be just "right", which makes this problem elusive and hard to catch in normal testing – both automated and manual.
Time for some blaming and finger pointing. When I fix something, I usually check the history of the code to understand the reasoning behind the original idea and intention. Was there a reason, or was it just an oversight? And most importantly:
Who wrote such code?
Me, about 7 months ago.
Uh oh.
The fix was simple enough. Instead of IMarket, we can change the key to MarketId, which implements both IEquatable<T> and IComparable<T>. So it does not matter if you have two different instances of MarketId; as long as they have the same value, they will be equal.
A workaround was sent to the customer to test, and after a week or so they reported back that the problem was gone. The official fix is in Commerce 14.31, which was released yesterday (https://nuget.optimizely.com/package/?id=EPiServer.Commerce.Core&v=14.31.0), so you are, as always, highly recommended to upgrade.
Lessons learned:
Pick your dictionary key carefully. It should implement IEquatable<T> and override Equals and GetHashCode – properly, I might add. In general, a struct is a better choice than a class if you can use one (see the sketch after this list).
No matter how "experienced" you think you are, you are still a human being and can make mistakes. It is important to have someone check your work from time to time, spotting the problems that you could not.
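To illustrate the struct point above, here is a minimal sketch (not taken from the actual codebase – CampaignCacheKey is a hypothetical key type): a readonly record struct gets value-based Equals and GetHashCode generated by the compiler, so two instances with the same values compare as equal:

using System;
using System.Collections.Generic;

// Hypothetical key type: value equality comes for free with a record struct
public readonly record struct CampaignCacheKey(string Market, string Language);

public class Example
{
    public static void Main()
    {
        var campaigns = new Dictionary<CampaignCacheKey, int>
        {
            [new CampaignCacheKey("US", "en")] = 3
        };

        // Prints "True": a different instance with the same values matches
        Console.WriteLine(campaigns.ContainsKey(new CampaignCacheKey("US", "en")));
    }
}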
Today, when I was tracking down some changes, I came across this commit message:
The bugfix for COM-xxxx seems to make the importing
of metafields too fast and causing too many events raised,
potentially flood the event system. This avoids raising the events
unnecessarily.
Who wrote this? Me, almost 5 years ago. I had a chuckle in my head at my younger self being brutally honest (I was the one who fixed the original bug, so I was responsible for making it "too fast"). Because it was sending too many events in a short time, the event system was overwhelmed and some other events (like cache invalidation of other parts) did not get through.
Well, the bugfix made it even faster by reducing the number of events raised. In the end, everyone was happy.
Moral of the story:
A system can only be as fast as its slowest component. Keep that in mind when deciding where to put your effort: in most cases, the slowest part is where you should spend your time.
It is important to validate that any optimization works end-to-end. Faster is not always better if some other part cannot keep up when this part becomes unexpectedly faster.
You can't predict everything; things will happen in the most unexpected ways. Move fast, break things, learn from it, fix it. Rinse and repeat.
Sometimes my work is easy: the problem can be resolved with one look (when I'm lucky enough to look at the place that needs to be looked at, just like this one: Varchar can be harmful to your performance – Quan Mai's blog (vimvq1987.com)). Sometimes it is hard. I can't count the number of times I stared blankly at the screen and decided I'd better take a nap, roast a batch of coffee, or take a walk (that is a lie, however – I don't walk), because I was out of ideas and it was going nowhere. The life of a software diagnostics engineer is like that: sometimes you are solving the mystery of "what do I need to solve this mystery". There are usually dots scattered all over the place; your job is to figure out which dots make sense, which do not, and how to connect the relevant ones to solve the problem – and to tell a story.
The story today is about a customer complaining that their scheduled instance on DXP keeps having high memory usage after running the Find indexing job. They have a custom job that was built to optimize performance for their language settings, but the idea is the same – load content, serialize it and send it to the server endpoint for indexing. It is, indeed, a memory-heavy job, especially when you have a lot of content to index (basically, number of content items x number of languages x the complexity of the content). It is normal to see an increase in memory usage during such a job – the application (or rather the runtime, depending on how you look at it) is doing its job: content needs to be loaded into memory, and if there is available memory, it would be a huge waste not to use it for something useful. And the application will not immediately release that memory, as the content is cached. The memory will only be reclaimed if the cache expires, or if the application comes under memory pressure (i.e. it asks the operating system for more memory and the OS refuses: "there is nothing left"). Even when the cache expires, the application will not always compact and release the memory back to the OS (the LOH, etc.).
What is problematic is that the customer's application retained 25GB of memory indefinitely. They waited for 24 hours, but memory usage was still high. The application appeared to be fine – it did not crash because of memory issues (like OutOfMemoryException) – but it caused confusion and worry for our customer. Game on.
One thing that did not make sense in this case is that even though they have a custom indexing job, it is still a scheduled job, and for scheduled jobs, content is supposed to have a very short sliding expiration time (1 minute by default). However, the cache entries in the memory dumps told a different story: a majority of them had a 12-hour sliding expiration time. That does explain – in part at least – why the memory remained high. With a longer sliding time, the chance is higher that the cache entry is hit at least once before it expires, which resets the expiration. With enough hits, the entry will effectively remain in memory forever, until you actively evict it (by editing the content, for example).
That is not how it should be, however, as the default sliding expiration timeout for content loaded by a scheduled job is 1 minute – i.e. it is considered a load-once-and-be-done item. Was it set to 12 hours by mistake? Nope.
The timeout is set to 600,000,000 ticks, which is 60 seconds (one tick is 100 nanoseconds, so 10,000,000 ticks make up one second) – exactly the default value.
I had been pulling my hair out over this for quite a while. What if the cache entries were not added by the scheduled job at all, but in some other way that is not subject to the scheduled-job limitation? In short, we were misled by the customer's statement regarding the Find indexing job. It was merely a victim of the same issue: it was resetting the last access time of the cache entries, but that was about it.
Time to dig a bit more. While WinDbg is extremely powerful, it does not tell you where the code is that loads a specific content item into the cache (not unless you catch it red-handed). So the only way to know is to look around and check for any suspicious calls to IContentLoader.GetItems or IContentLoader.GetChildren. A colleague of mine worked with the customer to obtain their source code, and we took another deep dive.
Fortunately for us, the customer has a custom-built Find indexer that we had helped build in a previous case, and it showed up in the search for GetItems. It struck me that it could be the culprit. The indexer itself is … fine; however, it was being given the wrong data, so it kept loading content to index.
If my hypothesis is correct, then these things must be true:
The app's memory usage will rise to 25GB regardless of whether the indexing job is running, and it will remain there without much fluctuation.
There are a lot of rows in tblFindIndexQueue.
It turned out both were correct: there were more than 4 million rows in tblFindIndexQueue, and this is the memory consumption of the app over 24 hours:
Once we figured out the source of the content loading, the fix was pretty straightforward. One thing we could do from our side was to shorten the caching time of content loaded by the event-driven indexer. You should upgrade to Find 16.2.0, which contains the fix for FIND-12436 – a nice improvement for memory usage.
Moral of the story:
I'm a workaholic. I definitely should not work on weekends, but sometimes I need to, because that's when my mind is clearest.
Keep looking. But, as always, know when to give up and admit defeat.
Take breaks. Long ones, short ones. Refresh your mind and look from different angles.
Sliding cache expiration can behave in unexpected ways. If a content item is already in the cache with a long sliding expiration, a cache hit (for example via ISynchronizedObjectInstanceCache.ReadThrough) that requests that content with a short sliding expiration will not change the existing policy – it only refreshes the last access time – and vice versa (see the sketch below).
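Here is a minimal sketch of that behavior, using Microsoft.Extensions.Caching.Memory rather than the Optimizely cache API, purely for illustration: once an entry exists, a read-through with different options does not replace the entry's expiration policy, it only counts as an access.

using System;
using Microsoft.Extensions.Caching.Memory;

public class SlidingExpirationDemo
{
    public static void Main()
    {
        var cache = new MemoryCache(new MemoryCacheOptions());

        // The entry is first added with a long (12-hour) sliding expiration
        cache.Set("content:1", "cached content",
            new MemoryCacheEntryOptions { SlidingExpiration = TimeSpan.FromHours(12) });

        // A later read-through that asks for a short (1-minute) sliding expiration
        // finds the existing entry, so the factory (and its options) never run
        var value = cache.GetOrCreate("content:1", entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(1);
            return "reloaded content";
        });

        // Still the original entry, still on the 12-hour sliding expiration;
        // the read only refreshed its last access time
        Console.WriteLine(value); // prints "cached content"
    }
}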
Once upon a time, a colleague asked me to look into a customer database with weird spikes in database log usage. (You might start to wonder why I am always the one who looks into weird things – is there a pattern here?)
Upon reviewing the query store, I noticed very high logical reads related to tblScheduledItem. From past experience, that is usually caused by index fragmentation in this table (which has only one clustered index). A quick look at the table confirmed that the index indeed had high fragmentation. I suggested rebuilding that index and seeing what happened. It could have been one of those simple, quick daily questions, and I almost forgot about it.
A few days passed, and the colleague pinged me again. Apparently they had rebuilt the index, but it did not really help. That raised my eyebrows a little, so I dug deeper.
To my surprise, the problem was not really fragmentation (though it definitely contributed). The problem is that the table has a column of type nvarchar(max), used for recording the last message from the Execute method of the scheduled job. It was meant for something short like "The job failed successfully", or "Deleted 12345 versions from 1337 contents". But because it is nvarchar(max), it can be very, very long. You could, in theory, store the entire contents of a few very big libraries in there.
Of course, just because you can does not mean you should. When the column value is long, every read from the table becomes a burden for SQL Server. And the offending job was none other than our S&N indexing job.
In theory, any job could cause this issue; however, it is much more likely to happen with the S&N indexing job for a reason – it keeps track of every exception thrown during the indexing process, and because it indexes each and every content item of your website (except the ones you specifically, explicitly tell it not to), the chance of it running into a recurring issue that affects multiple (read: a lot of) content items is much higher than for any built-in job.
This time I asked them to trim the column and, most importantly, fix any exceptions thrown during indexing. I was on my day off when my colleague notified me that the job had been running for 10 hours without errors after they fixed it. Curious, I checked some statistics. Well, let the numbers speak for themselves:
The query itself went from 16,000ms to a mere 2.27ms. Even better, each call to get the list of scheduled jobs previously resulted in 3.5GB of logical reads. Now? 100KB. A lot of resources saved!
So, make sure your job is not throwing a lot of errors. And fix your S&N indexing job.
P.S. I do think the S&N indexing job should have a simpler return result – maybe "Indexed 100,000 content items with 1234 errors" – and the exceptions could be logged instead. But that's debatable. For now, you can do your part!
A puck screen is a tool commonly recommended to people starting their espresso journey. It is said to improve water distribution onto the puck, which promotes even extraction, which means better-tasting espresso. Despite my best efforts, I cannot detect any change in flavor with or without a puck screen. The real benefit of using a puck screen is to keep your group head clean: when you stop the pump, some of the coffee grounds would otherwise be sucked up onto the group head.
Normcore
Thickness: 1.7mm
This is a mesh puck screen, which means it is made of several layers pressed together, with laser etching. It is the thickest and also the heaviest (but not by much). It used to cost around 250kr on amazon.se but it is now 185kr.
Pros: It is probably the one that works best.
Cons: It feels bulky. It probably works the best, but it is also the hardest to clean. I had to soak it in Puly Caff from time to time even after putting it in the dishwasher. There could be a concern that it cools the water down more than it should, but given its thermal mass compared to the group head, the effect should be negligible.
3MHW-Bomber
Thickness: 0.8mm
It is a hybrid puck screen, with a plate with bigger holes and a mesh with smaller holes. The price is around 140kr on AliExpress, but I bought 5 for 330kr.
Pros: Of all three, this one is the best looking – not as bulky as the Normcore but not as flimsy as the Temu one. It also seems to work great, on par with the Normcore, while being much easier to clean.
Cons: It is asymmetric, so if you put it on the puck upside down, it is not as beautiful.
No-name Temu puck screen
I had high hopes for this one, but it turned out to be quite a disappointment. It is by far the cheapest, at around 35kr for 2 pieces. It is simply a very thin metal plate with holes.
Pros: It is the thinnest (0.2mm) and also the lightest (2.7g), so it has virtually no impact on thermal mass. It is also the easiest to clean.
Cons: It is so thin that it is easy to bend, and harder to hold as well. When you store 2 or more of them, they easily get stuck together. It performs the worst, as the holes are big enough for grounds to get through.
All in all, my favorite is the 3MHW-Bomber. It has the best craftsmanship and a nice balance between size and function. After some trials, I decided to put away my Normcore and Temu screens and use the 3MHW-Bomber exclusively.
As with most things in life, if it is not illegal, immoral, or prohibitively expensive, I want to try it. Life is so short that we need to experience more, while we still can.
Advertising is one of those things. You have probably noticed that my blog has plenty of ads (in some places, a lot). I wanted to see two things:
How Google ads works
If I can make money out of it
The answer to the second question is yes, I can. Just … not a lot. In the last few years I have been making less than 1 SEK per day from ads revenue – just barely enough to cover the cost of the domain name renewal. I am lucky enough that my employer sponsors the hosting cost, so ads are not making me rich. Not fast enough, at least. I will likely not live to see the day I can buy myself a serious roaster from ads revenue. (And if you want to sponsor me, this is the roaster I want to buy.)
Because ads can be annoying and sometimes intrusive, I decided to turn off ads on this blog. I will add a button so you can donate/sponsor me some money if you think the blog is helpful to you (or you just want to be nice and toss a coin to your witcher, I mean, blogger).
I promise I will spend that money on weed, I mean, green coffee beans. I started roasting coffee recently, and staying true to my philosophy, I want to try as many coffees as I can. Some are reasonably priced, some are pretty expensive.
In any case, thank you for visiting my blog. I hope it is helpful to you, and I wish you a wonderful day!
As string is the most common data type in an application, nvarchar and its variant varchar are probably the most common column types in your database. (We almost always use nvarchar rather than nchar, because nchar is meant for fixed-length columns, which we rarely have.) The difference is that nvarchar stores Unicode data using the UTF-16/UCS-2 encoding, while varchar stores non-Unicode data using the code page of the collation (or UTF-8, if a UTF-8 collation is used on SQL Server 2019 and later):
Starting with SQL Server 2012 (11.x), when a Supplementary Character (SC) enabled collation is used, these data types store the full range of Unicode character data and use the UTF-16 character encoding. If a non-SC collation is specified, then these data types store only the subset of character data supported by the UCS-2 character encoding.
But varchar can be harmful in a way you do not expect it to be. Let's assume we have a simple table with two columns, one varchar and one nvarchar (forgive the naming, I cannot come up with better names).
Each is filled with the same random, almost unique values. We add a non-clustered index on each of these columns, and as we know, those indexes should make queries on these values very efficient.
Let's try the varchar column first. It should work pretty well, right? Nope!
SELECT *
FROM dbo.[Demo]
where varcharColumn = N'0002T9'
Instead of a highly efficient index seek, it does an index scan on the entire table. This is of course not what you want.
But why? Good question. You might have noticed that I used N'0002T9', which is an nvarchar literal – which is exactly what .NET would pass to your query if your parameter is of type string. If you look closer at the execution plan, you will see that SQL Server has to do a CONVERT_IMPLICIT on each row of this column, which effectively invalidates the index.
If we pass '0002T9' without the N prefix, though, it works as it should. This can cause confusion, as it works during development but becomes much slower once deployed, when the parameters come in from .NET.
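To make the .NET side concrete, here is a minimal sketch showing the difference between letting ADO.NET infer the parameter type (a .NET string becomes nvarchar) and declaring it explicitly as varchar. The table and column names match the demo above; the connection string and the column length of 50 are placeholders:

using System;
using System.Data;
using Microsoft.Data.SqlClient;

public class VarcharParameterDemo
{
    public static void Main()
    {
        using var connection = new SqlConnection("<your connection string>");
        connection.Open();

        using var command = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.[Demo] WHERE varcharColumn = @value", connection);

        // AddWithValue would infer NVarChar from the .NET string, forcing a
        // CONVERT_IMPLICIT on the varchar column and an index scan:
        // command.Parameters.AddWithValue("@value", "0002T9");

        // Declaring the parameter as VarChar matches the column type,
        // so SQL Server can do an index seek instead
        command.Parameters.Add(new SqlParameter("@value", SqlDbType.VarChar, 50)
        {
            Value = "0002T9"
        });

        Console.WriteLine(command.ExecuteScalar());
    }
}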
To see the difference, we can run the queries side by side. Note that this is a very simple table with 130k rows. If you have a few million rows or more, the difference will be even bigger.
What about the other way around? If the column is nvarchar(100) but the parameter is passed as varchar? SQL Server handles that with ease. It simply converts the parameter to nvarchar and does an index seek, as it should.
So, the moral of the story? Unless you have strong reasons to use varchar (or char), stick with nvarchar (or nchar) to avoid complications with data type conversion, which can, and will, hurt your database performance.