I have been a PS4 fan (if I can call myself that) since the day I bought the original PS4 to play Dragon Age: Inquisition. At that point, the PS4 was clearly the better choice over the Xbox One: smaller, with no separate power adapter, 50% more powerful (1.84 TFLOPS vs 1.23 TFLOPS), simpler policies for sharing/reselling games, etc.
I even liked the Xbox One X when it was announced – it checked almost all the boxes, except for, well, games and price – so I gave it a pass, especially after I got a PS4 Pro from a colleague at a very good price. This generation, the PS4 has won for many reasons, one of which is its excellent line of exclusives.
And I remained faithful until the day Elgiganten – a big electronics retailer in Sweden – sold an Xbox One X with Fallout 76 bundle at an unbeatable price (cheaper than the Xbox One X alone, so the game actually reduced the price of the bundle!). I told myself – why not; if I don't like it I can just return it. (It turned out the game was digital, so I couldn't return it. But I liked the console in the end, so I decided to keep it. I find myself playing Apex Legends every night now, and I'm happy with it.)
A marvel of engineering. It's significantly more powerful than my PS4 Pro, yet it is the same size and is even quieter.
Incredible value when you consider Game Pass: a bunch of good games at a very low price, especially when you use the promotions that happen all the time. I spent about 30 SEK (less than 4 USD) for 3 months of it.
The controller batteries last me weeks, and it takes only a minute or so to replace them.
Games that are optimized for X run exceptionally well. Forza Horizon 4, Red Dead Redemption 2, just to name a few.
Backward compatibility with Xbox and Xbox 360 games.
UHD Blu-ray player.
The UI is a mess. I complained about how the HBO UI is a mess, but I think the Xbox One UI is on par in terms of terribleness.
The Blu-ray player refuses to play my UHD Blu-rays 9 times out of 10.
I will have to re-buy many games to take advantage of native 4K.
Even though it is not the best identity management system in the .NET world, the ASP.NET Membership provider is still fairly widely used, especially in systems that have been running for a long time with a significant number of users: migrating to a better system like AspNetIdentity does not come cheap. However, being built in the early days of ASP.NET means the Membership provider has numerous significant limitations: besides the "architecture" problems, it also has limited performance. Depending on whom you ask, the maximum number of users that the ASP.NET Membership provider can handle ranges from 30,000 to 750,000. That does not sound great. If you start a new project today, you are probably better off with AspNetIdentity or another solution, but if your website is using the ASP.NET Membership provider and there is currently no plan to migrate, read on.
The one I will be using for this blog post has around 950,000 registered users, and the site is doing great – but that was achieved with some very fine-grained performance tuning and a very high-end Azure subscription.
A performance overview
I have been using the ASP.NET Membership provider for years, but I had never looked into it from a performance perspective (even though I have done some very nasty digging into its table structure). Now that I have the chance, I realize how bad it is.
It's a fairly common sight in the aspnet_* tables for the indexes to have ApplicationId as the first column. It does not take a database master to see this is a very ineffective way to create an index – in most cases, you only have one ApplicationId in your website, making those indexes useless when you want to, for example, query by UserId. This is a rookie mistake – a newbie tends to order the columns in an index the same way they appear in the table, thinking that SQL Server will just magically reorder them for the best performance. That's not how SQL Server – or RDBMSs in general – work.
It is OK to be a newbie or to misunderstand some concepts. I had the very same misconception once, and learned my lesson. However, it should not be OK for a framework to make that mistake and never correct it.
That is the beginning of much bigger problems. Because of the ineffective column order, the built-in indexes are almost useless. Queries that should be very fast become unnecessarily slow, wasting resources and increasing your site's average response time. That is of course bad news. The good news is that this is at the database level, so we can change it for the better. If it were at the application level, our chances of fixing it would be close to none.
If you use Membership.GetUserNameByEmail on your website a lot, you might notice that it is … slow. Under the hood, it runs this query:
```sql
SELECT u.UserName
FROM dbo.aspnet_Applications a, dbo.aspnet_Users u, dbo.aspnet_Membership m
WHERE LOWER(@ApplicationName) = a.LoweredApplicationName AND
      u.ApplicationId = a.ApplicationId AND
      u.UserId = m.UserId AND
      LOWER(@Email) = m.LoweredEmail
```
Let's ignore the style for now (INNER JOIN would be a much more popular choice) and look at what is actually done here. It joins 3 tables by their keys. The join with aspnet_Applications is fairly simple, because you usually have just one application. The join between aspnet_Users and aspnet_Membership is also simple, because both have an index on UserId – clustered on aspnet_Users and non-clustered on aspnet_Membership.
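As an aside, here is a sketch of the same lookup in the more conventional INNER JOIN style – logically equivalent, just easier to read (the SELECT list is my own, for illustration):

```sql
SELECT u.UserName
FROM dbo.aspnet_Applications a
INNER JOIN dbo.aspnet_Users u ON u.ApplicationId = a.ApplicationId
INNER JOIN dbo.aspnet_Membership m ON m.UserId = u.UserId
WHERE a.LoweredApplicationName = LOWER(@ApplicationName)
  AND m.LoweredEmail = LOWER(@Email);
```

The query optimizer treats both forms the same; the explicit JOIN syntax just makes the join conditions stand out from the filter conditions.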
The last condition is the problematic one. The clustered index on aspnet_Membership looks like this:
```sql
CREATE CLUSTERED INDEX [aspnet_Membership_index]
ON [dbo].[aspnet_Membership]([ApplicationId] ASC, [LoweredEmail] ASC);
```
Uh oh. Even though it contains LoweredEmail, it's the worst possible kind of index. By putting the least selective column first, it defeats the purpose of the index completely. Every request to get a user name by email address has to perform a full scan (oops!).
This is the last thing you want to see in an execution plan, especially with a fairly big table.
It should have been just
```sql
CREATE CLUSTERED INDEX [aspnet_Membership_index]
ON [dbo].[aspnet_Membership]([LoweredEmail] ASC);
```
which lets SQL Server use the optimal execution plan.
If you look at the Azure SQL Database recommendations, it suggests creating a non-clustered index on LoweredEmail. That is not technically incorrect, and it still helps. However, keep in mind that each non-clustered index has to "duplicate" the clustered index key in order to identify the rows, so keeping the useless clustered index increases waste and slows down performance (even if just a little, because you have to perform more reads to get the same data). That said, if your database is currently performing badly, adding a non-clustered index is a much quicker and safer option. The change to the clustered index should be done with caution, at a low-traffic time.
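For reference, a minimal sketch of that quick-win option (the index name here is my own choosing, not something Azure prescribes):

```sql
-- Non-clustered index covering the email lookup.
-- Safe to create online without touching the existing clustered index.
CREATE NONCLUSTERED INDEX [IX_aspnet_Membership_LoweredEmail]
ON [dbo].[aspnet_Membership]([LoweredEmail] ASC);
```

This gives the email lookup an index seek immediately; replacing the clustered index can then be scheduled separately as the riskier, more disruptive change.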
I tested the stored procedure on the database above, without any additional indexes.
I'm a mediocre gamer by any measure, but that does not change the fact that I love playing games. Great games only. With my very limited time and budget, I must be very selective about the games I play. For a very long time, Metacritic has been my go-to website to check whether I should play a game – and I set my bar rather high: anything less than 80 is a big no-no. Less than 85 is a no, unless it's a sequel to a game I loved (so I can see how the story turns out). More than 85 is "I'll think about it" (with a very tempting discount, I might make up my mind). A game with a Metacritic score of at least 90 makes my shortlist (but not necessarily my playing list). A score of at least 95 means "must try" for me. I might not love it, but a game with such reviews must have something I'd like about it.
However, over time, I realized what Metacritic really is, and why reviews can be full of sh*t.
Game reviews, like almost any other content on the Internet, including the post you are reading, are opinions. They are opinions just like yours and mine; the only thing that makes critics stand out is professionalism.
But that’s not something I’ve seen in many reviews which are listed on Metacritic.
You can love or hate a game. You can talk about it openly – it's your right! But if you want to help your readers decide whether they should buy or play the game based on your judgment, you have to be fair. Your opinions have to be as objective as possible. Of course, there is no such thing as "truly objective" in reviews, but yours should contain a reasonable amount of objectivity.
Unfortunately, in this corner of the Internet, clicks and views are everything. More views mean more ad impressions and ad clicks, and that means more money.
The best-known trick for clicks is, of course, a controversial opinion.
Metacritic has a fairly lax review policy. The only measurable criterion is that you must have at least 30 reviews on your website and review fairly regularly. All the other criteria about ethics or professionalism are subjective. And that is being heavily abused.
Lesser-known websites have been using that tactic to draw attention to themselves. By giving a highly anticipated game an unreasonably low score, they create controversy. People rush to visit their websites to comment on the absurd review, or even share the review so their friends can attack it.
Who is benefiting? Do you have to guess?
Let's talk about the infamous Washington Post review of Uncharted 4: A Thief's End. Against 113 reviews averaging 93/100, they gave it 2/5 (equivalent to 40/100).
Or the slightly less infamous USGamer review of Horizon Zero Dawn: 5/10 for a game which is otherwise rated 89/100.
Can you imagine it?
And the Washington Post is not the only one. GameCritics, another lesser-known site, jumped in and gave the game 50/100.
You can be harsh – that's what famous magazines like Edge are known for – but you also need to be fair. You can criticize a game as much as you like, as long as the criticism is objective – why you don't like it, and what it could have done better. When you bash a game because it's not to your taste, you become biased. The said review from the Washington Post was even disputed by their own staff:
I’ve never disagreed more with a piece of content on our website as I have with this. haha
The worst thing is that this kind of shady tactic is not a one-off per review. Some sites are known to use it systematically, hoping to attract more clicks to their website:
GameCritics is just slightly better:
Do you see the pattern here? You can expect those websites will, more often than not, give the lowest scores to a highly anticipated game, so their reviews become more visible and attract more clicks. People go there and bash the review, just as the reviewers did with the game. In the end, who benefits?
Metacritic has changed the way we look at reviews. Before Metacritic, it was more difficult to get a fairly balanced, objective view of a game. But the good deed of Metacritic is being abused, and if they don't do anything to fight that, they will soon make themselves obsolete.
This is the third part of the series: How to survive and thrive – a series for new developers to become better at their jobs. You can read the first two parts here and here.
In the military, there is a term: "uphill battle". That's when you have to fight your way up a hill while your enemy controls the top. It's a very difficult fight, and your chance of success is low. Any experienced military leader knows uphill battles are something to avoid until there are no other options.
That also applies to any job, including programming.
The truth is, you shouldn't fight battles you can't win.
This is something you don't do daily, but you will probably need it one day, so it might come in handy.
Recently we got a question about how to update the code of all entries in the catalog. This is interesting because, even though you don't update codes that often (if at all, as the code is the identifier used to match entries with external systems, such as ERPs or PIMs), it raises the question of how to do a mass update on catalog entries.
Update the code directly via a database query. This is supposedly the fastest way to do such a thing. If you have been following my posts closely, you must be familiar with my note that Episerver does not disclose the database schema. I list it here because it's an option, but not a good one. It easily goes wrong (and causes catastrophes), you have to deal with versions and cache, and those can be hairy to get right. Direct data manipulation should only be used as a last resort when no other option is available.
No, this is not really "art" – I'm just trying for a more clickbait-y title. It's more about understanding what you have at your disposal and using it to your benefit – in this case, how a new SQL statement can drastically improve your performance.
In this blog post we will look into the paging feature of SQL Server. In Commerce we usually work with large sets of data – millions of rows are fairly common – and it's natural to load data by page. There is no point loading thousands, or even millions, of rows in one go. First, it's not practical to display all of them. Second, you'll likely end up with a timeout exception and/or an out-of-memory exception. Even if you are lucky enough to get through, it can still bring your SQL Server instance to its knees, and transferring that much data over the network will be another bottleneck for your system. So, my friends, the best practice for loading data is to do it in batches, not everything at once.
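As a minimal sketch of what paging in batches looks like with the OFFSET … FETCH clauses (available since SQL Server 2012) – the table and column names below are illustrative, not an actual Commerce schema:

```sql
-- Load page N of a large table, @PageSize rows at a time.
DECLARE @PageIndex INT = 0;   -- zero-based page number
DECLARE @PageSize  INT = 50;

SELECT OrderId, CustomerId, Created
FROM dbo.Orders
ORDER BY OrderId              -- ORDER BY is mandatory for OFFSET/FETCH
OFFSET @PageIndex * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;
```

Each round trip then transfers only one page of rows, keeping memory, network, and lock pressure bounded no matter how large the table grows.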
At Episerver development team, we understand the importance of good performance. Who would not like a lightning fast website? We work hard to ensure the framework is fast, and we seize (almost) every opportunity to make it faster.
You know that in Commerce 10.2 we introduced a new cart mode – the serializable cart – and it has proven to bring great performance compared to the "old/traditional" approach. Our own tests showed it to be 3–5x faster. But can it be even faster? Probably, yes.
And we actually did make some improvements in later versions. In the scope of this blog post, we will focus on one specific aspect – and learn a little more about SQL Server performance optimization.
Indexes are crucial to SQL Server performance. Having the right indexes can make the difference between night and day in your application's performance – as I once discussed here.
However, even having the right indexes is not everything. You have to keep them healthy. Indexes, like any other kind of storage, are subject to fragmentation. SQL Server works best when the index structure is compact and contiguous, but with all the inserts/updates/deletes, fragmentation is inevitable. As fragmentation grows, it starts to affect SQL Server's performance: instead of reading just one page, it now has to read two, which increases both the time and resources needed, and so on and so forth.
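To see how fragmented your indexes actually are, you can query the `sys.dm_db_index_physical_stats` DMV. A sketch (the table name in the ALTER INDEX statements is a placeholder, and the 5%/30% thresholds are the commonly cited rule of thumb, not hard rules):

```sql
-- Report fragmented indexes in the current database.
SELECT i.name            AS index_name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
JOIN sys.indexes i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 5
  AND s.page_count > 100     -- tiny indexes are not worth defragmenting
ORDER BY s.avg_fragmentation_in_percent DESC;

-- Rule of thumb: REORGANIZE between ~5% and ~30% fragmentation,
-- REBUILD above ~30%.
ALTER INDEX ALL ON dbo.YourTable REORGANIZE;
-- ALTER INDEX ALL ON dbo.YourTable REBUILD;
```

REORGANIZE is an online, interruptible operation, while REBUILD recreates the index from scratch – so the latter is best scheduled for low-traffic windows, just like the clustered index change discussed earlier.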