Don’t use AspNetIdentity FindByEmailAsync/FindByIdAsync

Or any of their equivalents – FindByEmail/FindById, etc.

Why?

It’s slow. Slow enough to effectively kill your database, and therefore your website.

If you want to dig into the default implementation (which uses EntityFramework), this is what you end up with, whether you use FindByEmailAsync or its synchronous equivalent FindByEmail:

    public virtual Task<TUser> FindByEmailAsync(string email)
    {
      this.ThrowIfDisposed();
      return this.GetUserAggregateAsync((Expression<Func<TUser, bool>>) (u => u.Email.ToUpper() == email.ToUpper()));
    }

It finds the user by matching the email, but not before calling ToUpper on both sides of the comparison. This is to ensure correctness, because a user can register their email with one casing and then try to log in typing it with a different casing. If the database uses a CS – case sensitive – collation, that would not be a match.

That is fine for C#/.NET, but it is bad for SQL Server. When it reaches the database, this query is generated:

(@p__linq__0 nvarchar(4000))SELECT TOP (1) 
    [Extent1].[Id] AS [Id], 
    [Extent1].[NewsLetter] AS [NewsLetter], 
    [Extent1].[IsApproved] AS [IsApproved], 
    [Extent1].[IsLockedOut] AS [IsLockedOut], 
    [Extent1].[Comment] AS [Comment], 
    [Extent1].[CreationDate] AS [CreationDate], 
    [Extent1].[LastLoginDate] AS [LastLoginDate], 
    [Extent1].[LastLockoutDate] AS [LastLockoutDate], 
    [Extent1].[Email] AS [Email], 
    [Extent1].[EmailConfirmed] AS [EmailConfirmed], 
    [Extent1].[PasswordHash] AS [PasswordHash], 
    [Extent1].[SecurityStamp] AS [SecurityStamp], 
    [Extent1].[PhoneNumber] AS [PhoneNumber], 
    [Extent1].[PhoneNumberConfirmed] AS [PhoneNumberConfirmed], 
    [Extent1].[TwoFactorEnabled] AS [TwoFactorEnabled], 
    [Extent1].[LockoutEndDateUtc] AS [LockoutEndDateUtc], 
    [Extent1].[LockoutEnabled] AS [LockoutEnabled], 
    [Extent1].[AccessFailedCount] AS [AccessFailedCount], 
    [Extent1].[UserName] AS [UserName]
    FROM [dbo].[AspNetUsers] AS [Extent1]
    WHERE ((UPPER([Extent1].[Email])) = (UPPER(@p__linq__0))) OR ((UPPER([Extent1].[Email]) IS NULL) AND (UPPER(@p__linq__0) IS NULL))

If you can’t spot the problem, don’t worry – I have seen experienced developers make the same mistake. By applying the UPPER function to the column, you effectively remove any benefit of the index that might be on the Email column. That means this query will do an index scan every time it is called. The TOP (1) somewhat reduces the impact (the scan can stop as soon as it finds a match), but if there is no match – e.g. the email is not registered – it will be a full index scan.

If you have a lot of registered customers, frequent calls to that query can effectively kill your database.

And how to fix it

Fixing this issue is a bit cumbersome, because the code is well hidden inside the AspNetIdentity EntityFramework implementation. But it’s not impossible. First we need a UserStore which does not use ToUpper for the comparison:

public class FoundationUserStore<TUser> : UserStore<TUser> where TUser : IdentityUser, IUIUser, new()
{
    public FoundationUserStore(DbContext context)
        : base(context)
    { }

    public override Task<TUser> FindByEmailAsync(string email)
    {
        return GetUserAggregateAsync(x => x.Email == email);
    }

    public override Task<TUser> FindByNameAsync(string name)
    {
        return GetUserAggregateAsync(x => x.UserName == name);
    }
}

And then a new UserManager to use that new UserStore:

    public class CustomApplicationUserManager<TUser> : ApplicationUserManager<TUser> where TUser : IdentityUser, IUIUser, new()
    {
        public CustomApplicationUserManager(IUserStore<TUser> store)
            : base(store)
        {
        }

        public static new ApplicationUserManager<TUser> Create(IdentityFactoryOptions<ApplicationUserManager<TUser>> options, IOwinContext context)
        {
            var manager = new ApplicationUserManager<TUser>(new FoundationUserStore<TUser>(context.Get<ApplicationDbContext<TUser>>()));

            // Configure validation logic for usernames
            manager.UserValidator = new UserValidator<TUser>(manager)
            {
                AllowOnlyAlphanumericUserNames = false,
                RequireUniqueEmail = true
            };

            // Configure validation logic for passwords
            manager.PasswordValidator = new PasswordValidator
            {
#if DEBUG
                RequiredLength = 2,
                RequireNonLetterOrDigit = false,
                RequireDigit = false,
                RequireLowercase = false,
                RequireUppercase = false
#else
                RequiredLength = 6,
                RequireNonLetterOrDigit = true,
                RequireDigit = true,
                RequireLowercase = true,
                RequireUppercase = true

#endif
            };

            // Configure user lockout defaults
            manager.UserLockoutEnabledByDefault = true;
            manager.DefaultAccountLockoutTimeSpan = TimeSpan.FromMinutes(5);
            manager.MaxFailedAccessAttemptsBeforeLockout = 5;

            var provider = context.Get<ApplicationOptions>().DataProtectionProvider.Create("EPiServerAspNetIdentity");
            manager.UserTokenProvider = new DataProtectorTokenProvider<TUser>(provider);

            return manager;
        }
    }

And then a way to register our UserManager:

    public static IAppBuilder AddCustomAspNetIdentity<TUser>(this IAppBuilder app, ApplicationOptions applicationOptions) where TUser : IdentityUser, IUIUser, new()
    {
        applicationOptions.DataProtectionProvider = app.GetDataProtectionProvider();

        // Configure the db context, user manager and signin manager to use a single instance per request
        app.CreatePerOwinContext<ApplicationOptions>(() => applicationOptions);
        app.CreatePerOwinContext<ApplicationDbContext<TUser>>(ApplicationDbContext<TUser>.Create);
        app.CreatePerOwinContext<ApplicationRoleManager<TUser>>(ApplicationRoleManager<TUser>.Create);
        app.CreatePerOwinContext<ApplicationUserManager<TUser>>(CustomApplicationUserManager<TUser>.Create);
        app.CreatePerOwinContext<ApplicationSignInManager<TUser>>(ApplicationSignInManager<TUser>.Create);

        // Configure the application
        app.CreatePerOwinContext<UIUserProvider>(ApplicationUserProvider<TUser>.Create);
        app.CreatePerOwinContext<UIRoleProvider>(ApplicationRoleProvider<TUser>.Create);
        app.CreatePerOwinContext<UIUserManager>(ApplicationUIUserManager<TUser>.Create);
        app.CreatePerOwinContext<UISignInManager>(ApplicationUISignInManager<TUser>.Create);

        // Saving the connection string in case the dbcontext is requested from a non-web context
        ConnectionStringNameResolver.ConnectionStringNameFromOptions = applicationOptions.ConnectionStringName;

        return app;
    }

Finally, replace the normal app.AddAspNetIdentity with this:

        app.AddCustomAspNetIdentity<SiteUser>(new ApplicationOptions
        {
            ConnectionStringName = commerceConectionStringName
        });

As I mentioned, this is cumbersome to do. If you know a better way to do it, I’m all ears ;).

We are also skipping the case-sensitivity part. In most cases it’ll be fine, as you are most likely using a CI collation. But it’s better to be sure than to leave it to chance. We will address that in the second part of this blog post.

Register your custom implementation, the sure way

The point of Episerver dependency injection is that you can plug in your custom implementation for, well, almost everything. But it can be tricky at times to properly register your custom implementation.

The default DI framework (and possibly most other popular DI frameworks) works in such a way that the implementation registered later wins, i.e. it overrides any implementation registered before it. To make Episerver use your implementation, you have to make sure yours is registered last.

  • Never register your custom implementation using the ServiceConfiguration attribute. Implementations with that attribute are registered first in the initialization pipeline, so you will run into one of these situations:
    • The default implementation is registered in an IConfigurableModule.ConfigureContainer. As those are registered later than any implementation using ServiceConfiguration, yours will be overridden by the default one.
    • The default implementation is also registered using ServiceConfiguration. Now you run into a nondeterministic situation – the order is randomized every time your website starts. Sometimes yours wins, sometimes the default one does, and that can cause some nasty bugs (a Heisenbug, if you know the reference 😉).
  • That leaves registering your implementation in IConfigurableModule.ConfigureContainer. In many cases, registering your implementations here will just work, because the default implementations are registered by the ServiceConfiguration attribute. However, that is not always the case. There is a possibility that the default one is registered using IConfigurableModule.ConfigureContainer as well, and then things get tricky. First of all, unlike IInitializableModule, where you can make your module depend on a specific module, the order in which IConfigurableModule.ConfigureContainer is executed is not determined. Even if you were allowed to declare the dependency, it is not clear which module you should depend on, and in many cases that module is internal, so you can’t specify it.

That is the point of this post, then. To make sure your implementation is registered regardless of how the default one is registered, you can always fall back to the ConfigurationComplete event of ServiceConfigurationContext. It is raised once all ConfigureContainer calls have completed, so you can be sure the default implementation has been registered – time to override it!

        public void ConfigureContainer(ServiceConfigurationContext context)
        {
            context.ConfigurationComplete += Context_ConfigurationComplete;
        }

        private void Context_ConfigurationComplete(object sender, ServiceConfigurationEventArgs e)
        {
            e.Services.AddSingleton<IOrderRepository, CustomOrderRepository>();
        }
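For completeness, a minimal sketch of the initializable module these two methods would typically live in (the module name is made up, and CustomOrderRepository stands in for your own implementation):

[InitializableModule]
public class OverrideDefaultsModule : IConfigurableModule
{
    public void ConfigureContainer(ServiceConfigurationContext context)
    {
        // Defer the registration until every ConfigureContainer has run.
        context.ConfigurationComplete += Context_ConfigurationComplete;
    }

    private void Context_ConfigurationComplete(object sender, ServiceConfigurationEventArgs e)
    {
        // By now the default IOrderRepository registration is in place, so this one wins.
        e.Services.AddSingleton<IOrderRepository, CustomOrderRepository>();
    }

    public void Initialize(InitializationEngine context) { }

    public void Uninitialize(InitializationEngine context) { }
}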

Simple as that!

Note that this only applies to cases where you want to override the default implementation. If you are registering an implementation of your own interfaces/abstract classes, or adding an implementation rather than overriding the default one (for example, an implementation of IShippingPlugin), you can register it in any way you like.

Don’t insert an IEnumerable into the cache

You have been told that cache is great. Used correctly, it can greatly improve your website performance; sometimes it can even make the difference between life and death.

While that’s true, cache can be tricky to get right. The #1 issue with cache is cache invalidation, which we will get into in detail in another blog post. The topic of today is a hidden, easy-to-make mistake that can wreak havoc in production.

Can you spot the problem in this snippet?

var someData = GetDataFromDatabase();                
var dataToCache = someData.Concat(someOtherData);
InsertToCache(cacheKey, dataToCache);

If you can’t, don’t worry – it is more common than you’d imagine. It’s easy to think that you are inserting your data correctly into the cache.

Except you are not.

dataToCache is actually just a lazy enumerable. It’s not until you get your data back and actually access the elements that it is enumerated to fetch the data. If GetDataFromDatabase does not return a List<T>, but a lazily evaluated collection, that is when unpredictable things happen.

Who likes to have unpredictability on a production website?

A simple but effective piece of advice is to always make sure you have the actual data in the object you are inserting into the cache. Calling either .ToList() or .ToArray() before inserting the data would solve the problem nicely.
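To make it concrete, here is the same snippet with the fix applied (GetDataFromDatabase, someOtherData, InsertToCache and cacheKey are the same hypothetical helpers as above):

var someData = GetDataFromDatabase();
// Materialize the sequence once, so the cache holds the actual items
// instead of a lazy enumerable that re-executes on every access.
var dataToCache = someData.Concat(someOtherData).ToList();
InsertToCache(cacheKey, dataToCache);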

And that applies to any other lazily loaded data as well.

Include/IncludeOn/Exclude/ExcludeOn: a simple explanation

When I came across this question https://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2020/3/trouble-with-availablecontenttypesattribute-excludeonincludeon/ I was rather confused by the properties of AvailableContentTypesAttribute (admittedly I don’t use them that often!). Looking at the code that defines them, or at the XML documentation, does not really help. I only came to an understanding when I looked into how they are used, and I guess many other developers, especially beginners, might have the same confusion, so here’s a simple explanation.

Include: defines content types that can be created as children of a content of this type (the type decorated by the attribute)

IncludeOn: defines content types that can be parents of a content of this type

Exclude: defines content types that cannot be created as children of a content of this type

ExcludeOn: defines content types that cannot be parents of a content of this type.

If there is a conflict between those properties – for example content type A has Include with content type B, and content type B has ExcludeOn with content type A – then Exclude and ExcludeOn take priority (i.e. they override Include and IncludeOn; in the example above, content type B will not be able to be a child of content type A).
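To illustrate, a hypothetical pair of page types (all type names are made up for this example, and a StartPage type is assumed to exist): SectionPage only accepts ArticlePage children, and ArticlePage additionally refuses to be created under StartPage.

[ContentType]
[AvailableContentTypes(Include = new[] { typeof(ArticlePage) })]
public class SectionPage : PageData
{
}

[ContentType]
[AvailableContentTypes(ExcludeOn = new[] { typeof(StartPage) })]
public class ArticlePage : PageData
{
}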

While AvailableContentTypesAttribute is extremely helpful, the property naming is not the best – the names are short and symmetric, but they are not easy to understand and remember. An “improved” set might be:

CanBeParentOf

CanBeChildrenOf

CannotBeParentOf

CannotBeChildrenOf

Yes, they are more verbose, but they are unambiguous and you will not have to check the documentation (or this blog post) when you use them.

This is not the first time we have had something rather confusing in our API. One notable example is the old (now removed) ILinksRepository with the Source and Target properties on Relation. For quite some time I always had to check the code to know which to use; we then had the documentation updated and eventually changed them to Parent and Child. No API is created perfect, but we can improve over time.

Dynamic data store is slow, (but) you can do better.

If you have been developing with Episerver CMS for a while, you probably know about its embedded “ORM”, called Dynamic Data Store, or DDS for short. It allows you to define strongly typed classes which are mapped to the database for you. You don’t have to create the table(s), and you don’t have to write stored procedures to insert/query/delete data. Sounds very convenient, right? The fact is, DDS is quite frequently used, and, more often than you might think, misused.

As Joel Spolsky once said, every abstraction is leaky. An ORM will likely make you forget about the nature of the RDBMS underneath, and that can cause performance problems, sometimes severe ones.

Let me make it clear to you

DDS is slow, and it is not suitable for big sets of data.

If you want to store a few settings for your website, DDS should be fine. However, if you are thinking about hundreds of items, it is probably worth looking elsewhere. Thousands of items or more, then it would be a NO.

I did spend some time benchmarking DDS to see how bad it is. A simple test: add 10,000 items to a store, query each item, then delete each item, and see how long it takes.

The item is defined like this – just another boring POCO:

internal class ShippingArea : IDynamicData
{
    public Identity Id { get; set; }

    public string PostCode { get; set; }

    public string Area { get; set; }

    public DateTime Expires { get; set; }
}

The store is defined like this:

    public class ShippingAreaStore
    {
        private const string TokenStoreName = "ShippingArea";

        internal virtual ShippingArea CreateNew(string postCode, string area)
        {
            var token = new ShippingArea
            {
                Id = Identity.NewIdentity(),
                PostCode = postCode,
                Area = area,
                Expires = DateTime.UtcNow.AddDays(1)
            };
            GetStore().Save(token);
            return token;
        }

        internal virtual IEnumerable<ShippingArea> LoadAll()
        {
            return GetStore().LoadAll<ShippingArea>();
        }

        internal virtual IEnumerable<ShippingArea> Find(IDictionary<string, object> parameters)
        {
            return GetStore().Find<ShippingArea>(parameters);
        }

        internal virtual void Delete(ShippingArea shippingArea)
        {
            GetStore().Delete(shippingArea);
        }

        internal virtual ShippingArea Get(Identity tokenId)
        {
            return GetStore().Load<ShippingArea>(tokenId);
        }

        private static DynamicDataStore GetStore()
        {
            return DynamicDataStoreFactory.Instance.CreateStore(TokenStoreName, typeof(ShippingArea));
        }
    }

Then I have some quick and dirty code in QuickSilver’s ProductController.Index to measure the time (you will have to forgive some bad coding practices here ;)). As usual, Stopwatch should be used for demonstration only; it should not be used in production. If you want a good breakdown of your code execution, use tools like dotTrace; if you want to measure production performance, use a monitoring system like New Relic or Azure Application Insights:

        var shippingAreaStore = ServiceLocator.Current.GetInstance<ShippingAreaStore>();
        var dictionary = new Dictionary<string, string>();
        for (int i = 0; i < 10000; i++)
        {
            dictionary[RandomString(6)] = RandomString(10);
        }
        var identities = new List<ShippingArea>();
        var sw = new Stopwatch();
        sw.Start();
        foreach (var pair in dictionary)
        {
            shippingAreaStore.CreateNew(pair.Key, pair.Value);
        }
        sw.Stop();
        _logger.Error($"Creating 10000 items took {sw.ElapsedMilliseconds}");
        sw.Restart();
        foreach (var pair in dictionary)
        {
            Dictionary<string, object> parameters = new Dictionary<string, object>();
            parameters.Add("PostCode", pair.Key);
            parameters.Add("Area", pair.Value);
            identities.AddRange(shippingAreaStore.Find(parameters));
        }

        sw.Stop();
        _logger.Error($"Querying 10000 items took {sw.ElapsedMilliseconds}");
        sw.Restart();

        foreach (var id in identities)
        {
            shippingAreaStore.Delete(id);
        }
        sw.Stop();
        _logger.Error($"Deleting 10000 items took {sw.ElapsedMilliseconds}");

Everything is ready. So a few tries gave us a fairly stable result:

2019-12-02 13:33:01,574 Creating 10000 items took 11938

2019-12-02 13:34:59,594 Querying 10000 items took 118009

2019-12-02 13:35:24,728 Deleting 10000 items took 25131

And this is strictly single-threaded; a real site with a lot of traffic, and thus multiple inserts/queries/deletes at the same time, will certainly perform worse.

Can we do better?

There is a little-known feature of DDS that can help: you can mark a field as indexed by adding the [EPiServerDataIndex] attribute to the property. The new class would look like this:

    [EPiServerDataStore]
    internal class ShippingArea : IDynamicData
    {
        public Identity Id { get; set; }

        [EPiServerDataIndex]
        public string PostCode { get; set; }

        [EPiServerDataIndex]
        public string Area { get; set; }

        public DateTime Expires { get; set; }
    }

If you peek into the database during the test, you can see that the data is now being written to the Indexed_String01 and Indexed_String02 columns, instead of String01 and String02 as without the attributes. Such a change gives us a quite drastic improvement:

2019-12-02 15:38:16,376 Creating 10000 items took 7741

2019-12-02 15:38:19,245 Querying 10000 items took 2867

2019-12-02 15:38:44,266 Deleting 10000 items took 25019

The querying benefits greatly from the new indexes: instead of a clustered index scan, it can now do a non-clustered index seek + key lookup. Deleting is still equally slow, because the delete is done by a clustered index delete on the Id column, which we already had, and an index on a uniqueidentifier column is not the most effective one.

Before you get too happy with such an improvement, keep in mind that two separate indexes are added, one for Indexed_String01 and one for Indexed_String02. Naturally, we would want a combined index, clustered even, on those columns, but we just can’t.

What if we want to go bare metal, create a table ourselves and write the queries ourselves? Our repository would look like this:

public class ShippingAreaStore2
    {
        private readonly IDatabaseExecutor _databaseExecutor;

        public ShippingAreaStore2(IDatabaseExecutor databaseExecutor)
        {
            _databaseExecutor = databaseExecutor;
        }

        /// <summary>
        /// Creates and stores a new shipping area.
        /// </summary>
        /// <param name="postCode">The post code of the area.</param>
        /// <param name="area">The area name.</param>
        /// <returns>The newly created shipping area.</returns>
        internal virtual ShippingArea CreateNew(string postCode, string area)
        {
            var token = new ShippingArea
            {
                Id = Identity.NewIdentity(),
                PostCode = postCode,
                Area = area,
                Expires = DateTime.UtcNow.AddDays(1)
            };
            _databaseExecutor.Execute(() =>
            {
                var cmd = _databaseExecutor.CreateCommand();
                cmd.CommandText = "ShippingArea_Add";
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("Id", token.Id.ExternalId));
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("PostCode", token.PostCode));
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("Area", token.Area));
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("Expires", token.Expires));
                cmd.ExecuteNonQuery();
            });

            return token;
        }

        internal virtual IEnumerable<ShippingArea> Find(IDictionary<string, object> parameters)
        {
            return _databaseExecutor.Execute<IEnumerable<ShippingArea>>(() =>
            {
                var areas = new List<ShippingArea>();
                var cmd = _databaseExecutor.CreateCommand();
                cmd.CommandText = "ShippingArea_Find";
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("PostCode", parameters.Values.First()));
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("Area", parameters.Values.Last()));
                var reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    areas.Add(new ShippingArea
                    {
                        Id = (Guid)reader["Id"],
                        PostCode = (string)reader["PostCode"],
                        Area = (string)reader["Area"],
                        Expires = (DateTime)reader["Expires"]
                    });
                }
                return areas;
            });
        }

        /// <summary>
        /// Deletes a shipping area from the store.
        /// </summary>
        /// <param name="area">The shipping area to be deleted.</param>
        internal virtual void Delete(ShippingArea area)
        {
            _databaseExecutor.Execute(() =>
            {
                var cmd = _databaseExecutor.CreateCommand();
                cmd.CommandText = "ShippingArea_Delete";
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("PostCode", area.PostCode));
                cmd.Parameters.Add(_databaseExecutor.CreateParameter("Area", area.Area));
                cmd.ExecuteNonQuery();
            });
        }
    }

And that gives us these results:

2019-12-02 16:44:14,785 Creating 10000 items took 2977

2019-12-02 16:44:17,114 Querying 10000 items took 2315

2019-12-02 16:44:20,307 Deleting 10000 items took 3190

Moral of the story?

DDS is slow, and you should avoid using it if you are working with a fairly big set of data. If you have to use DDS for whatever reason, make sure to at least try to index the columns you query the most.

And at the end of the day, a hand-crafted custom table + query beats everything. Remember that you can use tools like Dapper to do most of the work for you.
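As a rough sketch of what that could look like with Dapper (the table, column and variable names follow the example above and are assumptions; a plain POCO with a Guid Id is used so Dapper can map the rows directly):

// Hypothetical row type with a plain Guid id so Dapper can map it without any custom type handler.
public class ShippingAreaRow
{
    public Guid Id { get; set; }
    public string PostCode { get; set; }
    public string Area { get; set; }
    public DateTime Expires { get; set; }
}

// Somewhere in your data access code (postCode, area and connectionString are assumed to be in scope):
using (var connection = new SqlConnection(connectionString))
{
    var areas = connection.Query<ShippingAreaRow>(
        "SELECT Id, PostCode, Area, Expires FROM dbo.ShippingArea WHERE PostCode = @postCode AND Area = @area",
        new { postCode, area }).ToList();
}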

Listing permissions per user/group

This week I came across this question on the Episerver World forum https://world.episerver.com/forum/developer-forum/Episerver-Commerce/Thread-Container/2019/5/get-rolepermission-data/ , and while it is not Commerce-related, it is quite interesting to solve. Perhaps this short post will help the original poster, as well as future visitors.

As in the thread, I replied with the first piece of the puzzle:


You can use PermissionTypeRepository to get the registered PermissionTypes, then PermissionRepository to figure out which groups/users have a specific permission 

If you want to list permissions granted to a specific role or user, it is just a simple inversion using a dictionary:

            var rolePermissionMap = new Dictionary<string, HashSet<PermissionType>>(StringComparer.OrdinalIgnoreCase);
            var permissionTypes = _permissionTypeRepository.List();
            foreach (var permissionType in permissionTypes)
            {
                var securityEntities = _permissionRepository.GetPermissions(permissionType);
                foreach (var securityEntity in securityEntities)
                {
                    if (rolePermissionMap.ContainsKey(securityEntity.Name))
                    {
                        rolePermissionMap[securityEntity.Name].Add(permissionType);
                    }
                    else
                    {
                        rolePermissionMap[securityEntity.Name] = new HashSet<PermissionType>() { permissionType };
                    }
                }
            }

As suggested above, we use PermissionTypeRepository to list the registered PermissionType(s), and then for each PermissionType we get the list of SecurityEntity it is granted to. A SecurityEntity can be a user, a group, or a virtual role, and is identified by its name. For the purpose of demonstration we only use names: for each SecurityEntity granted a permission, we check if it is already in our dictionary; if yes, we add the permission to its set, otherwise we add a new entry.

Simple, eh?

Unless you are assigning/unassigning permissions a lot, it is probably a good idea to keep this dictionary in cache for some time, because it is not exactly cheap to build.
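As a sketch, even a plain MemoryCache from System.Runtime.Caching would do (the cache key, the ten-minute lifetime, and the BuildRolePermissionMap helper wrapping the loop above are all arbitrary choices for illustration):

            var cache = MemoryCache.Default;
            var rolePermissionMap = cache.Get("RolePermissionMap") as Dictionary<string, HashSet<PermissionType>>;
            if (rolePermissionMap == null)
            {
                // BuildRolePermissionMap is a hypothetical method wrapping the loop above.
                rolePermissionMap = BuildRolePermissionMap();
                cache.Add("RolePermissionMap", rolePermissionMap, DateTimeOffset.UtcNow.AddMinutes(10));
            }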

Watch out for Singletons

If you are a seasoned Episerver developer, you should (and probably already do) know about the foundation of the framework: dependency injection. With an inversion of control framework (most commonly StructureMap, but recent versions of the Framework allow much more flexible options), you can easily register your implementations without having to manually create each and every instance with the new operator. Sounds great, right? Yes it is.

And the Episerver Framework makes it even easier with this nice ServiceConfiguration attribute:

[ServiceConfiguration]
public class MyClass 
{
}

so your class will be automatically registered, and whenever you need an instance of MyClass, the IoC framework will get the best instance for you, automatically, without breaking a sweat. Isn’t it nice? Yes it is.

But I guess you have also seen this from place to place:

[ServiceConfiguration(LifeCycle = ServiceInstanceScope.Singleton)]
public class MyClass 
{
}

So instead of creating a new instance every time you ask for one, the IoC framework only creates the instance once and reuses it every time. You must be thinking to yourself: even nicer, that would save a lot of time and memory.

But is it (nicer)?

You might want to think again.

Singleton means one thing: shared state (or even worse, global state). When a class is marked as Singleton, the instance of that class is shared across the site. The upside is, well, if your instance is expensive to create, you avoid just that. The downside is that shared state can be a real b*tch, and it might come back to bite you. What if MyClass holds the customer address of the current user? If I set my address on it, then because you get the same instance, you’re going to see mine. In Sweden that’s not a real problem, as you can easily find out where I live (even my birthday, if you want to send a gift, or flowers), but I guess in bigger parts of the world that is a serious privacy problem. And what if it’s not just the address?

And it’s not just that; Singleton also makes things complicated with the “inherited singleton”. Let’s take a look at the previous example. Now we see Singleton is bad, so let’s remove it from our class. But what if another class depends on our little MyClass:

[ServiceConfiguration(LifeCycle = ServiceInstanceScope.Singleton)]
public class MyOtherClass 
{
   private MyClass _myClass;
   public MyOtherClass(MyClass myClass)
   {
        _myClass = myClass;
   }
}

Now I hope you see where the problem is. One instance of MyOtherClass is shared across the site. And it comes with an attached MyClass instance. Even if you don’t intend it, that MyClass instance will also be shared. Same problem after all.
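If MyOtherClass genuinely needs to stay a singleton, one way out – sketched below, not the only option – is to stop capturing MyClass in the constructor and resolve it when needed, so it keeps its own registered lifecycle (injecting a factory such as Func<MyClass> is another option if your container supports it):

[ServiceConfiguration(LifeCycle = ServiceInstanceScope.Singleton)]
public class MyOtherClass
{
    public void DoSomething()
    {
        // Resolved per call instead of captured for the lifetime of the singleton,
        // so MyClass keeps whatever lifecycle it was registered with.
        var myClass = ServiceLocator.Current.GetInstance<MyClass>();
        // ... use myClass ...
    }
}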

Singleton was there to solve one problem (or two), but it can also introduce other problems if you don’t really think about whether your instance should be shared or not. And not just your class, but also the classes which depend on your class.

And it’s not just Singleton: HttpContext and Hybrid can be subject to the same problem, though to a lesser extent. Any lifecycle that shares state should be considered carefully: do you really need it, and what exactly are you sharing?

Lifecycles are hard to get right, but they can also work wonders, so please take your time to make them right. It’s worth it.

Adding a trailing slash to your URLs with UrlRewrite

It’s generally a best practice to add a trailing slash to your URLs, for performance reasons (let’s get back to that in another post). There are several ways to do it, but the best/simplest way, IMO, is using the UrlRewrite module. In most cases, processing the URLs before they reach the application code is most effective, and by using UrlRewrite we trust IIS to do the right thing – our application does not even need to know about it. It’s also a matter of reusability: you can simply copy a working, well-tested rule to other sites without having to worry much about compatibility.

Before getting started, it’s worth noting that UrlRewrite is an IIS module which is not installed by default; if you want to use it, you have to install it explicitly. Download and install it from https://www.iis.net/downloads/microsoft/url-rewrite .

And then we can simply add rules to the <system.webServer><rewrite><rules> section in web.config. For the purpose of this post, this rule should be enough:

<rule name="Add trailing slash" stopProcessing="true">
  <match url="(.*[^/])$" />
  <conditions>
    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
  </conditions>
  <action type="Redirect" redirectType="Permanent" url="{R:1}/" />
</rule>

If you are new to UrlRewrite, maybe it’s useful to explain the rule a bit. We add a rule which matches every URL except those that already end with a trailing slash (/). The two conditions exclude requests that map to physical files or directories on disk, so static files don’t get a slash appended. The action then issues a permanent redirect to the same URL with a slash added.


Maintaining your indexes

Indexes are crucial to SQL Server performance. Having the right indexes can make the difference between night and day for your application performance – as I once talked about here.

However, even having the right indexes is not everything. You have to keep them healthy. Indexes, like any other kind of storage, are subject to fragmentation. SQL Server works best if the index structure is compact and contiguous, but with all the inserts/updates/deletes, fragmentation is inevitable. When fragmentation grows, it starts affecting SQL Server performance: instead of having to read just one page, it now has to read two, which increases both the time and the resources needed, and so on and so forth.


Episerver caching issue with .NET 4.7

Update 1: The bug is fixed in .NET 4.7.1 (thanks to Pascal van der Horst for the information)

Update 2: The related bug is fixed in CMS Core 10.10.2 and 9.12.5. If upgrading to that version is not an option, you can contact Episerver support service for further assistance.

Original post:

If you are using Episerver and update to .NET 4.7 (even involuntarily, for example if you are using DXC/Azure to host your websites – Microsoft updated Azure to .NET 4.7 on June 26th), you might notice some weird performance issues. If your servers are in Europe, Asia or Australia, you can see a spike in memory usage. If your servers are in North America, you can see the number of database calls increase. In both cases your website performance is affected: the former can cause your websites to constantly restart as memory usage reaches a threshold, and the impact is even more obvious in the latter. Why?

It was a known issue in .NET 4.7, as mentioned here: https://support.microsoft.com/en-us/help/4035412/fix-expiration-time-issue-when-you-insert-items-by-using-the-cache-ins
