If you have been developing with Episerver CMS for a while, you probably know about its embedded “ORM”, called Dynamic Data Store, or DDS for short. It allows you to define strongly typed classes that are mapped to the database for you: you don’t have to create the table(s), and you don’t have to write stored procedures to insert/query/delete data. Sounds very convenient, right? The fact is, DDS is quite frequently used, and more often than you might think, misused.
As Joel Spolsky once said, every abstraction is leaky. An ORM will likely make you forget about the nature of the RDBMS underneath, and that can cause performance problems, sometimes severe ones.
Let me make this clear:
DDS is slow, and it is not suitable for big sets of data.
If you want to store a few settings for your website, DDS should be fine. However, if you are thinking about hundreds of items, it is probably worth looking elsewhere. Thousands of items or more? Then it is a definite no.
I spent some time benchmarking DDS to see how bad it actually is. A simple test: add 10,000 items to a store, query each item, then delete each item, and see how long it takes.
The item is defined like this; it is just another boring POCO:
internal class ShippingArea : IDynamicData
{
    public Identity Id { get; set; }

    public string PostCode { get; set; }

    public string Area { get; set; }

    public DateTime Expires { get; set; }
}
The store is defined like this:
public class ShippingAreaStore
{
    private const string TokenStoreName = "ShippingArea";

    internal virtual ShippingArea CreateNew(string postCode, string area)
    {
        var token = new ShippingArea
        {
            Id = Identity.NewIdentity(),
            PostCode = postCode,
            Area = area,
            Expires = DateTime.UtcNow.AddDays(1)
        };
        GetStore().Save(token);
        return token;
    }

    internal virtual IEnumerable<ShippingArea> LoadAll()
    {
        return GetStore().LoadAll<ShippingArea>();
    }

    internal virtual IEnumerable<ShippingArea> Find(IDictionary<string, object> parameters)
    {
        return GetStore().Find<ShippingArea>(parameters);
    }

    internal virtual void Delete(ShippingArea shippingArea)
    {
        GetStore().Delete(shippingArea);
    }

    internal virtual ShippingArea Get(Identity tokenId)
    {
        return GetStore().Load<ShippingArea>(tokenId);
    }

    private static DynamicDataStore GetStore()
    {
        return DynamicDataStoreFactory.Instance.CreateStore(TokenStoreName, typeof(ShippingArea));
    }
}
Then I have some quick and dirty code in QuickSilver ProductController.Index to measure the time (you will have to forgive some bad coding practices here ;). As usual, Stopwatch should be used for demonstration only, not in production. If you want a good breakdown of your code execution, use a profiler such as dotTrace. If you want to measure production performance, use a monitoring system such as New Relic or Azure Application Insights):
var shippingAreaStore = ServiceLocator.Current.GetInstance<ShippingAreaStore>();
var dictionary = new Dictionary<string, string>();
for (int i = 0; i < 10000; i++)
{
    dictionary[RandomString(6)] = RandomString(10);
}
var identities = new List<ShippingArea>();
var sw = new Stopwatch();
sw.Start();
foreach (var pair in dictionary)
{
    shippingAreaStore.CreateNew(pair.Key, pair.Value);
}
sw.Stop();
_logger.Error($"Creating 10000 items took {sw.ElapsedMilliseconds}");
sw.Restart();
foreach (var pair in dictionary)
{
    Dictionary<string, object> parameters = new Dictionary<string, object>();
    parameters.Add("PostCode", pair.Key);
    parameters.Add("Area", pair.Value);
    identities.AddRange(shippingAreaStore.Find(parameters));
}
sw.Stop();
_logger.Error($"Querying 10000 items took {sw.ElapsedMilliseconds}");
sw.Restart();
foreach (var id in identities)
{
    shippingAreaStore.Delete(id);
}
sw.Stop();
_logger.Error($"Deleting 10000 items took {sw.ElapsedMilliseconds}");
Everything is ready. A few runs gave us a fairly stable result:
2019-12-02 13:33:01,574 Creating 10000 items took 11938
2019-12-02 13:34:59,594 Querying 10000 items took 118009
2019-12-02 13:35:24,728 Deleting 10000 items took 25131
And this is strictly single-threaded; a real site with a lot of traffic, and therefore multiple inserts, queries, and deletes happening at the same time, will certainly perform worse.
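If you want a rough feel for how it behaves under concurrent load, you could run the same inserts from several threads, for example like this (not part of the original benchmark, and just as unscientific as the Stopwatch numbers above):

// Rough sketch only; requires using System.Threading.Tasks.
// CreateNew obtains a store via the DynamicDataStoreFactory on each call;
// DynamicDataStore instances themselves are not thread-safe, so do not share
// a single store instance across threads.
Parallel.ForEach(
    dictionary,
    new ParallelOptions { MaxDegreeOfParallelism = 8 },
    pair => shippingAreaStore.CreateNew(pair.Key, pair.Value));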
Can we do better?
There is a feature of DDS that many people don’t know about: you can mark a property as indexed by adding the [EPiServerDataIndex] attribute to it. The new class would look like this:
[EPiServerDataStore]
internal class ShippingArea : IDynamicData
{
    public Identity Id { get; set; }

    [EPiServerDataIndex]
    public string PostCode { get; set; }

    [EPiServerDataIndex]
    public string Area { get; set; }

    public DateTime Expires { get; set; }
}
If you peek into the database during the test, you can see that the data is now being written to the Indexed_String01 and Indexed_String02 columns, instead of String01 and String02 as it was without the attributes. This change gives us a quite drastic improvement:
2019-12-02 15:38:16,376 Creating 10000 items took 7741
2019-12-02 15:38:19,245 Querying 10000 items took 2867
2019-12-02 15:38:44,266 Deleting 10000 items took 25019
Querying benefits greatly from the new indexes: instead of a clustered index scan, it can now do a non-clustered index seek plus a key lookup. Deleting is still just as slow, because the delete is done via a clustered index delete on the Id column, which we already had, and an index on a uniqueidentifier column is not the most effective one.
Before you get too happy with such an improvement, keep in mind that two separate indexes are added, one for Indexed_String01 and one for Indexed_String02. Naturally, we would want a combined index (clustered, even) on those columns, but we simply can’t have one.
What if we want to go bare metal: create a table ourselves and write the queries ourselves? Our repository would look like this:
public class ShippingAreaStore2
{
    private readonly IDatabaseExecutor _databaseExecutor;

    public ShippingAreaStore2(IDatabaseExecutor databaseExecutor)
    {
        _databaseExecutor = databaseExecutor;
    }

    /// <summary>
    /// Creates and stores a new shipping area.
    /// </summary>
    /// <param name="postCode">The post code of the area.</param>
    /// <param name="area">The name of the area.</param>
    /// <returns>The newly created shipping area.</returns>
    internal virtual ShippingArea CreateNew(string postCode, string area)
    {
        var token = new ShippingArea
        {
            Id = Identity.NewIdentity(),
            PostCode = postCode,
            Area = area,
            Expires = DateTime.UtcNow.AddDays(1)
        };
        _databaseExecutor.Execute(() =>
        {
            var cmd = _databaseExecutor.CreateCommand();
            cmd.CommandText = "ShippingArea_Add";
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("Id", token.Id.ExternalId));
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("PostCode", token.PostCode));
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("Area", token.Area));
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("Expires", token.Expires));
            cmd.ExecuteNonQuery();
        });
        return token;
    }

    internal virtual IEnumerable<ShippingArea> Find(IDictionary<string, object> parameters)
    {
        return _databaseExecutor.Execute<IEnumerable<ShippingArea>>(() =>
        {
            var areas = new List<ShippingArea>();
            var cmd = _databaseExecutor.CreateCommand();
            cmd.CommandText = "ShippingArea_Find";
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("PostCode", parameters.Values.First()));
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("Area", parameters.Values.Last()));
            var reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                areas.Add(new ShippingArea
                {
                    Id = (Guid)reader["Id"],
                    PostCode = (string)reader["PostCode"],
                    Area = (string)reader["Area"],
                    Expires = (DateTime)reader["Expires"]
                });
            }
            return areas;
        });
    }

    /// <summary>
    /// Deletes a shipping area from the store.
    /// </summary>
    /// <param name="area">The shipping area to be deleted.</param>
    internal virtual void Delete(ShippingArea area)
    {
        _databaseExecutor.Execute(() =>
        {
            var cmd = _databaseExecutor.CreateCommand();
            cmd.CommandText = "ShippingArea_Delete";
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("PostCode", area.PostCode));
            cmd.Parameters.Add(_databaseExecutor.CreateParameter("Area", area.Area));
            cmd.ExecuteNonQuery();
        });
    }
}
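The ShippingArea table and the ShippingArea_Add / ShippingArea_Find / ShippingArea_Delete stored procedures are not shown here. As a rough sketch of what they could look like (the column types and the non-clustered index on PostCode and Area are my assumptions, not taken from the original code), you could add something like this to ShippingAreaStore2, or run the equivalent SQL as part of your deployment:

private static readonly string[] SchemaStatements =
{
    @"CREATE TABLE dbo.ShippingArea (
        Id       uniqueidentifier NOT NULL PRIMARY KEY,
        PostCode nvarchar(50)     NOT NULL,
        Area     nvarchar(100)    NOT NULL,
        Expires  datetime2        NOT NULL)",

    @"CREATE NONCLUSTERED INDEX IX_ShippingArea_PostCode_Area
        ON dbo.ShippingArea (PostCode, Area)",

    @"CREATE PROCEDURE dbo.ShippingArea_Add
        @Id uniqueidentifier, @PostCode nvarchar(50), @Area nvarchar(100), @Expires datetime2
    AS
        INSERT INTO dbo.ShippingArea (Id, PostCode, Area, Expires)
        VALUES (@Id, @PostCode, @Area, @Expires)",

    @"CREATE PROCEDURE dbo.ShippingArea_Find
        @PostCode nvarchar(50), @Area nvarchar(100)
    AS
        SELECT Id, PostCode, Area, Expires FROM dbo.ShippingArea
        WHERE PostCode = @PostCode AND Area = @Area",

    @"CREATE PROCEDURE dbo.ShippingArea_Delete
        @PostCode nvarchar(50), @Area nvarchar(100)
    AS
        DELETE FROM dbo.ShippingArea
        WHERE PostCode = @PostCode AND Area = @Area"
};

// One-time setup. Each statement runs as its own batch, because CREATE PROCEDURE
// must be the only statement in a batch.
internal void EnsureSchema()
{
    _databaseExecutor.Execute(() =>
    {
        foreach (var statement in SchemaStatements)
        {
            var cmd = _databaseExecutor.CreateCommand();
            cmd.CommandText = statement;
            cmd.CommandType = CommandType.Text;
            cmd.ExecuteNonQuery();
        }
    });
}

The important part is the index on (PostCode, Area): that is exactly the combined index DDS could not give us.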
And running the same benchmark against this hand-rolled store gives us these results:
2019-12-02 16:44:14,785 Creating 10000 items took 2977
2019-12-02 16:44:17,114 Querying 10000 items took 2315
2019-12-02 16:44:20,307 Deleting 10000 items took 3190
Moral of the story?
DDS is slow and you should avoid using it if you are working with a fairly big set of data. If you have to use DDS for whatever reason, make sure to at least index the columns you query the most.
And at the end of the day, a hand-crafted custom table + query beats everything. Remember that you can use tools like Dapper to do most of the work for you.
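For example, a sketch of the same Find and Delete operations with Dapper might look like the code below. It assumes the same table and stored procedures as above, and it maps into a plain row DTO because Dapper does not know how to map a uniqueidentifier column to EPiServer's Identity type out of the box:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class DapperShippingAreaStore
{
    private readonly string _connectionString;

    public DapperShippingAreaStore(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Plain DTO for mapping; Id stays a Guid here instead of EPiServer's Identity.
    private class ShippingAreaRow
    {
        public Guid Id { get; set; }
        public string PostCode { get; set; }
        public string Area { get; set; }
        public DateTime Expires { get; set; }
    }

    public IEnumerable<ShippingArea> Find(string postCode, string area)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            // Dapper maps the result set to the DTO, so the hand-written reader loop disappears.
            return connection.Query<ShippingAreaRow>(
                    "ShippingArea_Find",
                    new { PostCode = postCode, Area = area },
                    commandType: CommandType.StoredProcedure)
                .Select(row => new ShippingArea
                {
                    Id = row.Id,
                    PostCode = row.PostCode,
                    Area = row.Area,
                    Expires = row.Expires
                })
                .ToList();
        }
    }

    public void Delete(ShippingArea area)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Execute(
                "ShippingArea_Delete",
                new { area.PostCode, area.Area },
                commandType: CommandType.StoredProcedure);
        }
    }
}

The point is not Dapper itself; it is that you stay in control of the schema and the queries while writing far less ADO.NET plumbing.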