We at Episerver take performance seriously – it is one of the features we constantly monitor and fine-tune. This is especially true for database access, which is usually the bottleneck of the system: accessing a database is an I/O operation, and in most cases it is much more expensive than reading from or writing to memory, or even some complex computation (such as in promotions).
However, we can't always make our queries blazing fast. When the data set is simply too big, SQL Server will take time to complete the query, no matter how smartly it is written or how efficient the indexes are. In extreme cases, when the data set is big enough, the result is the infamous exception: "System.Data.SqlClient.SqlException: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
Of course, in such cases, the best solution is to take another approach. Is it possible to restructure your data (for example, your catalog) into smaller chunks that SQL Server can swallow? Or, instead of loading everything at once, can you load it in small batches?
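As a sketch of the batching idea, a single huge SELECT can often be replaced by paged reads using OFFSET/FETCH (SQL Server 2012 and later). The table and column names below are hypothetical, purely for illustration:

```sql
-- Hypothetical example: read a large table in pages of 1000 rows,
-- issuing one small query per page instead of one giant query.
-- Your application loops, increasing @Offset by @PageSize each time,
-- until a page comes back with fewer than @PageSize rows.
DECLARE @PageSize int = 1000;
DECLARE @Offset   int = 0;     -- advance by @PageSize on each iteration

SELECT EntryId, Name           -- hypothetical columns
FROM dbo.CatalogEntry          -- hypothetical table
ORDER BY EntryId               -- a stable ORDER BY is required for OFFSET/FETCH
OFFSET @Offset ROWS
FETCH NEXT @PageSize ROWS ONLY;
```

Each individual page query is small enough to finish well within the default timeout, at the cost of more round trips. For very large tables, keyset paging (WHERE EntryId > @LastSeenId) is usually faster than OFFSET, since OFFSET still has to skip over all preceding rows.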
That approach, however, is not always possible. Sometimes a query is meant to be slow, and you just want it to finish without the timeout exception. Is there a way to configure .NET to simply wait for SQL Server to complete the hard work, instead of growing impatient and declaring that SQL Server is not responding?
Yes, there is. However, it's not in the connectionString configuration, but in the SqlDataProvider configuration, as below:
<add name="SqlDataProvider" type="Mediachase.Data.Provider.SqlDataProvider, Mediachase.SqlDataProvider" connectionStringName="EcfSqlConnection" applicationName="ECApplication" commandTimeout="120"/>
Here we set commandTimeout to 120 seconds, letting the data layer wait that long before throwing an exception. If the attribute is missing, SqlDataProvider uses the default value of 30 seconds (the default of SqlCommand.CommandTimeout).
You might ask: why not configure a much higher value by default? What's the drawback? Well, queries are generally meant to be fast. If a query takes longer than expected (30 seconds), something has gone wrong: either the query is inefficient (bad conditions, missing or useless indexes), or SQL Server is overloaded. The point of the 30-second timeout is to make those issues visible as early as possible, instead of hiding them until everything explodes.
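Before raising the timeout, it is worth finding out which queries are actually slow. One common way is to ask SQL Server's own cached plan statistics; this is standard T-SQL against built-in DMVs (it requires VIEW SERVER STATE permission, and only covers plans still in the cache):

```sql
-- Top 10 cached statements by average elapsed time
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;
```

If the offenders this surfaces can be fixed with better indexes or rewritten conditions, that is almost always preferable to raising commandTimeout.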
It's worth noting that this configuration should be used with caution. As mentioned above, it should not normally be needed. Only set commandTimeout to a greater value if you know for sure that a query will fail without it, and all other options (better queries, better indexes, a better approach) have not worked.