Indexes are crucial to SQL Server performance. Having the right indexes can make a night-and-day difference for your application's performance – as I once discussed here.
However, even having the right indexes is not everything. You also have to keep them healthy. Indexes, like any other kind of storage, are subject to fragmentation. SQL Server works best when the index structure is compact and contiguous, but with all of the inserts/updates/deletes, fragmentation is inevitable. As fragmentation grows, it starts to hurt SQL Server performance: instead of reading just one page, it now has to read two, which increases both the time and the resources needed, and so on.
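If you want to see how fragmented your indexes actually are, SQL Server exposes this through the sys.dm_db_index_physical_stats DMV. A minimal sketch (the table name here is just an example – point it at your own tables, or pass NULL for the object to scan the whole database):

```sql
-- List fragmentation per index for one table (LIMITED mode is the cheapest scan)
SELECT  i.name                          AS IndexName,
        s.avg_fragmentation_in_percent  AS FragmentationPct,
        s.page_count                    AS Pages
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID('dbo.NodeEntryRelation'), NULL, NULL, 'LIMITED') AS s
JOIN    sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
ORDER BY s.avg_fragmentation_in_percent DESC;
```

Note that fragmentation percentages on very small indexes (a few pages) are mostly noise and can usually be ignored.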
On a customer database where the fragmentation level of the indexes on the NodeEntryRelation table was more than 90%, they reported timeout exceptions and deadlocks, mostly around queries to that table.
I decided to run this test.
set statistics io on
set statistics time on

exec ecf_Catalog_GetChildrenEntries @catalogId = 1

set statistics time off
set statistics io off
Before:
Table 'NodeEntryRelation'. Scan count 9, logical reads 2813, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'CatalogEntry'. Scan count 9, logical reads 9185, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 859 ms, elapsed time = 351 ms.
SQL Server Execution Times:
CPU time = 859 ms, elapsed time = 359 ms.
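The rebuild between the two runs is a single statement. A sketch, assuming the table lives in the dbo schema (you can also name a specific index instead of ALL):

```sql
-- Rebuild every index on the table (offline by default; use WITH (ONLINE = ON)
-- on editions that support it to avoid blocking)
ALTER INDEX ALL ON dbo.NodeEntryRelation REBUILD;

-- Or, for a lighter-weight, always-online alternative at lower fragmentation levels:
ALTER INDEX ALL ON dbo.NodeEntryRelation REORGANIZE;
```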
And after rebuilding the index, the difference is dramatic:
(0 row(s) affected)
Table 'NodeEntryRelation'. Scan count 1, logical reads 1531, physical reads 0, read-ahead reads 35, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'CatalogEntry'. Scan count 1, logical reads 8365, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 140 ms, elapsed time = 167 ms.
SQL Server Execution Times:
CPU time = 140 ms, elapsed time = 210 ms.
So CPU time is cut roughly six-fold (859 ms down to 140 ms), while elapsed time is roughly halved – all for a few seconds spent rebuilding the index.
In Commerce 10.5, we added a new scheduled job, named "Maintain Database Indices Job", to let you regularly maintain the health of your indexes. The job will reorganize or rebuild the indexes based on fragmentation thresholds: by default, if the fragmentation level is more than 10% but less than 30%, it reorganizes the index, and if it's more than 30%, it rebuilds it. You can control these thresholds with the episerver::LowFragmentationThreshold and episerver::HighFragmentationThreshold settings, respectively, in the appSettings section of web.config.
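If you want the same kind of maintenance outside the scheduled job, the reorganize-or-rebuild decision can be approximated in plain T-SQL. This is only a sketch of that logic with the default 10%/30% thresholds, not the actual job's implementation; it prints the commands rather than executing them:

```sql
-- Walk every index over 10% fragmentation and emit REORGANIZE or REBUILD
DECLARE @cmd nvarchar(max);
DECLARE cur CURSOR FAST_FORWARD FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id))
         + N'.' + QUOTENAME(OBJECT_NAME(s.object_id))
         + CASE WHEN s.avg_fragmentation_in_percent > 30
                THEN N' REBUILD;' ELSE N' REORGANIZE;' END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
         ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.avg_fragmentation_in_percent > 10
      AND i.name IS NOT NULL;          -- skip heaps
OPEN cur;
FETCH NEXT FROM cur INTO @cmd;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @cmd;   -- swap PRINT for EXEC sp_executesql @cmd to actually run them
    FETCH NEXT FROM cur INTO @cmd;
END
CLOSE cur;
DEALLOCATE cur;
```

In practice you would also filter out tiny indexes (e.g. page_count < 100), since their fragmentation numbers are meaningless.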
Keeping your indexes correct and healthy is key to good overall performance – and to keeping those headaches away.
you have to finally support persistable parameters for jobs. would be so much cooler 😉
Would be very nice to have – it’s just not in my power. 😉