Query Performance Tuning – General Guidelines

Query performance tuning – Introduction

Query performance tuning is the process of improving system performance to make it faster and more scalable. It is a creative, iterative process with a clear, well-defined objective, and reaching that objective requires following a defined set of steps.

Why we need this

Enterprise data changes frequently over time, which may impact the performance of existing queries. In most cases, a slight change to an index or to the query itself can improve performance significantly. In such cases, the improvement can be much better than a CPU or memory upgrade.

How to tune a query

The performance tuning process follows certain steps in a defined way to achieve a clearly stated objective.

[Flow chart: the query performance tuning process]

In the above flow chart, you can see that we begin by setting the objective and then measuring the current performance. If there is a performance issue, modify one thing at a time among the identified bottlenecks and ask, “Is this performance acceptable? Is the objective met?” Continue this one-change-at-a-time process until the objective is achieved.

Key areas to focus on

Below are the most common key areas:

  1. Query design issues: Here are some key best practices for designing SQL queries:
    • Use column names instead of * in your SELECT queries – Retrieving more columns than you need takes more time and IO.
    • Prefer JOINs over subqueries – In most cases, JOINs perform better than subqueries and nested queries, and they can also match on multiple columns.
    • Use EXISTS instead of IN, and NOT EXISTS instead of NOT IN – The IN operator builds its complete result set before further processing, whereas EXISTS stops processing as soon as it finds the first match, since that is enough to satisfy the condition.
    • Use SELECT ‘X’ (any constant value) with EXISTS to check whether data exists for a given condition, instead of COUNT (*) – EXISTS quits processing when it finds the first record, but COUNT (*) has to scan all the records of the given table (see the first sketch after this list).
    • Do not use functions or arithmetic operators on columns in the WHERE clause – Applying a function or arithmetic operator to a column in the WHERE clause can prevent the use of indexes, forcing the optimizer to choose a table or index scan instead of an index seek. Example: WHERE Table1.ColumnA + 1 = 100 should be written as WHERE Table1.ColumnA = 100 - 1 (see the sargability sketch after this list).
    • Think before using DISTINCT – DISTINCT requires extra work to de-duplicate the final result set. It may be that you have missed a join condition in your query and are getting duplicate rows as a result. In such cases, cross-check your query instead of simply putting DISTINCT after SELECT.
    • Prefer UNION ALL over UNION – UNION concatenates the result sets and then removes duplicates from the final result, as DISTINCT does. If your records are already unique, or duplicates are acceptable in your result set, use UNION ALL.
    • Be careful with the LIKE clause – A leading wildcard character in your LIKE condition can force a scan of the entire table or index, making the index worthless. Example: WHERE ColumnA LIKE ‘%MyText%’ cannot benefit from an index created on ColumnA and will perform an index scan, whereas WHERE ColumnA LIKE ‘MyText%’ can perform a seek on that index, which is better than a scan.
    • Avoid query hints – SQL Server generates a cost-based execution plan depending on indexes, statistics, parameter values, and so on. It is almost always better to let the optimizer use its own strategy rather than forcing a query hint such as a JOIN type (hash, merge, loop), which can seriously degrade performance when the table or index data or structure changes.
    • Prefer the BETWEEN operator over IN and OR, where possible – If you have a choice among IN, OR, and BETWEEN, prefer BETWEEN, as the optimizer can locate rows much faster with BETWEEN than with IN or OR. Example: WHERE ColumnA BETWEEN 1 AND 6 should be preferred over WHERE ColumnA IN (1, 2, 3, 4, 5, 6).
    • Avoid NULLable columns – NULLable columns add overhead when querying data, and fixed-length columns (CHAR, NCHAR) consume their full space even when NULL. If you must allow NULL values in a column, use a variable-length data type such as VARCHAR or NVARCHAR.
    • Be careful when reading the last generated identity value – Suppose you have a table TableA with an identity column and a FOR INSERT trigger that inserts into TableB, which has its own identity column (see the identity sketch after this list):
      • SCOPE_IDENTITY – Last identity value generated on the connection in the same scope, regardless of table. In this example it returns the TableA identity value.
      • IDENT_CURRENT – Last identity value generated in a given table, regardless of scope and connection. IDENT_CURRENT (‘TableA’) and IDENT_CURRENT (‘TableB’) return the last identity values of TableA and TableB respectively.
      • @@IDENTITY – Last identity value generated on the connection, regardless of table and statement scope. In this example it returns the TableB identity value, because the trigger fired last.
    • Table variables and temp tables – Prefer table variables for small row counts (around 100 or fewer). Table variables avoid locking overhead, which makes them efficient for small temporary storage. However, they do not store statistics, so the optimizer always estimates one row when building the execution plan, which can lead to a bad plan for larger data sets. For large data sets, a temp table performs much better: temp tables are very similar to regular tables, except that they are created per session in TempDB, and they support full indexing and statistics. That usually repays the extra IO and locking cost in such scenarios (see the sketch after this list).
    • SET NOCOUNT ON – By default, SELECT, INSERT, UPDATE, and DELETE statements return the number of rows affected. SET NOCOUNT ON suppresses this message and saves the associated network traffic.
    • Do not prefix user-defined stored procedure names with “sp_” – System procedures are prefixed with “sp_”, and any procedure with that prefix is treated as a system procedure: SQL Server searches the master database first and only then the current database. Also, if your procedure shares a name with an existing system procedure, it may behave in unexpected ways.
    • Avoid the HAVING clause if you can – HAVING should be used only to filter aggregated data. WHERE narrows the result set by restricting which rows participate in processing, whereas HAVING is applied after all processing and aggregation, just before the final output.
    • Use small transactions – A long-running transaction locks resources for a long time, which can result in blocking and deadlocks. Keep transactions small to avoid such situations.
  2. Inaccurate and poor indexes: Indexes play a vital role in query performance by helping queries retrieve and operate on less data, which in turn reduces IO, memory usage, and CPU time. This benefit comes at the extra cost of storing and maintaining the indexes alongside the table data pages. With missing or inaccurate indexes, the optimizer has to read and process all the records of a given table, which means far more data to operate on. Here are a few recommendations to help design better indexes:
    • Keep clustered index keys small – For larger tables with many inserts, the clustered index key should be unique, unchanging, and ever-increasing.
    • Use narrow indexes with fewer columns – Index data must be stored in order, and that order has to be maintained during DML operations on the table. More columns in an index mean wider storage and higher maintenance cost. A narrow index reduces IO during reads because more entries fit in one 8 KB page. Also keep in mind the data types of the participating columns: smaller data types fit in fewer data pages.
    • Create the clustered index first, if possible – Creating a clustered index on a table updates its key value in all non-clustered indexes already created. Avoid this additional cost by creating the clustered index first, followed by all non-clustered indexes.
    • Avoid clustered indexes on frequently updated columns – The clustered index key is referenced by all non-clustered indexes, so any update to the clustered key forces an update of that key everywhere it is referenced.
    • Use columnstore indexes for aggregations – Columnstore indexes, introduced in SQL Server 2012, store data column-wise instead of the traditional row-wise storage. This helps when a large volume of data needs to be aggregated and retrieved.
    • Consider the type of index, too – Only one clustered index can be created per table, because the clustered index is the actual table data and a table cannot be ordered in more than one way. If you repeatedly need the table ordered on a column that is not updated frequently, prefer a clustered index over a non-clustered one.
    • Use high-selectivity columns in indexes – Create indexes on columns with high selectivity (many distinct values in the table), as high selectivity leads to index seeks, while low-selectivity columns lead to index scans instead. For example, an index created on a “Gender” column that holds only male and female values is a low-selectivity index.
    • Use filtered indexes on low-selectivity columns – If a column has only a few unique values, create a filtered index for the slice of rows you actually query (see the filtered-index sketch after this list).
  3. Set-based queries instead of cursors and WHILE loops: SQL Server is not optimized for RBAR (“row by agonizing row”, i.e., one row at a time) processing. T-SQL, the scripting language of SQL Server, handles set-based data efficiently through joins and subqueries, whereas cursors and loops process one row at a time, which the database engine copes with poorly. In most cases (barring some exceptions), the logic can be rewritten without these performance killers; even a recursive CTE should be replaced with a set-based approach, if possible (see the cursor-versus-set sketch after this list).
  4. Database design issues: A database design should be neither over-normalized nor under-normalized. An over-normalized database needs more joins to fetch a record, which increases processing time, while an under-normalized database keeps a lot of redundant data, which can mean slow processing and inconsistent data. A design balanced between over- and under-normalization is best from a performance point of view.
  5. Fragmentation issues: Non-contiguous storage of data is known as fragmentation, and it accumulates as a table's data changes. Indexes created on the table store rows in a defined order, which must be maintained during insert, update, and delete operations. Internal fragmentation (unused space between records inside a page) increases the number of pages to read, and thus the IO; external fragmentation (non-contiguous storage of extents, caused by page splits) increases the switching between extents, and thus the disk seeks. Both types of fragmentation hurt performance badly and should be analyzed and removed from the database periodically.
  6. Inaccurate statistics issues: Statistics hold data distribution information for a specific column or columns of a table, gathered by sampling the table's data. The SQL Server optimizer uses statistics to decide the join types and data access methods (seek or scan) for execution plans. Inaccurate statistics can lead to awful execution plans and kill performance drastically. The auto-create and auto-update features of statistics should be left ON so the optimizer can benefit from them; if you have changed these settings, create a maintenance plan that updates statistics periodically.
  7. Execution plan issues: Sometimes the optimizer produces a bad execution plan for a stored procedure because of poor statistics and/or parameter sniffing. SQL Server creates the plan for a stored procedure on its first execution, based on the parameter values supplied at that time – this is known as parameter sniffing – and reuses that plan on every subsequent execution. Parameter sniffing can produce an ineffective plan when the cardinality of the first execution's parameter values is unrepresentative. A bad execution plan impacts query performance heavily and must be removed. The WITH RECOMPILE or OPTIMIZE FOR query hints can sometimes help with parameter sniffing, each with its own pros and cons, and such issues can be verified in the generated execution plan (see the last sketch after this list).
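
To make a few of these guidelines concrete, the sketches below are minimal T-SQL examples. The table and procedure names they use (Customers, Orders, Sales, Products, GetOrdersByStatus) are hypothetical, not part of this article – adapt them to your own schema. First, EXISTS versus IN and EXISTS versus COUNT (*):

    -- Hypothetical tables: Customers (CustomerID, CustomerName) and Orders (OrderID, CustomerID).

    -- IN materializes the full list of customer IDs from Orders before filtering:
    SELECT c.CustomerID, c.CustomerName
    FROM Customers AS c
    WHERE c.CustomerID IN (SELECT o.CustomerID FROM Orders AS o);

    -- EXISTS stops probing Orders as soon as one matching row is found:
    SELECT c.CustomerID, c.CustomerName
    FROM Customers AS c
    WHERE EXISTS (SELECT 'X' FROM Orders AS o WHERE o.CustomerID = c.CustomerID);

    -- Existence check with EXISTS instead of COUNT (*):
    IF EXISTS (SELECT 'X' FROM Orders WHERE CustomerID = 42)
        PRINT 'Customer 42 has at least one order';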
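
Next, sargability – keeping the indexed column free of functions and arithmetic so the optimizer can seek. A sketch assuming a hypothetical Sales table with an index on SaleDate:

    -- Non-sargable: the function wrapped around the column blocks an index seek.
    SELECT SaleDate, Amount
    FROM Sales
    WHERE YEAR(SaleDate) = 2016;

    -- Sargable rewrite: the column stays bare, so the index on SaleDate can be used.
    SELECT SaleDate, Amount
    FROM Sales
    WHERE SaleDate >= '20160101' AND SaleDate < '20170101';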
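
The identity-function behavior described in item 1 can be reproduced with two throwaway tables matching the TableA/TableB example above:

    CREATE TABLE TableA (IdA INT IDENTITY(1, 1), ValueA VARCHAR(50));
    CREATE TABLE TableB (IdB INT IDENTITY(1000, 1), ValueB VARCHAR(50));
    GO
    -- FOR INSERT trigger on TableA that inserts into TableB.
    CREATE TRIGGER trgA ON TableA FOR INSERT AS
        INSERT INTO TableB (ValueB) SELECT ValueA FROM inserted;
    GO
    INSERT INTO TableA (ValueA) VALUES ('test');

    SELECT SCOPE_IDENTITY()        AS ScopeIdentity, -- TableA value: same scope
           @@IDENTITY              AS LastIdentity,  -- TableB value: the trigger fired last
           IDENT_CURRENT('TableA') AS IdentA,        -- last TableA value, any session
           IDENT_CURRENT('TableB') AS IdentB;        -- last TableB value, any session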
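
For the table-variable-versus-temp-table point, a sketch (the Products source table is hypothetical):

    -- Table variable: fine for a handful of rows, but it keeps no statistics,
    -- so the optimizer estimates one row when building the plan.
    DECLARE @Small TABLE (Id INT PRIMARY KEY, Name VARCHAR(50));
    INSERT INTO @Small VALUES (1, 'a'), (2, 'b');

    -- Temp table: created per session in TempDB, supports indexes and statistics,
    -- so it scales to large intermediate result sets.
    CREATE TABLE #Large (Id INT PRIMARY KEY, Name VARCHAR(50));
    INSERT INTO #Large (Id, Name)
    SELECT ProductID, ProductName FROM Products;
    CREATE NONCLUSTERED INDEX IX_Large_Name ON #Large (Name);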
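
For the filtered-index recommendation in item 2, a sketch assuming the hypothetical Orders table has a low-selectivity Status column in which only a small fraction of rows are ‘Open’:

    -- The filtered index covers only the small, frequently queried slice.
    CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON Orders (OrderDate)
    WHERE Status = 'Open';

    -- This query can seek the small filtered index instead of scanning
    -- a full, low-selectivity index on Status.
    SELECT OrderID, OrderDate
    FROM Orders
    WHERE Status = 'Open' AND OrderDate >= '20160101';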
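
For item 3, the same update written RBAR-style with a cursor and then as a single set-based statement:

    -- RBAR version: the cursor touches one row at a time.
    DECLARE @Id INT;
    DECLARE cur CURSOR FOR SELECT OrderID FROM Orders WHERE Status = 'Open';
    OPEN cur;
    FETCH NEXT FROM cur INTO @Id;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE Orders SET Status = 'Expired' WHERE OrderID = @Id;
        FETCH NEXT FROM cur INTO @Id;
    END
    CLOSE cur;
    DEALLOCATE cur;

    -- Set-based rewrite: one statement, one pass over the data.
    UPDATE Orders SET Status = 'Expired' WHERE Status = 'Open';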
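
Finally, for the parameter sniffing issue in item 7, a sketch of the two hints mentioned above, wrapped in a hypothetical procedure that also follows the naming and NOCOUNT advice from item 1:

    CREATE PROCEDURE GetOrdersByStatus  -- no 'sp_' prefix
        @Status VARCHAR(20)
    AS
    BEGIN
        SET NOCOUNT ON;  -- suppress the rows-affected message
        SELECT OrderID, OrderDate
        FROM Orders
        WHERE Status = @Status
        OPTION (RECOMPILE);  -- compile a fresh plan per execution, at extra CPU cost
        -- Alternative: OPTION (OPTIMIZE FOR (@Status = 'Open'));
        -- compiles the plan for a chosen representative value instead.
    END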

Golden rule of performance tuning – The Pareto principle

Performance tuning follows “The Pareto principle”, also called the “80:20 rule” or the golden rule of performance tuning: 20% of the tuning effort can deliver up to 80% of the improvement, but the remaining 20% of improvement will cost you the other 80% of the effort.

What I have written above are general recommendations and may vary from scenario to scenario. I would, however, request you to share your experiences – covered in this document or not – that you think could be fruitful for everyone. Thanks for reading, and please do share and comment on this post if you liked it.
