There are a few rules for optimizing a slow-running query. Let us look at them one by one and see how each can help.
Rule # 1 : Always look at the query plan first.
I always start by looking at the query plan. There is always something that catches the eye. I pay special attention to the part that accounts for the most expensive portion of the whole execution plan.
Rule # 2 : A table scan or clustered index scan needs to be optimized to an index seek (if your table is small it does not matter, and a table scan may even give you a better result). A table scan happens when indexes are ignored while retrieving the data, or when no suitable index exists at all.
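As a minimal sketch of this rule (the table and column names here are made up for illustration), creating an index on the filtered column is what typically turns a scan into a seek:

```sql
-- Hypothetical example: Orders table, filtered by CustomerID.
-- Without an index on CustomerID, the plan shows a scan.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;

-- Creating a nonclustered index on the filter column
-- typically lets the optimizer use an index seek instead.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID);
```

Re-check the execution plan after creating the index to confirm the scan has actually become a seek.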
Rule # 3 : Bookmark lookups are not good, as they mean the correct index is not being used or the index is not sufficient to retrieve all the necessary data. The solution for a bookmark lookup is to create a covering index. A covering index contains all the columns the query needs, which improves its performance. It may be possible that the covering index is slower. Trial and error is a good method to find the right match for your need.
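A hedged sketch of a covering index, again with hypothetical table and column names; the INCLUDE clause carries the non-key columns the query selects, so the lookup back to the base table is no longer needed:

```sql
-- Covering index: key on the filter column, INCLUDE holds
-- the remaining selected columns so the bookmark (key)
-- lookup disappears from the plan.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalAmount);
```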
Rule # 4 : Experiment with index hints. In most cases the database engine picks the best index for the query. While determining which index is best, the engine has to make certain assumptions about a few database parameters (like IO cost, etc.). It is possible that the engine chooses an incorrect index for the query. I usually try a few of my own indexes and test whether the engine is picking the most efficient index for the query to run. I use an index hint for this purpose.
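In SQL Server an index hint is expressed with the WITH (INDEX(...)) table hint; a sketch using the hypothetical index from above:

```sql
-- Force a specific index so its actual cost can be compared
-- against the optimizer's default choice.
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerID))
WHERE CustomerID = 42;
```

Use hints only for testing; hard-coding them in production can backfire as the data distribution changes.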
Rule # 5 : Avoid functions on columns. If you use any function on a column that is retrieved, used in a join, or used in a WHERE condition, the query does not take advantage of the index. In a situation like this, I usually create a separate column and populate it ahead of time, using a simple UPDATE query, with the value my function would return. I put an index on this new column, and this way performance is improved. Creating an indexed view is a good option too.
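A sketch of one way to apply this rule (a persisted computed column rather than a manually populated one; table and column names are assumptions for illustration):

```sql
-- Anti-pattern: the function on the column defeats the index.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2010;

-- One workaround: a persisted computed column, which can be
-- indexed because YEAR() is deterministic.
ALTER TABLE dbo.Orders
ADD OrderYear AS YEAR(OrderDate) PERSISTED;

CREATE NONCLUSTERED INDEX IX_Orders_OrderYear
ON dbo.Orders (OrderYear);

-- The rewritten query can now seek on the new column.
SELECT OrderID FROM dbo.Orders
WHERE OrderYear = 2010;
```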
Rule # 6 : Do not rely on the execution plan only. I understand that this contradicts Rule # 1; however, the execution plan is not the only benchmark for indexes. There may be cases where something looks less expensive in the execution plan but in the real world takes more time to get the data. Test your indexes against different benchmarks.
Rule # 7 : ___________________________________________
Let me know what should be the Rule # 7.
Reference : Pinal Dave (https://blog.sqlauthority.com)
#7 — Better to have a THIN index over a FAT index. If the index is on a big VARCHAR column, see if it can be substituted with an INT column.
My Rule # 7 would be: Ensure there aren’t missing or outdated statistics
Luciano Evaristo Guerche (Gorše)
Taboão da Serra, SP, Brazil
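This comment's suggestion about statistics can be checked and acted on roughly like this (the table name is hypothetical):

```sql
-- Check when statistics on a table were last updated.
SELECT name, STATS_DATE(object_id, stats_id) AS LastUpdated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Orders');

-- Refresh statistics for one table, or for the whole database.
UPDATE STATISTICS dbo.Orders;
EXEC sp_updatestats;
```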
Rule # 7 : Avoid SP names that begin with sp_
Rule # 8 : Use SET NOCOUNT ON wherever possible
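A small sketch that follows both of these comment rules at once (the procedure and table names are made up): no sp_ prefix, which avoids SQL Server first probing the master database for the name, and SET NOCOUNT ON, which suppresses the "rows affected" messages sent to the client.

```sql
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    -- Suppress DONE_IN_PROC "rows affected" messages.
    SET NOCOUNT ON;

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
```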
A lot depends on the usage of the DB: if there are few data updates and a high volume of queries, create a covering index for the most heavily used queries, since the data can be obtained directly from the index and no lookup into the data table needs to be done.
If possible, store indexes on a separate drive from the data files.
Well, query optimization rules are not limited.
It depends on business needs as well.
For example, we always suggest having relationships between tables, but if they are heavily used for UPDATE, INSERT, and DELETE, I personally don't recommend it because it will affect performance; as I mentioned, it all depends on business needs.
Here are a few more tips that I hope will help you.
One: only “tune” SQL after the code is confirmed as working correctly.
(Use TOP (SQL Server) or LIMIT (other engines) to restrict the number of results where appropriate, e.g.
SELECT TOP 10 jim, sue, avril FROM dbo.names)
Two: ensure repeated SQL statements are written absolutely identically to facilitate efficient plan reuse: re-parsing can often be avoided for each subsequent use.
Three: code the query as simply as possible, i.e., no unnecessary columns selected, no unnecessary GROUP BY or ORDER BY.
Four: it is the same speed or faster to SELECT by actual column name(s) than to use SELECT *. The larger the table, the more likely the savings.
Five: do not perform operations on DB objects referenced in the WHERE clause (see Rule # 5 above).
Six: avoid a HAVING clause in SELECT statements; it only filters rows after they have all been returned. Use HAVING only when summary operations applied to columns must be restricted by the clause; a WHERE clause may be more efficient.
Seven: when writing a sub-query (a SELECT statement within the WHERE or HAVING clause of another SQL statement):
— use a correlated (refers to at least one value from the outer query) sub-query when the return is relatively small and/or other criteria are efficient i.e. if the tables within the sub-query have efficient indexes.
— use a noncorrelated (does not refer to the outer query) sub-query when dealing with large tables from which you expect a large return (many rows) and/or if the tables within the sub-query do not have efficient indexes.
— ensure that multiple sub-queries are in the most efficient order.
— remember that rewriting a sub-query as a join can sometimes increase efficiency.
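The last point above can be sketched as follows (hypothetical tables; the two forms are equivalent when CustomerID is unique in Customers):

```sql
-- Sub-query form.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID IN (SELECT c.CustomerID
                       FROM dbo.Customers AS c
                       WHERE c.Country = 'Brazil');

-- Join rewrite; compare execution plans to see which is cheaper.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE c.Country = 'Brazil';
```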
Eight: minimize the number of table lookups especially if there are sub-query SELECTs or multicolumn UPDATEs.
Nine: when doing multiple table joins consider the benefits/costs for each of EXISTS, IN, and table joins. Depending on your data one or another may be faster.
‘IN is usually the slowest’.
Note: when most of the filter criteria are in the sub-query IN may be more efficient; when most of the filter criteria are in the parent-query EXISTS may be more efficient.
Ten: where possible use EXISTS rather than DISTINCT.
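Tip Ten can be illustrated like this (hypothetical tables): the DISTINCT form joins every matching row and then removes duplicates, while the EXISTS form can stop probing at the first match per outer row.

```sql
-- DISTINCT version: join all matching rows, then dedupe.
SELECT DISTINCT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
INNER JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;

-- EXISTS version: no join output to deduplicate.
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1 FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID);
```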
Remember that many performance problems cannot be reproduced in your development environment, and examining the issue in the production environment is essential. When users report a performance problem, use SQL Server Profiler to ensure you are attempting to optimize the correct procedure, and to determine exactly how much time and how many resources it is consuming. This can later be used as a benchmark to measure how much improvement (if any) your performance optimization attempts have made.
Also, if users report that a database operation which normally runs within an acceptable amount of time will sporadically run for significantly longer, use sp_who2 or server traces to determine whether process blocking is at the root of the problem, and then what conditions (available memory, another process competing for the same resources, etc.) contribute to the blocking.
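The blocking check mentioned here is a one-liner; the BlkBy column identifies which session (SPID) is blocking another:

```sql
-- List all sessions; a non-blank BlkBy column shows the
-- SPID that is blocking that session.
EXEC sp_who2;
```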
Rule # 7 : Use SET STATISTICS IO, TIME ON in order to get the scan count, logical reads, etc.
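Sketched out, this comment's rule looks like the following (hypothetical query); the output appears in the Messages tab rather than in the result set:

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;
-- The Messages tab now reports scan count, logical reads,
-- and CPU/elapsed time for the statement.

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```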