SQL SERVER – Stored Procedure Optimization Tips – Best Practices

We will go over how to optimize a stored procedure by making simple changes in the code. Please note there are many more tips, which we will cover in future articles.


  • Include SET NOCOUNT ON statement: With every SELECT and DML statement, SQL Server returns a message that indicates the number of rows affected by that statement. This information is mostly helpful while debugging the code, but it is useless after that. By setting SET NOCOUNT ON, we can disable the feature of returning this extra information. For stored procedures that contain several statements or contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost because network traffic is greatly reduced.
CREATE PROC dbo.ProcName
AS
SET NOCOUNT ON;
--Procedure code here
SELECT column1 FROM dbo.TblTable1
-- Reset SET NOCOUNT to OFF
SET NOCOUNT OFF;
GO
  • Use schema name with object name: The object name is qualified if used with the schema name. The schema name should be used with the stored procedure name and with all objects referenced inside the stored procedure. This helps in directly finding the compiled plan instead of searching the objects in other possible schemas before finally deciding to use a cached plan, if available. This process of searching and deciding on a schema for an object leads to a COMPILE lock on the stored procedure and decreases the stored procedure’s performance. Therefore, always refer to the objects with a qualified name in the stored procedure, like:
SELECT * FROM dbo.MyTable -- Preferred method
-- Instead of
SELECT * FROM MyTable -- Avoid this method
--And finally call the stored procedure with qualified name like:
EXEC dbo.MyProc -- Preferred method
--Instead of
EXEC MyProc -- Avoid this method
  • Do not use the prefix “sp_” in the stored procedure name: If a stored procedure name begins with “sp_”, then SQL Server first searches in the master database and then in the current session’s database. Searching in the master database causes extra overhead, and can even give a wrong result if another stored procedure with the same name is found in the master database.
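As a rough sketch (the procedure name here is hypothetical, and dbo.tblPerson is the sample table used later in this article), prefer a name without the “sp_” prefix:
-- Avoid: names starting with "sp_" are looked up in the master database first
-- CREATE PROC dbo.sp_GetPersonByAge ...
-- Preferred: a prefix other than "sp_" (or no prefix at all)
CREATE PROC dbo.usp_GetPersonByAge
@Age INT
AS
SET NOCOUNT ON;
SELECT * FROM dbo.tblPerson WHERE Age = @Age;
GO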
  • Use IF EXISTS (SELECT 1) instead of (SELECT *): To check the existence of a record in another table, we use the IF EXISTS clause. The IF EXISTS clause returns TRUE if any value is returned from the internal statement, whether it is a single value “1”, all columns of a record, or a complete recordset. The output of the internal statement is not used. Hence, to minimize the data for processing and network transfer, we should use “1” in the SELECT clause of the internal statement, as shown below:
IF EXISTS (SELECT 1 FROM sysobjects
WHERE name = 'MyTable' AND type = 'U')
PRINT 'Table exists' -- statement to run when the record is found
  • Use the sp_executesql stored procedure instead of the EXECUTE statement: The sp_executesql stored procedure supports parameters. So, using the sp_executesql stored procedure instead of the EXECUTE statement improves the reusability of your code. The execution plan of a dynamic statement can be reused only if every character, including case, spaces, comments and parameters, is the same for both statements. For example, if we execute the below batch:
DECLARE @Query VARCHAR(100)
DECLARE @Age INT
SET @Age = 25
SET @Query = 'SELECT * FROM dbo.tblPerson WHERE Age = ' + CONVERT(VARCHAR(3),@Age)
EXEC (@Query)

If we again execute the above batch using a different @Age value, then the execution plan for the SELECT statement created for @Age = 25 would not be reused. However, if we write the above batch as given below,

DECLARE @Query NVARCHAR(100)
SET @Query = N'SELECT * FROM dbo.tblPerson WHERE Age = @Age'
EXECUTE sp_executesql @Query, N'@Age int', @Age = 25

the compiled plan of this SELECT statement will be reused for different values of the @Age parameter. The reuse of the existing compiled plan will result in improved performance.
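
One rough way to verify the reuse (a sketch, assuming the dbo.tblPerson example above and a test database you can freely query) is to look at the usecounts column of the cached plan after running the parameterized batch with several different @Age values; the count should grow instead of a new plan appearing for each value:
-- Inspect the plan cache for the parameterized statement
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%tblPerson%' -- note: this filter also matches the probe query itself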

  • Try to avoid using SQL Server cursors whenever possible: A cursor uses a lot of resources for overhead processing to maintain the current record position in a recordset, and this decreases performance. If we need to process records one by one in a loop, then we should use a WHILE loop. Wherever possible, we should replace the cursor-based approach with a set-based approach, because the SQL Server engine is designed and optimized to perform set-based operations very fast. Again, please note that a cursor is also a kind of WHILE loop.
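For example, here is a minimal set-based sketch, assuming hypothetical dbo.tblPerson and dbo.tblPersonAudit tables with PersonId and IsAudited columns; a cursor or WHILE loop would touch these rows one at a time, while a single UPDATE does the same work in one statement:
-- Set-based: one UPDATE replaces the row-by-row cursor loop
UPDATE p
SET p.IsAudited = 1
FROM dbo.tblPerson p
INNER JOIN dbo.tblPersonAudit a ON a.PersonId = p.PersonId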
  • Keep the transaction as short as possible: The length of a transaction affects blocking and deadlocking. Exclusive locks are not released until the end of the transaction. At higher isolation levels, shared locks are also held for the duration of the transaction. Therefore, a lengthy transaction means locks held for a longer time, and locks held for a longer time turn into blocking. In some cases, blocking also escalates into deadlocks. So, for faster execution and less blocking, the transaction should be kept as short as possible.
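A minimal sketch of the idea, using a hypothetical dbo.tblOrder table: do the reads, calculations and validations before BEGIN TRANSACTION, so that the transaction wraps only the statements that actually modify data:
DECLARE @OrderId INT
DECLARE @NewStatus VARCHAR(20)
-- Prepare values and run validations outside the transaction (hypothetical values)
SET @OrderId = 1001
SET @NewStatus = 'Shipped'
-- Keep the transaction limited to the actual data modification
BEGIN TRANSACTION
UPDATE dbo.tblOrder SET Status = @NewStatus WHERE OrderId = @OrderId
COMMIT TRANSACTION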
  • Use TRY-CATCH for error handling: Prior to SQL Server 2005, error handling code made up a big portion of the actual code, because an error check statement had to be written after every T-SQL statement. More code always consumes more resources and time. SQL Server 2005 introduced a new, simpler way to achieve the same purpose. The syntax is as follows:
BEGIN TRY
--Your t-sql code goes here
END TRY
BEGIN CATCH
--Your error handling code goes here
END CATCH
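
For example, here is a small sketch (reusing the hypothetical dbo.tblPerson table from above) that returns the error details from the built-in ERROR_ functions:
BEGIN TRY
UPDATE dbo.tblPerson SET Age = Age + 1 WHERE Age = 25
END TRY
BEGIN CATCH
-- ERROR_NUMBER(), ERROR_LINE() and ERROR_MESSAGE() are valid only inside the CATCH block
SELECT ERROR_NUMBER() AS ErrorNumber,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage
END CATCH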

Reference: Pinal Dave (https://blog.sqlauthority.com)

Best Practices, SQL Coding Standards, SQL Scripts, SQL Server, SQL Stored Procedure
181 Comments

  • Fortunately most applications have stored procedures that follow the same basic operations and design.

    So I always prefer some stored procedure generator so that sp becomes consistent.

    Tools4Sql.net has gr8 sp generator and it really generates optimized and efficient stored procedures for MS SQL Server

  • Hi Dave,

    I have a typical situation. I have two tables, 1. user 2.address. each user contains more than 2 or 3 address. I have to update user table with all address ids and count concatenated string in a field.

    ex: user.address = 4;addid1;sddid2;addid3.

    I have used cursor, while loop and temp table to update. in all cases it is taking more than 24 hours. for production we don’t have that much time. I have to complete in 4 to 6 hours.

    can you please guide me?

    your help much appreciate.

    Regards,
    Seshadri

  • VoodooPriest Tester
    February 25, 2011 3:22 pm

    I did a comparison executing and showing actual plan and cost and between the 2, they were both at 50% so could you provide an example of where there is a definite impact?

    The point I tried to verify was
    Use IF EXISTS (SELECT 1) instead of (SELECT *)

  • I have used Tools4Sql.net and I found it wonderful but I think they are missing features of Encryption which is really necessary.

  • Simple but useful tips. Thanks.

  • George livingston
    April 26, 2011 11:59 am

    HI sir,

    Very nice.. It helped me a lottttt……

  • Thanks a bunch…Great tips

  • Hi Dave,

    Nice Article

  • hi sir,

    i am using cursors for looping
    any alternative.

    problem statement is i want pass cursor value in where condition in select statement

    example:

    select count(*) from table

    — inner joins

    where id= cursor value

  • I think this point is bit misleading: Use IF EXISTS (SELECT 1) instead of (SELECT *):

    The column list with SELECT in IF EXISTS () is ignored, so you provide column list or * it just ignores them, here is an example:

    if exists (select 1/0)
    select 'yes'
    else
    select 'no'
    GO

    select 1/0
    GO

  • plz sombody resolve my problem
    i have a query in sql server and i have to change the results into xml form through stored procedure
    plz sombody tell
    thanks

  • Hi sir

    please give a very importance point of store procedures and function Difference

  • roopali pandey
    August 18, 2011 12:32 pm

    Hii Pinal,

    Nice article.
    Thanks for the information or tips you provide for optimization in SQL.

  • nice tips pinal thks

  • Hi Pinal,

    Nice article.

    But as you mentioned there in your article-
    “always refer the objects with qualified name in the stored procedure” like:
    —————————————————————————————————
    SELECT * FROM dbo.MyTable -- Preferred method
    -- Instead of
    SELECT * FROM MyTable -- Avoid this method
    --And finally call the stored procedure with qualified name like:
    EXEC dbo.MyProc -- Preferred method
    --Instead of
    EXEC MyProc -- Avoid this method
    —————————————————————————————————
    we could face problem when we have more than one database position like a test database server and a live database and database admin are different or database admin can be changed time to time, in that case to maintain the synchronization we need to change the query regarding database admin, which is quite impossible for a developer.

    • nakulvachhrajani
      September 29, 2011 8:36 pm

      Hello, Amit!

      User-schema separation introduced in SQL Server 2005 was meant to avoid this situation that you are talking about. Users are no longer directly linked to an object.

      In the qualified object name that Pinal demonstrates, the “dbo” is not the user name, but the default schema name.

  • Hello Pinal Sir,

    Good Article.

    Kamlesh Gupta

  • Swapnil Kamble
    October 8, 2011 11:34 am

    Use schema name with object name:

    Hi sir i have some doubt regarding the above explanation

    I try select * from dbo.Categories in Northwind db
    execution plan showing Clustered index scan Estimated cpu cost 0.003125
    and select cached plan size 9B

    after that i try select * from Categories
    execution plan showing Clustered index scan Estimated cpu cost 0.003125
    and select cached plan size 9B

    both the execution plans are same.

    so why we need to put schema name with object name

    regards,
    swapnil K

    • SQL Server takes some extra time to determine the schema of the object if it is omitted. This time is really so minimal that you cannot find a difference at all

  • Re: Swapnil,

    For this situation to work, as Pinal is suggesting, you will have to take note of the time taken before the cached plan is found. Having the qualified schema name reduces the time taken to search for the correct plan.

    Obviously we’re splitting hairs here, but in the world of DBA’s that’s often what we do if there’s a requirement to attain the fastest possible response.
    Dallas

  • Hi Pinal,

    very good article and informative too.

