The recompilation process is essentially the same as compilation and degrades server performance. In SQL Server 2000 and earlier versions, this was a serious issue, but in SQL Server 2005 its severity has been significantly reduced by a new feature called statement-level recompilation. When SQL Server 2005 recompiles a stored procedure, only the statement that caused the recompilation is compiled, rather than the entire procedure. Recompilation occurs for the following reasons:
- On schema change of objects, such as:
- Adding or dropping a column to/from a table or view.
- Adding or dropping constraints, defaults, or rules to/from a table.
- Adding or dropping an index on a table or indexed view, if the index is used by the plan.
- Adding or dropping a trigger from a table.
- Dropping statistics from a table that are used by the plan.
- On change of SET options: When a compiled plan is created, SQL Server also stores the environmental settings of the connection (the SET options) with it. If the same stored procedure is executed by another connection that has different SET options, the existing cached plan is not reused. To reduce recompilation caused by environment changes, we should not change the SET options in the connection settings or inside the stored procedure (see the sketch after this list).
- On statistics change of tables: Every time SQL Server uses an already cached compiled plan, it checks whether the plan is still optimal for the current amount of data in the underlying tables. It could be that the amount of data in the underlying tables has changed so much that the previously created plan is no longer optimal. For each table and index, SQL Server maintains a modification counter, and if the counter value exceeds the defined threshold, the previously compiled plan is considered stale and a new plan is created.
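For illustration of the SET options point, here is a minimal sketch (the procedure name is a placeholder) that inspects the plan cache and shows the set_options value stored with each cached plan; if the same procedure appears more than once with different set_options values, different connection environments have each produced their own plan:
-- Show the SET options recorded as part of the plan-cache key
SELECT cp.usecounts,
       pa.attribute,
       pa.value,
       pa.is_cache_key
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE st.TEXT LIKE '%YourProcedureName%'   -- placeholder procedure name
  AND pa.attribute = 'set_options';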
Detecting recompilations: The query below retrieves the top 10 statements with the highest recompilation counts. Here, plan_generation_num is a sequence number that distinguishes between instances of a plan after a recompile, so a high value indicates that the statement has been recompiled many times.
SELECT TOP 10
qs.plan_generation_num,
qs.execution_count,
DB_NAME(st.dbid) AS DbName,
st.objectid,
st.TEXT
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st
ORDER BY plan_generation_num DESC
Another tool for getting recompilation details is SQL Profiler. Along with the statement text, it also reports the reason for the recompilation in the EventClass and EventSubClass columns.
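On SQL Server 2008 and later (not covered in the original post, so treat this as an assumed alternative), the same information is exposed through the sql_statement_recompile Extended Event, which records the recompile cause without the overhead of a full Profiler trace. A minimal sketch, with an illustrative session name:
-- Lightweight recompile tracking via Extended Events
CREATE EVENT SESSION TrackRecompiles ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
    (ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.ring_buffer;

-- Start collecting recompile events
ALTER EVENT SESSION TrackRecompiles ON SERVER STATE = START;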
Reducing recompilations: We have no option to avoid “schema change” and “SET options” based recompilations, but we do have the following query and stored procedure hints to avoid “statistics change” based recompilations:
- KEEP PLAN hint: The modification counter threshold for a temporary table is 6. This implies that when a stored procedure that creates a temporary table inserts 6 or more rows into that table, the statement will be recompiled as soon as the table is next accessed. For permanent tables, this threshold is at least 500. We can raise the temporary table’s threshold (6) to the same as that of a permanent table (500) by using the KEEP PLAN query hint in the statement where the temporary table is used. For example,
SELECT TT.col4, SUM(PermTable.col1)
FROM dbo.PermTable
INNER JOIN #TempTable AS TT
ON PermTable.col1 = TT.col2
GROUP BY TT.col4
OPTION (KEEP PLAN);
- KEEPFIXED PLAN hint: This hint completely avoids “statistics change” based recompilation of a query. For example,
SELECT col1, col2
FROM dbo.PermTable
WHERE col3 < 100
OPTION (KEEPFIXED PLAN);
- Use table variables instead of temporary tables: Because a change in the cardinality of a table variable does not cause recompilations, consider using a table variable instead of a temporary table when faced with excessive recompilation. However, table variables can lead to poorer query plans: distribution statistics are not stored for them, and their cardinality is only available during recompilation, not during initial compilation. One has to check whether this is acceptable and make the appropriate trade-off, as in the sketch below.
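As a minimal sketch of this swap (table and column names are illustrative, mirroring the earlier example), the temporary table is simply replaced by a table variable:
-- Declare a table variable instead of CREATE TABLE #TempTable;
-- cardinality changes in @TempTable do not trigger recompilation
DECLARE @TempTable TABLE (col2 INT, col4 INT);

INSERT INTO @TempTable (col2, col4)
SELECT col1, col4
FROM dbo.SourceTable; -- hypothetical source of the intermediate rows

SELECT TT.col4, SUM(PermTable.col1)
FROM dbo.PermTable
INNER JOIN @TempTable AS TT
ON PermTable.col1 = TT.col2
GROUP BY TT.col4;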
Reference: Pinal Dave (https://blog.sqlauthority.com)
11 Comments
As for SET options, I’ve seen this happen a few times: a bunch of programmers and DBAs sitting around scratching their heads as to why a particular stored procedure runs in 45 seconds when called from a web application, yet runs in five seconds when the DBA runs it in SSMS. I ran into this quite a few times, and it’s often because of the manner in which execution plans are created. If a relatively poor execution plan is created because of an atypical call to the procedure, subsequent calls to that procedure may suffer.
One of the first things I would do when I heard a stored procedure was behaving atypically:
sp_recompile – this marks the stored procedure so that it is recompiled in its entirety on its next execution.
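For example (the procedure name is a placeholder), this discards the cached plan so that a fresh one is built the next time the procedure runs:
-- Mark the procedure for recompilation on its next execution
EXEC sp_recompile N'dbo.uspGetOrders';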
It is very important to keep SET options in mind when troubleshooting the performance of procedures. I would use Profiler to obtain them for the web servers, and this post explains really nicely how to get them for your current SSMS session.
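As a quick sketch (not taken from the linked post, just one common way to do it), the SET options of the current session can be listed with DBCC USEROPTIONS, or read for any session from sys.dm_exec_sessions:
-- Options in effect for the current session
DBCC USEROPTIONS;

-- Selected options for another session (session_id 53 is a placeholder)
SELECT session_id, ansi_nulls, ansi_padding, arithabort, quoted_identifier
FROM sys.dm_exec_sessions
WHERE session_id = 53;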
Great topic to bring up Pinal!
Hi,
Thanks for the query on finding out recompilations.
Thank you
This is really helpful
I have to load Excel data into SQL Server. While loading the data, some columns in the database should also get updated. Then I have to select the email address column to send mail through an SMTP connection. Please help me analyse this. Actually, the Excel file could be located in any location.
Thanks, this has really helped me reduce the CPU load!
Great Allen!
Hi,
Excellent post I must say. I have a related question that I hope helps other users also, related to optimising precompilation before client deployment:
We are about to deploy our solution to the client site, and I want to ensure that this application (which is a Silverlight app with a SQL Server back end) runs as fast as possible the first time. From my understanding, I would need to run the stored proc beforehand so that an entry is made in dm_exec_cached_plans.
My questions around this are:
1) If the data in the underlying tables is the same, will different parameter variations need to be called with the stored proc, so that new entries in dm_exec_cached_plans can be inserted, or will the same stored proc use the same dm_exec_cached_plans entry regardless of parameter values?
I ask as the article mentions that:
“It could be that the amount of data in the underlying tables has changed so much that the previously created plan is no longer optimal”
This refers to the underlying physical table, which is different from the derived table that my stored proc will return, which, depending on the parameters passed, can return very different amounts of data.
2) In the same vein as above, do I need to log in as each user from the Silverlight application so that an execution plan is stored for each one, or is it sufficient to simply log in as a single user, which will in turn create an entry in dm_exec_cached_plans, so that each user will get the page faster when they first log in?
Thus you can see what I want to do, so that on a new deployment or restore we can carry out a process of running certain stored procs or logging into the application as different users, so that the user experience is as good as it can be.
Any suggestions for what I am sure is an existing and hopefully already solved issue would be greatly appreciated :-)
This is a very good description. This topic really helped me a lot… Thanks :)
Hi Pinal
We have a query which contains multiple join statements and a date parameter. When we run the query it takes a lot of time, but with OPTION (RECOMPILE) it takes very little time.
What would you suggest in such a case?
Thanks In Advance
Hi Pinal
Is ordering by plan_generation_num correct? As per BOL, it’s “A sequence number that can be used to distinguish between instances of plans after a recompile.”
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-stats-transact-sql?view=sql-server-2017
Thanks
I have noticed an issue in production where a procedure is being recompiled at a very high frequency. Although it has not posed any direct problems for system stability, it has the potential to cause disruption under higher load. In that procedure I use temporary tables, and I also use dynamic SQL to add new columns to the temporary table. Please provide me some solution.