Recently I was going over a few advanced options of SQL Server 2012 in the database properties dialog, and I found a new option on the properties screen.
Properties screen of SQL Server 2008 R2
Properties screen of SQL Server 2012
I got a little curious and decided to find out what this new option indicates. Learning more about this subject turned out to be an excellent experience.
The default value of this option is 0. This value is directly related to checkpoints. When it is set to a value greater than 0 (zero), the database uses indirect checkpoints and establishes an upper bound on recovery time for this database.
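The same option can also be set with T-SQL instead of the properties screen. A minimal sketch, assuming a hypothetical database named MyTestDB and a target of 60 seconds (both are placeholders, not values from this article):

```sql
-- Enable indirect checkpoints by setting an upper bound on recovery time.
-- A value of 0 (the default) means classic automatic checkpoints are used.
ALTER DATABASE MyTestDB
SET TARGET_RECOVERY_TIME = 60 SECONDS;
```

As the note below stresses, try a change like this on a development server first and validate its effect before touching production.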
NOTE: Do not attempt this right away on your production database. Make sure that you try this out on your development server and validate it before making any changes on your production server. Do not change this option unless you are 100% sure about what you are doing and understand the risks as well as the implications.
Indirect checkpoints are an alternative to automatic checkpoints (which are good for most systems and should be left alone if you are not sure about the implications). If the system crashes unexpectedly, indirect checkpoints potentially provide a faster recovery time than automatic checkpoints.
Reasons for enabling indirect checkpoint
- It may improve database recovery time
- It may reduce checkpoint I/O spikes, as it continuously writes dirty pages to disk in the background
Reasons for NOT enabling indirect checkpoint
- In an OLTP workload, it can increase the overall write load on the server by continuously writing dirty pages to disk in the background, which may reduce performance.
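Before weighing these pros and cons for a particular database, you can check whether indirect checkpoints are already in effect. A quick sketch using the catalog view (any nonzero value means indirect checkpoints are enabled):

```sql
-- 0 = automatic checkpoints (default); > 0 = indirect checkpoints
SELECT name,
       target_recovery_time_in_seconds
FROM sys.databases;
```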
I have now learned the pros and cons of this feature, and I am going to test it out on servers with various workloads to see what effect it has. You can read further about this feature on MSDN.
Reference : Pinal Dave (https://blog.sqlauthority.com)
Hello Pinal, I want to download SQL Server 2012. Where can I get it? Plz…
exec(N'update membermaster set isactive = ''A'' where msrno in (' + @MemberList + ' ) ') Hey Pinal, what is N here after EXEC?
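For context on the question above: the N prefix in T-SQL marks a Unicode (nvarchar) string literal rather than a varchar one. A small illustrative sketch (the table and values are hypothetical, not from the comment):

```sql
-- Without N, non-ASCII characters can be silently lost
-- when the literal is interpreted as varchar.
DECLARE @sql nvarchar(max);
SET @sql = N'update membermaster set isactive = N''A'' where msrno in (1, 2)';
EXEC (@sql);
```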
You can download it from
Thanks a lot @Balmukund Sir ! It may help me a lot..
Good to know about this. However, in the systems I work on, I don’t quite have a use-case where I see myself setting the “Target recovery time” value to anything other than the default (i.e. 0).
I would be interested to know of any case study or a hypothetical scenario where this can be useful. Also, @Pinal: I am looking forward to reading about the results of your workload oriented tests.
Good article on the checkpoint feature. Would be interested to know case studies/projects where this would be used.
We recently moved to 2012 on a huge 64-processor machine with 250 GB of RAM - super fast - and we are experiencing the dreaded "SQL Server has encountered NNNN occurrence(s) of I/O requests taking longer than 15 seconds to complete on file:…" I'm reading that the new indirect checkpoints are supposed to "smooth out" I/O spikes, but is it possible for them to have a negative effect on I/O? Our recovery interval is still set to zero, so I am unclear whether we even have indirect checkpoints working.
Increasing the recovery time will have what kind of impact? At this point it's far more important to us to maximize throughput than recovery time, but I suspect we may be having the 833 issues because of occasional I/O spikes (occasional spurts of lots of updates being issued between checkpoints). Would using indirect checkpoints "smooth out" the I/O? From what I'm seeing, changing the recovery interval from zero to some other number would cause fewer checkpoints, and greater I/O spikes during those checkpoints.
The SAN people are insisting it must be SQL Server, because they don't see anything about the SAN or network having issues, but we believe it's a SAN issue: not being able to handle the spikes (possibly during checkpoints). I don't particularly care what is to blame; I am simply looking for a way to stop the 833s. Would issuing more frequent, and therefore smaller, checkpoints smooth out the I/O?
Change the default Checkpoint frequency to something like 5 minutes and monitor the performance.
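One way to change the server-wide checkpoint frequency is the `recovery interval` server option. A sketch of the suggestion above (5 minutes is the value from the comment; note this is a server-level setting, unlike the per-database target recovery time):

```sql
-- 'recovery interval' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Target roughly 5 minutes of recovery work between automatic checkpoints
EXEC sp_configure 'recovery interval (min)', 5;
RECONFIGURE WITH OVERRIDE;
```

After the change, monitor checkpoint-related I/O and the 833 messages to see whether the spikes actually flatten out.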
I am assuming you have everything else in order: update statistics with async ON, statistics updated every day, index fragmentation levels checked, query plans OK, no major missing indexes, good disk speed, MAXDOP set to limit the number of cores used for parallelism, a decent network, and update usage run regularly. If you have very large tables, you can also compress them if CPU usage is not an issue, and you can consider partitioning. We use one file per filegroup. Hope this is of some help.