I am fortunate to work on different challenges every week in SQL Server Performance Tuning. The other day, during a Comprehensive Database Performance Health Check, I ran into a very interesting situation with TempDB files. Let us understand this performance tuning story.
Brief History
Recently a customer hired me to help them with a performance tuning issue. While we went through various health check items, we realized that they had a performance issue on one of their drives. The best possible solution was to move one of their busy databases to another drive. However, moving the database to another drive was not possible because that drive was occupied by 32 TempDB files.
After careful investigation, we realized that over 97% of the TempDB space was empty; the files were occupying a large amount of space only because they had been pre-sized initially. Additionally, 32 TempDB files were absolutely not needed.
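As a side note, here is a minimal sketch of how you could check for yourself how much of each TempDB data file is actually in use, based on the standard DMV sys.dm_db_file_space_usage (the column aliases are my own):

```sql
-- Show size, used space, and free space per TempDB data file.
-- Page counts are multiplied by 8 KB and divided by 1024 to get MB.
USE [tempdb];
GO
SELECT  f.name AS [FileName],
        f.size * 8 / 1024 AS [SizeMB],
        su.allocated_extent_page_count * 8 / 1024 AS [UsedMB],
        su.unallocated_extent_page_count * 8 / 1024 AS [FreeMB]
FROM    sys.dm_db_file_space_usage AS su
        INNER JOIN sys.database_files AS f
            ON su.file_id = f.file_id;
GO
```

If FreeMB is consistently close to SizeMB across the files, as it was for this client, the files are mostly empty pre-sized space.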
Lots of people think that they need one TempDB file per core or CPU thread. I have no idea where this idea started, but it is not accurate all the time. The reality is that we need as many TempDB files as our workload requires, not one per core/CPU thread.
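One hedged way to judge whether your workload actually needs more TempDB files is to look for allocation-page latch contention while the system is busy. The query below is a sketch using sys.dm_os_waiting_tasks; it filters for PAGELATCH waits on resources in database 2 (tempdb), where pages such as 2:1:1 (PFS), 2:1:2 (GAM), and 2:1:3 (SGAM) indicate allocation contention:

```sql
-- Look for PAGELATCH_* waits on tempdb (database_id = 2) allocation pages.
-- Frequent waits on resources like 2:1:1, 2:1:2, or 2:1:3 suggest
-- contention that additional TempDB files can relieve.
SELECT  session_id,
        wait_type,
        wait_duration_ms,
        resource_description
FROM    sys.dm_os_waiting_tasks
WHERE   wait_type LIKE 'PAGELATCH%'
        AND resource_description LIKE '2:%';
```

If this rarely returns rows under load, adding more TempDB files is unlikely to help your workload.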
I will write a detailed blog post about this in the future; for now, let us get back to the original topic of this blog: how to remove the extra TempDB files.
Removing Extra TempDB Files
If you want to remove TempDB files, you can use the following script. Please note that in SQL Server, you can't remove a file unless it is empty. This is the reason I am emptying the TempDB file first and, right after that, removing it. You have to run the entire script in a single batch.
USE [tempdb]
GO
DBCC SHRINKFILE (N'tempdev02', EMPTYFILE)
GO
ALTER DATABASE [tempdb] REMOVE FILE [tempdev02]
GO
This worked perfectly fine for my client, and I am confident that if you are facing an error while removing a TempDB file because it is not empty, you may also use the above script to remove the files.
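After removing the files, you can verify which TempDB files remain with a quick query against sys.master_files (a minimal sketch; the size arithmetic converts 8 KB pages to MB):

```sql
-- List the TempDB files that remain after removal.
SELECT  name,
        physical_name,
        size * 8 / 1024 AS [SizeMB]
FROM    sys.master_files
WHERE   database_id = DB_ID(N'tempdb');
```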
Reference: Pinal Dave (https://blog.sqlauthority.com)
2 Comments
My recollection is hearing about the rule of 1 tempdb file per processor in the mid-2000s, and that it was not a brand new rule at the time. This means the rule pre-dates multi-core processors (and version 2005). No test data accompanied the rule, but there may have been an MS KB article citing contention on the PFS page(s).
I am inclined to believe that someone ran a test on a 4-way SMP system (the standard for high-end databases, maybe a ProFusion 8-way), finding that 2 files were better than one and 4 files were better than 2, but not necessarily 2X better, perhaps just 1.2X better, which would still be worthwhile.
I recall hearing that PFS contention was reduced in 2005 (SP), but the use of tempdb increased in some version. In any case, a test based on a 4-core system should not be extrapolated too far, perhaps 8, but definitely not 32 or 64, especially since the original reason has since been (partially) resolved.
In high-end systems, storage is distributed over multiple volumes and multiple IO channels. Heavy-volume filegroups should have one or two files per volume and path.
It's funny that, working where you work, you think that "if it fixed this SQL, it will fix em all" XD
Seriously though, my current theory is that this worked because RCSI or snapshots were not enabled, or they were and ADR was enabled.
If those are enabled (without ADR), tempdb is being used for row versioning, and so far I am not sure how to remove the files without perhaps changing those settings, removing the files, and resetting them? I need to test this on a mirror server.