SQL SERVER – NTFS File System Performance for SQL Server

Note: Before applying any of the suggestions in this article, consult your IT infrastructure administrator; applying them without proper testing can damage your system. Let us learn about NTFS file system performance.


Question: “Pinal, we have 80 GB of data, including all the database files, stored on an NTFS file system. We have proper backups set up. Do you have any suggestions for improving our NTFS file system performance? Our SQL Server box runs only SQL Server and nothing else. Please advise.”

Questions like the one above often send me deep into thought. Honestly, I know a lot, but I believe there are plenty of lists that are best built with the community's knowledge. Today I need your help to complete this list: I will start it, and you help me finish it.

NTFS File System Performance Best Practices for SQL Server

  • Disable Indexing on disk volumes
  • Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
  • Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
  • Keep some space empty (say 15%, as a reference point) on the drive if possible
  • (Only on the FILESTREAM data storage volume) Defragment the volume
  • Add your suggestions here…
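As a quick sketch, the two FSUTIL settings from the list above can be applied from an elevated (Administrator) command prompt. These are standard Windows commands, but as noted earlier, test them on a non-production box first; some changes may require a reboot, and disabling 8.3 names affects only files created afterwards.

```shell
:: Check the current settings before changing anything.
fsutil behavior query disable8dot3
fsutil behavior query disablelastaccess

:: Disable generation of short 8.3 file names (system-wide).
fsutil behavior set disable8dot3 1

:: Disable last-access-time tracking.
fsutil behavior set disablelastaccess 1
```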
The point that often sparks a pretty big debate is the NTFS allocation unit size. I have seen that on the disk volume which stores FILESTREAM data, increasing the allocation unit size from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server; every system is different, and so are the files stored on it. This is where I would like to request that you share your experience related to NTFS allocation unit size.
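To check a volume's current allocation unit size, look at the "Bytes Per Cluster" value reported by `fsutil`. Note that the allocation unit size can only be chosen at format time, so moving a FILESTREAM volume from 4K to 64K clusters means reformatting it. The drive letter F: below is just an example; the `format` command destroys all data on the volume, so back up first.

```shell
:: Report NTFS details for the volume; "Bytes Per Cluster" is the allocation unit size.
fsutil fsinfo ntfsinfo F:

:: Reformat with a 64K allocation unit (DESTROYS all data on F:).
format F: /FS:NTFS /A:64K /Q
```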

If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify the list. Please note that the list above was prepared assuming SQL Server is the only application running on the computer.

The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting.

Reference: Pinal Dave (https://blog.sqlauthority.com)

3 Comments

  • Joe Terrapin
    July 6, 2012 8:43 am

    thanks dave. much appreciated :-)

  • Luiz Mercante
    July 7, 2012 3:02 am

    Really interesting article. I would like to understand: if the data files are huge, then even though SQL Server uses 8KB pages, I think it is useful to use 64KB blocks, because it reduces the number of pointers in the partition index and the fragmentation at the disk level. If anybody has thoughts on that, I would really appreciate them.

  • Ayman El-Ghazali
    July 7, 2012 8:19 am

    SSDs work best with random seeks, not sequential ones. Since database reads are random, an SSD would probably improve performance; plus SSDs are generally faster than regular magnetic disks. For log files, that is a hard question to answer: SSDs are faster, but log file writes are sequential, which is where regular HDDs also do well. I can’t provide any data to say which would be better in that case.

    One thing to keep in mind with SSDs is that when they fail, your data is GONE! There is no recovery like there is with magnetic disk. So make sure you use some sort of RAID architecture with SSDs; RAID 5 or 6 would be pretty good since it is more affordable for drives which cost a fortune.

    I would disagree a bit with the 15% empty space on the drive that you mentioned. It depends on the disk size; 15% of 1TB is different from 15% of 100GB. Larger drives can be a little slower, but they tend to keep data closer to the inner part of the disk platter, which could improve performance. The reason is the higher data density on the drive, so the inner part of a 1TB drive could potentially hold the entire 80GB of databases. However, on a smaller drive the data would be distributed further across the platter, so the head would have to move a lot more for read seeks.

    It definitely needs a bit more research.

