SQL SERVER – Get File Statistics Using fn_virtualfilestats

Quite often when I am staring at SSMS I wonder what is going on under the hood of my SQL Server. I often want to know which database is very busy and which database is a bit slow because of an IO issue. Sometimes I think at the file level as well: I want to know which MDF or NDF file is the busiest and doing the most work. The following query gets those results very quickly.

SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name,
mf.physical_name, vfs.BytesRead, vfs.BytesWritten,
vfs.IoStallMS, vfs.IoStallReadMS, vfs.IoStallWriteMS,
vfs.NumberReads, vfs.NumberWrites,
(Size*8)/1024 Size_MB
FROM ::fn_virtualfilestats(NULL,NULL) vfs
INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId
AND mf.FILE_ID = vfs.FileId
GO

When you run the above query you get a lot of valuable information, such as the size of each file and how many reads and writes have been done against it. It also displays the read/write volume in bytes. If there has been any stall (delay) in reads or writes due to IO, you can see that as well.
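
If you want rough latency figures instead of raw counters, the same columns can be combined. The following is a small sketch built on the query above (not part of the original script): it divides the stall columns by the number of operations to get an average stall per read and per write for each file. The NULLIF guards are my addition to avoid division by zero on files that have not been read or written yet.

-- Average stall (ms) and average bytes per operation for each file.
-- NULLIF avoids divide-by-zero for files with no reads or writes yet.
SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name, mf.physical_name,
vfs.IoStallReadMS / NULLIF(vfs.NumberReads, 0) AvgReadStall_ms,
vfs.IoStallWriteMS / NULLIF(vfs.NumberWrites, 0) AvgWriteStall_ms,
vfs.BytesRead / NULLIF(vfs.NumberReads, 0) AvgBytesPerRead,
vfs.BytesWritten / NULLIF(vfs.NumberWrites, 0) AvgBytesPerWrite
FROM ::fn_virtualfilestats(NULL,NULL) vfs
INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId
AND mf.FILE_ID = vfs.FileId
ORDER BY AvgReadStall_ms DESC
GO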

I keep this script handy but had not shared it on the blog earlier.

Reference: Pinal Dave (https://blog.sqlauthority.com)

9 Comments

  • Hi Dave,

    This script is really helpful. I am often asked by users to provide some details about this when they are having performance issues.

    Is it possible to know which tables are busy as well?

    Regards,
    Jimmy Liew

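As a side note on the question above about busy tables: one hedged sketch (not from the original post) is to use sys.dm_db_index_usage_stats, which records seeks, scans, lookups, and updates per index and can be rolled up per table for the current database. Like fn_virtualfilestats, these counters are cumulative since the instance last started.

-- Sketch: busiest user tables in the current database by cumulative index activity.
-- Counters in sys.dm_db_index_usage_stats reset when the instance restarts.
SELECT OBJECT_NAME(ius.object_id) TableName,
SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) TotalReads,
SUM(ius.user_updates) TotalWrites
FROM sys.dm_db_index_usage_stats ius
WHERE ius.database_id = DB_ID()
AND OBJECTPROPERTY(ius.object_id, 'IsUserTable') = 1
GROUP BY OBJECT_NAME(ius.object_id)
ORDER BY SUM(ius.user_seeks + ius.user_scans + ius.user_lookups + ius.user_updates) DESC
GO
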
  • Great script, I have been watching an IO problem while we spec out new hardware and this is more elegant than my solution. Very helpful.

  • Thank you for these, they are very helpful. Do you know if the counters can be cleared manually or do you have to restart the Instance?

  • Hi Dave, thanks for the nice article. It would be great if you could provide some threshold levels to identify issues while reviewing the data.

    Thanks!

  • Some of my larger databases, like TEMPDB, were throwing the error “Arithmetic overflow error converting expression to data type int.”, so I had to make the following change to the script, casting Size as bigint in the expression.
    Here is the complete script with the fix:

    SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name,
    mf.physical_name, vfs.BytesRead, vfs.BytesWritten,
    vfs.IoStallMS, vfs.IoStallReadMS, vfs.IoStallWriteMS,
    vfs.NumberReads, vfs.NumberWrites,
    (cast(Size as bigint)*8)/1024 Size_MB
    FROM ::fn_virtualfilestats(NULL,NULL) vfs
    INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId
    AND mf.FILE_ID = vfs.FileId

  • The problem with this dynamic management function is that it gives cumulative values. We should see the difference values, not the cumulative values. Since these numbers are big, it’s really difficult to find the difference.

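One hedged way to work around the cumulative counters mentioned above is to snapshot them into a temp table, run the workload of interest (or simply wait), and then compute the deltas. The sketch below uses sys.dm_io_virtual_file_stats, the DMV form of fn_virtualfilestats; the one-minute delay is just an arbitrary sample window.

-- Sketch: I/O activity over a sample interval instead of since instance startup.
SELECT database_id, file_id, num_of_reads, num_of_writes,
io_stall_read_ms, io_stall_write_ms
INTO #io_baseline
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:01:00'; -- arbitrary sample window; run the workload here instead

SELECT DB_NAME(cur.database_id) DatabaseName, mf.name,
cur.num_of_reads - base.num_of_reads ReadsInInterval,
cur.num_of_writes - base.num_of_writes WritesInInterval,
cur.io_stall_read_ms - base.io_stall_read_ms ReadStallMS,
cur.io_stall_write_ms - base.io_stall_write_ms WriteStallMS
FROM sys.dm_io_virtual_file_stats(NULL, NULL) cur
INNER JOIN #io_baseline base ON base.database_id = cur.database_id
AND base.file_id = cur.file_id
INNER JOIN sys.master_files mf ON mf.database_id = cur.database_id
AND mf.file_id = cur.file_id;

DROP TABLE #io_baseline;
GO
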
  • dharmeshghelani
    May 24, 2013 8:42 am

    How much delay is reasonable? I have seen latencies ranging from single digits to 375 ms. The application hosted is MS Axapta and this is a production environment.

  • Hi Pinal ,

    Good article. Please can you help me find out the IO problem using the results? I found that the stall columns show some values.

    I am not sure how to confirm that there is a problem with IO. I am sometimes getting latch IO wait types, but there is not much blocking.

    Please help me with it. I have been facing this problem for a month and have not been able to figure out how to fix it.

  • Przemek von Wielebny
    December 18, 2013 6:20 pm

    Hi Pinal

    I have a real problem with understanding the IoStallWriteMS column we get from fn_virtualfilestats and also @@IO_BUSY.

    Unfortunately, both of them seem wrong… I know it’s a little bold statement, but please take a look at my test and try to point out the errors.

    My test is divided into two parts. The first part creates an empty database, creates a table, and creates a procedure which inserts 200,000 rows into the table. The second part takes a snapshot of dm_io_virtual_file_stats and @@IO_BUSY before and after running the procedure.

    The main problem is:

    Without a transaction, the timing from dm_io_virtual_file_stats seems correct, but with a transaction it is totally strange: io_stall_write_ms is 2x bigger than the elapsed time of the whole test. It could be possible with parallel processing, but @@CPU_BUSY is just too low (write time much higher than CPU usage is impossible in my universe :-) ). On the other hand, @@IO_BUSY is surprisingly low: the whole test takes 30 seconds and the difference in the @@IO_BUSY snapshots is just 1 second.

    The full details about the test, the scripts used, and screenshots are posted on a different forum. Below you can find a link:

    Can you help me to find out how it really works?

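For reference, a minimal sketch of the before-and-after snapshot approach described above, assuming the insert workload runs between the two snapshots. @@IO_BUSY reports ticks, so it is multiplied by @@TIMETICKS (microseconds per tick) to convert it to a time value. Note that io_stall_write_ms accumulates per pending I/O request, so with concurrent outstanding writes it can legitimately add up to more than wall-clock time.

-- Sketch: snapshot @@IO_BUSY and per-file write stalls around a workload.
DECLARE @io_busy_before bigint, @stall_before bigint;

SELECT @io_busy_before = @@IO_BUSY;
SELECT @stall_before = SUM(io_stall_write_ms)
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);

-- ... run the insert workload here ...

SELECT (@@IO_BUSY - @io_busy_before) * @@TIMETICKS / 1000.0 io_busy_ms_delta,
SUM(io_stall_write_ms) - @stall_before io_stall_write_ms_delta
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);
GO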
