SQL SERVER – Detect Virtual Log Files (VLF) in LDF

In one of my recent training engagements, I was asked whether it is true that there are multiple small log files inside the large log file (LDF). I found this question very interesting, as the answer is yes. Multiple small Virtual Log Files, commonly known as VLFs, together make up an LDF file. VLFs are written sequentially, and as a result the LDF file is written sequentially as well. This leads to another point: in most cases one does not need more than one log file.

In short, you can use the following DBCC command to find out how many Virtual Log Files, or VLFs, are present in your log file.

DBCC LOGINFO

The result of the above command will look similar to what is displayed in the following image.

In the Status column, a value of 2 means the VLF is active, and a value of 0 means the VLF is inactive.
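
If you want to capture this output programmatically, here is a minimal sketch that loads DBCC LOGINFO into a temporary table and counts the VLFs. It assumes the pre-SQL Server 2012 column layout; SQL Server 2012 and later add a RecoveryUnitId column at the front, so the table definition would need one extra column there.

-- Capture DBCC LOGINFO output for the current database and count the VLFs.
CREATE TABLE #loginfo
(
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    Status      INT,
    Parity      INT,
    CreateLSN   NUMERIC(38, 0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

-- Total VLFs, and how many are currently active (Status = 2).
SELECT COUNT(*)                                    AS TotalVLFs,
       SUM(CASE WHEN Status = 2 THEN 1 ELSE 0 END) AS ActiveVLFs
FROM #loginfo;

DROP TABLE #loginfo;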

Reference: Pinal Dave (http://blog.SQLAuthority.com)

16 thoughts on “SQL SERVER – Detect Virtual Log Files (VLF) in LDF”

  1. Hi Pinal,
    It is always a pleasure to read your blog. Wish you and your family a happy and prosperous New Year 2011. Wishing you continued success in your profession as well.
    Ramdas


  2. It's worth mentioning that too many VLFs can impact performance – usually caused by a very small autogrow setting, i.e. make sure you size your log file as accurately as you can.
    If you have thousands of VLFs this can really affect replication as well, so if your replication is performing badly you might want to check for VLFs.
    We have been through the pain with replication and VLFs and hope no one else has to :-)
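
A minimal sketch of the up-front sizing John mentions. The database name MyDatabase and the logical log file name MyDatabase_log are placeholders; the real logical name can be looked up in sys.database_files.

-- Size the log once, with a reasonably large growth increment, instead of
-- relying on many tiny autogrowths (which is what creates thousands of VLFs).
-- Placeholder names: find the real logical log file name first, e.g.
-- SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8192MB, FILEGROWTH = 512MB);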


  3. As rightly said by John, one should keep a watch on VLFs. The more VLFs associated with a log file, the more impact on DML performance. This impact is also indirect, and people may find it difficult to figure out that VLFs are causing the DML performance degradation. On average, I have seen around 300+ VLFs in production environments, which is a very bad number. It is always advisable to have transaction logs properly planned from a maintenance perspective. You can find very nice blogs written by Kimberly Tripp and Paul Randal, who are my favourites, as Pinal is :)


  4. Pingback: SQL SERVER – Reduce the Virtual Log Files (VLFs) from LDF file « Journey to SQL Authority with Pinal Dave

  5. Dear John and Phani,

    I'm a newbie to Virtual Log Files (VLFs). I have a Microsoft SQL Server 2008 (64-bit) instance that Microsoft SharePoint 2007 is installed on and uses. I noticed the WSS_CONTENT database has 938 VLFs.

    Can you shed some light and give some advice?

    How do I trim down the VLFs?

    Also, I noticed the database log grew to over 200 GB.

    Thanks,
    Edwin


  6. Hi Edwin,
    My guess would be that your database is set to the Full recovery model and you aren't backing up the transaction log, so it keeps auto-growing by a small amount.
    You need to back up the transaction log so that you can then shrink it (you may need to repeat this 2 or 3 times).
    Once done, you will need to set up a regular backup for both the data and log files, e.g. a full database backup once a day and log backups once per hour.
    Hope this helps – you will find plenty of information on the web.
    Cheers, John
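
A minimal T-SQL sketch of the backup-then-shrink sequence John describes. The database name WSS_CONTENT, the logical log file name WSS_CONTENT_log, and the backup path are placeholders, and as noted above the pair of commands may need to be repeated 2 or 3 times.

-- Back up the log so that inactive VLFs become reusable.
BACKUP LOG WSS_CONTENT
TO DISK = N'D:\Backup\WSS_CONTENT_log.trn';
GO

USE WSS_CONTENT;
GO
-- Then shrink the log file as far as possible (target size is in MB).
DBCC SHRINKFILE (WSS_CONTENT_log, 1);
GO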


  7. Hi John,

    I'm not very good at T-SQL programming.

    So, I manually set up Microsoft SQL Server 2008 backups as follows:
    • Daily database full backup at 3:00 AM
    • Daily log backups at 7:00 PM, 11:00 PM, 3:00 AM, and 6:00 AM
    • Differential backups on Monday at 1:00 AM, Wednesday at 1:00 AM, and Saturday at 1:00 AM

    Please see my scripts:

    -- Daily backup
    BACKUP DATABASE WSS_CONTENT_SP01
    TO DISK = '\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01.bak'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = '\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Database backup WSS_Content_SP01 daily at 3:00 AM
    BACKUP DATABASE WSS_CONTENT_SP01
    TO DISK = '\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01.bak'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log backup WSS_Content_SP01 daily at 7:00 PM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Log_7PM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log backup WSS_Content_SP01 daily at 11:00 PM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Log_11PM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log backup WSS_Content_SP01 daily at 3:00 AM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Log_3AM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log backup WSS_Content_SP01 daily at 6:00 AM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_log_6AM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Differential WSS_Content_SP01 backup Monday at 1:00 AM
    BACKUP DATABASE WSS_Content_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Monday_1AM.dif'
    WITH DIFFERENTIAL, INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Differential WSS_Content_SP01 backup Wednesday at 1:00 AM
    BACKUP DATABASE WSS_Content_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Wednesday_1AM.dif'
    WITH DIFFERENTIAL, INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Differential WSS_Content_SP01 backup Saturday at 1:00 AM
    BACKUP DATABASE WSS_Content_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Saturday_1AM.dif'
    WITH DIFFERENTIAL, INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    I set up those schedules as jobs.

    I'm not so clear on how to shrink, and I'm a bit concerned since the database contains the SharePoint data and is in production.

    Would you please advise and help?

    Thanks,
    Edwin


  8. I also read Paul Randal's post "Why you should not shrink your data files".

    During my testing, I found that SHRINKDATABASE creates fragmentation of almost 99 percent.


  9. Hi Edwin,
    You should only be looking to shrink the log file, not the data file, or you will end up fragmenting the data as per Paul Randal's post.
    In SSMS, right-click on the database, select Tasks, then Shrink, then Files; in the Shrink File dialog that is shown, select 'Log' as the File Type.
    What you want to do is shrink the log file as close to 0 as possible, to remove the excess VLFs.
    You can then set a sensible size and auto-grow in the database properties.
    Regards, John


  10. Hello John,

    Thanks.
    So, DBCC SHRINKFILE (DB_Log, 1) should not create fragmentation in my production database.

    DBCC SHRINKFILE (DB_Log, 1)


  11. I didn't know where to put my question; I found this place and am sending it to Pinal Dave to get a solution.

    Hi Pinal,

    I am executing a SQL query from a variable and am getting an error;
    I think I am missing the concatenation trick.

    My SQL is
    =====
    Set @Update_Set = 'Update [NB_SF].[dbo].[mbrCustomer] Set '

    If @EVDESCRIPTION <> @EVDESCRIPTION_e And @EVDESCRIPTION <> ''
    Begin
    Set @setcount = 1
    Set @Update_Set = @Update_Set + '[EVDESCRIPTION]=' + @EVDESCRIPTION
    End
    Set @Update_set = @Update_set + ' Where [ID]=' + @ID
    --*******
    If @setcount > 0
    Begin

    Print @Update_set

    EXEC (@Update_set)

    End
    ===== Getting error below

    Update [NB_SF].[dbo].[mbrCustomer] Set [EVDESCRIPTION]=Test1 Where [ID]=D0134
    Msg 207, Level 16, State 1, Line 1
    Invalid column name 'D0134'.
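
The error occurs because the character values are concatenated into the dynamic statement without surrounding single quotes, so SQL Server parses them as column names (hence the Invalid column name 'D0134' message). Below is a minimal, self-contained sketch of one way to fix it; the variable and table names follow the comment above, while the data types and sample values are assumptions, and sp_executesql is shown as the safer, parameterized alternative.

-- Sketch only: names follow the comment above, data types are assumptions.
DECLARE @EVDESCRIPTION VARCHAR(4000) = 'Test1';
DECLARE @ID            VARCHAR(50)   = 'D0134';
DECLARE @Update_Set    NVARCHAR(MAX);

-- Wrap character values in single quotes, doubling any embedded quotes.
SET @Update_Set = N'UPDATE [NB_SF].[dbo].[mbrCustomer] SET '
                + N'[EVDESCRIPTION] = ''' + REPLACE(@EVDESCRIPTION, '''', '''''') + N''''
                + N' WHERE [ID] = '''     + REPLACE(@ID, '''', '''''')            + N'''';

PRINT @Update_Set;
EXEC (@Update_Set);

-- Safer alternative: parameterize the statement with sp_executesql.
EXEC sp_executesql
    N'UPDATE [NB_SF].[dbo].[mbrCustomer] SET [EVDESCRIPTION] = @desc WHERE [ID] = @id',
    N'@desc VARCHAR(4000), @id VARCHAR(50)',
    @desc = @EVDESCRIPTION, @id = @ID;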


  12. Hi Pinal,
    I'm interested in these VLF files, so let me check my conclusion:
    I don't need to set up multiple LDF files for one database, because the log file is always written sequentially?

    Is it better to create a big initial log file than a small one?

    I mean, is it okay if I set up one 30 GB log file with a 1 GB increment (backing up the transaction log every day), rather than a 10 GB log file with a 1 GB increment?


  13. Hi XOCOOL,

    You need to understand the transaction log (TLOG) architecture first, and how log files are maintained.

    # What is the recovery model of your DB?
    # If it is Simple, SQL Server will take care of the TLOG files.
    # If it is Full, how frequently are you taking LOG backups?

    If you need more details on LOG FILES:

    http://msdn.microsoft.com/en-us/library/ms179355.aspx

    Only then can anyone suggest the file size, which also depends on the DB size; for example, if you have a 10 GB DB, then a 30 GB log file doesn't make any sense to me.

    I hope it will help.

    Cheers
    Praveen Barath
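
For reference, a quick way to answer Praveen's first two questions is a small query against sys.databases, which shows each database's recovery model and what is currently preventing log reuse (for example, LOG_BACKUP under the Full model). This is a generic sketch, not specific to any database in the thread.

-- Check the recovery model and the current log-reuse wait for every database.
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases;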


  14. Pingback: SQL SERVER – Weekly Series – Memory Lane – #010 « SQL Server Journey with SQL Authority
