SQL SERVER – Detect Virtual Log Files (VLF) in LDF

In a recent training engagement, I was asked whether it is true that there are multiple small log files inside the large log file (LDF). I found this question very interesting, as the answer is yes. Multiple small Virtual Log Files, commonly known as VLFs, together make up an LDF file. Writing to the VLFs is sequential, and as a result writing to the LDF file is sequential as well. This leads to another discussion: in most cases, one does not need more than one log file.

In short, you can use the following DBCC command to find out how many Virtual Log Files, or VLFs, are present in your log file.

DBCC LOGINFO

The result of the above query will look something like the image below.

[Image: DBCC LOGINFO output listing the VLFs in the log file]

Look at the Status column: a value of 2 means the VLF is active, while 0 means it is inactive.
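
On SQL Server 2016 SP2 and later, the documented sys.dm_db_log_info DMV offers an alternative to the undocumented DBCC LOGINFO. A minimal sketch to count the VLFs of the current database:

-- Returns one row per VLF for the given database_id
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID())
GO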

Reference: Pinal Dave (https://blog.sqlauthority.com)

SQL Scripts, VLF

17 Comments

  • Hi Pinal,
    It is always a pleasure to read your blog. Wish you and your family a happy and prosperous New Year 2011. Wishing you continued success in your profession as well.
    Ramdas

  • It’s worth mentioning that too many VLFs can impact performance, usually caused by a very small autogrow setting, i.e. make sure you size your log file as accurately as you can.
    If you have thousands of VLFs this can really affect replication as well, so if your replication is performing badly you might want to check for VLFs.
    We have been through the pain with replication and VLFs and hope no-one else has to :-)
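
    As a minimal sketch of that advice, you can pre-size the log and use a fixed autogrow increment (MyDatabase and MyDatabase_log are hypothetical names; check sys.database_files for yours, and size to your own workload):

    -- Pre-size the log and set a fixed-size growth increment to avoid
    -- creating thousands of tiny VLFs through small autogrows.
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8192MB, FILEGROWTH = 512MB)
    GO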

  • As rightly said by John, one should keep a watch on VLFs. The more VLFs associated with a log file, the greater the impact on DML performance. This impact is also indirect, and people may find it difficult to figure out that VLFs are causing the DML performance degradation. On average, I have seen around 300+ VLFs in production environments, which is a very bad number. It is always advisable to plan transaction logs properly from a maintenance perspective. You can find very nice blogs written by Kimberly Tripp and Paul Randal, who are my favourites, as Pinal is :)

  • Dear John and Phani,

    I’m a newbie to Virtual Log Files (VLFs). I have a Microsoft SQL Server 2008 (64-bit) instance that hosts Microsoft SharePoint 2007. I noticed the WSS_CONTENT database has 938 VLFs.

    Can you shed some light and advise?

    How do I trim down the VLFs?

    Also, I noticed the database log grew to over 200 GB.

    Thanks,
    Edwin

  • Hi Edwin,
    My guess would be that your database is set to the Full recovery model and you aren’t backing up the transaction log, so it keeps auto-growing by a small amount.
    You need to back up the transaction log so that you can then shrink it (you may need to repeat this 2 or 3 times).
    Once done, you will need to set up a regular backup for both the data and log files, e.g. a full database backup once a day and log backups once per hour.
    Hope this helps; you will find plenty of information on the web.
    Cheers, John
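
    For reference, the same steps in T-SQL might look like the sketch below (the logical log file name WSS_CONTENT_SP01_log is an assumption; check sys.database_files for the real one):

    -- Back up the log so inactive VLFs can be truncated, then shrink the file.
    -- Repeat the pair a few times if the shrink does not reclaim much space.
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01.trn'
    GO
    DBCC SHRINKFILE (WSS_CONTENT_SP01_log, 1)
    GO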

  • Hi John,

    I’m not very good at T-SQL programming, etc.

    So, I manually set up the Microsoft SQL Server 2008 backups as follows:
    • Daily database full backup at 3:00 AM
    • Daily log backups at 7:00 PM, 11:00 PM, 3:00 AM, and 6:00 AM
    • Differential backups on Monday, Wednesday, and Saturday at 1:00 AM

    Please see my scripts:

    -- Daily Backup
    BACKUP DATABASE WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01.bak'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Database Backup WSS_Content_SP01 Daily at 3:00 AM
    BACKUP DATABASE WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01.bak'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log Backup WSS_Content_SP01 Daily at 7:00 PM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Log_7PM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log Backup WSS_Content_SP01 Daily at 11:00 PM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Log_11PM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log Backup WSS_Content_SP01 Daily at 3:00 AM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Log_3AM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Log Backup WSS_Content_SP01 Daily at 6:00 AM
    BACKUP LOG WSS_CONTENT_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_log_6AM.trn'
    WITH INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Differential WSS_Content_SP01 backup Monday at 1:00 AM
    -- (WITH DIFFERENTIAL makes this a differential rather than a full backup)
    BACKUP DATABASE WSS_Content_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Monday_1AM.dif'
    WITH DIFFERENTIAL, INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Differential WSS_Content_SP01 backup Wednesday at 1:00 AM
    BACKUP DATABASE WSS_Content_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Wednesday_1AM.dif'
    WITH DIFFERENTIAL, INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    -- Differential WSS_Content_SP01 backup Saturday at 1:00 AM
    BACKUP DATABASE WSS_Content_SP01
    TO DISK = N'\\10.xx.xx.5\temp_sql\WSS_CONTENT_SP01_Saturday_1AM.dif'
    WITH DIFFERENTIAL, INIT, FORMAT, CHECKSUM, STOP_ON_ERROR
    GO

    I set up those schedules as SQL Server Agent jobs.

    I’m not so clear on how to shrink, and I’m a bit concerned since the database contains the SharePoint data and is in production.

    Would you please advise and help?

    Thanks,
    Edwin

  • I also read Paul Randal’s post “Why You Should Not Shrink Your Data Files”.

    During my testing, I found that SHRINKDATABASE created fragmentation of almost 99 percent.

  • Hi Edwin,
    You should only be looking to shrink the log file, not the data file, or you will end up fragmenting the data as per Paul Randal’s post.
    In SSMS, right-click on the database, select Tasks, then Shrink, then Files; in the Shrink File dialog that opens, select ‘Log’ as the File Type.
    What you want to do is shrink the log file as close to 0 as possible, to remove the excess VLFs.
    You can then set a sensible size and auto-grow in the database properties.
    Regards, John

  • Hello John,

    Thanks.
    So, DBCC SHRINKFILE (DB_Log, 1) should not create fragmentation on my production database?

    DBCC SHRINKFILE (DB_Log, 1)
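
    Shrinking only the log file does not fragment data pages. As a minimal follow-up sketch of John’s advice to then set a sensible size and autogrow (the logical name DB_Log follows the snippet above, and the sizes here are only illustrative):

    DBCC SHRINKFILE (DB_Log, 1)
    GO
    -- Re-grow the log to a sensible fixed size with a fixed increment,
    -- so it is not rebuilt from thousands of tiny autogrow VLFs.
    ALTER DATABASE WSS_CONTENT_SP01
    MODIFY FILE (NAME = DB_Log, SIZE = 8192MB, FILEGROWTH = 1024MB)
    GO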

  • I don’t know where to put my question; I found this place and am sending this to Pinal Dave to get the solution.

    Hi Pinal,

    I am executing a SQL query from a variable and am getting an error;
    I think I am missing the concatenation tricks.

    My SQL is
    =====
    Set @Update_Set = 'Update [NB_SF].[dbo].[mbrCustomer] Set '

    If @EVDESCRIPTION <> @EVDESCRIPTION_e And @EVDESCRIPTION <> ''
    Begin
    Set @setcount = 1
    Set @Update_Set = @Update_Set + '[EVDESCRIPTION]=' + @EVDESCRIPTION
    End
    Set @Update_Set = @Update_Set + ' Where [ID]=' + @ID
    --*******
    If @setcount > 0
    Begin

    Print @Update_Set

    EXEC (@Update_Set)

    End
    ===== Getting error below

    Update [NB_SF].[dbo].[mbrCustomer] Set [EVDESCRIPTION]=Test1 Where [ID]=D0134
    Msg 207, Level 16, State 1, Line 1
    Invalid column name ‘D0134’.
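
    The error occurs because the character values are concatenated into the dynamic string without surrounding quotes, so D0134 is parsed as a column name. A minimal sketch of the likely fix (assuming @EVDESCRIPTION and @ID are character types):

    -- Wrap character values in single quotes when building the dynamic SQL;
    -- a doubled quote ('') inside a string literal yields one quote character.
    Set @Update_Set = @Update_Set + '[EVDESCRIPTION]=''' + @EVDESCRIPTION + ''''
    Set @Update_Set = @Update_Set + ' Where [ID]=''' + @ID + ''''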

  • Hi Pinal,
    I’m interested in these VLF files, so may I draw this conclusion:
    I don’t need to set up multiple LDF files for one database, because the log file is always written sequentially?

    Is it better to initialize a big log file than a small one?

    I mean, is it okay if I set up one 30 GB log file with a 1 GB increment, and back up the transaction log every day, rather than setting up a 10 GB log file with a 1 GB increment?

  • Hi xocool,

    You first need to understand the transaction log architecture and the maintenance of log files.

    # What is the recovery model of your DB?
    # If it’s Simple, SQL Server will take care of the transaction log files.
    # If it’s Full, how frequently are you taking log backups?
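
    A quick way to check the recovery model (a minimal sketch):

    -- recovery_model_desc shows SIMPLE, FULL, or BULK_LOGGED per database
    SELECT name, recovery_model_desc
    FROM sys.databases
    GO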

    If you need more details on log files:
    https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms179355(v=sql.105)

    Only with those answers can anyone suggest a log file size, which also depends on the database size: if you have a 10 GB database, a 30 GB log file doesn’t make any sense to me.

    I hope this helps.

    Cheers
    Praveen Barath

  • Hi,

    I have a DB of size 3.5 TB which has 11,745 VLFs. Based on these numbers, is that count of VLFs too bad for a 3.5 TB DB?

  • Could you please explain the disadvantage of too few VLFs (in other words, very large VLFs)? If I back up the transaction log while a VLF is not full but has no active transactions, would the backup truncate it or mark it eligible for re-use?

  • If I have a series of VLFs where the bottom end holds ‘old’ transactions yet to be backed up and the top is the current set, with some inactive (status 0) VLFs in the middle, will those middle VLFs be used?
    In a circular approach, I assume that the active head will come to the end of the log and try to cycle back to the start, where it will meet a VLF still needed for backup. Will it then expand the log, or look to one of the status 0 VLFs in the middle of the log file?

