SQL SERVER – Multiple Log Files to Your Databases – Not Needed

The other day I was hired by a large consulting organization that manages database operations for a bank, for a Comprehensive Database Performance Health Check. The organization had lots of performance issues, and we worked through them one at a time. When we were done with the consulting engagement, their DBA asked me a very interesting question about log files: Is there ever a need for multiple log files in SQL Server?


Multiple Log Files and Performance

First of all, before you read this blog post, I must say that there is technically no need for multiple log files for your database. Adding them does not improve performance or help speed up any of your queries at all. That said, I must also add that adding an additional log file will not necessarily degrade performance either. Performance is not directly related to the number of log files for the database. If there is an impact, I personally have not encountered it, as I do not use multiple log files in practice.

Log Files and Other Issues

Multiple log files actually create significant issues during database recovery, as well as when you have to manage your database backups and storage.

The way SQL Server works is that it writes to only one log file at a time, even if you have multiple log files. Additionally, all writes are sequential, so even though a second log file is available, it stays idle while all the writes go to the first file.

SQL Server will use the secondary log file only once the first log file fills up and there is no further space to write in it. Only at that point will it write data into the second log file.
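You can see this sequential behavior for yourself (on SQL Server 2016 SP2 or later) by looking at the virtual log files (VLFs) and noting that the active ones sit in a single physical file. This is just an illustrative query; TestDb is a placeholder database name:

```sql
USE [TestDb]
GO
-- List the VLFs of the current database together with the physical log
-- file they live in. vlf_active = 1 marks VLFs currently in use; with two
-- LDF files you will typically see all active VLFs confined to the first
-- file until it fills up.
SELECT f.name AS logical_file_name,
       li.vlf_sequence_number,
       li.vlf_active,
       li.vlf_size_mb
FROM sys.dm_db_log_info(DB_ID()) AS li
INNER JOIN sys.database_files AS f
    ON f.file_id = li.file_id
ORDER BY li.vlf_sequence_number;
GO
```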

Need for Second Log File

Personally, I have added a second log file only once, when the first log file was on a disk that had filled up due to unrestricted growth of the log file, and our database came to a sudden stop where we were not able to do anything. We initially thought of emptying the log file, but that was not possible either, due to some active transactions.

During this time we immediately added a second log file on another disk, and our database started to function properly. The very first task once our database recovered was to take a backup of the first log file and free some space inside it. Later on, we removed the secondary log file, and it was business as usual.
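The recovery sequence described above roughly maps to the following T-SQL. This is a hedged sketch using example names (TestDb, NewLog, and the backup path are placeholders); note that the emergency file can only be removed once it no longer contains active log records:

```sql
USE [master]
GO
-- 1) Free space in the original log (full recovery model assumed)
BACKUP LOG [TestDb] TO DISK = N'D:\Backup\TestDb_log.trn'
GO
-- 2) Remove the emergency secondary log file once it is empty;
--    if it still holds active VLFs, take another log backup and retry
ALTER DATABASE [TestDb] REMOVE FILE [NewLog]
GO
```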

As a proactive measure, we put tracking and alerting on the log file size as well as the available disk space, so we can avoid such scenarios in the future.
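For the tracking part, the check can be as simple as querying sys.dm_db_log_space_usage from a scheduled job and raising an error that an alert can pick up. This is only a sketch; the 80 percent threshold is an example value, not a recommendation:

```sql
-- Run in the context of the database you are monitoring
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Raise an error (severity 16) that a SQL Agent alert can respond to
IF EXISTS (SELECT 1 FROM sys.dm_db_log_space_usage
           WHERE used_log_space_in_percent > 80.0)
    RAISERROR (N'Transaction log is more than 80 percent full.', 16, 1);
```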

Script to Add Second Log File

If even after reading all this you still want to add a second log file to your database, here is the script.

USE [master]
GO
ALTER DATABASE [TestDb] ADD LOG FILE ( NAME = N'NewLog', FILENAME = N'C:\NewLog.ldf')
GO

Once again, I would like to remind you that under normal circumstances there should be no need to add a second log file.

Reference: Pinal Dave (https://blog.sqlauthority.com)


13 Comments

  • Pinal, I think you could help some readers by clarifying that you are referring here to *transaction* logs. Some readers may not appreciate the distinction (from error logs, etc.). And even if they may, they may not realize it’s what you mean as the word transaction currently occurs only once, late in the article.

    Just adding that one word in the title and first sentence would solve it, though perhaps a sentence or two clarifying their purpose would help ensure readers appreciate the rest of the concerns you raise.

    Of course, thanks for all you DO share.

  • Hi Pinal,

    It’s not true; we have seen lots of improvement by adding additional log files, especially for TempDB, where you will see a tremendous difference. A key factor is how many processors are assigned to SQL Server. I recommend that the number of log files should be the same as the number of processors assigned to SQL Server, at least for TempDB. The important thing is that each log file should be on a different drive; otherwise disk R/W becomes sequential and there is no advantage to the additional log files.

    Regards,
    Mayura.

    • This is absolutely incorrect and I strongly advise against it.

      What you are talking about is the Data file (MDF), not the Log File (LDF).

    • I would like to see a test, or a link to a document which says multiple log files are great.

      Again, note that I am talking about the Transaction Log file and not a Data file (MDF or NDF).

  • Hi Pinal,

    Yes, I am talking about Log File – LDF not Data file – MDF.

    I don’t have any document right now; I will share one if I find it. We have seen performance improvement after adding an additional LDF file for all user databases and TempDB.

    Anyway, I am a big fan of yours and appreciate your knowledge. I follow most of your articles.

    Thanks,
    Mayura.

    • Hi Mayura,

      I would be interested to see a single piece of documentation or a test which proves this one.

      In my career, I have never seen it, and this would be a new discovery.

  • subbaraju jayakumar
    January 11, 2021 6:18 am

    Hi Pinal, good morning. My question is: if the transaction log is full, we can add one more log file at a different disk location. After that, do we need to truncate the first log file or not? My second question: once we truncate and get sufficient free space in the first log, should we delete the second log file which we created while the transaction log was full? Kindly brief me.

    • 1) Truncate it if the database is on the simple recovery model. Take a log backup if it is on the full recovery model.
      2) If you do not need it any further, you can delete it.

  • johan sebastian
    January 30, 2021 9:31 pm

    Multiple log files may not help performance. But what about recovery? There are some tools which are able to read those files, right? I think multiple log files would help if we were forced to use those tools.
    It will be faster to read a 4 GB file than a 40 GB file.

    • I have yet to find a good tool which reads log files completely. There are lots of tools which claim to, but when given a log file over 1 GB they eventually fail to complete the task.

  • Hi Pinal,

    I know this post is quite old, but I want to have your opinion on this topic
    I’ve found a strange behaviour on a DB with more than 1 LDF file.
    I’ve always used this function to check for file usage:

    SELECT [name] AS FileName, [size]*8 AS FileSizeKB, FILEPROPERTY([name], 'SpaceUsed')*8 AS FileUsedKB FROM sys.database_files

    I’ve found this doesn’t work with multiple log files!

    I know from MS documentation that LDF files are filled sequentially; the second file starts to be used when the first file is full (or, more precisely, when the last VLF in the first file reaches the end of the file).

    But using the FILEPROPERTY function, the result shows all LDF files as equally used in percentage terms, even if they have different sizes. For example: first LDF 1 GB, second LDF 10 GB; after doing some operations I can see both files used at 50%.

    I didn’t find any doc explaining this, and all the system functions available for checking log usage always show a single result for the log (DBCC SQLPERF(LOGSPACE), DBCC LOGINFO, sys.dm_db_log_space_usage).
    It seems like internally the TLOG is always treated as a single “blob”.

    Any idea on this?

  • Alexander Kirstein
    June 8, 2024 9:48 am

    It could be a job-saving, raise-assuring, sleep-restoring configuration to have an alternate/secondary log file available to mitigate a potential unresponsive-server incident. Or script it as an alert response to a full transaction log. Guaranteeing performance is a losing battle, but ensuring production is up is a do-or-die aspect of business viability and job security. Transaction log size fluctuates for an ocean of reasons. It happens. This unorthodox but otherwise benign configuration is a key mitigation against an impending doom. If a DBA hasn’t faced a critical transaction-log-full situation, they’re still in diapers.
