I was recently engaged in a performance tuning engagement in Singapore. The organization had a huge database with more than a million transactions every hour. During the assignment, I noticed that they were truncating the transaction log. This really alarmed me, so I told them the practice should stop, because there is no need to truncate the transaction log. The reason they were truncating it was that it was growing too big and they wanted to keep its size under control. I offered them two different solutions. Let us look at each of them.
If you are about to jump straight to the comments, please read the two options first:
1) Convert the Recovery Model to Simple Recovery
Truncating the transaction log breaks the chain of T-Log LSNs (Log Sequence Numbers). This means that if disaster strikes, you will not be able to restore your T-Logs, and point-in-time recovery will not be an option. If you are fine with this situation and there is nothing to worry about, I suggest changing your recovery model to the Simple Recovery Model. This way, you will not see extraordinary growth of your log file.
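For reference, switching the recovery model is a one-line change; here is a minimal sketch, using a placeholder database name:

-- Switch to the SIMPLE recovery model (YourDatabase is a placeholder name)
ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;
GO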
2) Start Taking Transaction Log Backup
If your business cannot tolerate data loss or requires point-in-time recovery, you cannot afford anything less than the Full Recovery Model. In the Full Recovery Model, your transaction log will keep growing until you back it up, so you need to take T-Log backups at regular intervals. This keeps the log from growing beyond a certain limit. If you take an hourly T-Log backup, the T-Log will grow for one hour, but each backup then truncates the inactive (committed) portion of the log. The physical size of the T-Log file will not go down much; instead, the truncated space is marked as reusable for the next hour's transactions.
With this method, you can restore your database to a point in time if a disaster ever happens on your server.
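To sketch what such a point-in-time restore looks like, here is a minimal example; the file names and the STOPAT timestamp are placeholders, not part of the demo below:

-- Restore the last full backup, leaving the database ready for log restores
RESTORE DATABASE [YourDatabase] FROM DISK = N'D:\YourDatabase.bak'
WITH NORECOVERY, REPLACE;
GO
-- Roll the log forward and stop just before the disaster
RESTORE LOG [YourDatabase] FROM DISK = N'D:\YourDatabase.trn'
WITH STOPAT = N'2010-09-20T10:30:00', RECOVERY;
GO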
Let us run an example to demonstrate this. In this case, I have done the following steps:
- Create Sample Database in FULL RECOVERY Model
- Take Full Backup (a full backup is required before any subsequent log backups)
- Repeat the Following Operations:
- Take Log Backup
- Insert Some rows
- Check the size of Log File
- Clean Up
After a short while, you will notice that the log file (.ldf) stops growing, while the size of the log backup keeps increasing.
If you have an issue with log file growth, I suggest that you follow either of the above solutions instead of truncating the log.
/* FULL Recovery and Log File Growth */
USE [master]
GO

-- Create Database SimpleTran
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SimpleTran')
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END
GO
CREATE DATABASE [SimpleTran]
GO

-- Set Database recovery model to FULL
ALTER DATABASE [SimpleTran] SET RECOVERY FULL
GO

-- Take Full Backup (required before any log backup can be taken)
BACKUP DATABASE [SimpleTran] TO DISK = N'D:\SimpleTran.bak'
WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

-- Check Database Log File Size
SELECT DB_NAME(database_id) AS DatabaseName,
name AS Logical_Name,
physical_name,
(size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO

-- Create Table in Database and populate it
USE SimpleTran
GO
IF EXISTS (SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(N'[dbo].[RealTempTable]') AND type IN (N'U'))
DROP TABLE [dbo].[RealTempTable]
GO
CREATE TABLE RealTempTable (ID INT)
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO

-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
name AS Logical_Name,
physical_name,
(size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO

-- Take Log Backup
BACKUP LOG [SimpleTran] TO DISK = N'D:\SimpleTran.bak'
WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

-- Run the following transaction multiple times and check the size of the T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO

-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
name AS Logical_Name,
physical_name,
(size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO

/* Now run the following code multiple times. You will notice that it does not
increase the size of the .ldf file, but it does increase the size of the log backup. */
-- Second Time
-- START
BACKUP LOG [SimpleTran] TO DISK = N'D:\SimpleTran.log'
WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

-- Run the following transaction and check the size of the T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO

-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
name AS Logical_Name,
physical_name,
(size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- END

-- Clean Up
USE [master]
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SimpleTran')
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END
GO
If you run the code listed between START and END, you will get the following results almost every time:
This validates our earlier discussion. After seeing this demonstration, the Singapore team implemented log backups instead of log truncation right away. Let me know what you think about this article.
Reference : Pinal Dave (https://blog.sqlauthority.com)
48 Comments
How do I truncate the log?
In SQL Server 2005 and earlier, you could truncate the log with the TRUNCATE_ONLY option of BACKUP LOG in T-SQL; note that this option was removed in SQL Server 2008, where switching to the SIMPLE recovery model is the supported alternative.
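For reference, the old syntax looked like this; this is a sketch of a command that no longer exists in SQL Server 2008 and later, and running it breaks the log backup chain, as the article above warns:

-- SQL Server 2005 and earlier only; YourDatabase is a placeholder name
BACKUP LOG [YourDatabase] WITH TRUNCATE_ONLY;
GO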
What should be the ideal size of the data file and log file?
Hi Pinal,
Thanks for sharing your experiences with us. I have noticed the same thing in a couple of my consulting engagements and suggested the second solution (they needed point-in-time recovery) for better maintenance of the transaction log file.
In addition to what you have mentioned in the blog, I would like to share the command "DBCC LOGINFO", which helps DBAs get an idea of how many virtual log files (VLFs) have been created. A high number of virtual log files degrades the performance of DML operations.
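For example, run it in the context of the database you want to inspect (shown here against the article's demo database); each row of output represents one VLF:

USE SimpleTran
GO
-- One row per virtual log file; a very high row count indicates log fragmentation
DBCC LOGINFO
GO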
Let me try answering the above questions:
Coming to Kalpesh's question, I don't think there is any ideal size for the data file and the log file (no benchmark numbers). It all depends on the number of DML operations on the database and its growth. During a capacity planning exercise, we need to estimate the number of DML operations on the database and accordingly determine the transaction log backup interval. It should be neither too short nor so long that the transaction log file becomes huge.
Sumit – There is no need to truncate the log file if an appropriate backup process is in place. As mentioned in this blog, truncating the log file means point-in-time recovery will no longer be possible.
Regards,
Phani.
Neat explanation. As one takes scheduled log backups, one would normally give each log backup a different name with a datetime included in the file name, correct? For example:
logbackup_092020100700 – log backup at 7:00 am on 09/20/2010
and one would also do a full backup at the end of the day (at night).
Thanks for the reply, Phani.
Does the transaction log backup interval impact the truncation of committed transactions?
I think that if the interval is small, the number of log backups we have to restore will be larger. This has the advantage (over a longer interval) that in case of disaster we lose the minimum amount of data, with the disadvantage of increased load on the DB server. Correct me if I am wrong.
My concern here is that I have a database whose log file is three times the size of the data file and is 90% free space (log backups are already in place). Is this normal, or should I take some action? The database's transactions/sec is normal.
Hey Gaud,
Refer to Pinal's results above.
Great article.
How do you determine the number of DML operations on the database and the growth?
Is it a good idea to limit the log file growth to, for example, 2 GB, instead of unrestricted growth at 10%?
I have noticed that after our weekly maintenance (rebuilding or reorganizing indexes and updating statistics), the log file grows very large even though we take a log backup every 10-15 minutes. What is a good way to manage T-log files in this situation?
Hi Pinal
In my company, one DBA made a mistake. On a production server, without reviewing the query, he ran a query given to him by a programmer. It was actually a DELETE query affecting 3.5 million rows. It used BEGIN TRAN with a COMMIT TRAN at the end, but the commit was never reached. The T-Log got full, and since the transaction had not committed, taking a backup did not help. The log kept growing, the situation was escalated, and SQL Server was restarted. So it was a really big mistake.
If someone gives me that type of query, I would instead delete in batches of around 10,000 rows at a time. What is your opinion?
Yes, it is always recommended to delete a large number of rows in batches.
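A minimal sketch of such a batched delete, against a hypothetical table and filter:

-- Delete in chunks of 10,000 rows so each batch commits separately
-- and the log space can be reused between log backups
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable WHERE ArchiveFlag = 1;
    IF @@ROWCOUNT = 0 BREAK;
END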
Dear sir
I have tried the steps you mentioned above, but the log keeps growing every time I insert the rows, not only the log backup.
Hi Pinal,
Please help me with huge log file growth while loading data into the database from a mainframe source.
The database log file grew from 10 GB to 60 GB after the data was loaded (data loading runs for 4 hours). We are also taking a log backup every half hour.
The database recovery model is BULK_LOGGED. We don't want to put the DB in the Simple recovery model because it is a production DB.
Earlier the database was in the FULL recovery model. Due to this issue we changed it to BULK_LOGGED.
I checked why the log was not being reused during data loading using the query below:

SELECT log_reuse_wait_desc FROM sys.databases WHERE name = 'mydatabasename'

Every time, this query returned "ACTIVE_TRANSACTION".
Please let me know how I can keep the log file from growing so large in this scenario.
Thanks in advance…!
Regards
Sureddy.
Is your database mirrored? If so, change it to high-performance mode.
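If mirroring is indeed the culprit, switching the session to high-performance (asynchronous) mode is done on the principal; a sketch with a placeholder database name:

-- Remove the witness first, if one exists
ALTER DATABASE [YourDatabase] SET WITNESS OFF;
-- Switch the mirroring session to asynchronous (high-performance) mode
ALTER DATABASE [YourDatabase] SET PARTNER SAFETY OFF;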
Hi
I forgot to mention a point in the earlier post: we have implemented triggers for INSERT statements. Before the triggers were enabled, the log growth was small, within what my company policy recommends. But in this scenario we need the triggers enabled while data is loading. With the triggers enabled, the log grows hugely (the log size before the data load is 15 GB; during the load it crosses 80 GB, while the data file grows from 98 GB to 103 GB).
Here, data is loaded to another server through a linked server and into replica tables in the same source server and database.
Please assist me with this issue.
Thanks in advance…!!!
Regards
Sureddy.
You can create the tables and triggers as NOT FOR REPLICATION. That way, the triggers won't fire for the data pumped through replication. Try this and see if you notice any difference.
Regards,
Puneet
After running DBCC CHECKDB, rebuilding indexes, and reorganizing indexes, my log file increases. Kindly suggest how to stop the log file from growing.
Regards – Sadhu Ram
Hi Pinal, I want to count the number of columns that repeat the same value. There are 30 columns according to the month, and I want to count leave, attendance, present, and holiday for an employee. Please give me a solution.
HI Pinal,
We have a database in the Simple recovery model, but our database log file is still growing heavily.
I tried to shrink the database, but it does not reduce the log file size.
In daily maintenance we rebuild indexes and update statistics.
We take full backup of the database daily.
Can you please suggest, what
Check your replication and mirroring configuration.
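Before changing anything, it may help to ask SQL Server directly what is preventing log reuse; a quick diagnostic query (the database name is a placeholder):

-- Typical values: NOTHING, CHECKPOINT, LOG_BACKUP, ACTIVE_TRANSACTION,
-- DATABASE_MIRRORING, REPLICATION
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';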
USE [master]
GO
/****** Object: StoredProcedure [dbo].[ShrinkLog2008] Script Date: 04/13/2012 19:51:25 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[ShrinkLog2008]
AS
SET NOCOUNT ON

-- Collect the logical log file name of every eligible user database
CREATE TABLE #TransactionLogFiles (DBName VARCHAR(150), LogFileName VARCHAR(150))

DECLARE DBList CURSOR FOR
SELECT name FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb', 'distribution')
AND name NOT LIKE '%SharePoint%'
AND name NOT LIKE '%WSS%'
AND state_desc = 'ONLINE'

DECLARE @DB VARCHAR(100)
DECLARE @SQL VARCHAR(8000)

OPEN DBList
FETCH NEXT FROM DBList INTO @DB
WHILE @@FETCH_STATUS <> -1
BEGIN
    SET @SQL = 'USE ' + @DB + ' INSERT INTO #TransactionLogFiles (DBName, LogFileName) SELECT ''' + @DB + ''', name FROM sysfiles WHERE fileid = 2'
    EXEC (@SQL)
    FETCH NEXT FROM DBList INTO @DB
END
CLOSE DBList
DEALLOCATE DBList

DECLARE TranLogList CURSOR FOR
SELECT DBName, LogFileName
FROM #TransactionLogFiles

DECLARE @LogFile VARCHAR(100)
OPEN TranLogList
FETCH NEXT FROM TranLogList INTO @DB, @LogFile
WHILE @@FETCH_STATUS <> -1
BEGIN
    -- PRINT @DB + ',' + @LogFile
    -- Temporarily turn on 'trunc. log on chkpt.' so the log can be truncated
    SELECT @SQL = 'EXEC sp_dboption ' + @DB + ', ''trunc. log on chkpt.'', ''True'''
    EXEC (@SQL)
    -- Shrink the log file, releasing only unused space at the end of the file
    SELECT @SQL = 'USE ' + @DB + ' DBCC SHRINKFILE (''' + @LogFile + ''', TRUNCATEONLY) WITH NO_INFOMSGS'
    EXEC (@SQL)
    SELECT @SQL = 'EXEC sp_dboption ' + @DB + ', ''trunc. log on chkpt.'', ''False'''
    EXEC (@SQL)
    FETCH NEXT FROM TranLogList INTO @DB, @LogFile
END
CLOSE TranLogList
DEALLOCATE TranLogList

DROP TABLE #TransactionLogFiles
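Presumably the procedure is then run manually or from a scheduled job, along these lines:

-- Shrink the log file of every eligible user database
EXEC [dbo].[ShrinkLog2008];

Note that toggling 'trunc. log on chkpt.' breaks the log backup chain, which runs counter to the advice in the article above; take a full backup afterwards if you rely on log backups.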
How do I stop the database log file size from automatically increasing?
Hi Pinal,
I have a database whose log file size is 55 GB. Mirroring is configured on this database, but right now the mirror server is not connected, while the status on my principal server shows "synchronized". I tried shrinking the file and taking a log backup, but the problem persists. I have not tried changing to the simple recovery model. After the backup, the log backup file is only 500 MB. Please help me.
Hi Pinal,
We have a database log file of 75 GB, and we use Symantec Backup Exec to take full backups of SQL Server.
How can we decrease the size of the log file?
You mentioned that with regular log backups the log file should stop growing, but in our case it keeps increasing.
Hi Pinal,
We have a business application using SQL Server 2005 with a considerably big database (almost 80 GB). We have configured a DR setup that copies data from the primary (PR) server to the DR server using SQL Server log shipping, with the PR data shipped to the DR server every 15 minutes. The problem we face is that at times the transaction log file on the PR server grows beyond a limit and impacts normal application performance, to the point where the application becomes non-responsive. To bring it back to normal we have to forcefully run a transaction log backup with the truncate-only option, which reduces the transaction log size. But this then breaks the DR log shipping jobs, and log shipping stops.
I need help on how to keep the DR setup in sync throughout while keeping the transaction log at a manageable size.
Thanks for your advice.
Rgds.
Vijay
Hi, I have a situation where I don't have any space to take a full backup. The database size is huge (1 TB) and the log file size is 300 MB. I am not a SQL expert. Could anyone suggest the best option for stopping log file growth while still being able to recover the DB if there is an issue? Eagerly waiting for any suggestions.