SQL SERVER – Shrinking Truncate Log File – Log Full

UPDATE: Please follow link for SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008.

Sometimes it looks impossible to shrink the truncated log file. The following code always shrinks the truncated log file to the minimum size possible.

USE DatabaseName
GO
-- Shrink the log file to the smallest size possible (target size 1 MB)
DBCC SHRINKFILE(<TransactionLogName>, 1)
-- Truncate the log without backing it up (this breaks the log backup chain; see the note below)
BACKUP LOG <DatabaseName> WITH TRUNCATE_ONLY
-- Shrink again now that the truncated portion of the log has been released
DBCC SHRINKFILE(<TransactionLogName>, 1)
GO
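
If you do not know the logical name of the transaction log file to pass to DBCC SHRINKFILE, you can look it up first. This is only a small helper sketch, assuming SQL Server 2005 or later (sys.database_files); on older versions the sysfiles view gives the same information.

USE DatabaseName
GO
-- Returns the logical name of the log file; use this value as <TransactionLogName> above
SELECT name AS LogicalLogName
FROM sys.database_files
WHERE type_desc = 'LOG'
GO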

[Update: Please note, there is much more to this subject; read my more recent blogs. This breaks the chain of the logs, and in the future you will not be able to restore to a point in time. If you have followed this advice, you are recommended to take a full backup right after the above query.]
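
For example, a minimal sketch of the full backup to take right after the script above (the database name and backup path here are only placeholders; use your own):

-- Restart the backup chain broken by TRUNCATE_ONLY
BACKUP DATABASE DatabaseName
TO DISK = 'D:\Backup\DatabaseName_Full.bak'
WITH INIT
GO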

Reference: Pinal Dave (http://www.SQLAuthority.com), BOL

SQL Backup and Restore, SQL Log, SQL Scripts

314 Comments

  • It helped me a lot; I was trying to back up the file and it was taking forever.
    Thanks.

  • Chandrashekar
    May 4, 2012 1:00 pm

    Hi Pinal,

    I will be regularly reading all your blogs and have got solutions for many of my Database problems and learned many new concepts. Thanks a lot.

    For the first time I’m posting my problem here, please help me on this.

    I have a database of size 3 GB (log file 500 MB), configured for a full backup every Saturday and a transaction log backup every 8 hours. The problem is that the log backup is happening very frequently, every hour (even though it is configured for every 8 hours). Because of this, it is taking too much of my drive space, and the drive space provided for backups is limited; I cannot expect more drive space either. Whenever the log file size becomes abnormal, I shrink the log file alone. Also, the transaction volume on this particular database is only average, not too much.

    Can you please help me out with fixing this problem? I tried searching for the reason for this, but didn't succeed.

  • Salvador Rodriguez
    June 5, 2012 1:03 am

    Thanks… Helped me too

  • Hamza Farooq
    July 11, 2012 4:56 pm

    I prefer this method; it is easy to script as well.

    C:\Program Files\Microsoft SQL Server\100\Tools\Binn\osql.exe -E -S Servername -d "DatabaseName"
    ALTER DATABASE DATABASENAME SET RECOVERY SIMPLE;
    go
    DBCC SHRINKFILE ('DATABASENAME_log', 200);
    go
    ALTER DATABASE DATABASENAME SET RECOVERY FULL;
    go

  • Hello, how can I do this on SQL Server 2008? If I try to run this query, SQL Server shows me:
    Msg 155, Level 15, State 1, Line 1
    'TRUNCATE_ONLY' is not a recognized BACKUP option.

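The error above appears because the TRUNCATE_ONLY option was removed in SQL Server 2008. The linked post at the top of this article and the osql script in an earlier comment use the recovery-model workaround instead; a minimal sketch, with DatabaseName and DatabaseName_log as placeholder names:

ALTER DATABASE DatabaseName SET RECOVERY SIMPLE
GO
-- In SIMPLE recovery the inactive log is truncated at checkpoint, so the shrink can release the space
DBCC SHRINKFILE(DatabaseName_log, 1)
GO
ALTER DATABASE DatabaseName SET RECOVERY FULL
GO
-- Take a full backup afterwards to restart the log backup chain
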
  • Hi Sir, I have used a wrong UPDATE command in SQL Server. I want to get the database back to its previous values. What can I do, Sir?

  • Thank u sir

  • very helpful!!!

  • Thanks Pinal, you saved me a day!

  • Marcelo Rassini
    October 2, 2012 11:49 pm

    Hi Pinal. I have been following your posts for quite some time, and I am taking a brief moment to thank you. I am grateful for all of your articles, as I have found them very useful for my daily work. I am a software developer and often deal with databases, but I am not a DB administrator, and from time to time I have to become a bit of one because of database maintenance problems. From time to time I stop and wonder what your motivation is to write and share your experiences with all of us. Far from needing an answer, I can feel your inspiring sense of generosity. Again, I thank you for everything here.

  • In our case the log backup was running for 10 hours then failing, and the transaction log was 100x the size of the data file. The generic error reported in the log was not helpful but further investigation led to an error related to the log containing transactions pending replication.

    This was strange because the db was not published for replication.

    Anyway, what worked for us was to run the sp_repldone command first,…

    msdn.microsoft.com/en-us/library/ms173775.aspx

    …and then follow the instructions in this article.

    Hope this helps anyone with the same problem

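A minimal sketch of the sp_repldone call described in the comment above, for the case where log records are stuck waiting for replication that is no longer in use (parameters are passed positionally: xactid, xact_seqno, numtrans, time, reset):

-- Run in the affected database: marks all transactions pending replication as already distributed.
-- Only do this when the pending replication is genuinely abandoned, as those changes will never replicate.
EXEC sp_repldone NULL, NULL, 0, 0, 1
GO
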
  • nice post, related to Backup to file.

  • Thanks, I did it in the QA environment as we are already running out of disk space.

  • I have one concern: why do we use the DBCC SHRINKFILE() command twice?

  • I tried, but it did not work. I know a better option is to change the recovery model and then shrink, but I can't do this because of our disaster recovery plan. So, does anyone have another option?

  • Tom Wheatley
    March 7, 2013 5:45 am

    Very nice, worked well for my situation, thank you :-)

  • Hi, can you please help me?

    At this moment I'm using this for purging the log files:

    USE myDB

    DECLARE @LOG_FILE VARCHAR(100)
    SELECT @LOG_FILE = UPPER(LTRIM(RTRIM(NAME))) FROM SYSFILES
    WHERE RIGHT(LTRIM(RTRIM(FILENAME)),3) = N'LDF'

    CHECKPOINT
    BACKUP LOG myDB WITH NO_LOG
    DBCC SHRINKFILE (@LOG_FILE,TRUNCATEONLY)

    Is that correct? I need to know if it is efficient, and whether this command causes any performance damage to SQL Server.

    Do I need to purge tempdb in the same process?

    I run this command every 30min.

    Thanks in advance

    • Marc Jellinek
      June 11, 2013 8:22 am

      Every 30 minutes? You are tripling or quadrupling the I/O on your disk without appreciable benefit. Why are you doing this?

      You’ve broken the backup log chain, so unless you are taking full backups, you have lost the ability to restore to a point in time or do a valid transaction log restore. Any further transaction log backups are useless.

      You're writing to the transaction log file as transactions are submitted to the database. Then you are taking a backup with NO_LOG (removed in SQL Server 2008). Then you do the SHRINKFILE. Then, with the next transaction, you autogrow the transaction log and start the cycle all over again.

      Since transaction log files do not benefit from instant file initialization, as you grow the transaction log file it must first be zeroed out before it can be written to. With each autogrow, you are pausing the database. If your autogrow increment is small, you have massive file-system-level fragmentation; if it is large, you have noticeable pauses.

      Last, with TRUNCATEONLY, only unused space at the end of the transaction log file is released. If you are constantly filling up the transaction log, you are likely not accomplishing anything at all.

      • Marc, thanks for your reply.

        I really don't use the logs; I use full backups. But what I have noticed is that when I truncate or purge the logs, my app's performance increases dramatically, and that is the reason why I truncate every 30 minutes.

        What I need is:
        – the best scenario for truncating the logs, given that I don't need them
        – what I must do with tempdb; do I need to truncate it?
        – the correct SQL command (Transact-SQL) for doing this

        Thanks again
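
If point-in-time restore really is not needed, a gentler alternative to purging every 30 minutes, in line with the advice above, is to switch the database to the SIMPLE recovery model once and pre-size the log with a fixed growth increment. This is only a sketch; myDB_log and the sizes are assumed example values:

-- Only if point-in-time restore is really not required (full backups only)
ALTER DATABASE myDB SET RECOVERY SIMPLE
GO
-- Pre-size the log once and use a fixed growth increment so autogrow pauses stay rare
ALTER DATABASE myDB
MODIFY FILE (NAME = myDB_log, SIZE = 500MB, FILEGROWTH = 128MB)
GO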

  • A tiny layout tip:
    The first line of code should read:
    USE <DatabaseName>
    (that makes the markup more/100% consistent) :cool:

  • Can the log be full for a particular table, even if there is enough space on the drive?

  • Hi, please help me find answers to the following:

    I am using SQL Server 2008 Standard Edition.

    1. Can I take a compressed backup with Standard Edition?

    2. What is the correct time to shrink the transaction log: before or after the full backup?

    3. Is it necessary to back up the transaction log if the full backup is completed successfully?

    4. If I shrink the transaction log once the full backup is completed, will I lose any data?
