SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation

Just a day ago I received the following email from Siddhi, and I found it interesting, so I am sharing it with all of you.

Hello Pinal,

I have seen many blog posts from you on SQL Server, and I have always found them useful and easy to understand. Thanks for all the information you provide.

I have one query about shrinking NDF and MDF files.

Can we shrink NDF and MDF files? If we do so, is there any data loss?

I have been shrinking the .LDF files every now and then, but I am not too sure about NDF and MDF files.

Can you please answer my query?

Waiting for your early response.

Regards,
Siddhi

Answer:
Shrinking MDF and NDF files is possible, and there is no data loss.

However, it is not always advisable to shrink these files, as they usually keep growing. There are cases when a database is split into multiple databases, or a large table is dropped, and the MDF and NDF files are left with a lot of empty space. That is when they should be shrunk. Shrinking a database can be a many-hour process, but the chances of data loss are very slim.
Following is the script to shrink the whole database.
DBCC SHRINKDATABASE (dbName)

Following is the script to shrink a single file.
DBCC SHRINKFILE (logicalFileName)

To find logicalFileName, the following command has to be run.
USE dbName
EXEC sp_helpfile

Let us understand this using database AdventureWorks.
/* Shrink Whole AdventureWorks Database */
DBCC SHRINKDATABASE (AdventureWorks)
GO
/* Get the Logical File Name */
USE AdventureWorks
EXEC sp_helpfile
GO
/* Shrink MDF File of AdventureWorks Database */
DBCC SHRINKFILE (AdventureWorks_Data)
GO
The following image of the same process shows that when the whole process is done, there will be a resultset with information about the new state of the database files.
[Image: resultset showing the new sizes of the database files after the shrink]
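
If a specific target size is needed, DBCC SHRINKFILE also accepts a size in MB. Here is a small illustrative example against the same database; the 1000 MB target is only an assumption for the sake of the example.
/* Shrink MDF File of AdventureWorks Database to approximately 1000 MB */
DBCC SHRINKFILE (AdventureWorks_Data, 1000)
GO
/* Release only the unused space at the end of the file, without moving any pages */
DBCC SHRINKFILE (AdventureWorks_Data, TRUNCATEONLY)
GO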

Reference : Pinal Dave (https://blog.sqlauthority.com)

58 Comments

  • Hi Pinal,
    Thanks for the nice articles :-)
    Shrinking a data file introduces a considerable amount of logical fragmentation, and I don't recommend it.
    Shrinking a log file doesn't have that impact.
    The best way to shrink a data file without fragmentation is to re-create all indexes and move them to a new filegroup. This operation reorganizes the data pages, recovers from fragmentation, and doesn't waste pages. Hence, shrinkage is achieved! (A sketch of this approach appears below.)

    Mehran

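A minimal sketch of the approach Mehran describes, rebuilding a clustered index onto a new filegroup with DROP_EXISTING so the table's data pages move to the new file; the database, filegroup, file, table, and index names, and the size, are all hypothetical:

ALTER DATABASE MyDB ADD FILEGROUP FG_Rebuild;
ALTER DATABASE MyDB ADD FILE
    (NAME = N'MyDB_Rebuild', FILENAME = N'D:\Data\MyDB_Rebuild.ndf', SIZE = 1024MB)
    TO FILEGROUP FG_Rebuild;
GO
-- Re-create the existing clustered index on the new filegroup without dropping it first.
CREATE CLUSTERED INDEX IX_MyLargeTable_Id
    ON dbo.MyLargeTable (Id)
    WITH (DROP_EXISTING = ON)
    ON FG_Rebuild;
GO
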
  • Hi Mehran,

    Nice idea

  • What is the reason for shrinking a system database?

  • SQL Server: how do I attach a database given the full “mdf” file name and the full “Ldf” file name?

  • Hi Pinal,
    I came across a problem with the tempdb database .ndf files.
    I created 8 .ndf files and put them on a drive of 100 GB.
    All the .mdf, .ldf, and .ndf files were supposed to be created there.
    Each was given a file size of 15.5 GB.
    The system could not create all the files due to the space restriction. I worked around it and brought the .mdf and .ldf back to 4 GB as required.
    But I am unable to reduce the .ndf files. I cannot modify, add, or delete those files. I cannot see any logical file information through “sp_helpfile”, but the physical files get created each time I delete the .ndf files or restart the service.
    I believe I might need to increase the drive space to let it create all the files, so that the SQL engine can commit the change in the system tables.
    Please guide me to the best solution.

    ~~
    Regards
    Abhishek

  • Hello Abhishek,

    To reduce the size of .ndf files, first put the database in single-user mode and then use the DBCC SHRINKFILE command with the new file size (see the sketch below).

    However, please note that it is absolutely not recommended to shrink the files, as shrinking creates fragmentation in your files and reduces performance. You also need to set up a proper backup strategy.

    Regards,
    Pinal Dave

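A minimal sketch of what Pinal describes above; the database name, logical file name, and target size are hypothetical, and switching to single-user mode will disconnect other sessions. Note that tempdb itself cannot be put into single-user mode; tempdb is re-created at its configured size every time the service restarts, so its files are usually resized with ALTER DATABASE tempdb MODIFY FILE followed by a restart.

ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
USE MyDB;
GO
DBCC SHRINKFILE (N'MyDB_Data2', 4096);   -- target size in MB
GO
ALTER DATABASE MyDB SET MULTI_USER;
GO
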
  • No, no, no… SHRINKFILE or ALTER DATABASE will work only on the default logical files and not on files added later in single/minimal configuration mode.
    Moreover, in my scenario the logical .ndf files do not exist, so there is no way to shrink these files.
    I believe, as I said earlier, that increasing the drive space is the only viable option as of now.
    Please comment.

    ~~
    Regards
    Abhishek

  • Mr Dave,

    Once again thank you. I read your articles often but with little to say other than “thank you” for the information, working example and expected results. This time I have a question.

    Are there any reasons not to shrink temp files?

    Our system has eight .ndf files on two 107 GB drives, which grew from rebuilding a lot of indexes, over 100,000 (100K). That's 100K indexes per PeopleSoft database – four databases total.

    Thanks in advance.
    Allen

    • Hi Allen,

      I think by temp files you mean .ndf files. Yes, it is recommended that you do not shrink .ndf files, as shrinking causes fragmentation in the table data.
      If you do shrink, then rebuild or reorganize all the indexes afterwards (see the sketch below).

      Regards,
      Pinal Dave

      • Hello Pinal, is shrinking the log file of the tempdb database advisable? I have two tempdb log files, each on a different drive. Now one of my drives is full, and it also holds other databases' data files. I want to shrink the templog file which is on this drive. And if I shrink the templog, will I require free disk space on the drive for the shrink operation?
        Thanks,
        Anil.
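
A minimal sketch of Pinal's shrink-then-rebuild advice above; the database name, file name, table name, and target size are hypothetical:

USE MyDB;
GO
DBCC SHRINKFILE (N'MyDB_Data3', 10240);        -- shrink the secondary data file to ~10 GB
GO
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;   -- remove the fragmentation the shrink introduced
GO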

  • Bill Gibbons
    May 24, 2010 11:48 pm

    Pinal,

    I am trying to SHRINK an .ndf. It is one of many. All of the others will shrink. One keeps telling me “File ID 15 of database ID 8 cannot be shrunk as it is either being shrunk by another process or is empty”.

    I check running procedures and there are none. Can you tell me how to discover the culprit? There is very little disk space on that volume. Could this be the real cause?

    I’m still using SQL 2005.

    Microsoft SQL Server Management Studio 9.00.1399.00
    Microsoft Analysis Services Client Tools 2005.090.1399.00
    Microsoft Data Access Components (MDAC) 2000.085.1117.00 (xpsp_sp2_rtm.040803-2158)
    Microsoft MSXML 2.6 3.0 4.0 5.0 6.0
    Microsoft Internet Explorer 6.0.2900.2180
    Microsoft .NET Framework 2.0.50727.3053
    Operating System 5.1.2600

    Any ideas ?

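One possible way to look for the culprit, a sketch assuming SQL Server 2005 or later (the database name is hypothetical): check whether another session is running a shrink, and how much of the file is actually in use.

-- any session currently running a shrink?
SELECT session_id, command, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE '%SHRINK%';

-- how full is each file of the database?
USE MyDB;
GO
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
GO

If used_mb turns out to be 0, the file is simply empty, which also triggers that message; an empty secondary file can be removed with ALTER DATABASE ... REMOVE FILE rather than shrunk.
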
  • Aashish Vaghela
    June 4, 2010 6:16 am

    Hello Pinal,

    Just a couple of quick questions here.

    (1) What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?

    (2) What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console, by right-clicking the database, going to Tasks and then selecting the Shrink option, in a SQL Server 2000 environment?

    Thanks in Advance,

    Aashish.

  • Aashish Vaghela
    June 4, 2010 6:19 am

    Oh,

    Forgot to ask one more thing.

    MDF is the actual database file, whereas LDF is the log file.

    But what is this NDF file that is discussed above? Never heard of it. What is it used for? Is it used by end-users, DBAs, or the server/system itself?

    Thanks again,

    Aashish.

  • Hi Pinal.

    We have several large databases. Due to their size we have been forced to spread them across several drives.
    The largest one has been spread across 5 different drives in the following pattern:

    drive: actual size (KB)
    J:314 468 352
    D:312 252 288
    N:143 158 208
    O:204 800 000
    P:153 600 000

    The J drive is now full, with only 10.0 MB free.
    Shrink takes forever, and it seems to fail often.

    I can put in extra drives, but is this really the only option?

    Best regards
    Dan

    • Hello there.

      With all due respect, this seems to be a case where proper database sizing was not done from the very beginning. What you can do, and this is just my two cents, is to monitor the growth of your DB over time and start from there. I'm pretty sure that you'll need to add additional disks if moving historical data to a data warehouse is not an option.

      best regards,
      calin

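One possible way to get the rough growth trend calin suggests is to look at the full-backup sizes recorded in msdb; a sketch, assuming regular full backups are being taken:

SELECT database_name,
       CONVERT(CHAR(7), backup_start_date, 120) AS [month],
       AVG(backup_size) / 1024 / 1024 AS avg_backup_mb
FROM msdb.dbo.backupset
WHERE type = 'D'                 -- full database backups only
GROUP BY database_name, CONVERT(CHAR(7), backup_start_date, 120)
ORDER BY database_name, [month];
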
  • Imran Mohammed
    June 5, 2010 8:32 am

    Hello Aashish,

    Before I explain the concept of shrinking a database, let's first understand database files.

    Typically, when we create a new database inside SQL Server, SQL Server creates two physical files on the operating system:

    One with the .MDF extension and the other with the .LDF extension.
    .MDF is called the Primary Data File.
    .LDF is called the Transaction Log File.

    If you add (one or more) data files to a database, the physical file that will be created on the operating system will have the extension .NDF; it is called a Secondary Data File.

    Whereas, when you add (one or more) log files to a database, the physical file that will be created on the operating system will have the same extension, .LDF.

    The question is: why does the new data file have a different extension (.NDF), why is it called a secondary data file, and why is the .MDF file called the primary data file?

    Answer:
    Note: Explanation is based on my limited knowledge of SQL Server, Experts please comment.

    The data file with the .MDF extension is called the Primary Data File; the reason is that it contains the database catalogs.

    Catalogs mean metadata, and metadata is data about data.

    Examples of metadata are the system objects that store information about other objects, but not the data stored by users:

    sysobjects stores information about all objects in that database.
    sysindexes stores information about all indexes and rows of every table in that database.
    syscolumns stores information about all the columns each table has in that database.
    sysusers stores the users that database has.

    So metadata stores information about other objects; it is not the transactional data that users enter, it is system data about the data.

    Because the Primary Data File (.MDF) has this important information about the database, it is treated specially and is given the name Primary Data File. This file is present in the primary filegroup.

    You can always create additional objects (tables, indexes, etc.) in the primary data file (which is present in the primary filegroup) by specifying that you want to create the object on the primary filegroup.

    Any additional data file that you add to the database will have only transactional data but no metadata, which is why it is called a Secondary Data File, with the extension .NDF, so that you can easily identify the primary data file and the secondary data file(s).

    There are many advantages to storing data in different files (under different filegroups). You can put your read-only tables in one file (filegroup) and your read-write tables in another file (filegroup), and back up only the filegroup that has read-write data, so that you avoid backing up read-only data that never changes. Creating additional files on different physical hard disks also improves I/O performance.

    A real-world scenario where we use files could be: say you have created a database MYDB on the D: drive, which has 50 GB of space, with one data file (.MDF) and one log file on that drive. Say all of that 50 GB has been used and there is no free space left, and you want to add more space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database on the new disk, move some of the objects from one file to the other, and make the filegroup under which you added the new file the default filegroup, so that any new object gets created in the new file unless specified otherwise. (A sketch of this appears after this comment.)

    Now that we have a basic idea of what data files are, what type of data they store, and why they are named the way they are, let's move to the next topic: shrinking.

    First of all, I disagree with Microsoft's terminology in naming this feature Shrinking. Shrinking in regular terms means reducing the size of a file by compressing it. BUT in SQL Server, shrinking DOES NOT mean compressing; shrinking in SQL Server means removing empty space from the database files and releasing that empty space either to the operating system or to SQL Server.

    Let's understand this with an example.
    Say you have a database MyDB with a size of 50 GB and 20 GB of free space. What this means is that 30 GB of the database is filled with data and 20 GB of space is free, reserved by SQL Server but not currently in use. If you choose to shrink the database and release the empty space to the operating system, MIND YOU, you can only shrink the database down to 30 GB (in our example); you cannot shrink the database to a size smaller than the space filled with data.

    So, if you have a database that is completely full, with no empty space in the data file and log file (and you don't have extra disk space to let autogrowth kick in), you CANNOT issue the SHRINK database/file command, for two reasons:

    1. There is no empty space to be released, because the shrink command does not compress the database; it only removes empty space from the database files, and here there is no empty space.
    2. Remember, the shrink command is a logged operation. When we perform a shrink operation, this information is logged in the log file, and if there is no empty space in the log file, SQL Server cannot write to the log file, and that is why you cannot shrink the database.

    Shrinking best practices:
    1. Use DBCC SHRINKFILE ('Filename', Estimated_File_Size_After_Shrink (in MB)) instead of the DBCC SHRINKDATABASE command.
    2. Do not shrink a file/database in big intervals; shrink in small intervals and issue the shrink command multiple times. (A sketch of this appears after this comment.)

    Say you have a database file in database MYDB whose reserved space is 50 GB with 20 GB free (which means 30 GB is occupied). Don't shrink this file from 50 GB to 30 GB in one shot; shrink in small intervals, less than 5 GB per shot, and repeat the process 4 times. This will be more effective.

    3. Do not shrink your database while backup jobs are running; the backup jobs will fail.
    4. Always rebuild your indexes after you shrink the database, because removing empty space means the data pages get rearranged, creating a lot of (external and internal) fragmentation. SQL Server 2008, however, provides an option that can avoid this (of course, by adding extra overhead on the server).
    Note: Rebuilding a clustered index puts a lock on the table, and the table will not be available until SQL Server finishes rebuilding the clustered index, so do not rebuild your clustered indexes when users are connected to the database or when the database is in use.
    5. Do not stop the shrink process in the middle. If stopped, the database status might change from ONLINE to some other status; in simple words, if you do not have a database backup, you are SCREWED BIG TIME. Don't ever try to stop the shrink command manually; give it time to complete its process.

    Now, answering your questions:
    (1) What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
    According to Books Online (for SQL Server 2000):
    UsedPages: the number of 8-KB pages currently used by the file.
    EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to.

    Important note: before asking any question, make sure you go through Books Online or search on Google once.
    Doing so has many advantages:
    1. If someone else has already had this question before, the chances that it has already been answered are better than 50%.
    2. This reduces your waiting time for the answer.

    (2) What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console, by right-clicking the database, going to Tasks and then selecting the Shrink option, in a SQL Server 2000 environment?
    As far as my knowledge goes, there is no difference; both work the same way. One advantage of running the command from Query Analyzer is that your console won't freeze; you can carry on with your regular activities in Enterprise Manager.

    (3) What is this NDF file that is discussed above? Never heard of it. What is it used for? Is it used by end-users, DBAs, or the server/system itself?
    An NDF file is a secondary data file. You have never heard of it because, when a database is created, SQL Server by default creates it with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is the template used every time you create a new database with the CREATE DATABASE command. Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store data saved by users.

    Hope this information helps.

    Experts please comment, if what I understand is not what Microsoft guys meant.

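Two short sketches to go with Imran's explanation above; every database, filegroup, file, and table name, and every size, is hypothetical. First, adding a new data file in its own filegroup on a new disk and making that filegroup the default:

ALTER DATABASE MYDB ADD FILEGROUP FG2;
ALTER DATABASE MYDB ADD FILE
    (NAME = N'MYDB_Data2', FILENAME = N'E:\Data\MYDB_Data2.ndf', SIZE = 10240MB)
    TO FILEGROUP FG2;
ALTER DATABASE MYDB MODIFY FILEGROUP FG2 DEFAULT;
GO

Second, shrinking a data file in small steps rather than in one shot (sizes in MB):

USE MYDB;
GO
DBCC SHRINKFILE (N'MYDB_Data', 45000);
DBCC SHRINKFILE (N'MYDB_Data', 40000);
DBCC SHRINKFILE (N'MYDB_Data', 35000);
DBCC SHRINKFILE (N'MYDB_Data', 30720);
-- then rebuild the indexes, per best practice 4 above
GO
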
  • Aashish Vaghela
    June 5, 2010 6:37 pm

    Thanks a Trillion Imran for such an exhaustive explanation.
    Now I know for sure what an NDF file is & how it can be used, and also what DBCC SHRINKDATABASE (DBName, PctVal) does.

    Thanks again & God Bless !!

  • Hello Imran/Pinal,

    I want to know the difference between SHRINKFILE and SHRINKDATABASE. Is it about shrinking mdf and ldf files, or more than that?

    Thanks,
    Sanju

    • Hello there.

      It is more than mdf and ldf.

      First of all, let's discuss the shrinking of data files (data means mdf and ndf). You will encounter two different methods that use the same mechanism internally: DBCC SHRINKFILE and DBCC SHRINKDATABASE.
      Let me give you an example that actually happened a while ago to a customer: they were scripting a database, so we ended up with a database that was defined as having a 100 GB data file (only one data file), but with no data in it. Because the customer wanted to save space, we tried to shrink the database by using DBCC SHRINKDATABASE. It had no effect whatsoever, because you cannot go lower than the initial definition of the database (100 GB in our case). By using DBCC SHRINKFILE we could actually modify the data file to a lower value, so the database size was lowered.

      Second of all, for the log files (ldf). A transaction log file has a different structure and behavior compared to a data file. Every now and then, some portion of the transaction log is marked as “not needed anymore”, so it can be overwritten. This is called truncation. It simply tells SQL Server that there is space within transaction log that can be reused. If you need to recover that space and give it back to the OS, you can only do that after the log is truncated (which, in turn, relates to recovery model and whether you have taken a backup or not).

      I do apologize for making it a bit too technical.

      best regards,
      calin

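A small sketch of the difference calin describes (the database name, file name, and sizes are hypothetical): DBCC SHRINKDATABASE will not take a file below its originally defined size, while DBCC SHRINKFILE can.

-- shrink every file in the database, asking for 10 percent free space, but never below each file's original size
DBCC SHRINKDATABASE (MyDB, 10);
GO
-- shrink one specific data file to roughly 1 GB, even below its original size
USE MyDB;
GO
DBCC SHRINKFILE (N'MyDB_Data', 1024);
GO
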
  • .ndf is the secondary data file extension, but some databases will not have this file. For more information, please search for ‘database architecture’.

  • sir,

    I went through your blog, and I used:

    USE DatabaseName
    GO
    DBCC SHRINKFILE(, 100)

    My log file size was 38 GB; after this command it is 34 GB,

    while it should be 100 MB.

    Please advise on the further steps.

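A likely reason a log file refuses to shrink further is that the active part of the log is still waiting on something (a log backup, an open transaction, replication). A sketch for checking why, assuming SQL Server 2005 or later; the database name is hypothetical:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DatabaseName';

-- If log_reuse_wait_desc shows LOG_BACKUP, take a transaction log backup first,
-- then run DBCC SHRINKFILE on the log file again.
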
  • Aashish Vaghela
    July 28, 2010 12:12 am

    Hello Pinal,

    Once again, it looks like I'm running into some sort of trouble keeping the LDF log file in check. I was wondering if I can automate the process with some trigger. This trigger could fire at a specific time whenever the size of the log file exceeds a particular value in MB. Let's say, if the size of the log file (.LDF) reaches 100 MB, it should be auto-truncated.

    Can this be achieved? I think so. However, I'm not much of a T-SQL programmer, so I'd definitely appreciate it if I could get help on such a trigger.

    Thanks.

    Aashish. Vaghela

  • Aashish Vaghela
    July 28, 2010 12:46 am

    Hello Pinal,

    One more thing, in reference to the request above.
    Is there a way we can use OSQL (SQL 2000) or SQLCMD based scripts to accomplish this task of monitoring the log file?

    I'm planning to create a simple DOS batch file to perform such a task. If a regular DOS BAT file isn't going to be sufficient, then we can probably think of creating a VBScript file to do the same. What do you say?

    Regards,
    Aashish. Vaghela

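There is no trigger that fires on log-file growth, but a scheduled SQL Server Agent job, or a batch file running sqlcmd/osql on a schedule, could perform a check like the following sketch. The database name, the logical log file name, and the 100 MB threshold are hypothetical, and the shrink can only reclaim space that has already been freed inside the log. (On SQL Server 2000, sysfiles would be the rough equivalent of sys.database_files.)

USE MyDB;
GO
DECLARE @log_size_mb INT;
SELECT @log_size_mb = size / 128          -- size is stored in 8 KB pages
FROM sys.database_files
WHERE type_desc = 'LOG';

IF @log_size_mb > 100
    DBCC SHRINKFILE (N'MyDB_Log', 100);   -- target size in MB
GO
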
  • Hi guys… as a lot of information exists on the internet, I summarize this for shrinking the log file:
    -- first we back up the transaction log
    GO
    BACKUP LOG [database name] TO
    DISK = 'd:\temp\aa.log' WITH NOFORMAT, NOINIT,
    NAME = N'databasename-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD,
    STATS = 10
    GO

    -- find the name of the log file
    USE [database name]
    SELECT name, physical_name FROM sys.database_files

    -- change the recovery model in order to shrink the database… otherwise it will not work

    ALTER DATABASE [databasename] SET RECOVERY SIMPLE
    WITH NO_WAIT
    GO
    -- shrink the log file
    DBCC SHRINKFILE (N'name of the log file', 50)
    GO

    ALTER DATABASE [databasename] SET RECOVERY FULL

    -- rebuild indexes…

    If this operation is done in the evening when nobody is working, it would be great :)

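One caution worth adding to the script above: switching the recovery model from FULL to SIMPLE and back breaks the log backup chain, so a full backup is usually taken afterwards to restart the chain (the path below is only an example):

BACKUP DATABASE [databasename] TO DISK = 'd:\temp\databasename_full.bak'
GO
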
    • Thanks Brutus !

      One more thing that I added to your script above was

      DBCC UPDATEUSAGE (MyDBName)

      EXEC sp_updatestats

      However, my 2nd question from July 28th, 12:46 am still remains. Any idea on automating this process of monitoring & truncating the DB log (.ldf) file? Is it possible to set some trigger to monitor the log size so that the moment it grows beyond a certain pre-defined limit, it gets truncated & the DB is shrunk?

      See if you can share some insight into this part?

