A couple of days ago, I received the following email. I found it very interesting and felt like sharing it with all of you.
Note: Please read the whole email before providing your suggestions.
“Hi Pinal,
If you can share these details on your blog, it will help many.
We understand the value of the master database and we take a regular backup of it (every day at midnight). Yesterday we noticed that our master database log file had grown very large. This is the very first time we have encountered such an issue. The master database is in the simple recovery model, so we assumed it would never grow big; however, we now have a big log file.
We ran the following command:
USE [master]
GO
DBCC SHRINKFILE (N'mastlog' , 0, TRUNCATEONLY)
GO
We know this command will break the LSN chain, but as per our understanding, it should not matter as we are in the simple recovery model.
After running this, the log file became very small.
Just to be cautious, we took a full backup of the master database right away.
We totally understand that this is not normal practice, so if you are going to tell us the same, we are already aware of it.
However, here is the question for you: what operation in the master database would have caused our log file to grow so large?
Thanks,
[name and company name removed as per request]”
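(As an aside for readers who want to check the size of their own master log: DBCC SQLPERF with the LOGSPACE option reports the log size and the percentage in use for every database on the instance.)
-- Reports log size (MB) and percent used for each database, including master
DBCC SQLPERF (LOGSPACE)
GO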
Here was my response to them:
“Hi [name removed],
It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice.
Regarding your question about what could have caused your master database log to grow so large, let me try to guess what might have happened.
Do you have any user tables in the master database? If yes, this is not recommended and NOT a good practice. If you have user tables in the master database and you run any long operation against them (perhaps lots of inserts, updates, and deletes, or rebuilding their indexes), it can cause this situation.
You have made me curious about your scenario; do write back.
Kind Regards,
Pinal”
Within a few minutes, I received a reply:
“That was it, Pinal. We had one of our maintenance-task log tables created in the master database, and it had many long transactions during the night. We have moved it to a newly created database named ‘maintenance’, and we will keep you updated.”
I was very glad to receive the email. I do not suggest creating any user table in the master database; it should be kept free of user objects. Now here is a question for you – can you think of any other reason for master log file growth?
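If you are not sure whether your master database has picked up any user tables over time, a quick check along these lines will list them (the is_ms_shipped flag filters out the objects that ship with SQL Server):
USE [master]
GO
-- Lists user-created tables in master; ideally this returns no rows
SELECT name, create_date
FROM sys.tables
WHERE is_ms_shipped = 0
GO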
Reference: Pinal Dave (https://blog.sqlauthority.com)
14 Comments
Very useful tip. There are times when things like creating user objects (tables) are done in the master database. It needs to be part of best practices not to create user-defined tables in the master database.
BTW, I have seen cases where UDFs are created in the master database; is that a good practice?
Ramdas
Reindexing?
Good one. Agreed, we should never create user-defined tables in the master db, although we can create a few stored procs and not-so-big tables if they are used to perform some audit operation when SQL Server / the system starts. And to your question: having a large number of SQL Server objects, and creating and dropping large numbers of objects, might increase the master db log file size.
Edit to my last comment: having a large number of login accounts, and dropping, altering, and creating them…
Hi Pinal,
What is the benefit of taking a daily or periodic backup of the master database?
If the database is corrupted, you can restore it from a backup file.
Can you give details about the benefit of taking a backup of the log of any database? Can I get details of transactions from the log file data? If yes, how is that possible? If not, then what is the benefit of taking a backup of it?
The .ldf file logs all the transactions with their corresponding log sequence numbers (LSNs) and is therefore important for recovery. All the DML in an instance is logged in here.
So I would say it wouldn't be a good idea to delete the log.
If the recovery model of the database is not simple, you can get information from the log file with the help of third-party tools.
Will I get information about which query, procedure, or other statement was executed? If yes, can you give me a link, query, or method to get that?
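One option worth mentioning here: SQL Server has an undocumented function, fn_dblog, that reads the active portion of the transaction log. It shows logged operations and transaction IDs rather than the original query text, and since it is undocumented it is best treated as an exploration tool, not something for production. A sketch (the database name is a placeholder):
USE [YourDatabase]
GO
-- Undocumented function; (NULL, NULL) reads the whole active log
SELECT [Current LSN], Operation, [Transaction ID], [Transaction Name]
FROM fn_dblog (NULL, NULL)
GO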
Hi Pinal
I have been using BULK INSERT commands to insert data from text files into my DB. As a result (as I feel), the log file of my DB grows enormously (into GBs) and I run out of space on my system. To continue with normal operation I use the command
BACKUP LOG <db name> WITH NO_LOG
and then use Database -> Tasks -> Shrink -> Files to shrink the log file, which reduces the log file size to 1024 KB in most cases.
As I have come to know through your blogs, this is not a good practice. I would like to know what other options we have to:
1. Prevent the log file from growing to such huge sizes in the first place;
2. Reduce the size in case it has grown.
Hi Yogesh,
A simple solution can be changing the recovery model of your DB from full to bulk-logged. Before doing this, it is highly recommended that you take a backup (log, and if possible full as well). After performing the bulk operation, do take a log and a full backup. For a better understanding of the recovery models, refer to the following link:
msdn.microsoft.com/en-us/library/ms189275.aspx
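For example, the switch could look like this (the database name and backup path are placeholders, and minimal logging of the bulk load itself is subject to the usual conditions, such as using TABLOCK):
-- Take a log backup first, then switch to bulk-logged
ALTER DATABASE [YourDB] SET RECOVERY BULK_LOGGED
GO
-- ... run the BULK INSERT here; under the right conditions it is minimally logged ...
ALTER DATABASE [YourDB] SET RECOVERY FULL
GO
-- Back up the log (and ideally the database) after the bulk operation
BACKUP LOG [YourDB] TO DISK = N'D:\Backups\YourDB_log.trn'
GO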
Hi Dave,
I have one doubt here: if the master DB is in the simple recovery model, then how can the log grow even if we create a user table in the master DB?
Please, I need your help. We have two very big DBs in one instance and other small DBs. We have a job running which takes a full backup every midnight, and every other day the job fails because of a disk space issue. We also have transaction log backups. Now we are thinking of implementing differential backups. What do you think: can it help us?
Thank you.
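A differential backup contains only the extents changed since the last full backup, so it is usually much smaller than a full backup; a schedule such as a weekly full plus daily differentials (with log backups in between) can ease the disk space pressure. The syntax is along these lines (the database name and path are placeholders):
-- Differential backup: only changes since the last full backup
BACKUP DATABASE [BigDB]
TO DISK = N'D:\Backups\BigDB_diff.bak'
WITH DIFFERENTIAL
GO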