It has been quite a while since SQL Server 2017 was introduced, yet only recently have I started seeing real interest in it. Lots of users are only now switching to SQL Server 2017, and I believe that is normal. In the year 2017, during my Comprehensive Database Performance Health Check engagements, I encountered mostly SQL Server 2014 and SQL Server 2016. Recently, during a consultancy engagement, I received a very interesting question about log space.
The DBAs at my customer's place wanted to know more about their log file. Though they had kept the database in the Full Recovery model, the issue they were facing was that they were not sure when exactly to take a backup of the log file. They often observed that, over the same interval, their log file grew either very large or hardly at all, depending on the business transactions. What they actually wanted was to take a log backup as soon as a certain amount of log had been generated.
As they were using SQL Server 2017, I proposed the following script, which does exactly what they were looking for. When you run the following script in SQL Server 2017, it gives you various important pieces of information about your log file.
USE YourDB;
GO
SELECT total_log_size_in_bytes / 1048576.0 AS [Total Log Size in MB],
       used_log_space_in_bytes / 1048576.0 AS [Used Log Space in MB],
       used_log_space_in_percent AS [Used Log Space in %],
       (total_log_size_in_bytes - used_log_space_in_bytes) / 1048576.0 AS [Free Log Space in MB],
       log_space_in_bytes_since_last_backup / 1048576.0 AS [Log Generated Since Last Log Backup in MB]
FROM sys.dm_db_log_space_usage;
When you run the above script, it gives you the size of your log file, along with how much of the log has been filled up and how much is empty.
The best part is that this script also tells you how much log has been generated since your last log backup. This is very important information. If you want, you can read this column in your scheduled job and skip the log backup if the amount of log generated is below a certain threshold. For example, my customer did not want to take a log backup if less than 1 MB of log had been generated since the last backup. This way they avoid producing a large number of tiny log backup files, so in case of a disaster they do not have to restore many (nearly) empty log backups and delay their restore process.
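Here is a minimal sketch of such a conditional backup step. The database name YourDB, the backup folder C:\Backup\, and the 1 MB threshold are only placeholders, so adjust them for your environment.

-- Run in the context of the database you want to back up;
-- sys.dm_db_log_space_usage reports on the current database.
USE YourDB;
GO
DECLARE @LogSinceLastBackupMB DECIMAL(18, 2);

SELECT @LogSinceLastBackupMB = log_space_in_bytes_since_last_backup / 1048576.0
FROM sys.dm_db_log_space_usage;

IF @LogSinceLastBackupMB >= 1.0 -- skip the backup when less than 1 MB of log has been generated
BEGIN
    DECLARE @BackupFile NVARCHAR(260) =
        N'C:\Backup\YourDB_Log_' + FORMAT(GETDATE(), 'yyyyMMdd_HHmmss') + N'.trn';

    BACKUP LOG YourDB TO DISK = @BackupFile;
END;

You can put this logic inside a SQL Server Agent job step that runs every few minutes, so the backup only happens when there is enough log to make it worthwhile.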
You can also use the above query to monitor your log space growth as well as the log generated since the last log backup. Please note that this script will work for any (full or simple) recovery model in SQL Server 2017. However, it will not work for versions earlier than SQL Server 2017.
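If you want to watch the growth over time, one simple approach is to capture the output of this DMV on a schedule, for example from a SQL Server Agent job. The sketch below is only an illustration; the table name dbo.LogSpaceHistory is something I made up for this example.

-- Create a table once to hold the history (dbo.LogSpaceHistory is an example name).
CREATE TABLE dbo.LogSpaceHistory
(
    CapturedAt           DATETIME2(0)   NOT NULL DEFAULT SYSDATETIME(),
    TotalLogSizeMB       DECIMAL(18, 2) NOT NULL,
    UsedLogSpaceMB       DECIMAL(18, 2) NOT NULL,
    UsedLogSpacePercent  DECIMAL(5, 2)  NOT NULL,
    LogSinceLastBackupMB DECIMAL(18, 2) NOT NULL
);
GO

-- Run this on a schedule in the context of the database you are monitoring.
INSERT INTO dbo.LogSpaceHistory
    (TotalLogSizeMB, UsedLogSpaceMB, UsedLogSpacePercent, LogSinceLastBackupMB)
SELECT total_log_size_in_bytes / 1048576.0,
       used_log_space_in_bytes / 1048576.0,
       used_log_space_in_percent,
       log_space_in_bytes_since_last_backup / 1048576.0
FROM sys.dm_db_log_space_usage;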
Reference: Pinal Dave (https://blog.sqlauthority.com)
4 Comments
Hi Pinal,
Wonderful feature, really helpful.
By the way, the DMV works in SQL 2016 too; it did not work for 2012.
Thanks
Braj
Hi, how are they able to back up the log in the Simple recovery model? As far as I know, that is not possible.
Typo. It should be full.
You know it's always hard to deal with that large transaction that's making a huge update (like REBUILD INDEX or SELECT … INTO, etc.) and needs loads of transaction log (.LDF) space.
Is log_space_in_bytes_since_last_backup really showing the size of COMMITTED statements only, or also the UNCOMMITTED ones? Because if it is really showing COMMITTED statements, they could be backed up and the .ldf might be truncated before the next ten big tables push mass data changes into the transaction log.
In the past, triggering a transaction log backup via an alert was useless (while the transaction was ongoing).
This might be good news, after working around the issue for (16) years ;-) Guess I’m not alone?