SQL SERVER – Database source control with ApexSQL software

This article covers the main features of the ApexSQL tools involved in SQL source control. For the purposes of this article, we will use ApexSQL Source Control, ApexSQL Diff, and ApexSQL Build.

The latest version of any of the mentioned tools can be downloaded from the ApexSQL download section, so you can follow along while reading the article.

ApexSQL Source Control

ApexSQL Source Control is a SQL Server Management Studio add-in for version controlling SQL database objects. It natively supports all major source control systems, such as Git, Mercurial, Perforce, Subversion, and Team Foundation Server (including Visual Studio Online), requiring no additional client tools to be installed.

Source control development “models”

ApexSQL Source Control supports two database development models: Shared and Dedicated.

In the dedicated model, each developer works on a local instance of the database. This allows developers to work independently and test changes without affecting the rest of the team. However, it requires a separate SQL Server license for each developer. Developers also have to be aware of the conflicts that may occur when more than one developer works on the same object and wants to commit it to the repository. We will explain conflicts later in this article.

In the shared model, all developers work on the same database; all changes are applied directly against that database and then committed to the repository. Conflicts cannot occur in the shared model. However, a developer can override another developer's changes, so it is essential that the entire team is aware of all changes and of all objects currently in use by someone else.

Initial database linking

To link a database, right-click it in the Object Explorer pane, and from the ApexSQL Source Control menu click the Link database to source control option.

Choose the source control system and the database development model.

If the shared model is selected, an additional option allows you to choose a host database for the framework objects that ApexSQL Source Control uses.

Select the objects to be included in version control. By default, all supported objects are included (Learn more).

Specify credentials for the repository, along with the repository path and, if needed, a particular project inside the repository (Learn more).

After the linking is done, all objects included in the linking process appear in the Action center tab. This is the main window that ApexSQL Source Control uses for communication with the repository.

It shows the comparison results for each object, comparing the object's current state in the database with the latest version in the repository.

After the initial commit is performed, the Action center tab shows that the local database and the repository are in sync. Each change made after the initial commit appears upon refreshing the Action center tab, no matter which user made the change.

Check out an object

To check out an object, right-click it in the Object Explorer and choose the Check out option from the ApexSQL Source Control menu.

Checking out serves informational purposes only, meaning that it can be overridden by another developer.

Lock an object

To prevent other developers from working on an object, use the Checkout and lock option, located in the ApexSQL Source Control menu.

By locking an object, the rest of the team is prevented from making any changes to it until the object is unlocked or until the change made against it is committed to the repository.

Each object's status is indicated by an appropriate icon in the Object Explorer pane, so the entire team is aware of which objects are checked out, locked, or edited.

Besides following object statuses in the Object Explorer pane, the same information is available through the Object status form, where all objects are listed along with their status and the user who performed the action.

Resolving conflicts in dedicated development model

Conflicts occur in the dedicated database development model when developer A modifies an object and commits the change to the repository while developer B is working on a version of the same object from before the change. In this situation, when developer B tries to commit the change, a conflict occurs, since the version that developer B used was not the latest one from the repository. ApexSQL Source Control does not allow any action (push to the repository or pull to a database) until the conflict is resolved. For resolution, all changes from the database can be committed to the repository (keep local), the latest changes from the repository can be applied to the database (take remote), or each conflict can be resolved manually using either the built-in line-by-line merging tool or a third-party merging tool (Learn more).

Using Labels/Tags

A label or tag represents the current state of the repository (a snapshot of the repository, or a checkpoint). It is often used when a stage of database development is finished, a milestone is achieved, or testing is complete: a label is created so the database can easily be reverted to the previous state if needed.

To create a label using ApexSQL Source Control, right-click the database in the Object Explorer and, from the ApexSQL Source Control menu, click the Create label option. For reviewing and applying specific labels, use the Show labels option from the same menu.

Applying a label with ApexSQL Source Control initiates a dependency-aware mechanism that takes the created label and creates a script to be applied against the database. The entire process is wrapped in a transaction, which means that either everything is applied or nothing is. Before applying a label, a developer can explore the script, modify it, or save it for later use.


Through the history form, a developer can explore each commit (changeset) and compare any version of an object committed to the repository with the current version in the database, or with any other version. Moreover, a selected version can be retrieved from the repository and applied to the database with a single click (Learn more).

Compare/sync the repository with database

Using ApexSQL Diff, a SQL Server database schema comparison and synchronization tool, a database can be compared with the repository.

ApexSQL Diff compares the specified sources: the latest version from the repository against the database, or the database against a specific label.

When source control is set as the destination in the Destination section, the latest version in the repository is used for the comparison by default, unless a label is specified.

Build a database from the repository

ApexSQL Build can be used to take the latest changes from the repository (or a specific label) and create a deployment script, and/or immediately create or update the target database.

To build a database directly from the source control repository, select the Build new database option.

Specify the mechanism of deployment.

Specify the repository that holds the database to be deployed.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ERROR: Autogrow of file ‘MyDB_log’ in database ‘MyDB’ was cancelled by user or timed out after 30121 milliseconds

One of the most difficult and daunting tasks for any DBA is doing a root cause analysis (RCA) of a performance issue with SQL Server. That is why monitoring of SQL Server is very important, and companies spend a lot on tools for it. Sometimes, when there is no monitoring, we have to look at all the logs available. I have been a big believer in checking the SQL Server ERRORLOG for any unexpected behavior; sometimes we get a good piece of information from there. If you don't know the location of the ERRORLOG file, you should read the blog below.

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Below is one of the error messages that I have seen in one of my clients' ERRORLOG files.

Autogrow of file ‘<File Name>’ in database ‘<Database Name>’ was cancelled by user or timed out after %d milliseconds.  Use ALTER DATABASE to set a smaller FILEGROWTH value for this file or to explicitly set a new file size.

The error indicates that the transaction/session could not wait until the file growth completed, so the session was cancelled, and with it the file growth; the growth simply could not finish in time. In the message above, we need to note a few things: a) which file, b) which database, and c) how long before the timeout. When a database file (MDF or LDF) is full and is set to grow automatically, the growth kicks in and the transaction is halted until the growth is complete. The time taken to grow depends on two factors: 1) the size of the growth and 2) disk speed.

Let's assume that the transaction log is 50 GB and the file growth is 10%: it will grow by 5 GB, and the time needed for this increase can exceed the time the session is able to wait. Most applications have a timeout of 30 seconds, which is normally more than enough for auto-grow to complete; if the disk responds fast enough, the issue may not appear at all.

A seasoned DBA performs transaction log backups regularly and avoids such a large transaction log. A large transaction log can also cause a long database startup time after a restart if a problem occurred previously. To avoid the issue in our case, we can set the file growth to a smaller value, such as 1 GB or 500 MB. Microsoft has always recommended manual growth, so that the size stays in a predictable range; autogrow is like a band-aid. As per SQL industry best practices, we should avoid relying on file auto-growth and instead initialize the file to a reasonable size according to the application requirement. Then we can set the file growth to 1 GB or a larger value, according to the disk response time. This applies to both data files (MDF or NDF) and transaction log files (LDF).
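For example, here is a minimal sketch using the database and log file names from this blog's title (the sizes are placeholders; adjust the names and values to your environment and disk response time):

-- Set a fixed auto-growth increment instead of a percentage.
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = N'MyDB_log', FILEGROWTH = 1GB);

-- Or pre-size the file manually to a predictable value.
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = N'MyDB_log', SIZE = 50GB);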

Since SQL Server 2005, SQL Server has had a feature called “instant file initialization” which can accelerate file growth. It's important to keep in mind that only data files can be initialized instantaneously by this feature. Instant file initialization is activated when the SQL Server (MSSQLSERVER) service account has been granted SE_MANAGE_VOLUME (Perform Volume Maintenance Tasks). https://technet.microsoft.com/en-us/library/ms175935.aspx

To summarize:

  • Grow the file in smaller increments. Prefer manual growth or a fixed-size auto-grow.
  • Consider granting the “Perform Volume Maintenance Tasks” permission to the SQL Server service account. This allows “instant file initialization” for faster growth of data files.
  • Perform transaction log backups regularly to avoid a huge transaction log in the full recovery model.
  • Make sure disk performance is good.

Have you ever seen such errors on your production environment?

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #036 – What is the difference between @@DATEFIRST and SET DATEFIRST?

Question: What is the difference between @@DATEFIRST and SET DATEFIRST with respect to SQL Server?

Answer: With US English as the default language, SQL Server sets DATEFIRST to 7 (Sunday) by default. We can set any day as the first day of the week using SET DATEFIRST:

SET DATEFIRST 5

This will set Friday as the first day of the week.
@@DATEFIRST returns the current value, for the session, of SET DATEFIRST.

SET LANGUAGE british
SELECT @@DATEFIRST
----This will return result as 1 (Monday)
SET LANGUAGE us_english
SELECT @@DATEFIRST
----This will return result as 7 (Sunday)
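To see the practical effect, compare the weekday number DATEPART returns under each setting; a small sketch (October 5, 2015 was a Monday):

SET DATEFIRST 7
SELECT DATEPART(weekday, '2015-10-05')
----Returns 2, because Sunday is counted as day 1

SET DATEFIRST 1
SELECT DATEPART(weekday, '2015-10-05')
----Returns 1, because Monday is now day 1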

In this way, @@DATEFIRST and SET DATEFIRST are related. When I learned about this feature I was very glad, as our company had started to serve global clients, and a simple feature like this helps a lot to avoid confusion.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Who Dropped Table or Database?

I used to run a series on heroes, and Sherlock Holmes is one of my favorites. There is always a need to get to the bottom of a problem statement, and whether you are a developer or a DBA, at least once you might have been asked one of the following “who” questions after a disaster.

  • Who dropped the table in the database? From which application? When?
  • Who dropped the database? What was the date and time?
  • Who created the database on the production server?
  • Who altered the database?
  • Who dropped the schema?
  • Who altered the schema?

And there are many other similar questions. Companies with bigger budgets generally have auditing enabled and tools which capture all DDL, via either Auditing or third-party tools. But many DBAs and developers don't know that there is a default trace which runs in the background in every SQL Server installation.

Here are a few uses of the default trace, via SSMS reports:

SQL SERVER – SSMS: Configuration Changes History

SQL SERVER – SSMS: Schema Change History Report

Let’s look at the events captured by the default trace.

SELECT DISTINCT Trace.EventID, TraceEvents.NAME AS Event_Desc
FROM ::fn_trace_geteventinfo(1) Trace
INNER JOIN sys.trace_events TraceEvents
ON Trace.EventID = TraceEvents.trace_event_id

Here is what we would get in SQL Server 2014

Event_ID Event_Desc
18 Audit Server Starts And Stops
20 Audit Login Failed
22 ErrorLog
46 Object:Created
47 Object:Deleted
55 Hash Warning
69 Sort Warnings
79 Missing Column Statistics
80 Missing Join Predicate
81 Server Memory Change
92 Data File Auto Grow
93 Log File Auto Grow
94 Data File Auto Shrink
95 Log File Auto Shrink
102 Audit Database Scope GDR Event
103 Audit Schema Object GDR Event
104 Audit Addlogin Event
105 Audit Login GDR Event
106 Audit Login Change Property Event
108 Audit Add Login to Server Role Event
109 Audit Add DB User Event
110 Audit Add Member to DB Role Event
111 Audit Add Role Event
115 Audit Backup/Restore Event
116 Audit DBCC Event
117 Audit Change Audit Event
152 Audit Change Database Owner
153 Audit Schema Object Take Ownership Event
155 FT:Crawl Started
156 FT:Crawl Stopped
164 Object:Altered
167 Database Mirroring State Change
175 Audit Server Alter Trace Event
218 Plan Guide Unsuccessful

As we can see, there are various interesting events. Object:Altered, Object:Created, and Object:Deleted can help us identify who dropped, altered, or created any object. Once we get the event class ID for the event of interest, we can use the query below, replacing the event class ID as needed.

-- Read all available traces.
DECLARE @current VARCHAR(500);
DECLARE @start VARCHAR(500);
DECLARE @indx INT;
SELECT @current = path
FROM sys.traces
WHERE is_default = 1;
-- Strip the rollover file name so we read from the first trace file.
SET @current = REVERSE(@current)
SELECT @indx = PATINDEX('%\%', @current)
SET @current = REVERSE(@current)
SET @start = LEFT(@current, LEN(@current) - @indx) + '\log.trc';
-- Read the default trace, filtering for auto-growth events.
SELECT *
FROM ::fn_trace_gettable(@start, DEFAULT)
WHERE EventClass IN (92,93) -- growth event

In the above, I have used EventClass 92 and 93 to track database auto-growth events. Here is the query to find who dropped, created, or altered an object in the database, or the database itself.

-- Read all available traces.
DECLARE @current VARCHAR(500);
DECLARE @start VARCHAR(500);
DECLARE @indx INT;
SELECT @current = path
FROM sys.traces
WHERE is_default = 1;
SET @current = REVERSE(@current)
SELECT @indx = PATINDEX('%\%', @current)
SET @current = REVERSE(@current)
SET @start = LEFT(@current, LEN(@current) - @indx) + '\log.trc';
-- Find who created, dropped, or altered objects (DatabaseID 2 = tempdb is excluded).
SELECT CASE EventClass
WHEN 46 THEN 'Object:Created'
WHEN 47 THEN 'Object:Deleted'
WHEN 164 THEN 'Object:Altered'
END AS EventName, DatabaseName, ObjectName, HostName, ApplicationName, LoginName, StartTime
FROM ::fn_trace_gettable(@start, DEFAULT)
WHERE EventClass IN (46,47,164) AND EventSubclass = 0 AND DatabaseID <> 2

Have you ever heard of someone being fired based on such auditing?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to View Objects in mssqlsystemresource System Database?

A series of interview questions is always fun. Sometimes I get to hear strange questions that are worth a detailed post. This blog continues one such question that landed in my Inbox, which I personally felt needed a little explanation. During interviews there is usually at least one question about system databases, and someone might ask about the hidden system database – mssqlsystemresource. There are a few facts which are known to most of us:

  1. It is a hidden system database. The ID of this database is 32767.
  2. It stores the schema of system objects.
  3. It helps make patching faster, because there is no need to ALTER system objects. Since the schema is stored in this database, patching can simply replace the mdf and ldf files.

There are a few myths about this database; none of the below is true.

  1. It can be used to roll back a service pack: just replace the old files and you are done. This is INCORRECT! A service pack is not just a replacement of this database.
  2. Backup and restore are needed for this database. This is also INCORRECT! This database is just like any other binary, like an exe or DLL, which SQL Server needs to run. In case the files are damaged, you need the same version of the files from some other SQL instance.

Here are a few blogs which I have written earlier:

SQL SERVER – mssqlsystemresource – Resource Database

SQL SERVER – Location of Resource Database in SQL Server Editions

If you want to see the objects in this database, there is a little trick:

  • Stop the SQL Server service.
  • Copy mssqlsystemresource.mdf and mssqlsystemresource.ldf to a new path. The location of these files is listed in my earlier blogs above.
  • Start the SQL Server service.
  • Use the following command to attach the data and log files as a new user database.

USE [master]
GO
CREATE DATABASE [mssqlsystemresource_copy] ON
(FILENAME = N'E:\temp\mssqlsystemresource.mdf'),
(FILENAME = N'E:\temp\mssqlsystemresource.ldf')
FOR ATTACH
GO

  • Here is what we would see in Management Studio. Note that there are NO tables in this database, just the views and procedures in the sys schema. Generally, these are visible under system views in other databases.


We can also have a look at the definitions of the views. Note that this option is not available for system objects.


In a practical scenario there is no need to ever do this, but it is always good for a DBA to know what is happening under the hood in SQL Server. I hope this gives you more opportunities to explore.

Note: Please DO NOT change the system resource database or replace it in production environments. It is not advisable. This blog is to be seen as educational and for exploration purposes only.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Ryan Adams Running for PASS Board of Directors

I recently learned that Ryan Adams is running for the PASS Board of Directors. I care about PASS, which is the Professional Association for SQL Server, and I care about who runs PASS, as it directly affects my daily routine. Every year, I spend a good amount of time making sure that I vote for the right people on the PASS Board of Directors, because they are the ones who will make a major impact on the lives of people like me. Just like every year, this year we have an amazing set of candidates, and I am going to vote for them carefully.

Ryan Adams, who is a dear friend, is also one of the candidates for the PASS Board of Directors. Here is a brief note about Ryan. He is one of those people who make a positive impact on others' lives, and I am very confident he will do the same as a PASS Board Director.

Ryan Adams is running for the 2015 PASS Board of Directors. Ryan has been a dedicated volunteer in the organization for over 10 years. What makes him particularly suited for this job is the breadth of his volunteering: he is involved at the local, regional, and national levels.

Ryan has served on the local North Texas SQL Server User Group Board of Directors for over 5 years, holding every position on the board. In his time with NTSSUG he has helped run 5 successful SQLSaturday events and a SQLRally event, grown the membership, and provided a solid financial foundation.

On the regional level, he has been a Regional Mentor for the South Central US Region for the last 4 years. The region has grown from 10 to 15 user groups in that time.

On the national level, he has helped the PASS Performance Virtual Chapter for the last 3 years and has served as its President for the last 2. In that time, attendance has grown from about 30 people to around 200 per month. He also started the chapter's annual Performance Palooza event, 8 back-to-back sessions, which just had its 4th edition; this year saw 174% growth in attendance.

All this involvement gives Ryan a view into the organization from many different levels. Since Directors-at-Large can be given any portfolio, he would be well suited to many of them due to his volunteer experience, which is huge for any candidate to possess. Ryan has visions of how PASS can better utilize its IT resources and grow the community by getting involved with younger generations around STEM (Science, Technology, Engineering, and Mathematics).

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Do You Study Mistakes? – Notes from the Field #097

[Note from Pinal]: This is the 97th episode of the Notes from the Field series. If you know Mike Walsh, you know he is an amazing person to meet and get to know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human.


In this episode of the Notes from the Field series, database expert Mike Walsh gives a challenge to all of us. He explains why we should study our mistakes. He is indeed very right: every mistake gives us an opportunity to correct ourselves.

So, here is another post from me on my friend Pinal's blog, one that is on the periphery of technology and professional development.

This post is admittedly inspired by a scare I had with a chainsaw this past weekend. I blogged about that over on my personal site, but here I wanted to talk about a mindset of studying failures (both your own and those of others) and trying to always learn from mistakes. You can learn any technology you want, you can be the best DBA you could possibly be, but if you don't study and learn from failures, you'll never be as good as you could be.

I can talk about this a bit for two reasons. One is that I'm a human, and as one, I make mistakes from time to time. So I'm qualified to talk about failure, because I've done it so much. The other is that I'm a consultant. At Linchpin People, we do a lot of things with SQL Server for our clients. One of them is rescuing projects gone bad. I've picked up other people's mistakes and tried to piece them together for them, and I do it regularly.

So on to a few thoughts about learning from mistakes.

What Do You Mean, “Study Mistakes”?

In the post on my own blog, I talked about how chainsaw protective pants/chaps saved me from a really bad day. Until recently, I didn't always wear safety gear when using chainsaws. I'm not sure why, but I didn't. What made me change my mind this year? I studied mistakes.

You see, I have to do a lot of work at my house in the woods this year, clearing fields for farm animals (in my spare time, when I'm not helping clients tune million-dollar systems, I'm cleaning up after sheep and chickens, and more animals once I'm done clearing more space). And I've used a chainsaw before here and there, but never with trees the size I've had to cut this year, and never at the frequency I've been cutting. So I started talking to people who've been there and done that. I asked questions and listened to advice. But I also read accident reports (I speak on aviation disasters and learning from them; in the fire service we study line-of-duty deaths a lot to see what mistakes others made. It sounds macabre, but it is a vital learning tool). I read a lot of accident reports, and I saw mistakes and common patterns among chainsaw users who died or suffered serious injuries. In nearly all cases mistakes were made, and a lot of times they were compounded by not having the right safety gear on when those mistakes happened. I learned about being intentional and avoiding mistakes, and I changed some habits. But I also saw that over half of the accidents wouldn't have been reported if the right protective gear had been on. So I bought it. And wear it. And it works.

We can do that as technologists too.

DBAs – What can you do?

Study Mistakes – Read blog posts. Look at the forums. See what gets people “hurt” with SQL Server. Look at the horror stories about how SANs really can fail. Learn what goes wrong. Go to lessons-learned and “you're doing it wrong” style presentations. When you go to these, don't bring a “this couldn't happen to me” attitude. Understand this CAN happen to you, and ask yourself what you can do differently.

Be Creative – When I was doing my various levels of emergency medical technician training, we would often engage in scenario-based discussions. We'd think up “what if” situations. We'd walk through the time of dispatch (how a call would come in), the response to that type of call, the arrival on scene, and various what-ifs along the way. We'd change the presentation of the patient; we'd think about a turn for the worse. All of this training developed muscle memory and got us thinking about the unknown, so when we were thrown into the unknown, it wasn't a totally foreign feeling.

We can do that as technologists also! We can think about what could go wrong, think a few levels deep, and imagine where things can break. And then we can do something about the scenarios that we should deal with. A good way to start is writing checklists for upgrades, migrations, or deployments. As you write the checklist, you can analyze what may happen at various steps and build in time and solutions to deal with those situations.

Be Honest – It's 2015, so I am assuming that you are a human if you are reading this. So you'll make mistakes. It will happen. It may be big, it may be small, and it won't be your last one either. In the moments after you recover, you have options. One is to ignore what happened, brush it under the rug, and hope it doesn't happen again. Another is to blame others. Still another, though, is to investigate what happened. Understand the mechanics. Do a root cause analysis and see what you can do differently next time. Learn from your own mistakes, and encourage your team to have open and honest lessons-learned meetings where everyone talks, everyone listens, and the uncomfortable conversations can be had. If that can't happen? Then you're doomed to repeat history.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Column Alias and Usage in Where Clause

You can assign a new name to a derived column in the SELECT statement, also known as an alias name. But you need to understand how column aliases are assigned and what their scope is.

Let us create the following dataset:

CREATE TABLE #temp (prod_id INT, transaction_Date DATETIME, amount DECIMAL(12,2))
INSERT INTO #temp (prod_id, transaction_Date, amount)
SELECT 1,'2010-10-10',2700.00 UNION ALL
SELECT 2,'2010-12-07',340 UNION ALL
SELECT 3,'2011-09-03',1200.50 UNION ALL
SELECT 2,'2011-08-11',4100 UNION ALL
SELECT 1,'2012-01-01',17650.50

Suppose you want to extract the year of the transaction date, display it in the SELECT statement, and also order by it. You can use:

SELECT YEAR(transaction_date) AS year, amount FROM #temp ORDER BY year

This gives you the following result:

year        amount
----------- --------------
2010        2700.00
2010        340.00
2011        4100.00
2011        1200.50
2012        17650.50

But what happens when you run the following, which uses the alias in the WHERE clause?

SELECT YEAR(transaction_date) AS year, amount FROM #temp
WHERE year = 2010

You get an error:

Msg 207, Level 16, State 1, Line 2
Invalid column name ‘year’.

This is because the column alias is not yet known to the WHERE clause, whereas it is known in the ORDER BY clause, because ORDER BY is processed last, after the entire column list is known.
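If you need to filter on the derived value, two simple workarounds exist; a minimal sketch (the value 2010 is just an example): repeat the expression in the WHERE clause, or wrap the aliased column in a derived table so the outer WHERE clause can see it.

SELECT YEAR(transaction_date) AS year, amount FROM #temp
WHERE YEAR(transaction_date) = 2010

SELECT t.year, t.amount
FROM (SELECT YEAR(transaction_date) AS year, amount FROM #temp) AS t
WHERE t.year = 2010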

For more information about the order of execution of the clauses in a statement, refer to my earlier blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Identify Time Between Backups Calculation

As part of my script archives, I stumbled upon a script that I had written a long time back and thought it must get to the blog. The script was inspired by a simple question asked by a DBA when I had visited them for a session. The question was how to identify the time taken between backups. I immediately asked why they wanted it.

The DBA told me a story: they were taking backups on a tape drive, and one day their latest backup turned out to be corrupt, so they had to resort to an older backup taken before it. They wanted to know the amount of data loss which might potentially happen because of this rotation.

I personally felt this was a simple requirement that needed to be addressed in some way. I made a rudimentary script to attack this requirement, as shown below:

CREATE TABLE #backupset (backup_set_id INT, database_name NVARCHAR(128), backup_finish_date DATETIME, TYPE CHAR(1), next_backup_finish_date DATETIME);
INSERT INTO #backupset (backup_set_id, database_name, backup_finish_date, TYPE)
SELECT backup_set_id, database_name, backup_finish_date, TYPE
FROM msdb.dbo.backupset WITH (NOLOCK)
WHERE backup_finish_date >= DATEADD(dd, -14, GETDATE())
AND database_name NOT IN ('master', 'model', 'msdb', 'ReportServer', 'ReportServerTempDB');
CREATE CLUSTERED INDEX CL_database_name_backup_finish_date ON #backupset (database_name, backup_finish_date);
UPDATE bs
SET next_backup_finish_date = (SELECT TOP 1 backup_finish_date FROM #backupset bsNext WHERE bs.database_name = bsNext.database_name AND bs.backup_finish_date < bsNext.backup_finish_date ORDER BY bsNext.backup_finish_date)
FROM #backupset bs;
SELECT bs1.database_name, MAX(DATEDIFF(mi, bs1.backup_finish_date, bs1.next_backup_finish_date)) AS max_minutes_of_data_loss,
'SELECT bs.database_name, bs.type, bs.backup_start_date, bs.backup_finish_date, DATEDIFF(mi, COALESCE((SELECT TOP 1 bsPrior.backup_finish_date FROM msdb.dbo.backupset bsPrior WHERE bs.database_name = bsPrior.database_name AND bs.backup_finish_date > bsPrior.backup_finish_date ORDER BY bsPrior.backup_finish_date DESC), ''1900/1/1''), bs.backup_finish_date) AS minutes_since_last_backup, DATEDIFF(mi, bs.backup_start_date, bs.backup_finish_date) AS backup_duration_minutes, CASE DATEDIFF(ss, bs.backup_start_date, bs.backup_finish_date) WHEN 0 THEN 0 ELSE CAST(( bs.backup_size / ( DATEDIFF(ss, bs.backup_start_date, bs.backup_finish_date) ) / 1048576 ) AS INT) END AS throughput_mb_sec FROM msdb.dbo.backupset bs WHERE database_name = ''' + database_name + ''' AND bs.backup_start_date > DATEADD(dd, -14, GETDATE()) ORDER BY bs.backup_start_date' AS more_info_query
FROM #backupset bs1
GROUP BY bs1.database_name
ORDER BY bs1.database_name
DROP TABLE #backupset;


For each of your databases, it lists the maximum amount of time between backups over the last two weeks. You can adapt this query to your requirements. In the past I have written a few queries with variations of these concepts; they are worth a note and listed here for reference:

Get Database Backup History for a Single Database

Finding Last Backup Time for All Database – Last Full, Differential and Log Backup – Optimized

If you change the query, please let me know via the comments so that it can help others using the script too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #035 – Different Ways to Identify the Open Transactions

Question: What are different ways to identify open transactions in SQL Server?

Answer: There are many ways to identify open transactions. If there are transactions which were started and not yet committed or rolled back, you can use any of the following to find them.

Use DBCC command

There is a specific DBCC command for this:

DBCC OPENTRAN

If there is an open transaction, this reports the oldest active transaction in the current database, along with its session ID (SPID) and start time.
A DMV query will list out more details about the open transactions.
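Here is a minimal sketch using the standard transaction DMVs (available since SQL Server 2005); the column list is illustrative and can be extended:

-- List open transactions along with the session that owns them.
SELECT sst.session_id,
       tat.transaction_id,
       tat.name AS transaction_name,
       tat.transaction_begin_time,
       es.login_name,
       es.host_name,
       es.program_name
FROM sys.dm_tran_session_transactions sst
INNER JOIN sys.dm_tran_active_transactions tat
    ON sst.transaction_id = tat.transaction_id
INNER JOIN sys.dm_exec_sessions es
    ON sst.session_id = es.session_id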



Reference: Pinal Dave (http://blog.sqlauthority.com)