SQLAuthority News – Ryan Adams Running for PASS Board of Directors

I recently learned that Ryan Adams is running for the PASS Board of Directors. I care about PASS, the Professional Association for SQL Server, and I care about who runs it because it directly affects my daily routine. Every year, I spend a good amount of time making sure that I vote for the right people on the PASS Board of Directors, because they are the ones who will make a major impact on the lives of people like me. Just like every year, this year we have an amazing set of candidates, and I am going to vote for them carefully.

Ryan Adams, who is a dear friend of mine, is also one of the candidates for the PASS Board of Directors. Here is a brief note about Ryan. He is the kind of person who makes a positive impact on people's lives, and I am very confident he will do the same as a PASS Board Director.

Ryan Adams is running for the 2015 PASS Board of Directors. Ryan has been a dedicated volunteer in the organization for over 10 years. What makes him particularly suited for this job is the breadth with which he has volunteered. He is involved at the local, regional, and national levels.

Ryan has served on the local North Texas SQL Server User Group Board of Directors for over 5 years, holding every position on the board. In his time with NTSSUG he has helped run five successful SQLSaturday events and a SQLRally event, grown the membership, and provided a solid financial foundation.

On the regional level, he has been a Regional Mentor for the South Central US Region for the last 4 years. The region has grown from 10 to 15 user groups in that time.

On the national level, he has helped run the PASS Performance Virtual Chapter for the last 3 years and has served as its President for the last 2 years. In that time attendance has grown from about 30 people to around 200 per month. He also started the chapter's annual Performance Palooza event of eight back-to-back sessions, which just had its 4th event. This year saw 174% growth in attendance.

All this involvement gives Ryan a view into the organization from many different levels. Since Directors at Large could be given any portfolio, he would be well suited to many of them due to his volunteer experience, which is a huge asset for any candidate to possess. Ryan has visions of how PASS can better utilize its IT resources and grow the community by getting involved with younger generations around STEM (Science, Technology, Engineering, and Mathematics).

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Do You Study Mistakes? – Notes from the Field #097

[Note from Pinal]: This is the 97th episode of the Notes from the Field series. If you know Mike Walsh, you know he is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something new as a human being.


In this episode of the Notes from the Field series, database expert Mike Walsh gives a challenge to all of us. He explains why we should study our mistakes. He is indeed very right: every mistake gives us an opportunity to correct ourselves.

So another post from me on my friend Pinal’s blog that is on the periphery of technology and professional development.

This post is admittedly inspired by a scare I had with a chainsaw this past weekend. I blogged about that over on my personal site but I wanted to talk about a mindset of studying the failures (both your own and those of others) and trying to always learn from mistakes. You can learn any technology you want, you can be the best DBA you could possibly be – but if you don’t study and learn from failures, you’ll never be as good as you could be.

I can talk about this a bit for two reasons. One is – I'm a human. And as one, I make mistakes from time to time. So I'm qualified to talk about failure, because I've done it so much. The other is – I'm a consultant. At Linchpin People, we do a lot of things with SQL Server for our clients. One of them is rescuing projects gone bad. I've picked up other people's mistakes and tried to piece them together for them, and I do it regularly.

So on to a few thoughts about learning from mistakes.

What Do You Mean, “Study Mistakes”?

In the post on my own blog, I talked about how chainsaw protective pants/chaps saved me from a really bad day. Until recently, I didn't always wear safety gear when using chainsaws. I'm not sure why, but I didn't. What made me change my mind this year? I studied mistakes.

You see – I have a lot of work to do this year at my house in the woods, clearing fields for farm animals. (In my spare time, when I'm not helping clients tune million-dollar systems, I'm cleaning up after sheep and chickens – and more animals once I'm done clearing more space.) And I've used a chainsaw before here and there – but never with trees the size I've had to cut this year, and never at the frequency I've been cutting. So I started talking to people who've been there and done that. I asked questions and listened to advice. But I also read accident reports. (I speak on aviation disasters and learning from them, and in the Fire Service we study Line of Duty Deaths a lot to see what mistakes others made. It sounds macabre, but it is a vital learning tool.) I read a lot of accident reports, and I saw mistakes and common patterns in chainsaw users who died or had serious injuries. In nearly all of the cases mistakes were made, and a lot of times they were compounded by not having the right safety gear on when those mistakes happened. I learned about being intentional and avoiding mistakes and changed some habits – but I also saw that over half of the accidents wouldn't have been nearly as serious if the right protective gear had been worn. So I bought it. And wear it. And it works.

We can do that as technologists too.

DBAs – What can you do?

Study Mistakes – Read blog posts. Look at the forums. Look and see what gets people "hurt" with SQL Server. Look at the horror stories about how SANs really can fail. Learn what goes wrong. Go to lessons-learned and "you're doing it wrong" style presentations. When you attend these, don't adopt a "this couldn't happen to me" attitude. Understand this CAN happen to you and ask yourself what you can do differently.

Be Creative – When I was doing my various levels of emergency medical technician training we would often engage in scenario-based discussions. We'd think up "what if" situations. We'd walk through everything from the time of dispatch (how a call would come in), through responding to that type of call, to arriving on scene, and we'd think of various what-ifs along the way. We'd change the presentation of the patient; we'd think about a turn for the worse. All of this training developed muscle memory, and it got us thinking about the unknown. So when we were thrown into the unknown, it wasn't a totally foreign feeling.

We can do that as technologists also! We can think of what could go wrong, think things through a few levels deep, and imagine where things might break. And then we can do something about the scenarios we should deal with. A good way to start is writing checklists for upgrades, migrations, or deployments. As you write the checklist, you can analyze what may happen at various steps and build in time and solutions to deal with those situations.

Be Honest – It’s 2015, so I am assuming that you are a human if you are reading this. So you’ll make mistakes. It will happen. It may be big, it may be small, and it won’t be your last one either. In the moments after you recover, you have options. One is to ignore what happened, brush it under the rug, and hope it doesn’t happen again. Another is to blame others. Still another, though, is to investigate what happened. Understand the mechanics. Do a root cause analysis and see what you can do differently next time. Learn from your own mistakes, and encourage your team to have open and honest lessons-learned meetings where everyone talks, everyone listens, and the uncomfortable conversations can be had. If that can’t happen? Then you’re doomed to repeat history.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Introduction to Disk Systems

I am generally on a drive to learn something new, and if I can quickly learn from friends who are experts in a field, I jump at the chance to become a student. With Teacher's Day being celebrated last week, I was fortunate to receive wishes from so many people. Feeling blessed, I thought I must become a student again. In this blog, let me share my notes on the different disk system technologies that are important to understand when working with SQL Server database and transaction log files. More information about disk systems and considerations with SQL Server can be located here.


SCSI

SCSI stands for Small Computer System Interface and is simply a protocol for disk storage. The term is most commonly used when discussing what type of disk drive is installed in a computer (IDE vs SCSI). The most important point here is that many SCSI disk drives work with disk controllers that support read and/or write caching. Therefore, it is important to discuss with DBAs whether their caching controller has an alternate power source (such as battery backing), so that if system failures occur, writes are guaranteed to be flushed to disk (and in the order in which they were submitted).


IDE

IDE stands for Integrated Drive Electronics; the interface was later standardized under the name AT Attachment, or ATA. Many IDE drives support large sizes (150 GB+) at inexpensive prices. One common characteristic of IDE drives is that the disk drive itself supports caching. A possible issue with some of these IDE drives that support disk caching is described in KB 234656. Many of these caches are not backed by a battery source and therefore cannot guarantee writes are flushed to disk. Furthermore, the use of the disk drive cache could result in write reordering, in which transaction log writes may not be flushed to disk before page writes. This could potentially result in a violation of the WAL (write-ahead logging) protocol.

Because of these issues, use caution when using IDE drives with SQL Server databases. Many of the drive manufacturers do support disabling the disk cache via Windows Disk Management. This is recommended for IDE drives, but it could slow down expected performance.


SATA

SATA stands for Serial ATA. These are the most common drives used in personal computers today. The SATA data cable has one data pair for differential transmission of data to the device and one pair for differential reception from the device, which means data is transmitted serially.


SAS

SAS stands for Serial Attached SCSI. SAS is a newer-generation serial communication protocol for devices, designed to allow much higher-speed data transfers. SAS also includes features such as advanced queuing up to 256 levels and out-of-order queuing. The SAS backplane is designed in a way that enables the use of both SAS and SATA drives within the same system. The benefits of the SAS design are obvious.


SAN

SAN stands for Storage Area Network. SANs are designed to share large disk storage across multiple servers while not requiring close proximity between the disk storage and the computer server. A good resource for understanding the basics of SAN technologies is the IBM website.

A SAN is literally a network of disk storage (the actual disk media can be in a different geographical location), but it does not use standard network protocols to transmit data. From the perspective of the local computer accessing this disk storage, it is a local disk drive. In other words, the disk storage is not a network-based mapped drive or UNC path.

SAN devices are connected using a technology called Fibre Channel. It is up to the SAN system to ensure read/write consistency just as though the disk were attached locally to the computer.


NAS

NAS stands for Network Attached Storage, which is simply a file server dedicated to storage that can be accessed across the network. Technically, this is no different from using a network share on another computer, because any access to the NAS device goes through the Windows network redirector. The difference is that a NAS device is dedicated disk storage that resides on the network, instead of disk attached to a computer. However, both a network share and a NAS device must be accessed via a network mapped drive or UNC path. The best source of information on SQL Server's support for NAS devices is KB article 304261.

Use of NAS devices or network shares for database files can lead to some unexpected OS errors such as OS Error 64, “The specified network name is no longer available” when SQL Server fails during an I/O operation (Msg 823).


iSCSI

iSCSI stands for Internet Small Computer System Interface. The simplest way to think of iSCSI is as the implementation of the SCSI protocol on top of the TCP/IP network protocol. Some people call it "SAN over IP". The concept is to implement a SAN cheaply, without requiring all of the infrastructure that makes up a SAN system (including Fibre Channel). Effectively, with iSCSI, NAS-style devices can be more reliable, because the SCSI protocol, rather than a file-sharing network protocol, is used to transmit data back and forth between the device and the client computer.

Since iSCSI devices can operate across the internet, there are of course performance concerns. As a result, iSCSI is not a prevalent storage technology in the marketplace, especially for SQL Server databases. Like SAN devices, iSCSI devices appear to the application as local disk drives and therefore do not require trace flag 1807 to create databases on them. See KB article 304261 for more information about trace flag 1807 and network-based database files.
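
To make the trace flag concrete, here is a sketch of how trace flag 1807 is typically enabled before creating a database on a network path. The database name and UNC path below are purely illustrative; review KB 304261 before trying this on a real system.

```sql
-- Enable trace flag 1807, which relaxes SQL Server's check that
-- database files must reside on locally attached disk.
DBCC TRACEON (1807);

-- A database can then be created on a UNC path (illustrative names):
CREATE DATABASE NetworkDB
ON (NAME = NetworkDB_dat, FILENAME = '\\FileServer\Share\NetworkDB.mdf');

-- Turn the flag off again once the database has been created.
DBCC TRACEOFF (1807);
```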

Final words

I think I am exhausted after assimilating all this information, with tons of learning packed into one 30-minute sitting. I wrote this blog to turn my paper notes into an electronic reference for the future.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Column Alias and Usage in Where Clause

You can assign a new name to a derived column in the SELECT statement; this is also known as an alias. But you need to understand how column aliases are assigned and what their scope is.

Let us create the following dataset

CREATE TABLE #temp (prod_id INT, transaction_date DATETIME, amount DECIMAL(12,2))
INSERT INTO #temp (prod_id, transaction_date, amount)
SELECT 1,'2010-10-10',2700.00 UNION ALL
SELECT 2,'2010-12-07',340 UNION ALL
SELECT 3,'2011-09-03',1200.50 UNION ALL
SELECT 2,'2011-08-11',4100 UNION ALL
SELECT 1,'2012-01-01',17650.50

Suppose you want to find the year of the transaction date, display it in the SELECT statement, and also order by it. You can use:

SELECT YEAR(transaction_date) AS YEAR, amount FROM #temp ORDER BY YEAR

Which gives you the following result

year        amount
----------- --------------
2010        2700.00
2010        340.00
2011        4100.00
2011        1200.50
2012        17650.50

But what happens when you try to use the alias in the WHERE clause, as follows?

SELECT YEAR(transaction_date) AS YEAR, amount FROM #temp WHERE YEAR = 2010

You get an error

Msg 207, Level 16, State 1, Line 2
Invalid column name ‘year’.

This is because column aliases are not yet known to the WHERE clause, whereas they are known in the ORDER BY clause, because ORDER BY is logically evaluated last, after the entire column list is known.
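
If you do need to filter on the computed value, there are common workarounds; here is a sketch using the #temp table from above, either repeating the expression or wrapping it in a derived table so the alias becomes visible to the outer WHERE clause:

```sql
-- Repeat the expression directly in the WHERE clause...
SELECT YEAR(transaction_date) AS [year], amount
FROM #temp
WHERE YEAR(transaction_date) = 2010;

-- ...or wrap the aliased column in a derived table,
-- so the outer WHERE clause can see the alias.
SELECT [year], amount
FROM (SELECT YEAR(transaction_date) AS [year], amount FROM #temp) AS t
WHERE [year] = 2010;
```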

For more information about the order of execution in the statement, refer to my earlier blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Identify Time Between Backups Calculation

As part of my script archives, I stumbled upon a script that I had written a long time back and thought it must get to the blog. This script was inspired by a simple question asked by a DBA when I had visited their team for a session. The question was how to identify the time taken between backups. I immediately asked why they wanted it.

The DBA told me a story: they were taking backups to a tape drive. One day, their latest backup turned out to be corrupt, so they resorted to an older backup taken before it. They wanted to know the amount of data loss which might potentially happen because of this rotation.

I personally felt this was a simple requirement that needed to be addressed in some way. I made a rudimentary script to attack this requirement, as shown below:

CREATE TABLE #backupset (backup_set_id INT, database_name NVARCHAR(128), backup_finish_date DATETIME, TYPE CHAR(1), next_backup_finish_date DATETIME);
INSERT INTO #backupset (backup_set_id, database_name, backup_finish_date, TYPE)
SELECT backup_set_id, database_name, backup_finish_date, TYPE
FROM msdb.dbo.backupset WITH (NOLOCK)
WHERE backup_finish_date >= DATEADD(dd, -14, GETDATE())
AND database_name NOT IN ('master', 'model', 'msdb', 'ReportServer', 'ReportServerTempDB');
CREATE CLUSTERED INDEX CL_database_name_backup_finish_date ON #backupset (database_name, backup_finish_date);
UPDATE #backupset
SET next_backup_finish_date = (SELECT TOP 1 backup_finish_date FROM #backupset bsNext WHERE bs.database_name = bsNext.database_name AND bs.backup_finish_date < bsNext.backup_finish_date ORDER BY bsNext.backup_finish_date)
FROM #backupset bs;
SELECT bs1.database_name, MAX(DATEDIFF(mi, bs1.backup_finish_date, bs1.next_backup_finish_date)) AS max_minutes_of_data_loss,
'SELECT bs.database_name, bs.type, bs.backup_start_date, bs.backup_finish_date, DATEDIFF(mi, COALESCE((SELECT TOP 1 bsPrior.backup_finish_date FROM msdb.dbo.backupset bsPrior WHERE bs.database_name = bsPrior.database_name AND bs.backup_finish_date > bsPrior.backup_finish_date ORDER BY bsPrior.backup_finish_date DESC), ''1900/1/1''), bs.backup_finish_date) AS minutes_since_last_backup, DATEDIFF(mi, bs.backup_start_date, bs.backup_finish_date) AS backup_duration_minutes, CASE DATEDIFF(ss, bs.backup_start_date, bs.backup_finish_date) WHEN 0 THEN 0 ELSE CAST(( bs.backup_size / ( DATEDIFF(ss, bs.backup_start_date, bs.backup_finish_date) ) / 1048576 ) AS INT) END AS throughput_mb_sec FROM msdb.dbo.backupset bs WHERE database_name = ''' + database_name + ''' AND bs.backup_start_date > DATEADD(dd, -14, GETDATE()) ORDER BY bs.backup_start_date' AS more_info_query
FROM #backupset bs1
GROUP BY bs1.database_name
ORDER BY bs1.database_name
DROP TABLE #backupset;

(Screenshot: query output listing each database with its maximum minutes between backups and a more_info_query column.)

For each of your databases, it lists the maximum amount of time between backups over the last two weeks. You can adapt this query for your requirements. In the past I have written a few queries with variations of these concepts which are worth a note; here they are for reference:

Get Database Backup History for a Single Database

Finding Last Backup Time for All Database – Last Full, Differential and Log Backup – Optimized

If you change the query, please let me know via comments so that it will help others using the script too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #035 – Different Ways to Identify the Open Transactions

Question: What are different ways to identify open transactions in SQL Server?

Answer: There are many ways to identify open transactions. If there are transactions which were started but not yet committed or rolled back, you can use any of these to find them.

Use DBCC command

There is a specific DBCC command for this:

DBCC OPENTRAN;

This will list out more details about the open transactions
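
One option is to query the transaction DMVs (sys.dm_tran_session_transactions and sys.dm_tran_active_transactions, available since SQL Server 2005); here is a sketch:

```sql
-- List sessions that currently have an open transaction,
-- along with the transaction name and start time.
SELECT st.session_id,
       at.transaction_id,
       at.name AS transaction_name,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
    ON st.transaction_id = at.transaction_id;
```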



Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Enable or Disable All the Triggers on a Table and Database?

Here is the quickest way to disable all the triggers for a table. Please note that when it is about a table, you have to specify the name of the table. However, when you have to enable or disable triggers at the database or server level, you just specify ON DATABASE (while in the context of the database in question) or ON ALL SERVER.

Enable Triggers on a Table
ENABLE TRIGGER safety ON TableName;

Enable Triggers on a Database
ENABLE TRIGGER safety ON DATABASE;

Enable Triggers on a Server
ENABLE TRIGGER safety ON ALL SERVER;

Disable Triggers on a Table
DISABLE TRIGGER safety ON TableName;

Disable Triggers on a Database
DISABLE TRIGGER safety ON DATABASE;

Disable Triggers on a Server
DISABLE TRIGGER safety ON ALL SERVER;

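A note worth adding: instead of a trigger name, T-SQL also accepts the ALL keyword, which is handy given that the goal is to hit all the triggers at once (TableName below is a placeholder):

```sql
-- Disable every trigger defined on a specific table...
DISABLE TRIGGER ALL ON TableName;

-- ...and re-enable them all in one statement.
ENABLE TRIGGER ALL ON TableName;

-- The same keyword works at database and server scope:
DISABLE TRIGGER ALL ON DATABASE;
ENABLE TRIGGER ALL ON ALL SERVER;
```
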
Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Configure the Backup Compression Default Server Configuration Option

When I get a chance to talk to Enterprise customers working with a really smart DBA team, I generally look out for options they use on a daily basis that are often missed in the bigger scheme of things. One such feature of SQL Server Enterprise Edition is the ability to do backup compression. It is a really powerful feature, and the fine print is that it is disabled by default. In this blog we will look at how to view or configure the backup compression default server configuration option in SQL Server by using SQL Server Management Studio or Transact-SQL. The backup compression default option determines whether the server instance creates compressed backups by default. When SQL Server is installed, the backup compression default option is off.

Changing the setting using SSMS

To configure this setting, use the following steps.

  1. In Object Explorer, right-click a server and select Properties.
  2. Click the Database settings node.
  3. Under Backup and restore, Compress backup shows the current setting of the backup compression default option. This setting determines the server-level default for compressing backups, as follows:
    • If the Compress backup box is blank, new backups are uncompressed by default.
    • If the Compress backup box is checked, new backups are compressed by default.

(Screenshot: the Database Settings page of Server Properties, showing the Compress backup checkbox.)

If you are a member of the sysadmin or serveradmin fixed server role, you can also change the default setting by clicking the Compress backup box.

TSQL – Steps to configure backup compression default option

The T-SQL command to achieve the same will be as described below:

  1. Connect to the Database Engine.
  2. From the Standard bar, click New Query.
  3. Copy and paste the following example into the query window and click Execute. This example shows how to use sp_configure to configure the server instance to create compressed backups by default.

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE WITH OVERRIDE;

If you want to find out whether backup compression is enabled by default, you can use the following T-SQL command:

SELECT value
FROM sys.configurations
WHERE name = 'backup compression default';

This example queries the sys.configurations catalog view to determine the value for backup compression default. A value of 0 means that backup compression is off, and a value of 1 means that backup compression is enabled.
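
Keep in mind that the server-level default applies only when a BACKUP statement does not say otherwise; here is a sketch (database name and path are placeholders) of overriding it per backup:

```sql
-- Force compression for this backup regardless of the server default...
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks.bak'
WITH COMPRESSION;

-- ...or explicitly skip compression even when the default is on.
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_uncompressed.bak'
WITH NO_COMPRESSION;
```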

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What’s the Difference between ROW_NUMBER, RANK, and DENSE_RANK? – Notes from the Field #096

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains T-SQL window functions. Kathi is an amazing instructor; she was one of the SQL authors I read early in my career. One reason I love SQL Server is that her writing instilled a love for this technology in me. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.

Microsoft introduced the first T-SQL window functions ten years ago with SQL Server 2005!  They introduced more T-SQL window functions with SQL Server 2012. I’ve been on a mission to educate the SQL Server and developer communities about T-SQL window functions since then.

Feedback from my sessions tells me that ROW_NUMBER is the most used T-SQL window function. ROW_NUMBER returns a unique number for every row in the results. (NOTE: If the PARTITION BY option is used, it returns a unique number within each partition.)

Did you know that two other similar functions, RANK and DENSE_RANK, were introduced at the same time? RANK and DENSE_RANK are comparable to ROW_NUMBER, but they handle ties in the ORDER BY expression differently.

To demonstrate the differences, I have chosen a customer from AdventureWorks that placed two orders on the same day. These three functions require an ORDER BY expression in the OVER clause. This tells SQL Server in which order to apply the numbers generated by the window function. In this case, I am using OrderDate, which is not unique.

SELECT CustomerID, SalesOrderID, CAST(OrderDate AS DATE) AS OrderDate,
    ROW_NUMBER() OVER(ORDER BY OrderDate) AS RowNumber,
    RANK() OVER(ORDER BY OrderDate) AS [Rank],
    DENSE_RANK() OVER(ORDER BY OrderDate) AS DenseRank
FROM Sales.SalesOrderHeader
WHERE CustomerID = 11300;

Take a look at the partial results, starting where the numbers returned by the functions begin to get interesting.

(Screenshot: partial query results showing the ROW_NUMBER, RANK, and DENSE_RANK values for each order.)

The OrderDate values were unique until the 7th row, and all three functions return 7 on row 7. The 8th order has the same OrderDate as row 7 so it’s a tie. ROW_NUMBER doesn’t care; it continues to provide unique incrementing numbers. RANK and DENSE_RANK both return 7 again because both rows 7 and 8 are ranked the same.

Notice that on row 9, RANK “catches up” with ROW_NUMBER by skipping the value 8. RANK catches up with ROW_NUMBER every time once it’s past the tie. DENSE_RANK, on the other hand, does not. Instead of catching up, it returns the next possible value. DENSE_RANK doesn’t skip any numbers.

Take a look at row 9 again. ROW_NUMBER returns the position of the row. RANK returns the rank of the row based on its position. DENSE_RANK returns the logical rank: the rank over unique OrderDate values. The row with OrderDate 2013-11-14 is in the 9th position. It is ranked 9th over the set of rows. It is ranked 8th over the set of unique OrderDate values.

If you use a unique ORDER BY expression, all three functions will return the same values because there will be no ties.


These three functions are usually a step along the way to a more complex solution. Understanding how they work and how they differ will help you decide which one to use when faced with the choice.
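
For readers without AdventureWorks handy, here is a minimal, self-contained sketch that shows the same divergence over a literal list of values with a tie:

```sql
-- A tiny result set with a tie on v = 20 shows how the three functions diverge.
SELECT v,
       ROW_NUMBER() OVER (ORDER BY v) AS row_num,   -- 1, 2, 3, 4
       RANK()       OVER (ORDER BY v) AS rnk,       -- 1, 2, 2, 4 (skips 3)
       DENSE_RANK() OVER (ORDER BY v) AS dense_rnk  -- 1, 2, 2, 3 (no gap)
FROM (VALUES (10), (20), (20), (30)) AS t(v);
```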

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Error: Alter failed for Server ‘Server Name’ in Check Integrity task

Recently I was getting ready for an upcoming demo session and found that my maintenance plan for CHECKDB was failing. When I executed it manually, I got the below error:

Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.

I followed the error message and checked the history of the maintenance plan; here was the error:

Failed: (0) Alter failed for server ‘ComputerName\InstanceName’

My research yielded https://support.microsoft.com/en-us/kb/945067, which talks about an incorrect setting of Allow Updates in sp_configure. In my case it was just fine. I ran Profiler and found the below statement failing:

EXEC sys.sp_configure N'user options', 0 RECONFIGURE

When I executed the above statement from SQL Server Management Studio, I got the below error:

(Screenshot: the error returned in SQL Server Management Studio.)

Configuration option ‘user options’ changed from 0 to 0. Run the RECONFIGURE statement to install.

Msg 5807, Level 16, State 1, Line 1
Recovery intervals above 60 minutes not recommended.
Use the RECONFIGURE WITH OVERRIDE statement to force this configuration.

In my case, it was the recovery interval value, which I had set to 10000, that was the problem. Once I brought it down to 60, I was able to run the maintenance plan without any problem.

sp_configure 'recovery interval (min)', 60
RECONFIGURE

As shown above, the maintenance plan is using the RECONFIGURE command and getting an error. Because of that, it wasn't able to proceed further and was failing with the "Alter failed for server" error. Hopefully it's clear that if you are getting the same error, you should run the RECONFIGURE command from SQL Server Management Studio and check for any errors. Here are the various errors which I saw:

  1. Make sure that “Allow updates” is 0.
  2. Make sure that “recovery interval (min)” is less than 60.
  3. Make sure that the service account has “Lock Pages in Memory” when AWE is enabled.
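
The first two settings can be checked quickly from sys.configurations; here is a sketch:

```sql
-- Inspect the current and in-use values of the two settings above.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('allow updates', 'recovery interval (min)');
```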

Later, when I tried to reproduce the same on SQL Server 2012 and SQL Server 2014, I was not able to. This is good news: the issue is fixed by Microsoft in later versions.

Have you ever found the same error where the solution was something other than the three I mentioned? Please comment and let me know so that it can help others too.

Reference: Pinal Dave (http://blog.sqlauthority.com)