SQL SERVER – Availability Group and the Six Critical Steps for DBAs to Follow – Notes from the Field #104


Mike’s “now what am I supposed to do?” face.

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Mike Lawell explains Availability Groups and the six critical steps for DBAs to follow. A few weeks ago, I asked at a conference how many people knew about availability groups. Only a handful of people raised their hands. I was expecting this, as many have heard about AlwaysOn and Availability Groups, but not everyone has implemented them. Most of the time, there are no clear guidelines for doing so. In this blog post Mike tells an amazing story about the six critical steps every DBA must know about Availability Groups. He makes this simple concept very easy. Read the experience of Mike in his own words.

Your boss has asked you to implement SQL Server AlwaysOn Availability Groups in your environment. Exciting as it may be, you’re an accidental DBA with little to no experience implementing any high availability solution, let alone a fairly new feature like availability groups.

The first thing you say is, fantastic! Then, hmmm, fantastic. You realize this is above your head, but it is a great opportunity to learn.

This happened to one of my clients, where the DBA barely had SQL Server Database Mirroring experience. The product had a required uptime of 99.999% (yeah, right) that they were having problems maintaining. They wanted to improve their availability and business continuity.

The DBA had a lot of anxiety with implementing AGs in this environment without experiencing considerable downtime.

Fortunately, they were able to engage our services to lead them through the process. We migrated from mirroring to AGs, onto a new set of four servers in a multi-subnet environment, with approximately a minute of downtime.

1.      Research

There are many environments where AGs will simply not work (or have a hidden disaster waiting). For instance, you should not use availability groups in SQL Server 2012 or 2014, if you’re doing cross database transactions or distributed transactions. If you have a failover during a distributed transaction, it is possible that your database will enter a suspect mode (btw, this changes in SQL Server 2016).

How many servers and replicas will you need? Which databases do you want in which AG? What might the licensing costs be? What should/can your quorum mode be? Do your applications even support AGs?

2.      Plan

Develop the high level plan to the best of your knowledge. Don’t worry about being wrong, you will miss something. Take the time to think of every application or service, etc. that will connect to the AG. Document all of the jobs, alerts, mail configuration, operators, everything you can think of (find a script to do it).
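As a starting point for that documentation, here is a minimal inventory sketch against the standard msdb catalog (extend it for alerts categories, mail profiles, linked servers, and anything else your environment uses):

```sql
-- A minimal inventory sketch against msdb; run it on the current primary and
-- compare against each replica, since jobs, operators, and alerts are not
-- synchronized by the availability group itself.
SELECT 'job' AS item_type, name, enabled AS is_enabled
FROM msdb.dbo.sysjobs
UNION ALL
SELECT 'operator', name, enabled
FROM msdb.dbo.sysoperators
UNION ALL
SELECT 'alert', name, enabled
FROM msdb.dbo.sysalerts
ORDER BY item_type, name;
```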

3.      Test the AG Build

Now that you have some idea of what is possible and you think you know what needs to be done, build an AG as a proof of concept in an isolated environment to test your theory of how it should work. Make sure you use the same drive letters as production. You will learn that you will miss important configurations, steps, etc. Tear the AG down then rebuild it, until you are familiar with the steps, write the steps you take down (every step). Do this until you’re confident you have all of the steps assembled and you haven’t missed anything. You will miss something, because most database environments are complex, and many DBAs don’t know all of the services connecting to their databases.

4.      Script the Implementation

This part really helps in minimizing downtime. You don’t want to be clicking through SSMS trying to get all of the configurations correct in production, risking additional downtime at failover or worse.

I built the AGs ahead of time in the production environment and added a test database so I could test the failover between machines before go live.

The scripts were PowerShell scripts pulling the configuration from a CSV file. I learned how to script PowerShell from the internet. Make sure your configuration file is tested. I created a spreadsheet with the important data (generated from a script run against production): database name, data file location, and more. Don’t type the configurations if you can avoid it; this is where you get into trouble, especially in complex environments. I have some examples of the PowerShell AG and SQLCMD log shipping scripts I used on my blog at SQLServerAssociates.com

5.      Test the Scripts

This is the most important step as even the best plan must be tested. It must work repeatedly in your development environment without errors. Don’t compromise with this step. If you want to avoid extended downtime because you received an error that you’ll have to research to fix, test your scripts.

6.      Know your Environment

You must be able to talk confidently about AGs and defend your configuration to management. You will need to document your environment for management, operations, and other DBAs (including yourself). Use Visio diagrams for visuals.


Migrating your environment to AGs can be a great learning experience, but don’t be afraid to engage a consultant to help you do it right. The justification is usually easy when talking to management once you equate downtime to lost revenue.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What Resource Wait Are We Seeing?

I have been a big supporter of the Activity Monitor capability inside SQL Server Management Studio, and I have written a number of articles about it on this blog before. Please take a moment to look at:

SQL SERVER – Activity Monitor to Identify Blocking – Find Expensive Queries

One of the readers wrote to me on this blog and asked me something simple and I was surprised I had not touched on it before. His mail was:

Hi Pinal,

I have seen your blog around Activity Monitor and as a budding DBA, I am still learning the tricks of the trade. I use many scripts from this site and they have helped me a number of times in my career.

In Activity monitor, I notice a large number of PAGELATCH waits. If I look at processes, I sometimes see a large number of processes waiting for PAGELATCH_SH and PAGELATCH_UP on resources 2:1:3 and sometimes on 2:1:1. This shows contention is placed on SGAM and PFS pages.

Do you have a script written somewhere to find what type of disk resource is under contention? After a number of web searches I can see this is tempdb contention because of the database id of 2.

I am sure you can help me somehow.


Blog reader

I personally didn’t see much of a problem in his request. And I was not aware of such a script existing on this blog, so I went ahead and wrote one. This is a query based on sys.dm_exec_requests, sys.dm_exec_sessions and sys.dm_exec_query_plan.

SELECT es.session_id, DB_NAME(er.database_id) AS [database_name],
OBJECT_NAME(qp.objectid, qp.dbid) AS [object_name], -- NULL if Ad-Hoc or Prepared statements
er.wait_type, er.wait_resource,
(SELECT CASE
WHEN pageid = 1 OR pageid % 8088 = 0 THEN 'Is_PFS_Page'
WHEN pageid = 2 OR pageid % 511232 = 0 THEN 'Is_GAM_Page'
WHEN pageid = 3 OR (pageid - 1) % 511232 = 0 THEN 'Is_SGAM_Page'
ELSE 'Is Not PFS, GAM or SGAM page' END
FROM (SELECT CASE WHEN er.[wait_type] LIKE 'PAGE%LATCH%' AND er.[wait_resource] LIKE '%:%'
THEN CAST(RIGHT(er.[wait_resource], LEN(er.[wait_resource]) - CHARINDEX(':', er.[wait_resource], LEN(er.[wait_resource])-CHARINDEX(':', REVERSE(er.[wait_resource])))) AS INT)
ELSE NULL END AS pageid) AS latch_pageid
) AS wait_resource_type,
er.wait_time AS wait_time_ms,
(SELECT qt.TEXT AS [text()]
FROM sys.dm_exec_sql_text(er.sql_handle) AS qt
FOR XML PATH(''), TYPE) AS [running_batch],
(SELECT SUBSTRING(qt2.TEXT, (CASE WHEN er.statement_start_offset = 0 THEN 0 ELSE er.statement_start_offset/2 END),
(CASE WHEN er.statement_end_offset = -1 THEN DATALENGTH(qt2.TEXT) ELSE er.statement_end_offset/2 END - (CASE WHEN er.statement_start_offset = 0 THEN 0 ELSE er.statement_start_offset/2 END))) AS [text()]
FROM sys.dm_exec_sql_text(er.sql_handle) AS qt2
FOR XML PATH(''), TYPE) AS [running_statement]
FROM sys.dm_exec_requests er
LEFT OUTER JOIN sys.dm_exec_sessions es ON er.session_id = es.session_id
CROSS APPLY sys.dm_exec_query_plan (er.plan_handle) qp
WHERE er.session_id <> @@SPID AND es.is_user_process = 1
ORDER BY er.total_elapsed_time DESC, er.logical_reads DESC, [database_name], es.session_id

The output will help you in analyzing the typical wait resource type, and if you are having tempdb contention it should show up here. As a matter of fact, if you have other contention on disk, that would show up too.
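To see how the PFS/GAM/SGAM arithmetic in the query classifies a page id, here is a tiny self-contained check (the page ids in the VALUES list are made-up examples):

```sql
-- Page 1 (and every 8088th page) is a PFS page, page 2 (and every 511232nd)
-- is a GAM page, page 3 (and every 511232nd + 1) is an SGAM page.
-- So wait resource 2:1:1 maps to the PFS page and 2:1:3 to the SGAM page of tempdb.
SELECT pageid,
CASE WHEN pageid = 1 OR pageid % 8088 = 0 THEN 'Is_PFS_Page'
WHEN pageid = 2 OR pageid % 511232 = 0 THEN 'Is_GAM_Page'
WHEN pageid = 3 OR (pageid - 1) % 511232 = 0 THEN 'Is_SGAM_Page'
ELSE 'Is Not PFS, GAM or SGAM page' END AS wait_resource_type
FROM (VALUES (1), (3), (100), (8088), (511233)) AS p(pageid);
```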

Do let me know if this script was helpful and I will be more than happy to extend the same. Or if you extended the script, please make sure to share the same via comments.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Create Linked Server to PostgreSQL?

The database world is expanding and I have been fortunate enough to learn and share my experiences around databases like MySQL and PostgreSQL, apart from working with SQL Server. I always try to expand my horizons and try various database products. In case you don’t know, I have a course on Pluralsight about PostgreSQL.


It is always interesting to integrate various products and make them work seamlessly. Due to my knowledge of both database products, I have been asked one question very frequently.

How can I create linked servers in SQL connecting with Postgres?

Though this question looks simple and easy, I thought of writing a note to show you the actual steps to achieve it. This blog shows a simple example of creating a linked server. There is a provider called PGOLEDB which can be used for this purpose.

Create a Database and Table in PostgreSQL

  1. Create the database:
     CREATE DATABASE sqlauthority;
  2. Once the database is created, change the connection, create a table and insert some data:
     CREATE TABLE MyTable
     (
     ID integer NOT NULL,
     Name varchar(128) NOT NULL
     );
     insert into MyTable values (1, 'Pinal Dave');
  3. Verify that we have data in the table:
     Select * from MyTable;


    Create a Linked Server in SQL Server

    1. Go to http://www.pgoledb.com and choose “Download” from menu bar.
    2. Go to the page and choose “PGNP OLEDB Providers for Postgres, Greenplum and Redshift” as shown below.
    3. Install it and then we should see the provider in SSMS.
    4. Modify and run the below script in SQL Server. You need to change the server name, port, etc.
      -- Change provider parameters: AllowInProcess = true / DynamicParameters = true
      EXEC MASTER.dbo.sp_MSset_oledb_prop N'PGNP'  ,N'AllowInProcess',1
      EXEC MASTER.dbo.sp_MSset_oledb_prop N'PGNP'  ,N'DynamicParameters',1
      DECLARE @name NVARCHAR(4000);
      DECLARE @provider NVARCHAR(4000);
      DECLARE @servername NVARCHAR(4000);
      DECLARE @port NVARCHAR(4000);
      DECLARE @db_name NVARCHAR(4000)
      -- destination postgres database
      SET @name = N'SQLAuth_PG';
      SET @provider = N'PGNP';
      SET @servername = N'localhost';
      SET @port = 'PORT=5432;'
      SET @db_name = N'sqlauthority';
      -- create linked server
      EXEC MASTER.dbo.sp_addlinkedserver @server = @name
      ,@srvproduct = N'PGNP'
      ,@provider = N'PGNP'
      ,@datasrc = @servername
      ,@provstr = @port
      ,@catalog = @db_name
      -- username and password for postgres
      EXEC MASTER.dbo.sp_addlinkedsrvlogin @rmtsrvname = @name
      ,@useself = N'False'
      ,@locallogin = NULL
      ,@rmtuser = N'postgres'
      ,@rmtpassword = 'sa'
      -- set up Extended properties of the Linked Server
      EXEC MASTER.dbo.sp_serveroption @server = @name
      ,@optname = 'data access'
      ,@optvalue = 'true'
      EXEC MASTER.dbo.sp_serveroption @server = @name
      ,@optname = 'use remote collation'
      ,@optvalue = 'true'
      EXEC MASTER.dbo.sp_serveroption @server = @name
      ,@optname = 'rpc'
      ,@optvalue = 'true'
      EXEC MASTER.dbo.sp_serveroption @server = @name
      ,@optname = 'rpc out'
      ,@optvalue = 'true'
    5. Once completed, we should be able to see the linked server created as shown below.

    Test the Linked Server

    Here are the test queries which I have used based on earlier script.

    SELECT [id],[name]
    FROM   [SQLAuth_PG].[sqlauthority].[public].[mytable]
    SELECT *
    FROM OPENQUERY(SQLAuth_PG, 'select id, name from mytable');
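    Writes can go through the same linked server as well. A quick sketch (this assumes the SQLAuth_PG linked server and mytable created above, and that the provider supports updatable rowsets):

```sql
-- Insert through the linked server; OPENQUERY targets the remote rowset.
INSERT INTO OPENQUERY(SQLAuth_PG, 'select id, name from mytable')
VALUES (2, 'New Row');

-- Or use the four-part name directly, as in the SELECT above.
INSERT INTO [SQLAuth_PG].[sqlauthority].[public].[mytable] (id, name)
VALUES (3, 'Another Row');
```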


    Have you ever created any other third-party linked server in your environment? What methods have you used? Would you be kind enough to share them via comments?

    Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – New features in SQL Server 2016 Setup Wizard

In this world of change, the SQL Server product is also changing. One thing which is very clear is that Microsoft is listening to feedback from customers and enhancing the setup experience as well. Some classic feedback items can be seen implemented in the setup of SQL Server 2016. In this blog post, I will try to bring out some of the enhancements that are being incorporated in the Setup UI. Some are critical and worth making a note of.

Here are a few things which I noticed in the SQL Server setup user interface.

SQL Setup allows the user to add the Database Engine service SID to the “Perform Volume Maintenance Tasks” privilege under the security policy. This helps with instant file initialization.


Here is the new parameter added to the command line for SQL Setup.


Setup warns that this may lead to information disclosure, as it could allow deleted content to be accessed by an unauthorized principal.

TempDB Configuration

In SQL Server 2016, we have the option to specify the TempDB file specifications during setup itself. Most companies have a “post deployment script”, and one of its steps is to optimize TempDB files. It’s a tab called “TempDB” under “Database Engine Configuration”.


Since the UI has it, there are also parameters in the command line, as below:

Parameter               Meaning
SQLTEMPDBFILECOUNT      The number of Database Engine TempDB data files.
SQLTEMPDBFILESIZE       Initial size of each Database Engine TempDB data file in MB.
SQLTEMPDBFILEGROWTH     Automatic growth increment of each Database Engine TempDB data file in MB.
SQLTEMPDBLOGFILESIZE    Initial size of the Database Engine TempDB log file in MB.
SQLTEMPDBLOGFILEGROWTH  Automatic growth increment of the Database Engine TempDB log file in MB.
SQLTEMPDBDIR            Directories for Database Engine TempDB data files.
SQLTEMPDBLOGDIR         Directory for the Database Engine TempDB log file.
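For example, an unattended install could set the TempDB layout right from the command line. This is only a sketch with illustrative values; the other switches a real install needs (license acceptance, service accounts, and so on) are omitted:

```
setup.exe /Q /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=MSSQLSERVER ^
    /SQLTEMPDBFILECOUNT=4 /SQLTEMPDBFILESIZE=1024 /SQLTEMPDBFILEGROWTH=512 ^
    /SQLTEMPDBLOGFILESIZE=512 /SQLTEMPDBLOGFILEGROWTH=128 ^
    /SQLTEMPDBDIR=T:\TempDB /SQLTEMPDBLOGDIR=T:\TempDBLog
```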

Hopefully, we would see more feedback embedded in the product by default.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

Interview Question of the Week #046 – How @@DATEFIRST and SET DATEFIRST Are Related?

Question: How are @@DATEFIRST and SET DATEFIRST related?

Answer: The master database’s syslanguages table has a DateFirst column that defines the first day of the week for a particular language. With US English as the default language, SQL Server sets DATEFIRST to 7 (Sunday) by default. We can reset any day as the first day of the week using:

SET DATEFIRST 5

This will set Friday as the first day of the week.
@@DATEFIRST returns the current value, for the session, of SET DATEFIRST.

SET LANGUAGE italian -- any language whose week starts on Monday
SELECT @@DATEFIRST
----This will return result as 1(Monday)
SET LANGUAGE us_english
SELECT @@DATEFIRST
----This will return result as 7(Sunday)

In this way @@DATEFIRST and SET DATEFIRST are related. When I learned about this feature I was very glad, as our company had started to serve global clients, and a simple feature like this helps a lot to avoid confusion.
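The setting matters in practice because DATEPART(WEEKDAY, ...) is evaluated relative to DATEFIRST, so the same date can report a different weekday number. A small sketch (2015-11-16 was a Monday):

```sql
SET DATEFIRST 7;  -- Sunday is day 1 (US English default)
SELECT DATEPART(WEEKDAY, '20151116') AS weekday_us;   -- Monday counts as day 2

SET DATEFIRST 1;  -- Monday is day 1
SELECT DATEPART(WEEKDAY, '20151116') AS weekday_iso;  -- Monday counts as day 1
```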

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Inserting into ColumnSet with SPARSE Columns

There are a number of blogs here around using SPARSE columns. Here are a few of them for reference; they can be a great learning resource.

SQL SERVER – 2008 – Introduction to SPARSE Columns

SQL SERVER – 2008 – Introduction to SPARSE Columns – Part 2

SQL SERVER – Errors and Limitations of Working with SPARSE Columns in SQL Server

Though such capabilities exist inside the product, I hardly see people using this feature effectively. Recently, a friend from a product-based company called me to check if I had written on this topic. I explained that there is tons of documentation, and it is great to see them use it. He said they had a table with close to a thousand columns, used as a generic table, and wanted to know if there is an easy way to update or insert into this SPARSE table directly from the ColumnSet.

The short answer is, YES. It is quite possible to insert using the ColumnSet column directly. I sent him an example and thought it would be of great use for folks here too.

Let us create our table for the experiment:

USE tempdb;
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'DocStore_With_ColumnSet')
DROP TABLE DocStore_With_ColumnSet;
-- note the use of the SPARSE keyword and the XML column_set
-- (the DocID and Title column types here are illustrative)
CREATE TABLE DocStore_With_ColumnSet
(
DocID INT PRIMARY KEY,
Title VARCHAR(200) NOT NULL,
ProductionSpecification VARCHAR(20) SPARSE NULL,
ProductionLocation SMALLINT SPARSE NULL,
MarketingSurveyGroup VARCHAR(20) SPARSE NULL,
MarketingProgramID INT SPARSE NULL,
Special_XML_Column XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

We can insert into different values columns as shown below:

INSERT DocStore_With_ColumnSet (DocID, Title, ProductionSpecification, ProductionLocation)
VALUES (1, 'Tire Specs 1', 'ABC321', 27)
INSERT DocStore_With_ColumnSet (DocID, Title, MarketingSurveyGroup)
VALUES (2, 'Survey 1234', 'Female 25 - 35')
-- Note that the Special_XML_Column column is correctly populated based on the columns that are populated
SELECT * FROM DocStore_With_ColumnSet

Can we add to the ColumnSet directly? Here is what we can do:

-- you can add data directly by specifying the xml
INSERT DocStore_With_ColumnSet (DocID, Title, Special_XML_Column)
VALUES (3, 'Specification 2', '<ProductionSpecification>AXZ7R242</ProductionSpecification><ProductionLocation>18</ProductionLocation>')

We can see the success of the insert statement. Let us next try to check if the values are inserted into our ColumnSet:

SELECT * FROM DocStore_With_ColumnSet

If this was not enough, we can also use this technique to update the columnset values too. A typical example for this is shown below:

-- now let us update some of the data
UPDATE DocStore_With_ColumnSet
SET Special_XML_Column = '<ProductionSpecification>ZZ456Z</ProductionSpecification><ProductionLocation>18</ProductionLocation>'
WHERE DocID = 3;

You can see again by querying that the values have been updated successfully. This was a simple example of how we can use column sets to insert data into a sparse table rather than qualifying each column. Let us now clean up:

-- Cleanup
DROP TABLE DocStore_With_ColumnSet

As I conclude, I would like to know how many of you have used SPARSE columns in your applications, and what were some of the scenarios you were able to light up using this feature?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – InMemory Table Creation Error Message: Msg 41337

I had recently gone to my hometown to visit my parents, as my daughter had holidays because of the festival. These are relaxing times and I hardly try to get in front of my computer during these 2-3 days of family time. But who says that always happens? On this recent trip, one of my cousins, who is incidentally studying for his engineering degree, visited our place. We got into a conversation and I started talking about InMemory databases and how they can be made resilient. Though this topic is a great conversation starter with college students, he was baffled by the fact that we can have InMemory databases that are ACID compliant.

He went home and said he would play around with the concept because he had a SQL Server 2016 evaluation version. I said there are many more enhancements to talk about, but I wanted him to do his bit of searching and homework before I taught him some really cool tricks. That night he sent me a mail stating he was getting an error while creating an InMemory table and was not sure what to do. I asked him to send me the error so I could take a first look:

Msg 41337, Level 16, State 100, Line 2
Cannot create memory optimized tables. To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container.

The message made the source of the problem clear. He had created a normal database without the InMemory filegroup.

I sent him the script below to get rid of the error message:

-- Create the Database
-- Note the filegroup of type MEMORY_OPTIMIZED_DATA comes later
-- (the database name and the Customer columns here are illustrative)
CREATE DATABASE InMem_OLTP
ON PRIMARY
( NAME = N'InMem_OLTP_data', FILENAME = N'C:\DATA\InMem_OLTP_data.mdf',size = 50MB)
LOG ON
( NAME = N'InMem_OLTP_log', FILENAME = N'C:\DATA\InMem_OLTP_log.ldf',size = 10MB)
GO
--- Step 2 - Can we create this table?
USE InMem_OLTP
GO
CREATE TABLE [Customer](
[CustomerID] INT NOT NULL PRIMARY KEY NONCLUSTERED,
[CustomerName] VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON)
GO

As you can see, we will get the exact error mentioned earlier.

Msg 41337, Level 16, State 100, Line 2
Cannot create memory optimized tables. To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container.

As mentioned, let us add the InMemory filegroup so that we can create the table without a glitch:

-- Step 3 -- Add a Filegroup that is MEMORY_OPTIMIZED, with one container.
ALTER DATABASE InMem_OLTP
ADD FILEGROUP InMem_OLTP_InMemory CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE InMem_OLTP
ADD FILE (NAME = InMem_OLTP_InMemory, FILENAME = N'C:\Data\InMem_OLTP_mopt')
TO FILEGROUP InMem_OLTP_InMemory
GO

Now if we create the table from Step 2 above, there will not be any error. I don’t blame my cousin for this, because he was a novice and did not understand what a container is when it comes to InMemory tables. Though this is a simple error that most of you would easily be able to solve, it can surely be a great troubleshooting reference for people like my cousin.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – T-SQL Window Function Framing and Performance – Notes from the Field #103

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains about the T-SQL OVER clause. Kathi is an amazing instructor; she was the SQL author I read in my early career. The reason I love SQL Server is that her writing instigated a love for this technology in me. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.

In my last post to SQL Authority, I explained how to add frames to T-SQL window functions where it is supported. I broke down the syntax and showed where things can go wrong if you rely on the default frame instead of explicitly specifying the frame. Be sure to read that article if you are new to frames or just need a refresher.

When creating a running total, for example, it’s very easy to just leave out the frame, and, in many cases, the results will be what you want. There is another downside to using the default frame, however: a pretty big performance penalty. The default frame, when one is not specified, is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. If the ORDER BY expression in the OVER clause produces unique values, you will get a valid running total.

In order to perform the window function calculation, such as a running total, SQL Server creates a worktable and populates it with each partition. When RANGE is used, that worktable is always created in tempdb. When ROWS is used instead by specifying the frame, the worktable is created in memory most of the time. The worktable created in memory has no I/O, locking, or contention issues, so it performs much better. Unfortunately, when you look at the graphical execution plan, you won’t see much of a difference between ROWS and RANGE. You can see the difference, however, if you look at STATISTICS IO.

To see the difference for yourself, run this script against the AdventureWorks database. Be sure to enable the Actual Execution Plan before running it as well.

SET STATISTICS IO ON;
PRINT 'Default frame';
SELECT CustomerID, SalesOrderID, TotalDue,
SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID) AS RunningTotal
FROM Sales.SalesOrderHeader;
PRINT 'ROWS frame';
SELECT CustomerID, SalesOrderID, TotalDue,
SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM Sales.SalesOrderHeader;

First, take a look at the execution plans, shown in Figure 1. The two queries did not produce identical plans, but the relative cost of each is 50%. So, at least the execution plans report that the queries perform the same.


Figure 1: The graphical execution plans

Now take a look at the messages tab. Here you will see a big difference as shown in Figure 2. The query using the default frame reported a very high number of logical reads from a worktable. The second query, which uses ROWS, reports 0 reads for the worktable. If you turn on Statistics Time, you will also see that the second query runs faster.


Figure 2: The statistics IO

It is often difficult to show performance differences with AdventureWorks tables. They are just too small. Adam Machanic, a SQL Server MVP, published a script that creates a couple of large tables based on AdventureWorks data. One of those tables, bigTransactionHistory, contains 30 million rows. I used bigTransactionHistory to do some performance testing to see if there really was a time difference that could be measured between ROWS and RANGE. For the test, I created a couple of smaller tables, subsets of Adam’s table, which contained 15 and 7.5 million rows respectively. I turned off the grid results so that only the server time would be measured. In addition to comparing ROWS and RANGE, I also tested two traditional methods: using a self-join subquery and a cursor. You can see the results of my tests in Figure 3.


Figure 3: The results of testing running totals

Using an accumulating window aggregate function outperformed the traditional techniques even with the default frame. You can see, however, that specifying a ROWS frame was incredibly fast.

Whenever frames are supported (FIRST_VALUE, LAST_VALUE and accumulating window aggregates), make sure that you always specify the frame. Not only can you avoid some logic problems caused by RANGE frames, you will almost always see a performance boost by using ROWS. Remember: Use ROWS. Use ROWS. And use ROWS.
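The frame advice applies beyond running totals. LAST_VALUE is the classic logic trap: with the default RANGE frame the window ends at the current row, so it simply echoes the current row’s value. A sketch against the same AdventureWorks table:

```sql
-- Without the ROWS clause below, LAST_VALUE would return the current row's
-- TotalDue instead of the customer's final order amount.
SELECT CustomerID, SalesOrderID, TotalDue,
LAST_VALUE(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID
    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS FinalOrderAmount
FROM Sales.SalesOrderHeader;
```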

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Puzzle with Milliseconds – Win USD 50 Amazon Gift Card

The last contest we ran was in May, and it received amazing responses. I once again reached out to the kind team at Embarcadero and they agreed to support the next contest. The contest has two steps and they are very simple. It took me a while to build the contest, but it is a really fun one. I am very confident that once you try the contest, you will love it.

Two Giveaways:


(USA) USD 50 Amazon Gift Card to 1 Individual

(India) INR 2500 Amazon.in Gift Card to 1 Individual

(Rest of the world) USD 50 Amazon Gift Card to 1 Individual

Contest 1: T-SQL

Run the following T-SQL script and observe the answer:

SELECT
CAST('2015-01-01 14:48:34.69' AS DATETIME) FirstVal,
CAST('2015-01-01 14:48:34:69' AS DATETIME) SecondVal

When we look at the answer, there is a difference between the milliseconds parts in the resultset, whereas in the SELECT statement I have specified the same milliseconds value (only the separators differ). The question is: why is there a difference in the millisecond part even though I have selected the same value?


Contest 2: Download and Install DBArtisan

This is the easy part of the contest. You just have to download and install DBArtisan. You can download DBArtisan from here.

How to Participate?

  • Leave an answer for contest 1 in the comment section of the blog.
  • Leave a comment with the same email address which you have used to download DBArtisan.
  • The contest is open till December 1st, 2015, Midnight PST.
  • The winners will be announced on December 10th, 2015.
  • There will be three winners: 1) USA 2) India 3) Rest of the World.
  • Please do not forget to indicate which country you belong to, in order to qualify for the contest.
  • All the correct answers to the contest will be hidden till the winners are announced.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Looking Inside SQL Complete – Advantages of Intellisense Features

Recently the SQL Complete add-in, which I use when writing SQL queries in SSMS, was updated. In the product history I saw that several features were added, such as code refactoring and automatic semicolon insertion. However, in this post I do not want to talk about them. I have noticed new logic for metadata queries, which significantly accelerated my work with database objects. So I decided to look inside SQL Complete… to understand how it works and what changed…

We can use the free dbForge Event Profiler for SQL Server tool to see what queries SQL Complete sends. We can apply a filter to eliminate custom queries from the tracing.


When we first open an SQL document in SSMS, we get the following trace.


I noted in comments what each of these queries returns.

-- 1. instance version



-- 2. default schema (dbo by default)





-- 3. don’t know why SQL Complete needs a default backup path

DECLARE @dir NVARCHAR(512)

SELECT @dir=CAST(SERVERPROPERTY('instancedefaultdatapath') AS NVARCHAR(512))

EXEC [master].dbo.xp_instance_regread 'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', 'BackupDirectory', @dir OUTPUT, 'no_output'



-- 4. available databases

SELECT [dbid] AS id
, name

, CASE WHEN [dbid] >= 1 AND [dbid] <= 4 THEN 1 ELSE 0 END AS is_fixed_database

, CASE WHEN [status] & 32768 != 0 THEN 0 ELSE CONVERT(BIT, [status] & (32 | 64 | 128 | 256 | 512)) END AS is_unavailable

FROM [master].dbo.sysdatabases WITH(NOLOCK)

ORDER BY name;


-- 5. system languages

SELECT alias

FROM [master].sys.syslanguages WITH(NOLOCK)

ORDER BY alias;


-- 6. system collations (SQL_Latin1_General_CP1_CI_AI, ....)

SELECT name
FROM ::fn_helpcollations()

ORDER BY name;


-- 7. linked servers

SELECT srvname AS name

FROM [master].dbo.sysservers

WHERE srvid != 0

ORDER BY srvid;


-- 8. server triggers

SELECT t.name
, t.[object_id]

, CASE t.[type] WHEN 'TR' THEN 1 ELSE 0 END AS is_sql_trigger

, CASE WHEN ssm.[definition] IS NULL THEN 1 ELSE 0 END AS is_encrypted

FROM [master].sys.server_triggers t WITH(NOLOCK)

LEFT JOIN [master].sys.server_sql_modules ssm WITH(NOLOCK) ON ssm.[object_id] = t.[object_id]

WHERE is_ms_shipped = 0;


-- 9. system objects (like sys.views, sys. ...)
SELECT o.[object_id] AS id
, o.name
, o.[type]
, o.[schema_id]
FROM sys.system_objects o WITH(NOLOCK)
WHERE o.[type] IN ('U', 'V', 'IF', 'TF', 'FT', 'FN', 'AF', 'FS', 'P', 'RF', 'PC', 'X')
ORDER BY o.[object_id];


-- 10. extended properties for current database
SELECT value
FROM sys.extended_properties WITH(NOLOCK)
WHERE class = 0 AND name = 'MS_Description';


-- 11. available schemas
SELECT s.[schema_id] AS id
, s.name
, CASE WHEN s.[schema_id] BETWEEN 16384 AND 16399 OR s.name IN ('guest', 'INFORMATION_SCHEMA', 'sys')
       THEN 1
       ELSE 0
  END AS is_fixed_role
, CASE WHEN ISNULL(NULLIF(SCHEMA_NAME(), 'guest'), 'dbo') = s.name
       THEN 1
       ELSE 0
  END AS is_default_schema
, ep.value AS [description]
FROM sys.schemas s WITH(NOLOCK)
LEFT JOIN (
    SELECT value, major_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 3 AND name = 'MS_Description'
) ep ON s.[schema_id] = ep.major_id;


-- 12. user objects (tables, views, procedures, triggers, etc.)
SELECT o.[object_id] AS id
, o.name
, o.[type]
, o.[schema_id]
, o.is_ms_shipped AS is_system
, COALESCE(OBJECTPROPERTY(o.[object_id], 'IsEncrypted'), 0) AS is_encrypted
, ep.value AS [description]
FROM sys.objects o WITH(NOLOCK)
LEFT JOIN (
    SELECT value, major_id, minor_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 1 AND name = 'MS_Description'
) ep ON o.[object_id] = ep.major_id AND ep.minor_id = 0
WHERE o.[type] IN ('U', 'V', 'IF', 'TF', 'FT', 'FN', 'AF', 'FS', 'P', 'RF', 'PC', 'X')
ORDER BY o.[object_id];


-- 13. foreign keys between tables
SELECT fk.parent_object_id AS owner_object_id
, fk.[object_id]
, fk.name
, o.name AS referenced_table_name
, SCHEMA_NAME(o.[schema_id]) AS referenced_table_schema
FROM sys.foreign_keys fk WITH(NOLOCK)
JOIN sys.objects o WITH(NOLOCK) ON o.[object_id] = fk.referenced_object_id
WHERE o.[type] = 'U'
ORDER BY fk.parent_object_id
, fk.[object_id];


-- 14. columns for foreign keys
SELECT fc.parent_object_id AS owner_object_id
, fc.constraint_object_id AS constraint_id
, COL_NAME(fc.parent_object_id, fc.parent_column_id) AS column_name
, COL_NAME(fc.referenced_object_id, fc.referenced_column_id) AS referenced_column_name
FROM sys.foreign_key_columns AS fc WITH(NOLOCK)
ORDER BY fc.parent_object_id
, fc.constraint_object_id
, fc.constraint_column_id;


-- 15. user types
SELECT t.user_type_id AS id
, t.name
, t.[schema_id]
, t.is_assembly_type AS is_clr
, st.name AS base_type
FROM sys.types t WITH(NOLOCK)
LEFT JOIN sys.types st WITH(NOLOCK) ON st.is_user_defined = 0
    AND st.is_assembly_type = 0
    AND st.is_table_type = 0
    AND st.user_type_id = st.system_type_id
    AND st.system_type_id = t.system_type_id
WHERE t.is_user_defined = 1 AND t.is_table_type = 0
ORDER BY t.user_type_id;


-- 16. table types
SELECT tt.type_table_object_id AS id
, t.name
, t.[schema_id]
FROM sys.types t WITH(NOLOCK)
LEFT JOIN sys.table_types tt WITH(NOLOCK) ON t.user_type_id = tt.user_type_id
WHERE t.is_user_defined = 1
AND t.is_table_type = 1;

-- 17. database triggers
SELECT t.name
, t.[object_id]
, o.[schema_id]
, CASE t.[type] WHEN 'TR' THEN 1 ELSE 0 END AS is_sql_trigger
, COALESCE(OBJECTPROPERTY(t.[object_id], 'IsEncrypted'), 0) AS is_encrypted
, o.name AS parent_name
FROM sys.triggers t WITH(NOLOCK)
LEFT JOIN sys.objects o WITH(NOLOCK) ON o.[object_id] = t.parent_id
WHERE t.is_ms_shipped = 0 AND t.parent_class = 1
ORDER BY o.[schema_id]
, t.[object_id];


-- 18. synonyms
SELECT s.[object_id] AS id
, s.name
, s.[schema_id]
, s.base_object_name
, OBJECTPROPERTYEX(s.[object_id], N'BaseType') AS base_object_type
, ep.value AS [description]
FROM sys.synonyms s WITH(NOLOCK)
LEFT JOIN (
    SELECT value, major_id, minor_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 1 AND name = 'MS_Description'
) ep ON s.[object_id] = ep.major_id AND ep.minor_id = 0;


-- 19. xml collections
SELECT c.name AS name, c.[schema_id]
FROM sys.xml_schema_collections c WITH(NOLOCK)
WHERE c.[schema_id] <> 4
ORDER BY c.xml_collection_id;


Then I switched to another database and noticed that only a subset of the queries (10-19) was executed. This is an optimization: the names of system objects are retrieved only once, because they do not change from one database to another. The data returned by these objects, however, is subject to change.
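This caching strategy can be sketched in Python as a hypothetical client-side cache (the class and function names are mine, not SQL Complete's actual implementation): server-wide metadata (queries 1-9) is fetched once per connection, while per-database metadata (queries 10-19) is fetched only when a database is seen for the first time.

```python
class MetadataCache:
    """Hypothetical sketch: cache server-wide metadata across database switches."""

    def __init__(self, fetch_server_metadata, fetch_db_metadata):
        self._fetch_server = fetch_server_metadata  # queries 1-9: identical for every database
        self._fetch_db = fetch_db_metadata          # queries 10-19: specific to one database
        self._server = None
        self._databases = {}

    def switch_to(self, db_name):
        # Server-wide metadata is fetched only once per connection.
        if self._server is None:
            self._server = self._fetch_server()
        # Per-database metadata is fetched once per database.
        if db_name not in self._databases:
            self._databases[db_name] = self._fetch_db(db_name)
        return self._server, self._databases[db_name]
```

With this scheme, switching master → model → master issues the server-wide queries once and the per-database queries twice, which matches the trace above.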

Another interesting optimization is hidden in the settings: the «Detect changes on a server…» option.

[Screenshot: the «Detect changes on a server…» option in the SQL Complete settings]

If you turn it on, the following query is executed whenever the list of user objects is refreshed:



SELECT [type]
, MAX(modify_date) AS max_date
, COUNT(1) AS num
FROM sys.objects WITH(NOLOCK)
WHERE [type] IN ('U', 'V', 'IF', 'TF', 'FT', 'FN', 'AF', 'FS', 'P', 'RF', 'PC', 'X', 'SN', 'TR', 'TA')
GROUP BY [type];

The goal of this optimization is to determine which object types have changed since the metadata was last refreshed, and to fetch changes only for those types.
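The client-side half of this check can be sketched as follows. This is a hypothetical helper, assuming each snapshot is a dictionary mapping a type code to the (max_date, num) pair returned by the query above:

```python
def changed_types(previous, current):
    """Return the object types whose metadata needs to be re-fetched.

    Both arguments map a type code ('U', 'V', 'P', ...) to a
    (max_modify_date, object_count) tuple, as produced by the
    GROUP BY query above. A type counts as changed when it was
    added, dropped, or its (max_date, count) pair differs.
    """
    return {t for t in set(previous) | set(current)
            if previous.get(t) != current.get(t)}
```

Altering a table bumps max_date for 'U'; creating a procedure changes the count for 'P'; only those types would then be re-queried, instead of reloading all metadata.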

While profiling, I noticed another interesting optimization: as you write code, additional information about an object is loaded at the moment the object is selected. For example, suppose we choose a table.

[Screenshot: selecting a table in the suggestion list]

Information about this table becomes available immediately:

-- table/view columns
SELECT c.name
, c.[object_id] AS parent_id
, c.column_id
, c.is_rowguidcol
, c.is_identity
, c.is_nullable
, c.is_computed
, CASE WHEN c.default_object_id <> 0 THEN 1 ELSE 0 END AS default_exist
, c.[precision]
, c.scale
, CAST(
    CASE WHEN t.base_type_name IN ('nchar', 'nvarchar') AND c.max_length <> -1
         THEN c.max_length / 2
         ELSE c.max_length
    END AS INT) AS [length]
, t.name AS [type_name]
, t.base_type_name
, t.is_user_defined
, t.type_schema_name
, ep.value AS [description]
FROM sys.all_columns c WITH(NOLOCK)
LEFT JOIN (
    SELECT user_type_id
    , name
    , is_user_defined
    , SCHEMA_NAME([schema_id]) AS type_schema_name
    , COALESCE(TYPE_NAME(system_type_id), name) AS base_type_name
    FROM sys.types WITH(NOLOCK)
) t ON c.user_type_id = t.user_type_id
LEFT JOIN (
    SELECT value, major_id, minor_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 1 AND name = 'MS_Description'
) ep ON ep.major_id = c.[object_id] AND ep.minor_id = c.column_id
WHERE c.[object_id] = {0}
ORDER BY c.column_id;


-- indexes for selected table/view
SELECT i.[object_id] AS parent_id
, i.index_id
, i.name
, i.is_unique_constraint
, i.is_primary_key
, i.[type]
FROM sys.indexes i WITH(NOLOCK)
WHERE i.[object_id] = {0}
AND i.index_id > 0
AND i.is_hypothetical = 0
ORDER BY i.index_id;


-- index columns for selected table/view
SELECT ic.[object_id] AS parent_id
, ic.index_id
, c.name
FROM sys.index_columns ic WITH(NOLOCK)
JOIN sys.columns c WITH(NOLOCK) ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE ic.[object_id] = {0}
AND ic.index_id > 0
ORDER BY ic.index_id
, ic.index_column_id;


The same approach is used when suggesting other database objects (e.g., stored procedures, functions, and synonyms).
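The lazy-loading pattern behind this behavior can be sketched with a memoized fetch. This is a hypothetical illustration (the real add-in runs the three detail queries above, substituting the selected object's id for the {0} placeholder):

```python
import functools

QUERY_LOG = []  # records one entry per round-trip to the server

@functools.lru_cache(maxsize=None)
def object_details(object_id):
    """Fetch columns/indexes for one object, at most once per object_id."""
    QUERY_LOG.append(object_id)
    # Placeholder for the three detail queries (columns, indexes, index columns).
    return {"object_id": object_id, "columns": [], "indexes": []}
```

Selecting the same table twice in the suggestion list therefore costs only one round-trip; only a selection of a new object triggers new queries.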

The described optimizations noticeably improve usability: you do not have to wait for the suggestion list while writing a query.

In the future, I plan to continue looking under the hood of other SSMS add-ins. A tool is more interesting to work with when you know how it works.

Reference : Pinal Dave (http://blog.SQLAuthority.com)