Interview Question of the Week #046 – How @@DATEFIRST and SET DATEFIRST Are Related?

Question: How are @@DATEFIRST and SET DATEFIRST related?

Answer: The master database’s syslanguages table has a DateFirst column that defines the first day of the week for a particular language. When US English is the default language, SQL Server sets DATEFIRST to 7 (Sunday) by default. We can reset any day as the first day of the week using:

SET DATEFIRST 5

This will set Friday as the first day of the week.
@@DATEFIRST returns the current value, for the session, of SET DATEFIRST.

SET LANGUAGE Italian -- any language whose syslanguages DateFirst value is 1
SELECT @@DATEFIRST
----This will return result as 1 (Monday)
SET LANGUAGE us_english
SELECT @@DATEFIRST
----This will return result as 7 (Sunday)
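If you want to see how DATEFIRST changes the result of DATEPART(dw, ...), here is a rough model in Python (the function name and the formula are my own simplification, not SQL Server's actual implementation):

```python
from datetime import date

def sql_datepart_dw(d, datefirst):
    # Mimic SQL Server's DATEPART(dw, ...) under a given DATEFIRST.
    # isoweekday(): Monday=1 ... Sunday=7.
    return (d.isoweekday() - datefirst + 7) % 7 + 1

# 2015-11-15 was a Sunday.
sunday = date(2015, 11, 15)
print(sql_datepart_dw(sunday, 7))  # 1 -> Sunday is day 1 under us_english default
print(sql_datepart_dw(sunday, 5))  # 3 -> with Friday first, Sunday is day 3
```

The same date yields a different weekday number purely because of the session's DATEFIRST setting, which is exactly the confusion this feature helps avoid with global clients.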

In this way @@DATEFIRST and SET DATEFIRST are related. When I learned about this feature I was very glad, as our company has started to serve global clients, and a simple feature like this helps a lot to avoid confusion.

Reference: Pinal Dave

SQL SERVER – Inserting into ColumnSet with SPARSE Columns

There are a number of blogs here about using SPARSE columns. Here are a few of them for reference; they can be a great learning experience.

SQL SERVER – 2008 – Introduction to SPARSE Columns

SQL SERVER – 2008 – Introduction to SPARSE Columns – Part 2

SQL SERVER – Errors and Limitations of Working with SPARSE Columns in SQL Server

Though such capabilities exist inside the product, I hardly see people using this feature effectively. Recently, a friend from a product-based company called me to check if I had written on this topic. I explained that there is plenty of documentation, and it is great to see them use it. He said they had a table with close to a thousand columns, as they were using it like a generic table, and he wanted to know if there is an easy way to update or insert into this SPARSE table directly from the ColumnSet.

The short answer is, YES. It is quite possible to insert using the ColumnSet column directly. I sent him an example and thought it would be of great use for folks here too.

Let us create our table for the experiment:

USE tempdb;
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'DocStore_With_ColumnSet')
DROP TABLE DocStore_With_ColumnSet;
-- note the use of the SPARSE keyword and the XML column_set
CREATE TABLE DocStore_With_ColumnSet
(DocID INT PRIMARY KEY,
Title VARCHAR(200) NOT NULL,
ProductionSpecification VARCHAR(20) SPARSE NULL,
ProductionLocation SMALLINT SPARSE NULL,
MarketingSurveyGroup VARCHAR(20) SPARSE NULL,
MarketingProgramID INT SPARSE NULL,
Special_XML_Column XML COLUMN_SET FOR ALL_SPARSE_COLUMNS);

We can insert values into different columns as shown below:

INSERT DocStore_With_ColumnSet (DocID, Title, ProductionSpecification, ProductionLocation)
VALUES (1, 'Tire Specs 1', 'ABC321', 27)
INSERT DocStore_With_ColumnSet (DocID, Title, MarketingSurveyGroup)
VALUES (2, 'Survey 1234', 'Female 25 - 35')
-- Note that the Special_XML_Column column is correctly populated based on the columns that are populated
SELECT * FROM DocStore_With_ColumnSet

Can we add to the ColumnSet directly? Here is what we can do:

-- you can add data directly by specifying the xml
INSERT DocStore_With_ColumnSet (DocID, Title, Special_XML_Column)
VALUES (3, 'Specification 2', '<ProductionSpecification>AXZ7R242</ProductionSpecification><ProductionLocation>18</ProductionLocation>')
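Conceptually, building the column_set value is just serializing the non-NULL sparse columns as XML elements. A minimal sketch of that idea (the helper is hypothetical; a real implementation must also XML-escape values and handle data types):

```python
def columnset_xml(sparse_values):
    # Join <col>value</col> fragments for every non-NULL sparse column.
    # NOTE: no XML escaping here -- illustration only.
    return ''.join(f'<{col}>{val}</{col}>'
                   for col, val in sparse_values.items()
                   if val is not None)

xml = columnset_xml({'ProductionSpecification': 'AXZ7R242',
                     'ProductionLocation': 18,
                     'MarketingSurveyGroup': None})
print(xml)
# <ProductionSpecification>AXZ7R242</ProductionSpecification><ProductionLocation>18</ProductionLocation>
```

This is exactly the shape of the string passed to Special_XML_Column in the INSERT above.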

We can see the success of the insert statement. Let us next try to check if the values are inserted into our ColumnSet:

SELECT * FROM DocStore_With_ColumnSet

If this was not enough, we can also use this technique to update the columnset values. A typical example is shown below:

-- now let us update some of the data
UPDATE DocStore_With_ColumnSet
SET Special_XML_Column = '<ProductionSpecification>ZZ456Z</ProductionSpecification><ProductionLocation>18</ProductionLocation>'
WHERE DocID = 3;

You can see again by querying that the values have been updated successfully. This was a simple example of how we can use column sets to insert data into a sparse table rather than qualifying each column individually. Let us now clean up this table:

-- Cleanup
DROP TABLE DocStore_With_ColumnSet

As I conclude, I would like to know how many of you have used SPARSE columns in your applications, and what scenarios you were able to simplify using this feature.

Reference: Pinal Dave

SQL SERVER – InMemory Table Creation Error Message: Msg 41337

I had recently gone to my hometown to visit my parents, as my daughter had holidays because of a festival. These are relaxing times, and I hardly try to get in front of my computer during these 2-3 days of family time. But that doesn’t always happen. On this recent trip, one of my cousins, who is incidentally studying for his engineering degree, visited our place. We got into a conversation, and I started talking about InMemory databases and how they can be made resilient. Though this topic makes a great conversation starter with college students, he was baffled by the fact that we can have InMemory databases that are ACID compliant.

He got to his home and said he would play around with this concept because he had a SQL Server 2016 evaluation version. I said there are many more enhancements to talk about, but I wanted him to do his bit of searching and homework before I taught him some really cool tricks. That night he sent me a mail stating he was getting an error while creating an InMemory table and was not sure what to do. I asked him to send me the error so I could take a first look:

Msg 41337, Level 16, State 100, Line 2
Cannot create memory optimized tables. To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container.

The source of the problem was clear from the message. He had created a normal database without the InMemory filegroup.

I sent him the script shown below, which first reproduces and then gets rid of the error message:

-- Step 1 - Create the Database
-- Note: there is no filegroup of type MEMORY_OPTIMIZED_DATA yet
CREATE DATABASE InMem_OLTP
ON PRIMARY
( NAME = N'InMem_OLTP_data', FILENAME = N'C:\DATA\InMem_OLTP_data.mdf',size = 50MB)
LOG ON
( NAME = N'InMem_OLTP_log', FILENAME = N'C:\DATA\InMem_OLTP_log.ldf',size = 10MB)
GO
USE InMem_OLTP;
GO
--- Step 2 - Can we create this table?
-- (the column list is illustrative)
CREATE TABLE [Customer](
[CustomerID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
[CustomerName] NVARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

As you can see we will get the exact error as mentioned earlier.

Msg 41337, Level 16, State 100, Line 2
Cannot create memory optimized tables. To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container.

As mentioned, let us add the InMemory Filegroup so that we can create the tables without a glitch:

-- Step 3 -- Add a Filegroup that is MEMORY_OPTIMIZED.
-- (the filegroup name is illustrative)
ALTER DATABASE InMem_OLTP
ADD FILEGROUP InMem_OLTP_FG CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE InMem_OLTP
ADD FILE (NAME = InMem_OLTP_InMemory, FILENAME = N'C:\Data\InMem_OLTP_mopt')
TO FILEGROUP InMem_OLTP_FG;
GO

Now if we create the table as mentioned in Step 2 above, there will not be any error. I don’t blame my cousin for this, because he was a novice and did not understand what a container is when it comes to InMemory tables. Though this was a simple error that most of you would easily be able to solve, it can surely be a great troubleshooting resource for people like my cousin.

Reference: Pinal Dave

SQL SERVER – T-SQL Window Function Framing and Performance – Notes from the Field #103

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains the T-SQL OVER clause. Kathi is an amazing instructor; she was the first SQL author I read in my early career. One reason I love SQL Server is that her writing instigated a love for this technology in me. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.

In my last post to SQL Authority, I explained how to add frames to T-SQL window functions where it is supported. I broke down the syntax and showed where things can go wrong if you rely on the default frame instead of explicitly specifying the frame. Be sure to read that article if you are new to frames or just need a refresher.

When creating a running total, for example, it’s very easy to just leave out the frame, and, in many cases, the results will be what you want. There is another downside to using the default frame, however, and that is a pretty big performance penalty. The default frame, when one is not specified, is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. If the ORDER BY expression in the OVER clause produces unique values, you will get a valid running total.
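The semantic difference between the two frames shows up when the ORDER BY key has duplicates. As a rough illustration (plain Python, not SQL Server's implementation):

```python
def rows_running_total(values):
    # ROWS frame: strict row-by-row accumulation.
    total, out = 0.0, []
    for v in values:
        total += v
        out.append(total)
    return out

def range_running_total(keys, values):
    # RANGE frame: every "peer" (row with the same ORDER BY key)
    # shares the same running total.
    return [sum(v for k2, v in zip(keys, values) if k2 <= k) for k in keys]

# ORDER BY key 2 is duplicated, so RANGE merges those two rows' totals.
keys = [1, 2, 2, 3]
vals = [10.0, 20.0, 5.0, 40.0]
print(rows_running_total(vals))         # [10.0, 30.0, 35.0, 75.0]
print(range_running_total(keys, vals))  # [10.0, 35.0, 35.0, 75.0]
```

With a unique ORDER BY key the two frames return identical totals; only peer rows diverge, which is why the default frame often "looks right" until duplicates appear.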

In order to perform the window function calculation, such as a running total, SQL Server creates a worktable and populates it with each partition. When RANGE is used, that worktable is always created in tempdb. When ROWS is used instead by specifying the frame, the worktable is created in memory most of the time. The worktable created in memory has no I/O, locking, or contention issues, so it performs much better. Unfortunately, when you look at the graphical execution plan, you won’t see much of a difference between ROWS and RANGE. You can see the difference, however, if you look at STATISTICS IO.

To see the difference for yourself, run this script against the AdventureWorks database. Be sure to enable the Actual Execution Plan before running it as well.

PRINT 'Default frame';
SELECT CustomerID, SalesOrderID, TotalDue,
SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID) AS RunningTotal
FROM Sales.SalesOrderHeader;
PRINT 'ROWS frame';
SELECT CustomerID, SalesOrderID, TotalDue,
SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM Sales.SalesOrderHeader;

First, take a look at the execution plans as shown in the Figure 1. The two queries did not produce identical plans, but the relative cost for each is 50%. So, at least the execution plans report that the queries perform the same.


Figure 1: The graphical execution plans

Now take a look at the messages tab. Here you will see a big difference as shown in Figure 2. The query using the default frame reported a very high number of logical reads from a worktable. The second query, which uses ROWS, reports 0 reads for the worktable. If you turn on Statistics Time, you will also see that the second query runs faster.


Figure 2: The statistics IO

It is often difficult to show performance differences with AdventureWorks tables. They are just too small. Adam Machanic, a SQL Server MVP, published a script that creates a couple of large tables based on AdventureWorks data. One of those tables, bigTransactionHistory, contains 30 million rows. I used bigTransactionHistory to do some performance testing to see if there really was a time difference that could be measured between ROWS and RANGE. For the test, I created a couple of smaller tables, subsets of Adam’s table, which contained 15 and 7.5 million rows respectively. I turned off the grid results so that only the server time would be measured. In addition to comparing ROWS and RANGE, I also tested two traditional methods: using a self-join subquery and a cursor. You can see the results of my tests in Figure 3.


Figure 3: The results of testing running totals

Using an accumulating window aggregate function (WF) outperformed the traditional techniques even with the default frame. You can see, however, that specifying a ROWS frame was incredibly fast.

Whenever frames are supported (FIRST_VALUE, LAST_VALUE and accumulating window aggregates), make sure that you always specify the frame. Not only can you avoid some logic problems caused by RANGE frames, you will almost always see a performance boost by using ROWS. Remember: Use ROWS. Use ROWS. And use ROWS.

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave

SQL SERVER – Puzzle with Milliseconds – Win USD 50 Amazon Gift Card

The last contest we ran was in May, and it received amazing responses. I once again reached out to the kind team at Embarcadero, and they agreed to support the next contest. The contest has two steps, and they are very simple. It took me a while to build the contest, but it is a really fun one. I am very confident that once you try out the contest, you will love it.

Two Giveaways:


(USA) USD 50 Amazon Gift Card to 1 Individual

(India) INR 2500 Gift Card to 1 Individual

(Rest of the world) USD 50 Amazon Gift Card to 1 Individual

Contest 1: T-SQL

Run following T-SQL script and observe the answer:

SELECT
CAST('2015-01-01 14:48:34.69' AS DATETIME) FirstVal,
CAST('2015-01-01 14:48:34:69' AS DATETIME) SecondVal

When we look at the answer, there is a difference between the millisecond parts in the resultset, whereas in the SELECT statement I have specified the same millisecond digits (just with different separators). The question is: why is there a difference in the millisecond part even though I have specified the same value?

[Screenshot: query results showing different millisecond values]

Contest 2: Download and Install DBArtisan

This is the easy part of the contest. You just have to download and install DBArtisan. You can download DBArtisan from here.

How to Participate?

  • Leave an answer for contest 1 in the comment section of the blog.
  • Leave a comment with the same email address which you have used to download DBArtisan.
  • The contest is open till December 1st, 2015 Midnight PST.
  • The winner will be announced on December 10th, 2015.
  • There will be three winners 1) USA 2) India 3) Rest of the World
  • Please do not forget to indicate which country you belong to, in order to qualify for the contest.
  • All correct answers to the contest will be hidden until the winners are announced.

Reference: Pinal Dave

SQL SERVER – Looking Inside SQL Complete – Advantages of Intellisense Features

Recently the SQL Complete add-in, which I use when writing SQL queries in SSMS, was updated. In the product history I saw that several features were added, such as code refactoring and automatic semicolon insertion. However, in this post I do not want to talk about them. I noticed new logic for metadata queries, which significantly accelerated my work with database objects. So I decided to look inside SQL Complete to understand how it works and what changed.

We can use the free dbForge Event Profiler for SQL Server tool to see what queries SQL Complete sends. We can apply a filter to eliminate custom queries from the tracing.

[Screenshot: filter settings in dbForge Event Profiler]

When we first open an SQL document in SSMS, we get the following trace.

[Screenshot: trace of the metadata queries]

I have written in the comments what each of these queries returns.

-- 1. instance version
SELECT SERVERPROPERTY('ProductVersion');


-- 2. default schema (dbo by default)
SELECT SCHEMA_NAME();


-- 3. don’t know why SQL Complete needs a default backup path
DECLARE @dir NVARCHAR(512)
SELECT @dir=CAST(SERVERPROPERTY('instancedefaultdatapath') AS NVARCHAR(512))
EXEC [master].dbo.xp_instance_regread 'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', 'BackupDirectory', @dir OUTPUT, 'no_output'
SELECT @dir;


-- 4. available databases
SELECT [dbid]
, name
, CASE WHEN [dbid] >= 1 AND [dbid] <= 4 THEN 1 ELSE 0 END AS is_fixed_database
, CASE WHEN [status] & 32768 != 0 THEN 0 ELSE CONVERT(BIT, [status] & (32 | 64 | 128 | 256 | 512)) END AS is_unavailable
FROM [master].dbo.sysdatabases WITH(NOLOCK)
ORDER BY name;


-- 5. system languages
SELECT alias
FROM [master].sys.syslanguages WITH(NOLOCK)
ORDER BY alias;


-- 6. system collations (SQL_Latin1_General_CP1_CI_AI, ....)
SELECT name
FROM ::fn_helpcollations()
ORDER BY name;


-- 7. linked servers
SELECT srvname AS name
FROM [master].dbo.sysservers
WHERE srvid != 0
ORDER BY srvid;


-- 8. server triggers
SELECT t.name
, t.[object_id]
, CASE t.[type] WHEN 'TR' THEN 1 ELSE 0 END AS is_sql_trigger
, CASE WHEN ssm.[definition] IS NULL THEN 1 ELSE 0 END AS is_encrypted
FROM [master].sys.server_triggers t WITH(NOLOCK)
LEFT JOIN [master].sys.server_sql_modules ssm WITH(NOLOCK) ON ssm.[object_id] = t.[object_id]
WHERE is_ms_shipped = 0;


-- 9. system objects (like sys.views, sys. ...)
SELECT o.[object_id] AS id
, o.name
, o.[type]
, o.[schema_id]
FROM sys.system_objects o WITH(NOLOCK)
WHERE o.[type] IN ('U', 'V', 'IF', 'TF', 'FT', 'FN', 'AF', 'FS', 'P', 'RF', 'PC', 'X')
ORDER BY o.[object_id];


-- 10. extended properties for current database
SELECT value
FROM sys.extended_properties WITH(NOLOCK)
WHERE class = 0 AND name = 'MS_Description';


-- 11. available schemas
SELECT s.[schema_id]
, s.name
, CASE WHEN s.[schema_id] BETWEEN 16384 AND 16399 OR s.name IN ('guest', 'INFORMATION_SCHEMA', 'sys')
       THEN 1 ELSE 0
  END AS is_fixed_role
, CASE WHEN s.name = SCHEMA_NAME()
       THEN 1 ELSE 0
  END AS is_default_schema
, ep.value AS [description]
FROM sys.schemas s WITH(NOLOCK)
LEFT JOIN (
    SELECT value, major_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 3 AND name = 'MS_Description'
) ep ON s.[schema_id] = ep.major_id;


-- 12. user objects (tables, views, procedures, triggers, etc)
SELECT o.[object_id] AS id
, o.name
, o.[type]
, o.[schema_id]
, o.is_ms_shipped AS is_system
, COALESCE(OBJECTPROPERTY(o.[object_id], 'IsEncrypted'), 0) AS is_encrypted
, ep.value AS [description]
FROM sys.objects o WITH(NOLOCK)
LEFT JOIN (
    SELECT value, major_id, minor_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 1 AND name = 'MS_Description'
) ep ON o.[object_id] = ep.major_id AND ep.minor_id = 0
WHERE o.[type] IN ('U', 'V', 'IF', 'TF', 'FT', 'FN', 'AF', 'FS', 'P', 'RF', 'PC', 'X')
ORDER BY o.[object_id];


-- 13. foreign keys between tables
SELECT fk.name
, fk.[object_id]
, fk.parent_object_id
, o.name AS referenced_table_name
, SCHEMA_NAME(o.[schema_id]) AS referenced_table_schema
FROM sys.foreign_keys fk WITH(NOLOCK)
JOIN sys.objects o WITH(NOLOCK) ON o.[object_id] = fk.referenced_object_id
WHERE o.[type] = 'U'
ORDER BY fk.parent_object_id
, fk.[object_id];


-- 14. columns for foreign keys
SELECT fc.parent_object_id AS owner_object_id
, fc.constraint_object_id AS constraint_id
, COL_NAME(fc.parent_object_id, fc.parent_column_id) AS column_name
, COL_NAME(fc.referenced_object_id, fc.referenced_column_id) AS referenced_column_name
FROM sys.foreign_key_columns AS fc WITH (NOLOCK)
ORDER BY fc.parent_object_id
, fc.constraint_object_id
, fc.constraint_column_id;


-- 15. user types
SELECT t.name
, t.[schema_id]
, t.is_assembly_type AS is_clr
, st.name AS base_type
FROM sys.types t WITH(NOLOCK)
LEFT JOIN sys.types st WITH(NOLOCK) ON st.is_user_defined = 0
    AND st.is_assembly_type = 0
    AND st.is_table_type = 0
    AND st.user_type_id = st.system_type_id
    AND st.system_type_id = t.system_type_id
WHERE t.is_user_defined = 1 AND t.is_table_type = 0
ORDER BY t.user_type_id;


-- 16. table types
SELECT tt.type_table_object_id AS id
, t.name
, t.[schema_id]
FROM sys.types t WITH(NOLOCK)
LEFT JOIN sys.table_types tt WITH(NOLOCK) ON t.user_type_id = tt.user_type_id
WHERE t.is_user_defined = 1
AND t.is_table_type = 1;


-- 17. database triggers
SELECT t.name
, t.[object_id]
, o.[schema_id]
, CASE t.type WHEN 'TR' THEN 1 ELSE 0 END as is_sql_trigger
, COALESCE(OBJECTPROPERTY(t.[object_id], 'IsEncrypted'), 0) AS is_encrypted
, o.name AS parent_name
FROM sys.triggers t WITH(NOLOCK)
LEFT JOIN sys.objects o WITH(NOLOCK) ON o.[object_id] = t.parent_id
WHERE t.is_ms_shipped = 0 AND t.parent_class = 1
ORDER BY o.[schema_id]
, t.[object_id];


-- 18. synonyms
SELECT s.[object_id] AS id
, s.name
, s.[schema_id]
, s.base_object_name
, OBJECTPROPERTYEX(s.[object_id], N'BaseType') AS base_object_type
, ep.value AS [description]
FROM sys.synonyms s WITH(NOLOCK)
LEFT JOIN (
    SELECT value, major_id, minor_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 1 AND name = 'MS_Description'
) ep ON s.[object_id] = ep.major_id AND ep.minor_id = 0;


-- 19. xml collections
SELECT c.name AS name, c.[schema_id]
FROM sys.xml_schema_collections c WITH(NOLOCK)
WHERE c.[schema_id] <> 4
ORDER BY c.xml_collection_id;


Then I switched to another database and noticed that only a subset of the queries (10-19) was executed. This is an optimization: the names of system objects are retrieved only once, since they do not change between databases. However, the data returned by these objects is subject to change.

Another interesting optimization is hidden in the settings: the «Detect changes on a server…» option.

[Screenshot: the «Detect changes on a server…» option in SQL Complete settings]

If you turn it on, then while updating the suggestion list for user objects, the following query is executed:



SELECT [type]
, MAX(modify_date) AS max_date
, COUNT(1) AS num
FROM sys.objects WITH(NOLOCK)
WHERE [type] IN ('U', 'V', 'IF', 'TF', 'FT', 'FN', 'AF', 'FS', 'P', 'RF', 'PC', 'X', 'SN', 'TR', 'TA')
GROUP BY [type];

The optimization goal is to define what types of objects have changed since the last update of metadata and get changes only for changed types.
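The idea behind this change detection can be sketched outside of SQL as well (a simplified model of my own; the dictionary shapes are assumptions, not the add-in's actual data structures):

```python
def changed_types(previous, current):
    # Each object type maps to a signature of (max modify_date, count).
    # A type needs re-reading only when its signature differs.
    return sorted(t for t, sig in current.items() if previous.get(t) != sig)

previous = {'U': ('2015-11-01 10:00', 42), 'P': ('2015-10-20 09:00', 7)}
current  = {'U': ('2015-11-10 15:30', 43), 'P': ('2015-10-20 09:00', 7)}
print(changed_types(previous, current))  # ['U'] -> only tables need re-reading
```

Comparing a per-type signature is much cheaper than re-reading the metadata of every object on each refresh.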

When profiling, I noticed another interesting optimization. While you write code, additional information about an object is loaded as soon as it is selected. For example, suppose we choose a table.

[Screenshot: choosing a table in the suggestion list]

And we have information about this table immediately:

-- table/view columns
SELECT c.name
, c.[object_id] AS parent_id
, c.column_id
, c.is_rowguidcol
, c.is_identity
, c.is_nullable
, c.is_computed
, CASE WHEN c.default_object_id <> 0 THEN 1 ELSE 0 END AS default_exist
, c.[precision]
, c.scale
, CAST(
    CASE WHEN t.base_type_name IN ('nchar', 'nvarchar') AND c.max_length <> -1
         THEN c.max_length / 2
         ELSE c.max_length
    END AS INT) AS [length]
, t.name AS [type_name]
, t.base_type_name
, t.is_user_defined
, t.type_schema_name
, ep.value AS [description]
FROM sys.all_columns c WITH(NOLOCK)
JOIN (
    SELECT user_type_id
    , name
    , is_user_defined
    , SCHEMA_NAME([schema_id]) AS type_schema_name
    , COALESCE(TYPE_NAME(system_type_id), name) AS base_type_name
    FROM sys.types WITH(NOLOCK)
) t ON c.user_type_id = t.user_type_id
LEFT JOIN (
    SELECT value, major_id, minor_id
    FROM sys.extended_properties WITH(NOLOCK)
    WHERE class = 1 AND name = 'MS_Description'
) ep ON ep.major_id = c.[object_id] AND ep.minor_id = c.column_id
WHERE c.[object_id] = {0}
ORDER BY c.column_id;


-- indexes for selected table/view
SELECT i.name
, i.index_id
, i.is_unique
, i.is_unique_constraint
, i.is_primary_key
, i.[type]
FROM sys.indexes i WITH(NOLOCK)
WHERE i.[object_id] = {0}
AND i.index_id > 0
AND i.is_hypothetical = 0
ORDER BY i.index_id;


-- index columns for selected table/view
SELECT c.name
, ic.index_id
, ic.is_descending_key
FROM sys.index_columns ic WITH(NOLOCK)
JOIN sys.columns c WITH(NOLOCK) ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE ic.[object_id] = {0}
AND ic.index_id > 0
ORDER BY ic.index_id
, ic.index_column_id;
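One small detail worth noting in the columns query: sys.columns reports max_length in bytes, so the tool halves it for nchar/nvarchar to show a character length. A quick sketch of that logic (the function name is mine):

```python
def display_length(base_type_name, max_length):
    # sys.columns stores byte lengths; nchar/nvarchar use 2 bytes per character.
    # max_length of -1 means (max) types and is passed through unchanged.
    if base_type_name in ('nchar', 'nvarchar') and max_length != -1:
        return max_length // 2
    return max_length

print(display_length('nvarchar', 100))  # 50 characters
print(display_length('varchar', 100))   # 100 characters
print(display_length('nvarchar', -1))   # -1, i.e. nvarchar(max)
```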


The same approach is used when prompting other database objects (e.g. stored procedures, functions, and synonyms).

The described optimizations improve usability while working: you do not have to wait before writing a query.

In the future, I plan to continue to look under the hood of other SSMS add-ins. It is always more interesting to work with a tool when you know how it works.

Reference: Pinal Dave

MySQL – Get Latest Identity Value by Inserts

In SQL Server, whenever you add data to a table that has an identity column, you can get the most recently generated identity value using @@IDENTITY or SCOPE_IDENTITY(). Similarly, in MySQL you can make use of the LAST_INSERT_ID() function.

Let us create the following dataset

CREATE TABLE test(id INT AUTO_INCREMENT NOT NULL PRIMARY KEY, name VARCHAR(100));
INSERT INTO test(name) VALUES ('MySQL');
INSERT INTO test(name) VALUES ('SQL Server');
INSERT INTO test(name) VALUES ('Oracle');

After executing the above code, if you want to know the most recently generated identity value, use the following code:

SELECT LAST_INSERT_ID();

The result is 3.

Note that this will work only in the current session and it is advisable to use the code immediately after the INSERT statement. If you want to make use of the value at a later point within the block of code, you can assign it to a variable.


SET @id_val = LAST_INSERT_ID();

Now you can use the value of @id_val at a later point within a block of code or a procedure.
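For readers who want to experiment without a MySQL server at hand, SQLite (bundled with Python) exposes the same concept through cursor.lastrowid (an analogy only, not MySQL itself):

```python
import sqlite3

# In-memory SQLite database used purely as an analogy for MySQL's LAST_INSERT_ID().
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE test (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)')
for name in ('MySQL', 'SQL Server', 'Oracle'):
    cur.execute('INSERT INTO test(name) VALUES (?)', (name,))
last_id = cur.lastrowid  # id generated by the most recent INSERT on this cursor
print(last_id)  # 3
conn.close()
```

As in MySQL, the value is tied to the connection/cursor that performed the insert, which is why it is safe to read immediately after the INSERT.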

Reference: Pinal Dave

Interview Question of the Week #045 – How to Do Case Sensitive Search?

This is one of my popular questions, and it helps me understand how well a SQL developer knows SQL.

Question: How do you do a case-sensitive search in SQL Server?


Answer: If Column1 of Table1 has the following values ‘CaseSearch, casesearch, CASESEARCH, CaSeSeArCh’, the following statement will return all four records.

SELECT Column1
FROM Table1
WHERE Column1 = 'casesearch'

To make the query case sensitive and retrieve only one record (“casesearch”) from the above query, the collation of the query needs to be changed as follows.
SELECT Column1
FROM Table1
WHERE Column1 COLLATE Latin1_General_CS_AS = 'casesearch'

Adding COLLATE Latin1_General_CS_AS makes the search case sensitive.
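The CI/CS behavior can be mimicked outside SQL Server to see the difference in match counts (plain Python as an analogy; real collations also involve accent sensitivity and locale rules):

```python
values = ['CaseSearch', 'casesearch', 'CASESEARCH', 'CaSeSeArCh']

# CI collation ~= compare case-insensitively
ci_matches = [v for v in values if v.lower() == 'casesearch']
# CS collation ~= compare exactly
cs_matches = [v for v in values if v == 'casesearch']

print(len(ci_matches))  # 4 -> what the default CI collation returns
print(len(cs_matches))  # 1 -> what the CS collation returns
```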

The default collation of a SQL Server installation, SQL_Latin1_General_CP1_CI_AS, is not case sensitive.

To change the collation of any column of any table permanently, run the following query (the data type shown is illustrative):
ALTER TABLE Table1
ALTER COLUMN Column1 VARCHAR(50)
COLLATE Latin1_General_CS_AS

To know the collation of the columns of any table, run the following stored procedure:
EXEC sp_help Table1

The second result set of the above script will return the collation of each column of the table Table1.

Reference: Pinal Dave

SQL SERVER – Installation failed with error – Wait on the Database Engine recovery handle failed

You might have seen on Facebook that I have completed the new setup of my machines. I use VMs to do my tests, so to have a domain, I made one VM a domain controller and another a member server. The next step was to install SQL Server, and by mistake I ran the SQL setup on the domain controller. Microsoft documentation does not recommend installing SQL Server on a domain controller, and there is a warning in SQL Setup.

[Screenshot: domain controller warning in SQL Server setup]

Rule Check Result
Rule “Computer domain controller” generated a warning.
Installing SQL Server 2016 CTP2.4 on a domain controller is not recommended.

I proceeded further and it failed at the end with the below error:

Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.

Here is the complete message in the summary.txt file:
Feature:                       SQL Server Replication
Status:                        Failed: see logs for details
Reason for failure:            An error occurred for a dependency of the feature causing the setup process for the feature to fail.
Next Step:                     Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
Component name:                SQL Server Database Engine Services Instance Features
Component error code:          0x851A001A
Error description:             Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.
I followed the error message’s advice and checked the ERRORLOG. Here are the messages in ERRORLOG:
Error: 17190, Severity: 16, State: 1.
Initializing the FallBack certificate failed with error code: 1, state: 20, error number: 0.
Unable to initialize SSL encryption because a valid certificate could not be found, and it is not possible to create a self-signed certificate.
Error: 17182, Severity: 16, State: 1.
TDSSNIClient initialization failed with error 0x80092004, status code 0x80. Reason: Unable to initialize SSL support. Cannot find object or property.
Error: 17182, Severity: 16, State: 1.
TDSSNIClient initialization failed with error 0x80092004, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. Cannot find object or property.
Error: 17826, Severity: 18, State: 3.
Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
Error: 17120, Severity: 16, State: 1.
SQL Server could not spawn FRunCommunicationsManager thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

I searched on the internet and found that this could be an issue with the SQL Service account. I had selected the default configuration, which was NT Service\MSSQLServer.

I uninstalled SQL Server, and this time I selected the Local System account for the SQL Service in the setup wizard, and voilà, it went fine.

Have you ever faced any such error on a regular machine? What was the solution?

Reference: Pinal Dave

SQL SERVER – Backup to Azure Blob error – The remote server returned an error: (409) Conflict

Cloud is inevitable. With various innovations happening in this area, SQL Server is not left behind; it is becoming a first-class citizen when it comes to moving to the cloud. In the past couple of releases, we have seen the integration with Azure getting stronger. Last year I wrote a blog about taking backups directly to Azure Storage.

SQL SERVER – Backup to Azure Blob using SQL Server 2014 Management Studio

As part of a demo environment, I had configured it long back, and recently I started seeing backup failures in the ERRORLOG with the below error message.

2015-04-03 15:39:12.40 spid165     Error: 18210, Severity: 16, State: 1.
2015-04-03 15:39:12.40 spid165     BackupVirtualDeviceFile::DetermineFileSize: SetPosition(0,EOF) failure on backup device ‘;. Operating system error Backup to URL received an exception from the remote endpoint. Exception Message: The remote server returned an error: (409) Conflict.

I searched for ways to fix this error and landed on an MSDN article which suggested enabling trace flag 3051 to get additional details.

DBCC TRACEON (3051,-1)

Once I ran the command and ran the backup again, it failed, but an additional file was generated in the folder that contains the ERRORLOG. Here is the content:

11/10/2015 3:39:15 PM: ======== BackupToUrl Initiated =========
11/10/2015 3:39:15 PM: Inputs: Backup = False, PageBlob= True, URI =, Acct= sqlauth, FORMAT= False, Instance Name = MSSQLSERVER, DBName = SQLAUTHTEST LogPath = D:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log
11/10/2015 3:39:15 PM: Process Id: 7384
11/10/2015 3:39:15 PM: Time for Initialization = 9.0003 ms
11/10/2015 3:39:15 PM: BackupToUrl Client is getting configuration from SqlServr
11/10/2015 3:39:15 PM: Time for Handshake and VDI config = 10.0001 ms
11/10/2015 3:39:15 PM: Time for Get BlobRef = 0.9949 ms
11/10/2015 3:39:16 PM: An exception occurred during communication with Azure Storage, exception information follows
11/10/2015 3:39:16 PM:  Exception Info: The remote server returned an error: (409) Conflict.
11/10/2015 3:39:16 PM:  Stack:    at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](StorageCommandBase`1 cmd, IRetryPolicy policy, OperationContext operationContext)
at Microsoft.WindowsAzure.Storage.Blob.CloudPageBlob.AcquireLease(Nullable`1 leaseTime, String proposedLeaseId, AccessCondition accessCondition, BlobRequestOptions options, OperationContext operationContext)

If we look at the stack, it talks about a lease. HTTP 409 (Conflict) indicates that the request could not be processed because of a conflict in the request, such as an edit conflict in the case of multiple updates.

I downloaded the Azure Explorer tool, and using it I was able to break the lease which was open. Below is the option in the UI called ‘Break Lease On Blob’ once you are connected to the storage account.

[Screenshot: the ‘Break Lease On Blob’ option in Azure Explorer]

Another way was to break the lease using the way documented in MSDN (Deleting Backup Blob Files with Active Leases) and it needs coding which I am not an expert at.

Have you ever tried the Backup to URL option with SQL Server? Will you be using these tools for your environments? Do let me know via comments.

Reference: Pinal Dave