SQL SERVER – Database Taking Long Time to Create – Notes from the Field #047

[Notes from Pinal]: There are a few things that confuse us the first time we encounter them; however, there are also a few concepts that confuse us even after we have faced them multiple times. One such subject is a database taking a long time to create. Think about it: if you see a database that takes a long time to create, your natural reaction will be to wonder how long various operations will take with the same database. I asked Tim if he could write further on this topic and help people understand this complex subject in simple words.

Linchpin People are database coaches and wellness experts for a data driven world. In this 47th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains it in very simple words. Please follow Tim on his twitter handle at @tradney.


I recently came across a support question on a forum where the user was very concerned about the health of their environment because it was taking over 10 minutes to create a 150 GB database. When I read the post, two things immediately came to mind. First, they could be creating the data file on a slow IO subsystem, and second, they probably did not have Instant File Initialization turned on.

When data and log files are created in SQL Server, they are initialized to overwrite any existing data on the disk. This is done by filling the files with zeros, a process commonly referred to as zeroing out the file.

In SQL Server there is a setting that allows for instant initialization of the data file. This feature does have a security consideration that must be understood; however, for most organizations it is not an issue. Because the free space is not overwritten with zeros, an unauthorized user could potentially read any data previously written to that part of the disk.

Instant file initialization is only available if the SQL Server (MSSQLSERVER) service account has been granted SE_MANAGE_VOLUME_NAME. Members of the Windows Administrator group have this right and can grant it to other users by adding them to the Perform Volume Maintenance Tasks security policy. For more information about assigning user rights, see the Windows documentation. (Reference)
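
You can check whether the right has taken effect without leaving SSMS. Here is a minimal sketch, assuming a build where sys.dm_server_services exposes the instant_file_initialization_enabled column (SQL Server 2016 SP1 and later, plus some patched earlier builds):

-- Check whether Instant File Initialization is enabled for the engine service
-- (the column exists only on newer builds, as noted above)
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%'
GO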

In the case of the user on the forum, they were not using Instant File Initialization and decided to enable it. The user then created another database of the same size, and it was created in seconds, versus the same operation taking over 10 minutes before.

I created a blog and video a couple of years ago walking through the process of enabling this feature. You can view it here.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Location of Natively Compiled Stored Procedure and Naming Convention

Yesterday I wrote about SQL SERVER – Beginning In-Memory OLTP with Sample Example. One of the questions I received right after I published the blog post was: why do I call the stored procedure a natively compiled stored procedure when the entire code is in T-SQL? Indeed a very good question. The answer is very simple: we call it a natively compiled stored procedure because, as soon as we execute the CREATE statement, the compiler converts the interpreted T-SQL, query plans, and expressions into native code.

You can execute the following query in your SSMS and find out the location of the natively compiled stored procedure.

SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL'
GO

To see this DMV in action execute the code from this blog post on your SQL Server.

-- Create database
CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=100MB)
GO

-- Create table
USE InMemory
GO
-- Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (
    ID INT NOT NULL,
    Name VARCHAR(100) NOT NULL,
    CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT = 1000000))
WITH (MEMORY_OPTIMIZED = ON)
GO

-- Create stored procedure
-- Inserting same 100,000 rows using InMemory Table
CREATE PROCEDURE ImMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'english')
    DECLARE @counter AS INT = 1
    DECLARE @start DATETIME
    SELECT @start = GETDATE()
    WHILE (@counter <= 100000)
    BEGIN
        INSERT INTO dbo.DummyTable_Mem VALUES (@counter, 'SQLAuthority')
        SET @counter = @counter + 1
    END
    SELECT DATEDIFF(SECOND, @start, GETDATE()) AS [InMemory_Insert in sec]
END
GO

Now let us execute our script as described.

Now we can see in the result that there are two different DLL files. In the image above I have explained the various parts of the DLL file name.

As per the image, our database id is 11, and if we check, it is the same database we created a few seconds ago. Similarly, the object id embedded in the name can be identified as well.
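
The loaded modules follow the naming pattern xtp_t_{database_id}_{object_id}.dll for tables and xtp_p_{database_id}_{object_id}.dll for procedures, so a quick query can confirm the ids against the DLL names (object names taken from the script above):

-- Confirm the ids embedded in the DLL names
USE InMemory
GO
SELECT DB_ID() AS database_id,
OBJECT_ID('dbo.DummyTable_Mem') AS table_object_id,
OBJECT_ID('dbo.ImMemory_Insert_test') AS proc_object_id
GO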

If we open up the folder where these objects were created, we will see two sets of files: one for the stored procedure and one for the table.

My friend Balmukund explains this concept very well on his blog over here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Beginning In-Memory OLTP with Sample Example

In-Memory OLTP is a wonderful new feature introduced in SQL Server 2014. My friend Balmukund Lakhani has written an amazing series on A-Z of In-Memory on his blog. All serious learners should study it for a deep understanding of the subject. I will try to cover a few of the concepts in simpler words, and you may often find me referring to Balmukund’s site on this subject.

Why do we need In-Memory?

Here is the paragraph from Balmukund’s blog (published with approval):

Looking at the market trends of the tumbling cost of RAM (USD/MB) and the performance implications of reading data from memory vs. disk, it's evident that people would love to keep the data in memory. With this evolution in the hardware industry, software has to evolve and be modified to take advantage and scale as much as possible. On the other hand, businesses also don't want to compromise the durability of data – a restart would clear RAM, but data should be back in the same state as it was before the failure. To meet hardware trends and durability requirements, SQL Server 2014 has introduced In-Memory OLTP, which solves them in a unique manner.

Before we start on the subject, let us see a few of the reasons why you would want to go for a high-performance memory-optimized OLTP operation.

  • It naturally integrates with SQL Server relational database
  • It supports Full ACID properties
  • It helps with non-blocking multi-version optimistic concurrency control, in other words, no locks or latches

Well, let us start with a working example. In this example, we will learn a few things – please pay attention to the details.

  1. We will create a database with a filegroup that will contain memory optimized data
  2. We will create a table with the memory_optimized option enabled
  3. We will create a stored procedure which is natively compiled

The procedure of our test is very simple. We will create two stored procedures: 1) a regular stored procedure and 2) a natively compiled one. We will compare the performance of both SPs and see which one performs better.

Let’s Start!

Step 1: Create a database which creates a file group containing memory_optimized_data

CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=100MB)
GO

Step 2: Create two different tables 1) Regular table and 2) Memory Optimized table

USE InMemory
GO
-- Create a Simple Table
CREATE TABLE DummyTable (ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(100) NOT NULL)
GO
-- Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (
    ID INT NOT NULL,
    Name VARCHAR(100) NOT NULL,
    CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT = 1000000))
WITH (MEMORY_OPTIMIZED = ON)
GO

Step 3: Create two stored procedures 1) Regular SP and 2) Natively Compiled SP

Stored Procedure – Simple Insert
-- Simple table to insert 100,000 Rows
CREATE PROCEDURE Simple_Insert_test
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @counter AS INT = 1
    DECLARE @start DATETIME
    SELECT @start = GETDATE()
    WHILE (@counter <= 100000)
    BEGIN
        INSERT INTO DummyTable VALUES (@counter, 'SQLAuthority')
        SET @counter = @counter + 1
    END
    SELECT DATEDIFF(SECOND, @start, GETDATE()) AS [Simple_Insert in sec]
END
GO

Stored Procedure – InMemory Insert
-- Inserting same 100,000 rows using InMemory Table
CREATE PROCEDURE ImMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'english')
    DECLARE @counter AS INT = 1
    DECLARE @start DATETIME
    SELECT @start = GETDATE()
    WHILE (@counter <= 100000)
    BEGIN
        INSERT INTO dbo.DummyTable_Mem VALUES (@counter, 'SQLAuthority')
        SET @counter = @counter + 1
    END
    SELECT DATEDIFF(SECOND, @start, GETDATE()) AS [InMemory_Insert in sec]
END
GO

Step 4: Compare the performance of two SPs

Both stored procedures measure and report the time taken to execute them. Let us execute them and measure the time.

-- Running the test for Insert
EXEC Simple_Insert_test
GO
EXEC ImMemory_Insert_test
GO

Here is the time taken by Simple Insert: 12 seconds

Here is the time taken by InMemory Insert: nearly 0 seconds (less than 1 second)

Step 5: Clean up!

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO

Analysis of Result

It is very clear that In-Memory OLTP improves the performance of the query and the stored procedure. To implement In-Memory OLTP there are a few steps the user has to follow with regard to filegroup and table creation. However, the end result is much better in the case of the In-Memory OLTP setup.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Video Introduction to Delayed Durability – SQL in Sixty Seconds #074

Earlier I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. SQL Server 2014 has introduced many new features, and one of the performance features is Delayed Transaction Durability. This concept is indeed very interesting. To explain this feature in detail, we need to understand what Full Transaction Durability is. The current default of SQL Server is Full Transaction Durability. A common question I often receive is why we would move away from full durability to delayed durability. The answer is: prioritizing performance over durability.
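
To see what this looks like in practice, here is a minimal sketch; the database and table names (SampleDB, dbo.SomeTable) are hypothetical. The database must first allow delayed durability, after which individual transactions can opt in at commit time:

-- Allow transactions in this database to request delayed durability
ALTER DATABASE SampleDB SET DELAYED_DURABILITY = ALLOWED
GO
-- A transaction that opts in: COMMIT returns before the log records are hardened to disk
BEGIN TRANSACTION
UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE ID = 1
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)
GO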

I hope my earlier blog post clearly explained how delayed durability works. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this video from Simple Example of Delayed Durability.

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Live Plan for Executing Query – SQL in Sixty Seconds #073

Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced live plans. Before you execute a query, you can see the estimated execution plan, and after you execute it, you can see the actual execution plan – but how do you see a query plan while the query is still being executed? This is a very interesting question. In SQL Server 2014 we now have a new DMV which tracks execution stats at each operator level while the query is still running. The DMV for the same is sys.dm_exec_query_profiles.
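
Here is a minimal sketch of querying the DMV. One assumption to note: in SQL Server 2014, the session being monitored generally has to opt in to profiling (for example with SET STATISTICS XML ON or SET STATISTICS PROFILE ON) before rows show up here:

-- While a long-running query executes in another session,
-- watch its per-operator progress (actual vs. estimated rows)
SELECT session_id,
node_id,
physical_operator_name,
row_count,
estimate_row_count
FROM sys.dm_exec_query_profiles
ORDER BY session_id, node_id
GO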

I hope my earlier blog post clearly explained how live plan for executing query works. If not, I suggest you watch following quick video where I explain this concept in extremely simple words.

You can download the code used in this video from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Know Cardinality of Executed Query?

Earlier I wrote three blog posts on SQL Server 2014 Cardinality. Read them over here to get the context of this blog post.

I explained in the last blog post that a query can execute under a different cardinality estimation model depending on the database compatibility level. One of the questions which I received right after the blog post was: how does the user know which cardinality estimation version was used for the query just executed? It is indeed a very good question. There are multiple ways to know the cardinality estimation version of the query just executed. Here are two ways to do the same.

1) Execution Plan Properties

I enabled the execution plan (CTRL+M) when executing queries, and after execution I selected one of the operators of the execution plan. I then right-clicked and selected the properties of the execution plan. You can also open the properties with the shortcut key F4. It will display the properties window on the right side in SSMS.

Over here, see the property CardinalityEstimationModelVersion. If the value of this property is 70, the query used the legacy (pre-2014) estimator. If the value is 120, the query used the new cardinality estimator algorithm of the latest SQL Server version.
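
If you want to see both values side by side, a quick sketch is to run the same statement under each estimator using the documented trace flags (9481 forces the legacy estimator, 2312 forces the new one). Note the assumptions: QUERYTRACEON requires sysadmin rights, and Sales.SalesOrderDetail here assumes an AdventureWorks sample database:

-- Same statement, two estimators: compare CardinalityEstimationModelVersion in each actual plan
SELECT COUNT(*) FROM Sales.SalesOrderDetail OPTION (QUERYTRACEON 9481) -- property shows 70
GO
SELECT COUNT(*) FROM Sales.SalesOrderDetail OPTION (QUERYTRACEON 2312) -- property shows 120
GO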

2) XML Execution Plan

This method also requires you to enable the execution plan with CTRL+M. Once the execution plan is enabled, right-click over the execution plan and click on the option Show XML execution plan. Once this option is clicked, it will open a new window with the XML plan.

In the XML execution plan, look for the property CardinalityEstimationModelVersion. Just as above, a value of 70 means the legacy estimator was used, and a value of 120 means the new SQL Server 2014 cardinality estimator was used.
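
You can also pull this attribute out of the plan cache with XQuery rather than reading the XML by eye. A minimal sketch; in the showplan schema the attribute lives on the StmtSimple element:

-- List cached plans along with the cardinality estimation model version they used
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (10)
qp.query_plan.value('(//StmtSimple/@CardinalityEstimationModelVersion)[1]', 'INT') AS ce_version,
st.[text] AS query_text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE qp.query_plan IS NOT NULL
GO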

Well, it is very simple to know which cardinality estimation version was used for a query execution.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

[Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many different examples where users have failed to restore their database because they made some mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has fixed the confusion by putting additional information on the backup and restore screen itself. Now that this additional information is there, a few more people are confused because they have no clue about it. Previously they did not see this as an issue, and now they are finding the tail log to be a new learning.

Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words how to back up and recover the tail end of a transaction log.


Many times, when restoring a database over an existing database, SQL Server will warn you about needing to take a tail end of the log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup.

You might be asking yourself, “What is the tail end of the transaction log?” The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log.

Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions.

For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged then you wouldn’t be able to recover the tail end of the log. If you do have access to the physical log file then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called “The Ultimate Tail Log Backup and Restore” and have been invited back this year to present it again.

During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can’t become more corrupt than that. I attempt to bring the database back online, which changes its state to RECOVERY PENDING, and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR. As its name says, it does not try to truncate the log. This is a great demo; however, how could I achieve backing up the tail end of the log if the failure destroyed my entire instance of SQL Server and all I had was the LDF file?
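
The core command is a one-liner; here is a sketch with a hypothetical database name and backup path:

-- Back up the tail of the log even though the data file is gone;
-- NO_TRUNCATE behaves like COPY_ONLY plus CONTINUE_AFTER_ERROR
BACKUP LOG [ProdDB]
TO DISK = N'D:\Backup\ProdDB_tail.trn'
WITH NO_TRUNCATE
GO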

During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log. If I am performing proper backups, then my most recent full, differential, and log files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file, and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example.
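
Sketched end to end, with hypothetical names and paths, that sequence looks roughly like this:

-- 1. On the surviving server, create a shell database with the failed database's name
CREATE DATABASE ProdDB
GO
-- 2. Take it offline so the physical files can be swapped
ALTER DATABASE ProdDB SET OFFLINE
GO
-- 3. (Outside SQL Server) delete the shell MDF and copy the rescued LDF over the shell LDF
-- 4. Attempt to bring it online; this fails, leaving the database in RECOVERY PENDING
ALTER DATABASE ProdDB SET ONLINE
GO
-- 5. Back up the tail of the log
BACKUP LOG [ProdDB] TO DISK = N'D:\Backup\ProdDB_tail.trn' WITH NO_TRUNCATE
GO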

I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever has to perform this in production; however, it is a really good idea to know how to do it just in case. It really isn’t a matter of “IF” you will have to perform a restore of a production system but more of “WHEN”. Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all of your business customers of data loss or not.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)