SQL SERVER – Video Introduction to Delayed Durability – SQL in Sixty Seconds #074

Earlier I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. SQL Server 2014 has introduced many new features, and one of the performance features is Delayed Transaction Durability. This concept is indeed very interesting. To explain this feature in detail, we first need to understand what Full Transaction Durability is. The current default of SQL Server is Full Transaction Durability. A common question I often receive is why we would move away from full durability to delayed durability. The answer is – prioritizing Performance over Durability.
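
If you want to try it yourself before watching the video, here is a minimal sketch; the database name SQLAuthority and the table dbo.SampleTable are purely illustrative. The database first has to allow delayed durability, and an individual transaction can then opt in at commit time.

-- Allow delayed durability at the database level (DISABLED | ALLOWED | FORCED)
ALTER DATABASE SQLAuthority SET DELAYED_DURABILITY = ALLOWED;
GO
-- Opt a single transaction into a delayed (asynchronous) log flush at commit time
BEGIN TRANSACTION;
UPDATE dbo.SampleTable SET Col1 = 'value' WHERE ID = 1;  -- hypothetical table and column
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);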

I hope my earlier blog post clearly explained how delayed durability works. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this video from Simple Example of Delayed Durability.

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Live Plan for Executing Query – SQL in Sixty Seconds #073

Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced live plans. Before you execute a query you can see the estimated execution plan, and after you have executed it you can see the actual execution plan – but how do you see a query plan while the query is still being executed? This is a very interesting question. In SQL Server 2014 we now have a new DMV which tracks execution stats at each operator level while the query is still running. The DMV for the same is sys.dm_exec_query_profiles.
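
As a quick sketch of how this DMV is queried (the session id 53 below is only an example): remember that in SQL Server 2014 the session running the query must have SET STATISTICS XML ON or SET STATISTICS PROFILE ON enabled, otherwise sys.dm_exec_query_profiles returns no rows for it.

SELECT p.session_id,
p.node_id,
p.physical_operator_name,
p.row_count,            -- rows produced so far by this operator
p.estimate_row_count    -- optimizer estimate for the same operator
FROM sys.dm_exec_query_profiles AS p
WHERE p.session_id = 53  -- hypothetical session id of the long-running query
ORDER BY p.node_id;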

I hope my earlier blog post clearly explained how the live plan for an executing query works. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this video from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Know Cardinality of Executed Query?

Earlier I wrote three blog posts on SQL Server 2014 Cardinality. Read them over here to get the context of this blog post.

I explained in the last blog post that you can execute a query under a different cardinality estimation model even though your database has a different compatibility level. One of the questions which I received right after the blog post was how a user can tell which cardinality estimation version was used by the query just executed. It is indeed a very good question. There are multiple ways to know the cardinality estimation version of the query just executed. Here are two ways to do the same.

1) Execution Plan Properties

I enabled the actual execution plan (CTRL+M) when executing the query, and then selected one of the operators of the execution plan. Next, I right-clicked and selected the properties of the execution plan. You can also open the properties window with the shortcut key F4. It will display the properties window on the right side in SSMS.

Over here, see the property CardinalityEstimationModelVersion. If the value of this property is 70, the query used the legacy cardinality estimator from previous versions. If the value is 120, it means the query used the new cardinality estimator algorithm of the latest SQL Server version.

2) XML Execution Plan

This method also requires you to enable the execution plan with (CTRL+M). Once the execution plan is enabled, right-click over the execution plan and click on the option Show XML execution plan. Once this option is clicked, it will open a new window with the XML plan.

In the XML execution plan, look for the property CardinalityEstimationModelVersion. If the value of this property is 70, the query used the legacy cardinality estimator from previous versions. If the value is 120, it means the query used the new cardinality estimator algorithm of the latest SQL Server version.
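
If you prefer a query over clicking through SSMS, here is a sketch that pulls the same attribute out of plans still sitting in the plan cache; it assumes the plan of interest has not yet been evicted from the cache.

SELECT TOP (10)
st.text AS query_text,
qp.query_plan.value(
'declare namespace p="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
(//p:StmtSimple/@CardinalityEstimationModelVersion)[1]', 'VARCHAR(10)') AS ce_model_version
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.last_execution_time DESC;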

Well, it is very simple to know which cardinality estimation version was used for your query execution.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

[Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many examples where users failed to restore their database because they made some mistake while taking the backup and were not aware of it. The tail-log backup was such a point of confusion in earlier versions of SQL Server, but in the latest version the Microsoft team has addressed it with additional information on the backup and restore screens themselves. Now that this additional information is there, a few more people are confused because they have no clue what it means. Previously they did not see this as an issue, and now they are finding the tail log to be a new learning.

Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words how to back up and recover the tail end of a transaction log.


Many times when restoring a database over an existing database, SQL Server will warn you about needing to take a tail-log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup.

You might be asking yourself “What is the tail end of the transaction log?”. The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log.

Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions.

For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged then you wouldn’t be able to recover the tail end of the log. If you do have access to the physical log file then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called “The Ultimate Tail Log Backup and Restore” and have been invited back this year to present it again.

During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can’t become more corrupt than that. I attempt to bring the database back online, which changes the state to RECOVERY PENDING, and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR. As its name says, it does not try to truncate the log. This is a great demo; however, how could I back up the tail end of the log if the failure destroyed my entire instance of SQL Server and all I had was the LDF file?
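
Here is a minimal sketch of that tail-log backup, with a hypothetical database name and backup path; it is run after the attempt to bring the database online has left it in RECOVERY PENDING.

BACKUP LOG ProductionDB
TO DISK = N'L:\backup\ProductionDB_tail.trn'
WITH NO_TRUNCATE, INIT;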

During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log. If I am performing proper backups, then my most recent full, differential, and log backup files should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file, and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example.
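
The rough sequence of steps looks like the sketch below; the database name and paths are hypothetical, and the actual file swap (deleting the new MDF and copying in the surviving LDF) happens at the operating-system level between the two ALTER DATABASE statements.

CREATE DATABASE ProductionDB;              -- same name as the failed database
GO
ALTER DATABASE ProductionDB SET OFFLINE;
GO
-- delete the new data file and overwrite the new log file with the surviving LDF here
ALTER DATABASE ProductionDB SET ONLINE;    -- fails and leaves the database in RECOVERY PENDING
GO
BACKUP LOG ProductionDB
TO DISK = N'L:\backup\ProductionDB_tail.trn'
WITH NO_TRUNCATE, INIT;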

I encourage each of you to view my blog post and watch the video demonstration on how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do this just in case. It really isn’t a matter of “IF” you will have to perform a restore of a production system but more of a “WHEN”. Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss or not.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves the performance of queries. The cardinality estimation logic is responsible for the quality of query plans and plays a major role in the performance of any query. This logic was not updated for quite a while, but in SQL Server 2014 it has been redesigned. The new logic now incorporates various assumptions and algorithms for OLTP and warehousing workloads.
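
If you want to see the difference yourself, here is a small sketch assuming the AdventureWorks2012 sample database on a SQL Server 2014 instance: run the same statement once under the new estimator and once with trace flag 9481, which forces the legacy (version 70) estimator, and compare the estimated row counts in the two plans.

ALTER DATABASE AdventureWorks2012 SET COMPATIBILITY_LEVEL = 120;  -- new CE becomes the default
GO
USE AdventureWorks2012;
GO
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 870;  -- new cardinality estimator (120)
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 870
OPTION (QUERYTRACEON 9481);                                   -- legacy cardinality estimator (70)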

I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP (aka project “Hekaton”) which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer a significant performance improvement for typical OLTP workloads.

The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that they still support the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

Points to remember

  • Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
  • Disk-based tables refer to the normal tables we have been creating in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
  • Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine, which converts them into machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can’t be used to reference any disk-based table (a sketch follows after this list).
  • Interpreted Transact-SQL stored procedures refer to the traditional stored procedures SQL Server has always used.
  • Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
  • Interop refers to interpreted Transact-SQL that references memory-optimized tables.
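
As promised above, here is a minimal sketch of a natively compiled stored procedure; the procedure name, parameters, and the memory-optimized table dbo.Members it inserts into are all hypothetical.

CREATE PROCEDURE dbo.usp_InsertMember
@Id INT,
@Name CHAR(40)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
-- dbo.Members is assumed to be an existing memory-optimized table (Id INT, Name CHAR(40))
INSERT INTO dbo.Members (Id, Name) VALUES (@Id, @Name);
END;
GO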

Using In-Memory OLTP

The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.
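
A quick way to confirm that your edition and platform can use it (just a sketch) is to check the corresponding server property:

SELECT SERVERPROPERTY('IsXTPSupported') AS IsInMemoryOLTPSupported;  -- 1 = In-Memory OLTP is supported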

Creating Databases

Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

CREATE DATABASE InMemoryDB
ON PRIMARY
(NAME = [InMemoryDB_data],
FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
-- memory-optimized filegroup with two containers on separate drives; logical file names must be unique
FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = [InMemoryDB_mod_dir1],
FILENAME = 'S:\data\InMemoryDB_mod_dir1'),
(NAME = [InMemoryDB_mod_dir2],
FILENAME = 'R:\data\InMemoryDB_mod_dir2')
LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
COLLATE Latin1_General_100_BIN2;

The above example code creates files on three different drives (D:, S:, and R:) for the data file and the in-memory storage containers, so if you would like to run this code, kindly change the drive and folder locations as per your convenience. Also notice that a binary Windows (non-SQL) collation was specified. BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables.

It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same.

ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
GO

Creating Tables

There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example.

DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)

A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that the data is persisted along with the schema.

Indexing Memory Optimized Table

A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have at least one index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified.

CREATE TABLE Mem_Table
(
[Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
[City] VARCHAR(32) NULL,
[State_Province] VARCHAR(32) NULL,
[LastModified] DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Now, as you can see in the above query example, we have used the clause MEMORY_OPTIMIZED = ON to make sure it is created as a memory-optimized table and not just a normal table, and we have also used DURABILITY = SCHEMA_AND_DATA, which means it will persist the data along with the schema. You can also notice that this table has a PRIMARY KEY declared upfront, which is mandatory for durable memory-optimized tables.

We will talk more about HASH Indexes and BUCKET_COUNT in later articles on this topic which will be focusing more on Row and Index storage on Memory-Optimized tables. So stay tuned for that as well.

Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let’s explore some examples to understand the performance gains of memory-optimized tables.

I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

USE InMemoryDB
GO
-- Creating a disk based table
CREATE TABLE dbo.Disktable
(
Id INT IDENTITY,
Name CHAR(40)
)
GO
CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
GO
-- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
CREATE TABLE dbo.Memorytable_durable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY
CREATE TABLE dbo.Memorytable_nondurable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Now insert 100000 records in dbo.Disktable and observe the Time Taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
INSERT INTO dbo.Disktable (Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
SET @i_t += 1
END
GO
-- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
SET @i_t += 1
END
GO
-- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
SET @i_t += 1
END
GO

The above three inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100,000 records on my machine with 8 GB RAM.

This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and memory-optimized tables with durability SCHEMA_ONLY are even faster, since they do not bother persisting their data to disk. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates.

Now, I leave the decision on using memory-optimized tables to you. I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Activity Monitor to Identify Blocking – Find Expensive Queries

Recently, when I was busy working on my next course for Pluralsight, one of my friends pinged me for some help over messenger. The conversation took a different spin, so here is a gist of what we talked about; I felt it was worth a share.

Me: Hi buddy, what’s up?

Jim: Dave, I have a unique problem and thought you could help me.

Me: Sure, let me know how I can be of help.

Jim: I am going on a quick consulting assignment tomorrow and I need a helping hand.

Me: Hmmm, sure, let me know what the problem is.

Jim: This customer has a lot of deadlock and blocking problems. And I wanted to know if you know any easy steps to find blocking queries in a real-time system?

Me: Hold on let me check.

Jim: Sure.

Me: Aha! Did you check my script available on my blog?

Jim: Awesome, Pinal, I think you saved my day. And I apologize for not doing a quick search over your blog first. But Pinal, I have one doubt.

Me: On the script?

Jim: No Pinal, the script was written 4 years back and is still a gold mine to help me tomorrow. I am just curious, have you ever explored any alternatives? I am sure your exploration wouldn’t have stopped.

Me: Well Jim, there are a number of other scripts, but one hidden gem I don’t think I have ever written about on this blog is the Activity Monitor.

Jim: Oh is it? Can you show me what you are saying?

We got on a screen-sharing session and I showed him a couple of neat tricks available with Activity Monitor. Let me take a moment to pen down what I showed Jim yesterday.

The quickest way to launch the Activity Monitor is to use the shortcut (CTRL+ALT+A) inside Management Studio, or feel free to read this blog on where to find the Activity Monitor.

Activity monitor to identify Blocking

The Activity Monitor has a number of tabs, and the tab in question is the “Processes” tab. If you have a blocking scenario, then Activity Monitor can really be a quick solution.

In the image below, we can see that SPID 51 is the head of the blocking chain and SPID 55 is waiting for locks to be released by 51. We can also find the wait time, the resource wait type, and more from this screen.

As a DBA, if you want to kill SPID 51, you can quickly do that from Activity Monitor using the right-click menu:
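
For those who prefer T-SQL over the UI, the sketch below shows roughly the same information the Processes tab surfaces; the session ids are only examples, and the commented KILL is the command equivalent of the right-click kill option.

SELECT r.session_id,
r.blocking_session_id,   -- head blocker for this request
r.wait_type,
r.wait_time,
r.status
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
-- KILL 51;  -- command equivalent of the Activity Monitor right-click kill option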

When I showed this to Jim, he was super excited and was in awe of what we can do with just SQL Server Management Studio. So Jim quizzed me to show him one more tip using Activity Monitor as a challenge. Since he was going for a performance troubleshooting exercise, I thought of sharing one more cool tip using the Activity Monitor.

Activity monitor to find execution plans of expensive queries

I asked Jim how he would find out if there are performance problems for a given query. The obvious answer, as an expert, was to use an execution plan. The problem with such an approach is that when the query was executed with parameters, then when we pull those queries out of Profiler or other mechanisms based on reads, writes, or CPU usage, the parameters need to be replaced, the query re-executed, and only then can we view the execution plan.

Interestingly, Activity Monitor has an interesting tab called “Recent Expensive Queries”. These are queries ranked based on reads, CPU, writes, etc. Once you get into this screen, right-click the query in question and select “Show Execution Plan”.
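
A rough T-SQL approximation of that tab, based on the plan-cache DMVs (this is just a sketch, and ordering by logical reads is only one possible choice), would look like this:

SELECT TOP (10)
qs.total_logical_reads,
qs.total_worker_time,
qs.execution_count,
st.text AS query_text,
qp.query_plan           -- click the XML in SSMS to open the graphical plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_logical_reads DESC;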

At the end of the screen-sharing exercise and about 10 minutes of quick talk with me, Jim was super excited and said he would ping me once the assignment was over to tell me how he used some of the tips I shared with him.

Activity Monitor is an awesome hidden gem inside SQL Server Management Studio that many fail to use for quick monitoring. I hope this blog post inspired some of you to use it in your environments in the future. Do let me know if you have ever used Activity Monitor in these ways.

Reference: Pinal Dave (http://blog.sqlauthority.com)