SQL SERVER – Beginning In-Memory OLTP with Sample Example

In-Memory OLTP is a wonderful new feature introduced in SQL Server 2014. My friend Balmukund Lakhani has written an amazing series on the A-Z of In-Memory on his blog. Every serious learner should study it for a deep understanding of the subject. I will try to cover a few of the concepts in simpler words, and you will often find me referring to Balmukund's site on this subject.

Why do we need In-Memory?

Here is the paragraph from Balmukund’s blog (published with approval):

Looking at the market trends of the tumbling cost of RAM (USD/MB) and the performance implications of reading data from memory vs. disk, it's evident that people would love to keep the data in memory. With this evolution in the hardware industry, software has to evolve and be modified so that it can take advantage and scale as much as possible. On the other hand, businesses also don't want to compromise the durability of data – a restart would clear RAM, but data should be back in the same state as it was before the failure. To meet hardware trends and durability requirements, SQL Server 2014 has introduced In-Memory OLTP, which solves both in a unique manner.

Before we start on the subject, let us look at a few of the reasons why you would want to go for high-performance, memory-optimized OLTP operations.

  • It integrates naturally with the SQL Server relational database
  • It supports full ACID properties
  • It provides non-blocking, multi-version optimistic concurrency control – in other words, no locks or latches

Well, let us start with a working example. In this example, we will learn a few things – please pay attention to the details.

  1. We will create a database with a file group which will contain memory optimized data
  2. We will create a table with the MEMORY_OPTIMIZED option set to ON
  3. We will create a stored procedure which is natively compiled

The procedure for our test is very simple. We will create two stored procedures: 1) a regular stored procedure and 2) a natively compiled one. We will compare the performance of both stored procedures and see which one performs better.

Let’s Start!

Step 1: Create a database with a filegroup that contains MEMORY_OPTIMIZED_DATA

CREATE DATABASE InMemory
ON PRIMARY (NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', SIZE = 200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA (
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (NAME = [InMem_demo_log], FILENAME = 'd:\data\InMemory.ldf', SIZE = 100MB)
GO

Step 2: Create two different tables: 1) a regular table and 2) a memory-optimized table

USE InMemory
GO
-- Create a Simple Table
CREATE TABLE DummyTable (ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(100) NOT NULL)
GO
-- Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL,
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT = 1000000))
WITH (MEMORY_OPTIMIZED = ON)
GO
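
If you want to double-check things at this point, here is a small optional query – not part of the original steps – that shows which of the two tables is memory-optimized and what its durability setting is:

-- Optional check: which of the two tables is memory-optimized?
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE name IN ('DummyTable', 'DummyTable_Mem')
GO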

Step 3: Create two stored procedures: 1) a regular SP and 2) a natively compiled SP

Stored Procedure – Simple Insert
-- Simple table to insert 100,000 Rows
CREATE PROCEDURE Simple_Insert_test
AS
BEGIN
SET NOCOUNT ON
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO DummyTable VALUES (@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT DATEDIFF(SECOND, @start, GETDATE()) [Simple_Insert in sec]
END
GO

Stored Procedure – InMemory Insert
-- Inserting same 100,000 rows using InMemory Table
CREATE PROCEDURE InMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'english')
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO dbo.DummyTable_Mem VALUES (@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT DATEDIFF(SECOND, @start, GETDATE()) [InMemory_Insert in sec]
END
GO
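
As a quick sanity check (again optional, and not in the original walkthrough), sys.sql_modules can confirm that the second procedure really was natively compiled:

-- Optional check: confirm the procedure is natively compiled
SELECT OBJECT_NAME(object_id) AS ProcName, uses_native_compilation
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.InMemory_Insert_test')
GO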

Step 4: Compare the performance of the two SPs

Both stored procedures measure and print the time taken to execute them. Let us execute them and measure the time.

-- Running the test for Insert
EXEC Simple_Insert_test
GO
EXEC InMemory_Insert_test
GO

Here is the time taken by Simple Insert: 12 seconds

Here is the time taken by InMemory Insert: nearly 0 seconds (less than 1 second)
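
If you are curious how much memory the memory-optimized table consumed after the insert, the following optional query – run it in the InMemory database – asks the sys.dm_db_xtp_table_memory_stats DMV:

-- Optional: memory used by the memory-optimized table and its index
SELECT OBJECT_NAME(object_id) AS TableName,
memory_used_by_table_kb,
memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('dbo.DummyTable_Mem')
GO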

Step 5: Clean up!

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO

Analysis of Result

It is very clear that In-Memory OLTP improves the performance of queries and stored procedures. To implement In-Memory OLTP there are a few steps the user has to follow with regard to filegroup and table creation. However, the end result is much better in the case of the In-Memory OLTP setup.

Reference: Pinal Dave (http://blog.sqlauthority.com)


Google Drive Trick – Google Spreadsheet Formatting Dates to String

I have been using Google Drive and Google Spreadsheet for a while. I work with experts around the world, and we all use different date formats. The biggest confusion when we share data is often the timestamp. For some people it is natural to write dates like dd/mm/yyyy and for others it is natural to write mm/dd/yyyy. Sometimes we are simply confused looking at a date because there is no indication of the format. Finally, we all decided that we would spell out the dates. For example, for a date like 11/9/2014, to avoid confusion about whether it is November 9, 2014 or September 11, 2014, we decided to just write it out as a string.

Now here is the problem with Google Spreadsheet – when I type any datetime like the following, it immediately converts it to mm/dd/yyyy format and defeats our purpose of spelling it out.

Finally, we figured out how we can preserve the datetime as a string in Google Spreadsheet. Just start your datetime with a single quote. For example, we now enter a datetime as follows:

‘September 9, 2014 instead of September 9, 2014. The best part is that the single quote is not visible on the screen.

Well, if there is any other way to do this that I am not aware of, please let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Flushing Transaction Log to Disk for Current Database

I have been writing about Delayed Durability for quite a while, and every time I think a blog post will be the last one on this subject, I get a question in an email and end up writing a new blog post.

Here are a few blog posts which I have written on this subject. Please read them if you want to understand delayed durability in detail.

Here is another blog post based on a question I recently received.

Question) I have been using delayed durability for a few of my queries and stored procedures. Is there any way I can be sure that every transaction log record which has not been hardened to disk so far is flushed to disk?

Indeed a very good question. Well, SQL Server has a system stored procedure, sp_flush_log, which guarantees that all previously committed delayed durability transactions are made durable by flushing them to disk. So at any point in time when you want to be sure that all of your delayed transactions have been flushed, you can execute this stored procedure.

EXECUTE sys.sp_flush_log

Remember, there is one more thing to know here as well. If you execute any fully durable transaction right after the delayed durable transactions, it will also automatically flush all of those transactions to disk.
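
To put both points together, here is a minimal sketch – the table dbo.SomeTable is hypothetical, and it assumes the database has DELAYED_DURABILITY set to ALLOWED – of a delayed durable commit followed by an explicit flush:

-- Hypothetical table; assumes DELAYED_DURABILITY = ALLOWED at the database level
BEGIN TRAN
INSERT INTO dbo.SomeTable VALUES (1)
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)
GO
-- Harden all previously committed delayed durable transactions to disk
EXECUTE sys.sp_flush_log
GO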

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – Discount Code for SQL Live! 360 in November

I thought you might be interested to know that I’ll be speaking at Live! 360, November 17 – 21 in Orlando, FL. Live! 360 brings together five conferences, and the brightest minds in IT and Dev, to explore leading edge technologies and conquer current ones. More info: http://bit.ly/LSPK22HOME

As a speaker, I can extend $600 savings on the 5-day package. Register here: http://bit.ly/LSPK22REG and use code LSPK22.

I’ll be presenting the following sessions:

Secrets of SQL Server: Database Worst Practices
Abstract: Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as talk about worst practices and how to avoid them. Shedding light on some of these perils and the tricks to avoid them may even save your current job. After attending this session, Developers will only need 60 seconds to improve performance of their database server in their SharePoint implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers, immediately post the session. Additionally, all attendees of the session will have access to learning material presented in the session.

Workshop: Performance Tuning Best Practices – Every DBA and Developer MUST Know

Performance Tuning is a tricky subject, but not as difficult as we think. It is very common to see in this industry that the DBA believes the performance issue is due to bad code and the Developer believes it is due to the bad deployment of the database server.

In this session, we will try to bridge the gap between DBA and Developer. We will approach performance as a whole subject rather than dividing them into DBA or Developer problems.

We will take very different approaches to solving the performance problems of SQL Server – we will share stories from real-world scenarios and a practical approach to solving performance problems.

We will primarily discuss:

  • Best Deployment Practices
  • Indexes, Best Practices and Maintenance
  • Identifying slow running queries and Tuning
  • Identifying resource bottleneck with the help of SQL Wait Stats
  • Optimizing Resources – CPU, IO, Memory
  • Best Practices to Resolve ad-hoc issues

You will learn:

  • Performance Tuning
  • Best Practices
  • Index and Query Tuning

SPECIAL OFFER:

As a speaker, I can extend $600 savings on the 5-day package. Register here: http://bit.ly/LSPK22REG and use code LSPK22.

Get better at what’s available, and get ready for what’s coming. We delve deep into the topics that matter most in your world today, covering practical knowledge, fundamentals, new technologies, and futures. Register now: http://bit.ly/LSPK22REG

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Go Make a Proactive Investment Today! – Notes from the Field #045

[Note from Pinal]: This is the 45th episode of the Notes from the Field series. If you know Mike Walsh, he is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human being.

Last time we met, I asked him one question – if you had to give me one piece of advice, what would it be? He said that he had no advice for me, but he had a challenge for me to complete, and that if I needed help completing that challenge, he would work with me to achieve the goal.

Now think about this entire incident – how many times does this kind of conversation happen in our lives? Very rarely.

In this episode of the Notes from the Field series, database expert Mike Walsh gives a challenge to all of us. He explains why we should be proactive, and right after that he gives us one challenge which we should all take up and complete.


You can spend some time and some money today… or you can spend more later. It's that simple.

As a SQL Server consultant, you could say I know a little bit about clients spending money. There are jokes out there about how consultants spend their clients' money. I'll tell you, though – I've been playing with SQL Server for fifteen years now, the past 3-4 as a SQL Server consultant, and I'd much rather take less money up front than more money down the line.

Here’s what I mean – I enjoy seeing clients spend money proactively rather than reactively. For a few reasons:

  • It’s almost always less when you spend it on being proactive.
  • Stress levels are down when you aren’t in crisis mode.
  • Decisions are clearer and cleaner when there isn’t a “fire” going on.

And the old adage that I started this post with is true. I've seen it proven true time and time again. I've seen it reflected in my invoices. When a client calls up front, takes care of something proactively, and plans for the future, the end result is always less money spent and a more peaceful experience for all involved.

This is a simple post. It’s a quick post. I go into much more detail at an older post of mine about Planting Asparagus. And with it I have one simple challenge for you, the reader of Pinal’s wonderful blog:

Do Something Proactive Today

That’s it. Go into your SQL environment and forget the 400 e-mails in your inbox. Forget the piling-up requests in your queue. Forget the avalanche surrounding you as a DBA. Forget that stuff and do something proactive. Think about the future. Think about some what-ifs and save yourself future pain. I’m not saying you have to call us at Linchpin People and ask for a WellDBA™ Exam. I’m not saying you need to call any consultant. Just be proactive.

Here are a few places you can start. These are all places where I’ve recently seen clients spend much time and money in emergency situations because of a failure to be proactive. I’m sure Pinal has some great posts about all of these areas.

  • Do a restore test… Check your backups! – I don't just mean "are you taking a backup" or "does your system administrator say they are backing up your environment". I mean – are you sure you are backing up your environment the way your users expect and believe you are? If not, stop reading the rest and go check.
  • Check database consistency – I've spent much time lately helping clients who had a double whammy: no good backups AND no proactive checks looking for corruption. That means their users detected the corruption, nobody knew how long it had been there, and there was no good backup to recover from. That means data loss. That means expensive work to scrape out as much data as possible. The time you take to get this right pays dividends down the line. A minimal sketch of a backup verification and consistency check follows this list.
  • Perform routine maintenance – Index maintenance, statistics maintenance. If you aren't doing these basic things, your users could be suffering right now with performance problems, you could be wasting storage space, and things could be less efficient than they need to be.
  • Set up alerts and monitoring – Whether it is free, using SQL Server Agent alerts, SQL Agent operators, and notifications for jobs and errors, or one of the plethora of inexpensive SQL monitoring products out there. If you are blind to issues, you could have a smoldering fire building in your environment right now, and you will stay blind to it until that fire gets a little more oxygen and explodes into a down system.
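
To make the first two bullets concrete, here is a minimal sketch – the backup path and database name are placeholders, so adjust them for your environment. Note that RESTORE VERIFYONLY only confirms the backup is readable and complete; an actual restore test onto a spare server is even better.

-- Placeholder path and database name; adjust for your environment
RESTORE VERIFYONLY FROM DISK = 'D:\Backup\YourDatabase.bak'
GO
-- Proactive corruption check
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS
GO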

I could keep going. There are a dozen basic things that you can easily do right now. The investment is a little time. Maybe some digging around on the internet to make sure you are in a good spot to do these things. Maybe the investment is spending some money on an external review, like our exams or the health checks that any number of great SQL Server consultancies do. Maybe the investment is getting some DBA training or mentoring. Whatever the investment is, I can guarantee you from experience that it is all but certain to pay off, and the dividends are huge.

Ok. Stop reading and clicking. Go make a proactive investment in your environment.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Video Introduction to Delayed Durability – SQL in Sixty Seconds #074

Earlier I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. SQL Server 2014 has introduced many new features, and one of the performance-related ones is Delayed Transaction Durability. This concept is indeed very interesting. To explain this feature in detail, we first need to understand what Full Transaction Durability is. The current default of SQL Server is Full Transaction Durability. A common question I often receive is why we would move away from full durability to delayed durability. The answer is – prioritizing performance over durability.

I hope my earlier blog post clearly explained how delayed durability works for an executing query. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words.

You can download the code used in this video from Simple Example of Delayed Durability.

Action Item

Here are the blog posts I have previously written on this subject. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Simple Example of Delayed Durability

Earlier I wrote two blog posts on Delayed Durability. Please read them before continuing with this example, as they will give you good background on the concept before we start.

Now we will see a working example of the same.

First, we will create a database named DelayedDurability, and once it is created we will set its delayed durability option to ALLOWED. Right after that, we will create a dummy table inside it. Next, we will create two stored procedures – one with delayed durability enabled and one without. We will see how much time each of the stored procedures takes to complete the task.

Task 1: Create database with delayed durability

-- Script for Delayed Durability
USE MASTER
GO
-- Create New Database
CREATE DATABASE [DelayedDurability]
GO
BACKUP DATABASE [DelayedDurability] TO DISK = 'NUL';
GO
-- Let us set Delayed Durability at DB level
USE [master]
GO
ALTER DATABASE [DelayedDurability] SET DELAYED_DURABILITY = ALLOWED WITH NO_WAIT
GO
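
For reference, this database-level setting accepts three values: DISABLED (the default), ALLOWED (individual transactions may opt in, which is what this demo uses) and FORCED (every transaction is treated as delayed durable). Here is an optional query – not part of the original tasks – to confirm what the database is currently set to:

-- Optional check: confirm the database-level delayed durability setting
SELECT name, delayed_durability_desc
FROM sys.databases
WHERE name = 'DelayedDurability'
GO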

Task 2: Create a dummy table

-- Creating Dummy table
USE [DelayedDurability]
GO
CREATE TABLE DummyTable
(ID INT IDENTITY PRIMARY KEY CLUSTERED,
SALARY VARCHAR(100))
GO

Task 3: Create two Stored Procedures

The first stored procedure is created without delayed durability enabled.

-- First SP with Delayed Durability OFF
CREATE PROCEDURE Simple_Insert
AS
BEGIN
SET NOCOUNT ON
DECLARE @counter AS INT = 0
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter < 100000)
BEGIN
BEGIN TRAN
INSERT INTO DummyTable VALUES (@counter)
SET @counter = @counter + 1
COMMIT WITH (DELAYED_DURABILITY = OFF)
END
SELECT DATEDIFF(SECOND, @start, GETDATE()) [Simple_Insert in sec]
END
GO

The second stored procedure is created with delayed durability enabled.

-- Second SP with Delayed Durability ON
CREATE PROCEDURE DelayedDurability_Insert
AS
BEGIN
SET NOCOUNT ON
DECLARE @counter AS INT = 0
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter < 100000)
BEGIN
BEGIN TRAN
INSERT INTO DummyTable VALUES (@counter)
SET @counter = @counter + 1
COMMIT WITH (DELAYED_DURABILITY = ON)
END
SELECT DATEDIFF(SECOND, @start, GETDATE()) [DelayedDurability_Insert in sec]
END
GO

Task 4: Execute both stored procedures and note down the results.

-- Now Execute both the SPs and check the time taken.
EXEC Simple_Insert
GO
EXEC DelayedDurability_Insert
GO

Result of SPs

Task 5: Clean up

-- Clean up
USE MASTER
GO
DROP DATABASE [DelayedDurability]
GO

Analysis of Result:

The stored procedure that ran with delayed durability ON performed better than the one that had delayed durability disabled. This is because the procedure generates a large amount of transaction log, and a fully durable transaction has to wait for the log records to be written to disk before moving on to the next set of instructions. With delayed durability ON, the query keeps executing without waiting for the log write. This improves performance, but if there is an unexpected shutdown for any reason, data loss can happen.

You can learn more about the new features of SQL Server 2014 in my latest Pluralsight Course over here.

Note: Please do not enable delayed durability thinking it will always improve performance. I suggest you read my two blog posts listed over here and here before implementing it.

Reference: Pinal Dave (http://blog.sqlauthority.com)