In-Memory OLTP is a wonderful new feature introduced in SQL Server 2014. My friend Balmukund Lakhani has written an amazing series on the A-Z of In-Memory on his blog. Every serious learner should study it for a deep understanding of the subject. I will try to cover a few of the concepts in simpler words, and you will often find me referring to Balmukund's site on this subject.
Why do we need In-Memory?
Here is the paragraph from Balmukund’s blog (published with approval):
Looking at the market trends of the tumbling cost of RAM (USD/MB) and the performance implication of reading data from memory vs. disk, it's evident that people would love to keep the data in memory. With this evolution in the hardware industry, software has to evolve and be modified so that it can take advantage of it and scale as much as possible. On the other hand, businesses also don't want to compromise the durability of data – a restart would clear RAM, but data should be back in the same state as it was before the failure. To meet hardware trends and durability requirements, SQL Server 2014 has introduced In-Memory OLTP, which solves these problems in a unique manner.
Before we start on the subject, let us look at a few of the reasons why you would want to go for high-performance, memory-optimized OLTP operations:
- It naturally integrates with SQL Server relational database
- It supports Full ACID properties
- It uses non-blocking, multi-version optimistic concurrency control – in other words, no locks or latches
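Before we try the example below, it is worth confirming that the instance actually supports In-Memory OLTP (in SQL Server 2014 it requires a 64-bit Enterprise, Developer, or Evaluation edition). A minimal check using the IsXTPSupported server property:
-- Returns 1 if this instance supports In-Memory OLTP
SELECT SERVERPROPERTY('IsXTPSupported') AS [IsXTPSupported]
GO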
Well, let us start with a working example. In this example, we will learn a few things – please pay attention to the details.
- We will create a database with a file group which will contain memory optimized data
- We will create a table with MEMORY_OPTIMIZED set to ON
- We will create a stored procedure which is natively compiled
The procedure for our test is very simple. We will create two stored procedures: 1) a regular stored procedure and 2) a natively compiled one. We will compare the performance of both stored procedures and see which one performs better.
Let’s Start!
Step 1: Create a database with a filegroup containing MEMORY_OPTIMIZED_DATA
CREATE DATABASE InMemory
ON PRIMARY (NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', SIZE = 200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (NAME = [InMem_demo_log], FILENAME = 'd:\data\InMemory.ldf', SIZE = 100MB)
GO
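If you would like to verify that the memory-optimized filegroup was created, a quick sketch is to query sys.filegroups, where memory-optimized filegroups are reported with type 'FX':
USE InMemory
GO
-- Memory-optimized filegroups have type 'FX'
SELECT name, type, type_desc
FROM sys.filegroups
WHERE type = 'FX'
GO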
Step 2: Create two different tables 1) Regular table and 2) Memory Optimized table
USE InMemory
GO
-- Create a Simple Table
CREATE TABLE DummyTable (ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(100) NOT NULL)
GO
-- Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON)
GO
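To confirm that the second table is really memory-optimized, and to see its durability setting, you can query sys.tables. Since we did not specify DURABILITY, it defaults to SCHEMA_AND_DATA, which means the data survives a restart:
-- Check which tables are memory-optimized and their durability
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE name LIKE 'DummyTable%'
GO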
Step 3: Create two stored procedures 1) Regular SP and 2) Natively Compiled SP
Stored Procedure – Simple Insert
-- Simple table to insert 100,000 Rows
CREATE PROCEDURE Simple_Insert_test
AS
BEGIN
SET NOCOUNT ON
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO DummyTable VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT DATEDIFF(SECOND, @start, GETDATE() ) [Simple_Insert in sec]
END
GO
Stored Procedure – InMemory Insert
-- Inserting same 100,000 rows using InMemory Table
CREATE PROCEDURE InMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='english')
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT @start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO dbo.DummyTable_Mem VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT DATEDIFF(SECOND, @start, GETDATE() ) [InMemory_Insert in sec]
END
GO
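If you want to double-check that the second procedure was really compiled natively, sys.sql_modules exposes a uses_native_compilation flag:
-- 1 = natively compiled, 0 = interpreted T-SQL
SELECT OBJECT_NAME(object_id) AS [procedure name], uses_native_compilation
FROM sys.sql_modules
WHERE object_id IN (OBJECT_ID('dbo.Simple_Insert_test'), OBJECT_ID('dbo.InMemory_Insert_test'))
GO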
Step 4: Compare the performance of two SPs
Both stored procedures measure and print the time taken to execute them. Let us execute them and measure the time.
-- Running the test for Insert
EXEC Simple_Insert_test
GO
EXEC InMemory_Insert_test
GO
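A practical note before the results: both procedures insert the fixed key values 1 to 100,000, so running the test a second time will fail with primary key violations. To repeat the comparison, clear both tables first (TRUNCATE TABLE is not supported on memory-optimized tables in SQL Server 2014, so DELETE is used for both):
-- Clear both tables before re-running the test
DELETE FROM dbo.DummyTable
DELETE FROM dbo.DummyTable_Mem
GO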
Here is the time taken by the Simple Insert: 12 seconds
Here is the time taken by the InMemory Insert: nearly 0 seconds (less than 1 second)
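If you are curious how much memory the memory-optimized table consumes after the insert, the sys.dm_db_xtp_table_memory_stats DMV reports per-table memory usage; a quick sketch:
-- Memory consumed by memory-optimized tables in the current database
SELECT OBJECT_NAME(object_id) AS [table name],
memory_allocated_for_table_kb,
memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats
GO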
Step 5: Clean up!
-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO
Analysis of Result
It is very clear that In-Memory OLTP improves the performance of queries and stored procedures. To implement In-Memory OLTP, there are a few steps the user has to follow with regard to filegroup and table creation. However, the end result is much better in the case of the In-Memory OLTP setup.
Reference: Pinal Dave (https://blog.sqlauthority.com)
Comments
Great post!
But why is this query slower?
CREATE PROCEDURE Simple_Query_test
AS
BEGIN
SET NOCOUNT ON
DECLARE @start DATETIME
SELECT @start = GETDATE()
SELECT MIN(id), MAX(id) FROM DummyTable
SELECT DATEDIFF(ms, @start, GETDATE() ) [Disk_Query in ms]
END
GO
CREATE PROCEDURE InMemory_Query_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='english')
DECLARE @start DATETIME
SELECT @start = GETDATE()
SELECT MIN(id), MAX(id) FROM dbo.DummyTable_Mem
SELECT DATEDIFF(ms, @start, GETDATE() ) [InMemory_Query in ms]
END
GO
-- Running the test for Query
EXEC Simple_Query_test
GO
EXEC InMemory_Query_test
GO
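One likely reason: the memory-optimized table above has only a HASH index, and hash indexes have no sort order, so MIN(id) and MAX(id) cannot seek; they must scan every row. The disk-based table has a clustered index on ID, which lets the optimizer answer MIN and MAX with an ordered TOP 1 seek at each end. If range-style queries matter, a memory-optimized table can be declared with a range (NONCLUSTERED) index instead; a hypothetical variant of the demo table:
-- Sketch: memory-optimized table with a range (NONCLUSTERED) index on ID,
-- which supports ordered scans (note: in SQL Server 2014 range indexes can
-- only be scanned in the declared direction, so one of MIN/MAX may still scan)
CREATE TABLE DummyTable_Mem_Range (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL
CONSTRAINT PK_DummyTable_Mem_Range PRIMARY KEY NONCLUSTERED (ID))
WITH (MEMORY_OPTIMIZED=ON)
GO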
Excellent Article Pinal :-)