SQL SERVER – Location of Natively Compiled Stored Procedure and Naming Convention

Yesterday I wrote about SQL SERVER – Beginning In-Memory OLTP with Sample Example. One of the questions I received right after I published the blog post was: why do I call a stored procedure a natively compiled stored procedure when the entire code is written in T-SQL? Indeed a very good question. The answer is very simple: we call it a natively compiled stored procedure because, as soon as we execute the CREATE statement, the compiler converts the interpreted T-SQL, query plans and expressions into native code (a machine-code DLL).
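As a quick way to verify this, SQL Server 2014 exposes a uses_native_compilation flag in sys.sql_modules; here is a minimal sketch (run it in the database that owns the procedure):

-- Sketch: list modules that were natively compiled in the current database
SELECT OBJECT_NAME(object_id) AS module_name,
uses_native_compilation
FROM sys.sql_modules
WHERE uses_native_compilation = 1
GO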

You can execute the following query in your SSMS and find out the location of the natively compiled stored procedure.

SELECT name,
description
FROM   sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL'
GO

To see this DMV in action, execute the code from this blog post on your SQL Server.

-- Create database
CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=100MB)
GO

-- Create table
USE InMemory
GO
-- Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON)
GO

-- Create stored procedure
-- Inserting same 100,000 rows using InMemory Table
CREATE PROCEDURE InMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN
ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='english')
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT
@start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO
dbo.DummyTable_Mem VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT
DATEDIFF(SECOND, @start, GETDATE() ) [InMemory_Insert in sec]
END
GO

Now let us execute our script as described.

Now we can see in our result that there are two different DLL files. In the image above I have explained the various parts of the DLL file name.

As per the image, our database id is 11, and if we check, it is the same as the database we created a few seconds ago. Similarly, the object id encoded in the name can be found as well.
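If the image is not handy, you can derive the same numbers yourself. The generated DLL names encode the database id and the object id (along the lines of xtp_t_{database_id}_{object_id}.dll for the table and xtp_p_{database_id}_{object_id}.dll for the procedure), so a small sketch to map them back to our objects:

-- Sketch: find the database id and object ids that appear in the DLL names
SELECT DB_ID(N'InMemory') AS database_id,
OBJECT_ID(N'InMemory.dbo.DummyTable_Mem') AS table_object_id,
OBJECT_ID(N'InMemory.dbo.InMemory_Insert_test') AS procedure_object_id
GO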

If we open up the folder where we created these objects, we will see two sets of files: one for the stored procedure and one for the table.

My friend Balmukund explains this concept very well on his blog over here.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Beginning In-Memory OLTP with Sample Example

In-Memory OLTP is a wonderful new feature introduced in SQL Server 2014. My friend Balmukund Lakhani has written an amazing series on the A-Z of In-Memory on his blog. All serious learners should study it for a deep understanding of the subject. I will try to cover a few of the concepts in simpler words, and you may often find me referring to Balmukund's site on this subject.

Why do we need In-Memory?

Here is the paragraph from Balmukund’s blog (published with approval):

Looking at the market trends of the tumbling cost of RAM (USD/MB) and the performance implications of reading data from memory vs disk, it's evident that people would love to keep the data in memory. With this evolution in the hardware industry, software has to evolve and be modified so that it can take advantage and scale as much as possible. On the other hand, businesses also don't want to compromise the durability of data – a restart would clear RAM, but data should be back in the same state as it was before the failure. To meet hardware trends and durability requirements, SQL Server 2014 has introduced In-Memory OLTP, which solves them in a unique manner.

Before we start on the subject, let us see a few of the reasons why you would want to go for a high-performance, memory-optimized OLTP operation.

  • It naturally integrates with SQL Server relational database
  • It supports Full ACID properties
  • It helps with non-blocking multi-version optimistic concurrency control, in other words, no locks or latches

Well, let us start with a working example. In this example, we will learn a few things – please pay attention to the details.

  1. We will create a database with a file group which will contain memory optimized data
  2. We will create a table with the MEMORY_OPTIMIZED option set to ON
  3. We will create a stored procedure which is natively compiled

The procedure of our test is very simple. We will create two stored procedures: 1) a regular stored procedure and 2) a natively compiled one. We will compare the performance of both SPs and see which one performs better.

Let’s Start!

Step 1: Create a database which creates a file group containing memory_optimized_data

CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=200MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=100MB)
GO

Step 2: Create two different tables 1) Regular table and 2) Memory Optimized table

USE InMemory
GO
-- Create a Simple Table
CREATE TABLE DummyTable (ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(100) NOT NULL)
GO
-- Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON)
GO
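One side note before we move on: a table created with just MEMORY_OPTIMIZED=ON defaults to DURABILITY = SCHEMA_AND_DATA, so its data survives a restart. For scenarios where durability is not required (staging tables, for instance), a non-durable variant can be declared; the following is only a hypothetical sketch, not part of the test below:

-- Hypothetical: non-durable memory-optimized table (contents are lost on restart)
CREATE TABLE DummyTable_Mem_Staging (ID INT NOT NULL,
Name VARCHAR(100) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem_Staging PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
GO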

Step 3: Create two stored procedures 1) Regular SP and 2) Natively Compiled SP

Stored Procedure – Simple Insert
-- Simple table to insert 100,000 Rows
CREATE PROCEDURE Simple_Insert_test
AS
BEGIN
SET NOCOUNT ON
DECLARE
@counter AS INT = 1
DECLARE @start DATETIME
SELECT
@start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO
DummyTable VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT
DATEDIFF(SECOND, @start, GETDATE() ) [Simple_Insert in sec]
END
GO

Stored Procedure – InMemory Insert
-- Inserting same 100,000 rows using InMemory Table
CREATE PROCEDURE InMemory_Insert_test
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN
ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='english')
DECLARE @counter AS INT = 1
DECLARE @start DATETIME
SELECT
@start = GETDATE()
WHILE (@counter <= 100000)
BEGIN
INSERT INTO
dbo.DummyTable_Mem VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
SELECT
DATEDIFF(SECOND, @start, GETDATE() ) [InMemory_Insert in sec]
END
GO

Step 4: Compare the performance of two SPs

Both stored procedures measure and print the time they take to execute. Let us execute them and measure the time.

-- Running the test for Insert
EXEC Simple_Insert_test
GO
EXEC InMemory_Insert_test
GO

Here is the time taken by Simple Insert: 12 seconds

Here is the time taken by InMemory Insert: nearly 0 seconds (less than 1 second)
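Since both procedures report whole seconds, "less than 1 second" is the best resolution we get here. If you want a finer comparison, one hedged approach is to clear the tables and time the calls from outside with millisecond resolution (the procedures' own timing output can be ignored in this run):

-- Sketch: re-run the comparison with millisecond resolution
-- Clear rows from the first run to avoid PRIMARY KEY violations
DELETE FROM dbo.DummyTable
DELETE FROM dbo.DummyTable_Mem
GO
DECLARE @t DATETIME2 = SYSDATETIME()
EXEC Simple_Insert_test
SELECT DATEDIFF(MILLISECOND, @t, SYSDATETIME()) AS [Simple_Insert in ms]
SET @t = SYSDATETIME()
EXEC InMemory_Insert_test
SELECT DATEDIFF(MILLISECOND, @t, SYSDATETIME()) AS [InMemory_Insert in ms]
GO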

Step 5: Clean up!

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO

Analysis of Result

It is very clear that In-Memory OLTP improves the performance of the query and the stored procedure. To implement In-Memory OLTP there are a few steps the user has to follow with regard to filegroup and table creation. However, the end result is much better with the In-Memory OLTP setup.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked: "Why is SQL Server using up all the memory and not releasing it even when idle?" Well, the answer can be long, and with the release of SQL Server 2014 it got even more complicated. SQL Server 2014 introduces the option of In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality, nothing much changes, but we now additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to keep track of all the innovations and new features introduced in each new version. So today's blog is about the report added to SSMS which gives a high-level view of this new feature.

This report is available only from SQL Server 2014 onwards, because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list.

If we try to launch the report on a database which does not have an In-Memory filegroup defined, then we would see a message in the report. To demonstrate, I have created a fresh new database called MemoryOptimizedDB with no special filegroup.

Here is the query used to identify whether a database has a memory-optimized filegroup or not.

SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'

Once we add a filegroup using the command below, we would see a different version of the report.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO

The report is still empty because we have not defined any memory-optimized table in the database; the total allocated size is shown as 0 MB. Now, let's add the folder location into the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote "In-Memory Optimized" objects.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB]
ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO

You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

USE MemoryOptimizedDB
GO
--Drop table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
ID INT IDENTITY NOT NULL,
Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

As soon as the above script is executed, both the table and the index are created. If we run the report again, we would see something like below.

Notice that the table memory is zero but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets it creates, so even if the table is empty, the index consumes memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows (specifying a number after GO makes the batch run that many times).

INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000
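If you want to verify the bucket behavior described above directly, rather than through the report, here is a hedged sketch against the sys.dm_db_xtp_hash_index_stats DMV (available from SQL Server 2014):

USE MemoryOptimizedDB
GO
-- Sketch: bucket usage and chain lengths for hash indexes on in-memory tables
SELECT OBJECT_NAME(his.object_id) AS table_name,
i.name AS index_name,
his.total_bucket_count,
his.empty_bucket_count,
his.avg_chain_length
FROM sys.dm_db_xtp_hash_index_stats his
INNER JOIN sys.indexes i
ON i.object_id = his.object_id
AND i.index_id = his.index_id
GO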

Here is the same report after inserting 10,000 rows into our InMemory table.

There are a total of three sections in the whole report.

  1. Total Memory consumed by In-Memory Objects
  2. Pie chart showing memory distribution based on type of consumer – table, index and system.
  3. Details of memory usage by each table.

The information for all three sections is taken from a single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.

USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO
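A slightly more useful variant of the same query for our purpose here – ranking the user tables by total allocated memory – might look like this (a sketch using the allocation columns of the same DMV):

-- Sketch: rank user in-memory tables by total allocated memory
SELECT OBJECT_NAME(object_id) AS table_name,
memory_allocated_for_table_kb + memory_allocated_for_indexes_kb AS total_allocated_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id > 0
ORDER BY total_allocated_kb DESC
GO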

This report helps the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer when designing a solution. I am sure in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund's Blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Memory Consumption Report

The next in line in this series of reports is the "Memory Consumption" report from SQL Server Management Studio. This is a goldmine of a report in my humble opinion, and an underappreciated one. When I was consulted for performance tuning exercises for customers in the past, there was one question that got repeated and echoed every now and then – "My SQL Server is eating away my RAM and it is not releasing it back even in non-peak hours". I am always smiling when this question comes up. SQL Server, or for that matter any database system, is a highly memory-oriented process. If it has taken memory for some reason, it is not going to release it, because it assumes that it will require it again at a later point in time. So instead of depending on the OS to allocate memory again later, it goes with the approach of grabbing memory and never releasing it, even when it is not required in the interim.

Now that brings us to the point: what is my SQL Server using this memory for? Well, if you search the internet you will be amazed by the plethora of scripts, and it is overwhelming how people have analyzed this subject to death. But this hidden gem inside SQL Server Management Studio is never talked about. So in this blog post, let me take a tour of what this report contains and how one should read its sections.

This report can be launched by going to the Server Node in SQL Server Management Studio (SSMS), then right click > Reports > Standard Reports > Memory Consumption.

The report has multiple sections which we would discuss one by one.

Memory Related Counters

These three values can give us a rough indication of memory pressure on the SQL Server instance. They are retrieved from the SQL Server memory counters.

SELECT [object_name],
counter_name,
CONVERT(VARCHAR(10), cntr_value) AS cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%'
AND (counter_name = 'Memory Grants Pending'
OR counter_name = 'Memory Grants Outstanding'
OR counter_name = 'Page life expectancy')

As per the perfmon counters help, the "Memory Grants Outstanding" counter shows the current number of processes that have successfully acquired a workspace memory grant, whereas the "Memory Grants Pending" counter shows the current number of processes waiting for a workspace memory grant. Page life expectancy is defined as the "Number of seconds a page will stay in the buffer pool without references".

Top Memory Consuming Components

This section of the report shows the various memory consumers (called clerks) in a pie chart, based on the amount of memory consumed by each one of them. In most situations, SQLBUFFERPOOL would be the biggest consumer of the memory. This output is taken from the sys.dm_os_memory_clerks DMV, which is one of the key DMVs in monitoring SQL Server memory performance. We can use sys.dm_os_memory_clerks to identify where exactly SQL Server's memory is being consumed.
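A minimal sketch of such a query, assuming SQL Server 2012 or later (earlier builds split pages_kb into single_pages_kb and multi_pages_kb), would be:

-- Sketch: top memory clerks by total pages allocated
SELECT TOP (10) [type] AS clerk_type,
SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY memory_mb DESC
GO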

Buffer Pages Distribution (# Pages)

This particular section of the report shows the state of the buffer pages. Behind the scenes it uses DBCC MEMORYSTATUS to get the distribution of buffers in various states. The buffer distribution can be one of the following: 'Stolen', 'Free', 'Cached', 'Dirty', 'Kept', 'I/O', 'Latched' or 'Other'. Interestingly, if we run DBCC MEMORYSTATUS, we may not see all these states. This is because the memory status output format has been changing since SQL 2000 (KB 271624) and SQL 2005 (KB 907877).

Memory Changes Over Time (Last 7 Days)

This section of the report shows data from the default trace. One of the events captured by the default trace is "Server Memory Change" (Event ID 81). Behind the scenes, this section reads the default trace, looks for event ID 81 and adds a filter (datediff(dd,StartTime,getdate()) < 7) to display the last 7 days of records. My laptop doesn't have much load, which is why we don't see any memory change. Another reason for no data, as quoted in the report text, could be that the default trace is disabled.
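To query the same data that the report reads, here is a hedged sketch pulling event 81 from the default trace (assuming the default trace is enabled on your instance):

-- Sketch: read "Server Memory Change" (EventClass 81) from the default trace
DECLARE @path NVARCHAR(260)
SELECT @path = [path] FROM sys.traces WHERE is_default = 1
SELECT StartTime, TextData
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 81
AND DATEDIFF(dd, StartTime, GETDATE()) < 7
ORDER BY StartTime DESC
GO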

I am sure that in your production or active development boxes these values are not going to be zero.

Memory Usage By Components

At the bottom, there is a table which shows the memory used by each component. This is also taken from the same DMV which is used for "Top Memory Consuming Components". The graph earlier shows the top 5% of consumers by name, and the rest are shown as others. It's important to note that in SQL 2014 it would always show MEMORYCLERK_XTP, which is used by the In-Memory OLTP engine (even if it's not a top consumer).

Here is a short description of the various columns:

  • Allocated Memory: Amount of memory allocated to sqlservr.exe
  • Virtual Memory (Reserved): Memory reserved in the Virtual Address Space (VAS)
  • Virtual Memory (Committed): Memory committed in the Virtual Address Space. Once memory is committed in VAS, it has physical storage (RAM or pagefile)
  • AWE Memory Allocated: Amount of memory locked in physical memory and not paged out by the operating system
  • Shared Memory (Reserved): Amount of shared memory that is reserved
  • Shared Memory (Committed): Amount of shared memory that is committed

To understand reserved and committed memory, I always quote this analogy. Imagine that you need to fly to Mumbai on a certain date and you book a flight ticket. This is called a reservation. There's nothing there yet, but nobody else can claim that seat either. If you release your reservation, the seat can be given to someone else. Committing is actually grabbing the physical seat on the day of travel.

Hope this gives you a fair idea about the various memory consumers. As I mentioned before, this is one of those hidden gem reports that never gets seen. From this report one can easily learn about a currently running system and who is using SQL Server memory.

I would be curious to know: on any of your systems, is there any component other than BufferPool or SOSNode among the top memory consumers?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Plan Cache and Data Cache in Memory

I get the following two questions almost all the time when I go for consultations or training, and I often end up providing these scripts to my clients and attendees. So instead of writing a new blog post, in this single blog post I am going to cover both scripts and link to the original blog posts where I explain them in detail.

Plan Cache in Memory

USE AdventureWorks
GO
SELECT [text], cp.size_in_bytes, plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE cp.cacheobjtype = N'Compiled Plan'
ORDER BY cp.size_in_bytes DESC
GO

Further explanation of this script is over here: SQL SERVER – Plan Cache – Retrieve and Remove – A Simple Script

Data Cache in Memory

USE AdventureWorks
GO
SELECT COUNT(*) AS cached_pages_count,
name AS BaseTableName, IndexName,
IndexTypeDesc
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
SELECT s_obj.name, s_obj.index_id,
s_obj.allocation_unit_id, s_obj.OBJECT_ID,
i.name IndexName, i.type_desc IndexTypeDesc
FROM
(
SELECT OBJECT_NAME(OBJECT_ID) AS name,
index_id ,allocation_unit_id, OBJECT_ID
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.hobt_id
AND (au.TYPE = 1 OR au.TYPE = 3)
UNION ALL
SELECT OBJECT_NAME(OBJECT_ID) AS name,
index_id, allocation_unit_id, OBJECT_ID
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.partition_id
AND au.TYPE = 2
) AS s_obj
LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id
AND i.OBJECT_ID = s_obj.OBJECT_ID ) AS obj
ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = DB_ID()
GROUP BY name, index_id, IndexName, IndexTypeDesc
ORDER BY cached_pages_count DESC;
GO

Further explanation of this script is over here: SQL SERVER – Get Query Plan Along with Query Text and Execution Count

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Minimum Maximum Memory – Server Memory Options

I was recently reading about SQL Server Memory Options over here. While reading, one line really caught my attention: the minimum value allowed for the max server memory option.

The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB).

This was very interesting to me, as I was not familiar with this detail. In reality I will never set my max server memory to 16 MB; it would be outright suicide for the server, looking at the capabilities of current systems.

If you try to set this to lower than 16 MB, SQL Server will automatically make it 16 MB and will not accept a lower number.
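You can verify this behavior yourself with sp_configure; here is a small sketch (the 4096 MB value is purely illustrative – pick a number appropriate for your server, and be careful on production systems):

-- Sketch: inspect and change 'max server memory (MB)'
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Show the current configured and running values
EXEC sp_configure 'max server memory (MB)'
GO
-- Set an illustrative value; anything below 16 MB will not be honored
EXEC sp_configure 'max server memory (MB)', 4096
RECONFIGURE
GO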

This information was new to me. How about you?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

 

SQL SERVER – Queries Waiting for Memory Allocation to Execute

In one of the recent projects, I was asked to create a report of queries that are waiting for memory allocation. The reason was that we were doubtful whether the memory was sufficient for the application. The following query can be useful in similar cases. Queries that do not have to wait for a memory grant will not appear in the result set of the following query.

SELECT [text], query_plan, requested_memory_kb,
granted_memory_kb, used_memory_kb, wait_order
FROM sys.dm_exec_query_memory_grants MG
CROSS APPLY sys.dm_exec_sql_text(MG.sql_handle)
CROSS APPLY sys.dm_exec_query_plan(MG.plan_handle)

Please note that wait_order gives the order in which queries are waiting on memory to execute. This is a very important script; I suggest that you keep it in your permanent list of queries. If you ever notice that your queries are running slow and think that memory is the culprit, do run this query. If there are lots of rows in the result, please try to optimize the queries or increase the memory capacity.
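For a quick health check, here is a hedged sketch that simply counts how many requests are currently waiting (in sys.dm_exec_query_memory_grants, a NULL grant_time means the memory grant has not yet been given):

-- Sketch: count requests still waiting for a workspace memory grant
SELECT COUNT(*) AS queries_waiting_for_memory
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL
GO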

Reference: Pinal Dave (http://blog.sqlauthority.com)