SQL SERVER – Error: Msg 701, Level 17, State 103. There is insufficient system memory in resource pool to run this query

Talking about and exploring In-Memory topics in SQL Server 2014 has been interesting for me. When I wrote the blog about table variables not being just an In-Memory structure, one of my course listeners (SQL Server 2014 Administration New Features) pinged me on Twitter to ask if In-Memory OLTP was really in-memory. Wouldn't SQL Server want to swap that data out to the page file when there is memory pressure? I told them that the whole concept of In-Memory OLTP is that the data always resides in memory, which is the reason for the feature name.

The next question that came out of this interaction was: what happens when we hit the memory boundary? Will SQL Server error out even if there is memory available in the system, or will it do something else? This was an easy setup to try.

Script Usage

We will create an In-Memory database, restrict it to a limited amount of memory, and add rows to see if we get an error. The basic four steps to follow are:

-- Step 1: Create the Resource Pool. Limit to 5% memory.
CREATE RESOURCE POOL InMemory_Pool WITH (MAX_MEMORY_PERCENT = 5);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

-- Step 2: Create the InMemory DB
CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=100MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=50MB)
GO

-- Step 3: Bind the resource pool to DB
EXEC sp_xtp_bind_db_resource_pool 'InMemory', 'InMemory_Pool'
GO

-- Step 4: For RG to take effect, make DB Offline and Online
USE MASTER
GO
ALTER DATABASE InMemory SET OFFLINE
GO
ALTER DATABASE InMemory SET ONLINE
GO

After this we will create a wide table and add rows to exhaust the memory for this resource pool.

USE InMemory
GO
-- Step 5: Create a Memory Optimized Table
CREATE TABLE DummyTable_Mem (ID INT NOT NULL,
Name CHAR(8000) NOT NULL
CONSTRAINT ID_Clust_DummyTable_Mem PRIMARY KEY NONCLUSTERED HASH (ID) WITH (BUCKET_COUNT=1000000))
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
GO

-- Step 6: Add a lot of rows to get the error
SET NOCOUNT ON
DECLARE
@counter AS INT = 1
WHILE (@counter <= 1000000)
BEGIN
INSERT INTO
dbo.DummyTable_Mem VALUES(@counter, 'SQLAuthority')
SET @counter = @counter + 1
END
GO

Step 6 will not complete because of insufficient memory. We will get an error as shown below:

The statement has been terminated.
Msg 701, Level 17, State 103, Line 49
There is insufficient system memory in resource pool 'InMemory_Pool' to run this query.

To add more rows to the table:

  • Allocate more memory to the resource pool and reconfigure Resource Governor (a minimal sketch follows after this list).
  • Delete some rows from the In-Memory tables in this database to make space for new allocations.
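
If you would rather continue the experiment than clean it up, here is a minimal sketch of the first option; it reuses the InMemory_Pool name from the script above, and the 20% figure is only an illustration, not a recommendation.

-- Increase the pool limit and re-apply the Resource Governor configuration
ALTER RESOURCE POOL InMemory_Pool WITH (MAX_MEMORY_PERCENT = 20);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

Once the reconfigure completes, re-running Step 6 should be able to insert more rows before hitting the new, higher limit.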

Let me help you clean up the script after this experiment.

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO
DROP RESOURCE POOL InMemory_Pool
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

To learn more about such interesting SQL Server 2014 enhancements, feel free to listen to my Pluralsight course.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #004 – List All Columns and Their Data Type for a View

Earlier this week I wrote a blog about finding stored procedure parameters and their data types. After that blog I received a few emails from readers asking for a similar script for a view. I asked them what they liked about the original script, and they said it gives the base data type as well as the user-defined data type. So I have put in some more effort to write a similar script for views. This is a fine example of how each one of you out there inspires me with great content ideas. Please keep these emails coming my way.

Here is the question – How to list all columns and their datatype for a view in SQL Server?

Usage of script

To use the script below, replace the view name (vEmployee in the sample code) and schema name (HumanResources in the sample code). Also make sure you are in the same database that contains the view (AdventureWorks2014 in the sample code):

USE AdventureWorks2014
GO
DECLARE @ViewName NVARCHAR(4000)
    ,@SchemaName NVARCHAR(4000)
SELECT @ViewName = N'vEmployee'
    ,@SchemaName = N'HumanResources'

SELECT c.NAME AS [Name]
    ,CAST(ISNULL(ic.index_column_id, 0) AS BIT) AS [InPrimaryKey]
    ,CAST(ISNULL((
        SELECT TOP 1 1
        FROM sys.foreign_key_columns AS colfk
        WHERE colfk.parent_column_id = c.column_id
            AND colfk.parent_object_id = c.OBJECT_ID
        ), 0) AS BIT) AS [IsForeignKey]
    ,u_t.NAME AS [DataType]
    ,ISNULL(b_t.NAME, N'') AS [SystemType]
    ,CAST(CASE
        WHEN b_t.NAME IN (N'nchar', N'nvarchar')
            AND c.max_length <> -1
            THEN c.max_length / 2
        ELSE c.max_length
        END AS INT) AS [Length]
    ,CAST(c.PRECISION AS INT) AS [NumericPrecision]
    ,CAST(c.scale AS INT) AS [NumericScale]
    ,c.is_nullable AS [Nullable]
    ,c.is_computed AS [Computed]
    ,ISNULL(s.NAME, N'') AS [XmlSchemaNamespaceSchema]
    ,ISNULL(xsc.NAME, N'') AS [XmlSchemaNamespace]
    ,ISNULL((
        CASE c.is_xml_document
            WHEN 1
                THEN 2
            ELSE 1
            END
        ), 0) AS [XmlDocumentConstraint]
    ,CAST(c.is_sparse AS BIT) AS [IsSparse]
    ,CAST(c.is_column_set AS BIT) AS [IsColumnSet]
    ,c.column_id AS [ID]
FROM sys.all_views AS v
INNER JOIN sys.all_columns AS c ON c.OBJECT_ID = v.OBJECT_ID
LEFT JOIN sys.indexes AS i ON i.OBJECT_ID = c.OBJECT_ID
    AND 1 = i.is_primary_key
LEFT JOIN sys.index_columns AS ic ON ic.index_id = i.index_id
    AND ic.column_id = c.column_id
    AND ic.OBJECT_ID = c.OBJECT_ID
    AND 0 = ic.is_included_column
LEFT JOIN sys.types AS u_t ON u_t.user_type_id = c.user_type_id
LEFT JOIN sys.types AS b_t ON (
        b_t.user_type_id = c.system_type_id
        AND b_t.user_type_id = b_t.system_type_id
        )
    OR (
        (b_t.system_type_id = c.system_type_id)
        AND (b_t.user_type_id = c.user_type_id)
        AND (b_t.is_user_defined = 0)
        AND (b_t.is_assembly_type = 1)
        )
LEFT JOIN sys.xml_schema_collections AS xsc ON xsc.xml_collection_id = c.xml_collection_id
LEFT JOIN sys.schemas AS s ON s.schema_id = xsc.schema_id
WHERE (v.TYPE = 'V')
    AND (
        v.NAME = @ViewName
        AND SCHEMA_NAME(v.schema_id) = @SchemaName
        )
ORDER BY [ID] ASC

Here is the sample execution. I have highlighted the modification needed to use the script.
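
If you only need a quick cross-check of column names, base data types, lengths and nullability (without the key, XML and user-defined type details the script above returns), a much shorter query against INFORMATION_SCHEMA.COLUMNS will do. This is a minimal sketch using the same sample view and schema; note it reports only the system type, not any user-defined type.

USE AdventureWorks2014
GO
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
    NUMERIC_PRECISION, NUMERIC_SCALE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = N'HumanResources'
    AND TABLE_NAME = N'vEmployee'
ORDER BY ORDINAL_POSITION
GO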

I hope these scripts will help you in your environments. I would love to hear back from you about how they can be enhanced.

Click to Download Scripts

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Bind Resource Governor for InMemory Enabled Databases?

I have done a number of courses for Pluralsight. Last year when SQL Server 2014 was released, I planned to do a course on the same. The course SQL Server 2014 Administration New Features was a lot of hard work, but it gave me a chance to learn something new and share it. Do let me know if you have ever got a chance to listen to this course. I would love to hear your feedback.

As part of the course, one of the modules covered the basics of the In-Memory capabilities of SQL Server 2014. One of the listeners of the course pinged me on Twitter with an interesting conversation, which inspired me to write this blog post.

Follower: Thanks for the InMemory basics, I had a simple doubt.

Pinal: Yes, please tell me. Let me see if I can answer the same.

Follower: I plan to upgrade my server to SQL 2014.

Pinal: Glad the course is helping you. Is there any doubt?

Follower: Yes, I might be planning to use the InMemory capabilities for couple of databases as part of application upgrade.

Pinal: Great.

Follower: I want to know can I restrict the amount of memory a particular InMemory OLTP Database can take.

Pinal: Can you please elaborate a bit?

Follower: Simple Pinal, I want one DB not to take more than 20% of Memory on my server and the other InMemory DB not to take more than another 40% Memory available on the server.

Pinal: Interesting.

Follower: As part of upgrade I am consolidating and hence these restrictions can be of great help.

Pinal: Now I get it, it is possible with Resource Governor. Haven't you tried it?

Follower: I think these are great pointers, I will dig it up. Thanks again.

Pinal: You are welcome. I will write it as a blog for sure in future.

This conversation had been on my mind for a while, and it has taken some time to finally get this blog out. The script one needs to write is simple. Let me walk you through the same.

  1. Create the Resource Pool
  2. Create the InMemory OLTP Filegroup enabled DB
  3. Bind the Resource Pool to our database
  4. Check the DB metadata
  5. Make the DB Offline and Online to make the changes take effect
  6. Clean up

As the steps say, let us first create our Resource Pool.

-- Step 1: Create the Resource Pool. Limit to 40% memory.
CREATE RESOURCE POOL InMemory_Pool WITH (MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

Next is to create the In-Memory DB. This is the same database as used in the previous blog – Beginning In-Memory OLTP with Sample Example.

-- Step 2: Create the InMemory DB
CREATE DATABASE InMemory
ON PRIMARY(NAME = InMemoryData,
FILENAME = 'd:\data\InMemoryData.mdf', size=100MB),
-- Memory Optimized Data
FILEGROUP [InMem_FG] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [InMemory_InMem_dir],
FILENAME = 'd:\data\InMemory_InMem_dir')
LOG ON (name = [InMem_demo_log], Filename='d:\data\InMemory.ldf', size=50MB)
GO

The next step is where the magic begins. We need to bind the DB and the resource pool. This can be achieved using the next command:

-- Step 3: Bind the resource pool to DB
EXEC sp_xtp_bind_db_resource_pool 'InMemory', 'InMemory_Pool'
GO

The success for this step can be viewed with this message:

A binding has been created. Take database ‘InMemory’ offline and then bring it back online to begin using resource pool ‘InMemory_Pool’

The next logical step is to check the metadata to confirm the mapping. Use the sys.databases catalog view for this.

-- Step 4: Check the Database metadata
SELECT dbs.database_id, dbs.name, dbs.resource_pool_id
FROM sys.databases dbs
WHERE name LIKE 'InMemory'
GO
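
You can also confirm the binding from the Resource Governor side. Here is a minimal sketch that joins sys.databases to sys.resource_governor_resource_pools; until the binding is in place, the pool name will simply come back as NULL.

-- Cross-check the DB to pool mapping from the Resource Governor catalog views
SELECT dbs.name AS database_name, rp.pool_id, rp.name AS pool_name
FROM sys.databases dbs
LEFT JOIN sys.resource_governor_resource_pools rp
    ON rp.pool_id = dbs.resource_pool_id
WHERE dbs.name = 'InMemory'
GO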

Just as the Resource Governor configuration needs to be reconfigured, we need to do something similar for the database to make this change take effect: take the DB offline and bring it back online.

-- Step 5: For RG to take effect, make DB Offline and Online
USE MASTER
GO
ALTER DATABASE InMemory SET OFFLINE
GO
ALTER DATABASE InMemory SET ONLINE
GO

That is it. We are good now. Our In-Memory DB will not take more than 40% of the memory allocated to SQL Server. Though this was a simple concept, I thought it was worth a share. If you would like to clean up this experiment, please use the script below.

-- Clean up
USE MASTER
GO
DROP DATABASE InMemory
GO
DROP RESOURCE POOL InMemory_Pool
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

To learn more about such interesting SQL Server 2014 enhancements, feel free to listen to my Pluralsight course.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Who is Consuming my TempDB Now?

Of late, my love for tempdb and writing on tempdb topics has been a great learning experience. The more I work with tempdb, the more fascinated I am. Tempdb is used by a number of operations inside SQL Server; let me list some of them here:

  • Temporary user objects like temp tables, table variables
  • Cursors
  • Internal worktables for spool and sorting
  • Row Versioning for snapshot isolation
  • Online Index rebuild operations
  • MARS (Multiple Active Resultsets)
  • AFTER Triggers and more

These are some of the ways in which tempdb on our servers gets used. When I was talking about this with one of my DBA friends, he asked some interesting questions, as he always does. He said the previous blogs I wrote helped him understand how to read temp tables and how to see table variable data. Now his requirement was different: he wanted to know if there was any script that would tell him who was consuming tempdb resources. If tempdb is used by so many things inside SQL Server, it is important that we know how to get this information from DMVs. There are a number of DMVs we can use:

  • dm_db_file_space_usage – Returns space usage information for each file in tempdb (a quick sketch of this one follows after this list)
  • dm_db_session_space_usage – Returns the number of pages allocated and deallocated by each session
  • dm_db_task_space_usage – Returns page allocation and deallocation activity by task
  • We can join these DMVs with various other DMVs like sys.dm_exec_sessions, sys.dm_exec_requests, etc. and get to the actual T-SQL statement and plan responsible for these allocations.
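
Before drilling down to sessions, here is a minimal sketch of the file-level view using the first DMV above. Each page is 8 KB, so the counts are converted to KB.

USE tempdb
GO
-- File-level view of tempdb space, split by user objects, internal objects and the version store
SELECT SUM(user_object_reserved_page_count) * 8 AS user_objects_kb,
    SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
    SUM(version_store_reserved_page_count) * 8 AS version_store_kb,
    SUM(unallocated_extent_page_count) * 8 AS free_space_kb
FROM sys.dm_db_file_space_usage
GO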

Here is a simple script that will outline the sessions which are using TempDB currently.

SELECT st.dbid AS QueryExecutionContextDBID,
    DB_NAME(st.dbid) AS QueryExecContextDBNAME,
    st.objectid AS ModuleObjectId,
    SUBSTRING(st.TEXT,
        dmv_er.statement_start_offset / 2 + 1,
        (CASE WHEN dmv_er.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), st.TEXT)) * 2
            ELSE dmv_er.statement_end_offset
        END - dmv_er.statement_start_offset) / 2) AS Query_Text,
    dmv_tsu.session_id,
    dmv_tsu.request_id,
    dmv_tsu.exec_context_id,
    (dmv_tsu.user_objects_alloc_page_count - dmv_tsu.user_objects_dealloc_page_count) AS OutStanding_user_objects_page_counts,
    (dmv_tsu.internal_objects_alloc_page_count - dmv_tsu.internal_objects_dealloc_page_count) AS OutStanding_internal_objects_page_counts,
    dmv_er.start_time,
    dmv_er.command,
    dmv_er.open_transaction_count,
    dmv_er.percent_complete,
    dmv_er.estimated_completion_time,
    dmv_er.cpu_time,
    dmv_er.total_elapsed_time,
    dmv_er.reads,
    dmv_er.writes,
    dmv_er.logical_reads,
    dmv_er.granted_query_memory,
    dmv_es.HOST_NAME,
    dmv_es.login_name,
    dmv_es.program_name
FROM sys.dm_db_task_space_usage dmv_tsu
INNER JOIN sys.dm_exec_requests dmv_er
    ON (dmv_tsu.session_id = dmv_er.session_id AND dmv_tsu.request_id = dmv_er.request_id)
INNER JOIN sys.dm_exec_sessions dmv_es
    ON (dmv_tsu.session_id = dmv_es.session_id)
CROSS APPLY sys.dm_exec_sql_text(dmv_er.sql_handle) st
WHERE (dmv_tsu.internal_objects_alloc_page_count + dmv_tsu.user_objects_alloc_page_count) > 0
ORDER BY (dmv_tsu.user_objects_alloc_page_count - dmv_tsu.user_objects_dealloc_page_count)
    + (dmv_tsu.internal_objects_alloc_page_count - dmv_tsu.internal_objects_dealloc_page_count) DESC
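
If you also want to catch sessions that are currently idle but still holding on to tempdb allocations (the script above only shows sessions with an active request), here is a minimal sketch against sys.dm_db_session_space_usage, the second DMV in the list above.

-- Sessions holding the most outstanding tempdb pages (1 page = 8 KB)
SELECT ses.session_id,
    ses.login_name,
    ses.host_name,
    ses.program_name,
    (ssu.user_objects_alloc_page_count - ssu.user_objects_dealloc_page_count) AS outstanding_user_object_pages,
    (ssu.internal_objects_alloc_page_count - ssu.internal_objects_dealloc_page_count) AS outstanding_internal_object_pages
FROM sys.dm_db_session_space_usage ssu
INNER JOIN sys.dm_exec_sessions ses
    ON ssu.session_id = ses.session_id
WHERE (ssu.user_objects_alloc_page_count + ssu.internal_objects_alloc_page_count) > 0
ORDER BY (ssu.user_objects_alloc_page_count - ssu.user_objects_dealloc_page_count)
    + (ssu.internal_objects_alloc_page_count - ssu.internal_objects_dealloc_page_count) DESC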

Have you ever had such requirements to monitor and troubleshoot tempdb in your environments? What have you been using to monitor your tempdb usage? What is the typical output you get in your environments? Do let me know, so we can learn together.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The DBA goes to MARS – Maintenance, Architecture, Research and Support – Notes from the Field #063

[Note from Pinal]: This is the 63rd episode of the Notes from the Field series. Storytelling is an art very few have mastered in their life. When I requested Stuart to share his ideas about his famous blog post about MARS, I really had not expected the response I received in this episode of Notes from the Field. It is very critical for any DBA to be proactive in their operations; however, are we proactive enough? Well, you need to read this entire episode of Notes from the Field to grasp the concept.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about MARS in this very captivating story.


Database administration can be a complex job, particularly in large scale environments; DBAs are responsible for a variety of tasks, and there's a wealth of knowledge on how to administer and support SQL Server.  For this post, however, I wanted to focus not on a specific code-related or configuration tip, but rather on a broader perspective of work in general.  As a manager and a consultant, I see a common problem in most SQL Server shops: we focus too much on reacting to problems and not enough on preventing them.

In the past, I’ve blogged about what I call the MARS paradigm: Maintenance, Architecture, Research and Support.  Most system administration tasks can be broadly categorized into one of these four areas, and each of these four areas can be conceptually defined in terms of Actions (Proactive\Reactive) and Results (Delayed\Immediate):

Think about any given day, and how much time you spend working in each of the four areas; which ones are given priority?  If you're like most shops, you probably spend the most time working support cases ("my query is slow", or "can you show me where to find this information in the database?") instead of focusing on maintenance or architecture.  A healthy IT shop will find ways to balance the need for immediate reaction with the need for proactive, long-term results; however, the question that needs to be answered is: why do we react, and how do we become more proactive?

Why Database Administration Is Usually Reactive:

In part, the reactive nature of most IT departments is a cultural thing.  Fighting fires can be fun; people who solve problems are usually lauded as heroes, while people who prevent problems often go unnoticed.  Think about it this way: if a database is corrupted and a restore is required, who gets the credit for saving the day?  The DBA who did the restore or the DBA who set up the backup plan years ago (assuming that they're not the same DBA)?  Administrators who are quick to act are viewed as responsive, even though their fixes are usually built on a foundation that was established invisibly some time ago.

Furthermore, most database administrators are thorough in their work; support issues are usually an opportunity for them to dig into a problem and find the best solution.  Maintenance issues, on the other hand, are addressing problems that don’t yet exist, and therefore don’t yet need a solution.  However, constantly providing support pulls resources away from building a better foundation.  If a DBA is spending time constantly fighting fires, they’re not spending time on monitoring the environment to avoid future problems.   So how do you move from a reactive environment to a proactive environment?

Moving From a Reactive Approach to a Proactive Approach:

First, track your work efforts for some period of time (30 days).  Find a way to categorize your work into one of the four broad areas defined above:

  • Maintenance: efforts needed to keep the system performing well and stable
  • Architecture: efforts to add new features, functionality, or capability
  • Research: efforts to learn “something new”
  • Support: efforts to sustain the existing functionality or customer requests

Ideally, your work is being tracked in some form of service desk software, but if not, then I’d suggest you start with something simple (like a whiteboard with sticky notes).

Second, figure out a metric for work effort; it could be time spent on an issue, or the number of issues, or some other measure of complexity.  The goal is to have a reasonable method of comparison between categories.  Based on your collection efforts, you should be able to figure out where you’re spending most of your time; if more than 50% of your time is being spent in Support or Research, it’s time to consider becoming more proactive.

Becoming more proactive can be a never-ending journey, but I have a few suggestions on where to start:

  1. Prioritize accordingly. Not every support case deserves to be treated as your top priority; make sure that maintenance and architecture issues are included in your escalation plans.
  2. Set it, but don’t forget it. SQL Server offers a lot of tools for automating maintenance; make sure that you allocate time to review successful executions as well as failures.  Understanding when a system is performing outside of spec is nearly as critical as being notified when it fails.
  3. Communicate with interested parties. Let your manager and your peers know about your maintenance and architecture efforts; the best metric would be a reduction in support calls and cases.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backups are Non-negotiable Lifeline for DBAs

I have had my share of DBA friends who have told me a number of horror stories from their experiences. Thinking back, these look like moments to laugh away, but there are some serious learnings we get from these experiences. I am a firm believer in learning something from everything we do. That is one of the reasons I have been able to sustain active blogging every day for so many years. Being like a kid, with the inquisitiveness to learn, should never stop within us if we have to survive in the IT industry.

If you were to ask me to pick the one laughing and learning moment that stands out among the rest, I don't think I could pick just one. Having said that, I do see one common theme when it comes to databases and backups. I am a regular visitor and contributor to the SQL Server Bangalore UG and I learn a lot from my friends there. A number of them have asked how to restore system databases once they are corrupted. Do you ever take system database backups proactively, as a just-in-case practice? How many of us do this on our testing, development and other integration servers today? I can bet you, not many of us.

In a recent user group meeting, someone asked me to explain in simple terms how a normal backup happens. For a second, all the parameters involved in the UI screen came to my mind, and I held my thought. I said, “Buddy, it is simpler than you think.” I described the following flow:

This is the fundamental building block of how backups happen. Now we can add spice to this workflow with options like compression, verify file, encryption and others. But the basic structure cannot be simplified more than this.
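
To put that flow into plain T-SQL, here is a minimal native sketch; the database name and backup path are placeholders, and the COMPRESSION, CHECKSUM and VERIFYONLY options correspond to the compression, checksum and verify-file "spice" mentioned above.

-- Basic full backup with compression and page checksums, followed by a verify step
BACKUP DATABASE AdventureWorks2014
TO DISK = 'C:\Backup\AdventureWorks2014_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
RESTORE VERIFYONLY FROM DISK = 'C:\Backup\AdventureWorks2014_Full.bak';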

SQL Safe Backup

There is never just one way to do things. I had the opportunity to look at Idera’s SQL Safe Backup and some of the things that caught my attention are listed below. You need a structured method to automate and build a process for your backups if you are working in an enterprise.

First impressions – Starting a backup

I started the backup wizard to see what it offers for a starter. The screenshot below shows the set of databases on a particular instance that can be configured for backups. These can be Full, Differential, Log or File backups, all of which are out-of-the-box capabilities. What I saw immediately was the information about when the last backup was taken. This is nifty and useful to have when taking manual backups.

Four settings that surprised me

The wizard has a number of settings that stumped me. For instance, the Copy-Only backup option on the General tab was a pleasant surprise. To add to it, on selection it shows important information about how a copy-only backup doesn't break the log chain, etc. If you have a junior DBA in your organization, these tips will surely help.

The next two settings are great best practices. The “Enable Checksum” option is something I advocate to everyone working with SQL Server 2005 and above. It is great to see it as an option in the wizard.

The icing on the cake was “Network Resiliency.” The attached screenshot shows how the retry logic works, and for how long, when the backup needs to be placed on a network share (UNC path).

Finally, if these were not enough, there is a way to script the whole thing and run it as a command-line option or a T-SQL script. This can later be integrated with custom applications or even with SQL Agent. A typical script for my system databases backup above looks like this:

Command line script:

"C:\Program Files\Idera\SQLsafe\SQLsafeCmd.exe" Backup master msdb model "C:\Backup\%instance%_%database%_%backuptype%_%timestamp% (%ordinal% of %total%).safe" -CompressionLevel ispeed -RetryWrites 10 300 60 -Server sqldude

T-SQL construct

DECLARE @ResultCode INT
EXEC @ResultCode = [master].[dbo].[xp_ss_backup]
    @database = N'master',
    @database = N'msdb',
    @database = N'model',
    @filename = N'C:\Backup\%instance%_%database%_%backuptype%_%timestamp% (%ordinal% of %total%).safe',
    @compressionlevel = N'ispeed',
    @retrywrites = N'10 300 60',
    @server = N'sqldude'
IF (@ResultCode != 0)
    RAISERROR('One or more operations failed to complete.', 16, 1);

Though all of this got me interested, I wish the tool had more options to integrate with native settings. Some of the options I would love to see include utilizing the native compression technique of SQL Server, and exposing the MaxTransferSize, BlockSize and BufferCount settings for backups, which can be really handy (the native equivalents are sketched below). Also, for SQL Server AlwaysOn configurations, it would be great to add options or settings to check for secondary servers in case we want to take the backup there. The restore process is simple and the wizard is really handy here too. Let me move to the next interesting part.
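
For reference, here is a minimal sketch of what those native knobs look like on a plain BACKUP statement; the path and the numbers are illustrative only, not tuning recommendations.

-- Native compression plus the transfer-size, block-size and buffer-count knobs mentioned above
BACKUP DATABASE master
TO DISK = 'C:\Backup\master_Full.bak'
WITH COMPRESSION,
    MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer
    BLOCKSIZE = 65536,         -- 64 KB physical block size
    BUFFERCOUNT = 50;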

Defining Policies – Making a blueprint

Every organization has its own retention policies when it comes to backups. They build their backup strategy based on the RTO and RPO requirements of the business. Having a strategy to recover is the most important task. So Idera's SQL Safe Backup did give me some great ideas. The wizard for creating a policy can be for backup, restore or log shipping. The basic backup wizard allows us to select instances and DBs and create a plan accordingly.

What I loved about the wizard is that I can create a consolidated single plan for Full, Transaction Log and Differential backups of my organization's mission-critical databases in one go. The schedule above is a classic screenshot of how you can define, on one screen, how the backups will be taken.

What I would love to see is a graphical representation, on a week's timeline, of how backups are taken. This would help the DBA decide whether their strategy is meeting their SLA or not. I am so used to the backup timelines in SSMS that I thought this would be a great addition. Apart from this, in enterprises we have a need to define our own custom encryption with SQL Server Backup Encryption. I hope a feature to import a custom certificate for encryption would help many users of the tool.

Final note

All tools evolve, and their fundamental motive is to make people's lives easier. SQL Safe Backup is surely a tool geared in that direction. It uses the standard VDI interface to take backups. I wish that, as new versions are released, the tool also aligns with the capabilities of new functionality. Some that I have pointed out include external encryption, native compression, AlwaysOn secondary backups, backup to URL, etc. A lot of today's deployments require such tools to organize their backup strategy efficiently. I think of SQL Safe Backup as a Swiss army knife in the DBA's pocket.

You can download SQL Safe Backup from here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How DBAs Have to be Proactive in Identifying and Fixing Problems Before They Happen

In real life our dear ones always love surprises. If we do things even before they can think of them, we are in for some fun for the rest of the days/weeks. I vividly remember a conversation with my daughter in the recent past. I had made a deal to get her a bicycle if she secured the top marks in her Math paper. I wanted to create curiosity and enough incentive for her to learn. Anyway, getting her a bicycle was on my shopping list for her birthday, so I knew she was going to put some extra effort into learning this time. When the day of results came, I was prepared mentally to get her the gift ahead of time. More than me surprising her, she surprised me with the extra effort in getting the top grade. When she returned home, I was ready with the surprise. Even before she could ask me for the gift, it was in front of her at the parking lot. As a matter of fact, I went ahead and bought a bicycle so that both of us could cycle in the evenings and spend some quality time. Being proactive in our actions can make our dear ones happy, and the same holds good at work.

At work, I have always had the urge to finish work ahead of time, and I try to surprise my bosses from time to time. In my consulting days, taking on challenging work was my forte, and solving tough customer problems was something I cherished a lot. Most of my blogs are based on this one learning that I keep sharing every single day.

In one of my assignments, I had to help a DBA in his day-to-day activities. The problem statement was so simple that it made me think twice about a possible solution.

Pinal: Hi buddy!

DBA: I need some urgent help.

Pinal: Oh sure. Will be glad to help. Please let me know.

DBA: We are in the business of 24×7 and some of these applications are critical.

Pinal: Ok. Is that a problem?

DBA: No. Some of the application users are complaining of errors in the night. And since most of us are gone by then, it is very difficult for me to know what is going on.

Pinal: Well, do you know the exact error they are getting?

DBA: I know a few of them are getting deadlocks, because they said they were chosen as a deadlock victim and were asked to contact the administrator. I want to solve this one by one. But I need your guidance, Pinal.

Pinal: I am glad you got to the core of the problem. Let us see how I can help you. Have you enabled any Trace Flags to capture the Deadlocks?

DBA: No. We have a long approval cycle to enable anything on our Servers. So we have not done any to my knowledge on this new box.

Pinal: Do you have Profiler, Extended events or anything that captures deadlock errors?

DBA: Not that I am aware of. But my guess was confirmed by the error messages my application developers sent me. It just states that the connection was terminated because it was chosen as a deadlock victim. I am not sure how I will debug this. They want help and are asking me for more details. I am stuck.

There are a number of ways to solve this problem. If nothing else, try to use some of the monitoring tools that are available in the market.
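
Even before reaching for a tool, two built-in starting points are worth knowing: trace flag 1222 writes deadlock graphs to the SQL Server error log, and the default system_health Extended Events session already captures deadlock reports (its ring buffer recycles, so very old events may no longer be there). A minimal sketch of both:

-- Option 1: write deadlock graphs to the SQL Server error log from now on
DBCC TRACEON (1222, -1);

-- Option 2: pull deadlock reports already captured by the system_health session
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(target_data AS XML) AS TargetData
    FROM sys.dm_xe_session_targets st
    INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = 'system_health' AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);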

Monitoring using Spotlight

Monitoring SQL Server is a proactive approach to analyzing problems on our servers. In this example, since they had Spotlight enabled, I went to their diagnostics server to look at the SQL Server in question. I told my DBA friend to go to the Spotlight home page and look for “Red” marks. He was quick to spot that the Blocked Process indicator was in red. I said the answer was right in front of his eyes.

Clicking on the Blocked Process, a pop-up came up with the deadlock occurrence date and time. It corresponded to the previous night, and he was surprised. The obvious next question was – “Thanks Pinal. This was a good start. Now how do I know what was deadlocked? What was the …”. I interrupted and said – “Friend, this is just a start. Let us do our detective work next. We know it is a deadlock problem, so let us try to dig a little bit more into this.”

My next step was to jump into the “Reports” section, where I quickly clicked on the “Deadlock List” report. The name is self-explanatory. We are presented with a viewer where the date range has to be selected, and we then search for results.

I knew this was an error from the previous night, so I selected the date range and clicked “View Report”. As you can see below, we are presented with all the deadlocks that occurred on the server in that time range.

In our example, I knew the time when this occurred. The report has the complete stack with some important information that can be useful to application developers. We get to know the connection details, the objects in question, what locks were taken, what the wait types were, and more. Below is a screenshot of information from the same report.

This is one of the many typical out-of-the-box reports available with Spotlight for SQL Server. My DBA friend was delighted to see the level of information he now had and was all set to check the home page every day as soon as he reached the office. Finding a problem before it is raised by users is a critical and key competency of an experienced DBA. Moreover, he went ahead and sent this information to his application team for code review.

I gave my piece of advice to my DBA friend:

Deadlocks are not a problem with SQL Server itself; they are the way SQL Server makes sure there is no inconsistency in data. Application developers must know the access pattern to tables and must use the same pattern consistently. In the above scenario, it is a classic cyclic deadlock that has happened. Please tell your application developers to fix this in their code to avoid these errors.

Do let me know if you ever encountered such scenarios in your environment? Have you ever been in a situation to identify the problem proactively? What have you done? Do share your stories and let me know your thoughts.

Try Spotlight on SQL Server free for 30 days!

Spotlight on SQL Server continues to break new ground when it comes to providing DBAs with the ultimate in flexibility to suit their specific needs. Building on the success of the Wait Stat analysis workflow, Spotlight on SQL Server version 11.0 now includes multi-dimensional workload analysis to provide maximum insight into performance issues, an enhanced mobile capability to remotely diagnose issues directly from mobile phones and the introduction of SNMP traps. A fresh new look provides simple to use, powerful visualization to effectively monitor the health of your SQL Server environment.

If you’re looking for additional help on how to optimize your SQL Servers, visit Dell Software’s SQL Server Resource Center for videos, guides and other useful content.

Reference: Pinal Dave (http://blog.sqlauthority.com)