SQL SERVER – Proving that the Source of the Problems Isn't Tied to the Database

I have always wondered how Sherlock Holmes cracked the toughest of mysteries every single time. This fascination has always led me to learn new techniques. The backbone of every successful detective is a logical approach to the case. While watching movies, I have noticed that a twist in the plot is what makes the movie interesting and keeps us glued to the seat. If there were no drama, high-octane action, or restrictions, the movie would be boring. These techniques apply to real life too.

Being an application developer can be demanding. You never know what situation you will get pushed into the next day. Here is one such situation which felt just like detective work. Once I was called in for a performance tuning exercise on our application, which had been running healthy for more than a year. The distress call from the customer got me thinking. What could have gone wrong in the application? Why was this happening all of a sudden? Had we released any upgrade or update to the database in the recent past? None of these were making sense as I kept thinking about the possible root cause. As soon as I landed at the customer site, I started my interrogation with the DBA.

Pinal: Hello Sir!

DBA: Hi Pinal. Nice to see you here. Thanks for making time.

Pinal: It is a pleasure. I am surprised I had to come at such short notice. Something must be really wrong then.

DBA: Yes. Our application users have been complaining about very bad performance in your application for the past 2 weeks.

Pinal: Was it running fine before that?

DBA: I think it must have been. They hadn't complained about it for more than a year. This seems to be a recent phenomenon. I think the application data must have increased considerably to cause this behavior.

Pinal: Hmmm … How is it possible that the application data increased all of a sudden in the past 2 weeks?

DBA: I am not sure. You are the expert. I am just a DBA and not an application developer. Must be something in your code.

Pinal: Can I get a chance to see the server console once?

DBA: Well, you know we are very sensitive about data privacy. I can surely get you access to an SSMS console. That must not be a problem.

Pinal: I can’t see the server even once?

DBA: Wait, let me get you the network / systems admin for this. They must know a way out.

After some time, I got a chance to meet the systems admin and continued the conversation.

Pinal: Hi there!

Admin: Hi. How can I be of help?

Pinal: I was wondering if I could get access to the SQL Server box for my application.

Admin: Well, it is against our policy to open up the server directly. But what do you want to do?

Pinal: Ok. I am not sure what is going wrong on your server. Our application was running fine till about 2 weeks back. And now users are complaining about slowness in performance. I wanted to know what has changed in this timeframe all of a sudden.

Admin: Oh yeah, I forgot to tell you. We are in the process of doing server consolidation and are moving a lot of our workloads from physical to virtual environments. And I know we did move your database into a virtual environment.

Pinal: Cool. Did the configurations stay the same when you moved these servers?

Admin: We made sure they are identical. So please don't worry, Pinal. It is mostly something about your application code.

Pinal: Hmmm … sure, let me check. Thanks for your time.

I have always been a believer in "accept, but validate." Nothing confirms your hypothesis like doing the analysis yourself. So I bid goodbye to the systems admin and resumed my conversation with the DBA. I turned around and asked – "What do you use to monitor your SQL Server environments, my friend?" He suggested Spotlight on SQL Server. I felt relieved that they had something.

Spotlight on SQL Server to the rescue

Having a monitoring tool is essential when working with SQL Server. Though there are a number of them, I have had the opportunity to look at Spotlight on SQL Server in detail in the past. It has surely come a long way and has nifty features that are worth a note.

In our conversation above, I was sure there was something fishy about the configuration the systems admin had given to SQL Server, and I wanted to see the complete hardware configuration of our database server.

The first report I pulled out as part of my diagnosis was "Hardware Configurations". Voila, I had my answer right in front of my eyes, and I was delighted that my work was complete less than 15 minutes after walking into the organization.

In the previous (physical) configuration, they used an 8-processor machine for the application's database server. As part of the consolidation, the network team accidentally made it a single-CPU machine, which deprived the application of essential CPU cycles. So in a single click I was able to get my answer. I didn't need to look further because all the physical hardware configurations were in black and white on this dashboard.

In fact, the SQL Server Management Studio dashboard gives us limited information – nothing as exhaustive as what I got to see as part of the data collection in Spotlight on SQL Server. The Server Dashboard report does have the "# of CPUs" value embedded into it.
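If you only have an SSMS connection and no monitoring tool at hand (exactly the restriction the DBA gave me), a quick query like the sketch below would surface the same answer. This is just a cross-check against the standard sys.dm_os_sys_info DMV, not what Spotlight does internally.

-- What hardware does SQL Server think it has? (SQL Server 2008 and later)
SELECT cpu_count,          -- logical CPUs visible to SQL Server
       hyperthread_ratio,  -- ratio of logical to physical cores
       physical_memory_kb  -- column available on SQL Server 2012 and later
FROM sys.dm_os_sys_info;

In my example above, cpu_count dropping from 8 to 1 after the migration would have been the giveaway.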

Conclusion

A lot of times we get misdirected by the problem at hand, and that sends us into a spiral of wrong diagnosis. Don't always work on the symptoms of the problem. Try to find all the data points that lead to the problem and work your way toward solving it. In our example above, the problem was never about the performance of our application. It was about a wrong configuration and a wrong migration done by the network team. It is essential that we are aware of these pitfalls.

Try Spotlight on SQL Server free for 30 days!

Spotlight on SQL Server continues to break new ground when it comes to providing DBAs with the ultimate in flexibility to suit their specific needs. Building on the success of the Wait Stat analysis workflow, Spotlight on SQL Server version 11.0 now includes multi-dimensional workload analysis to provide maximum insight into performance issues, an enhanced mobile capability to remotely diagnose issues directly from mobile phones and the introduction of SNMP traps. A fresh new look provides simple to use, powerful visualization to effectively monitor the health of your SQL Server environment.

If you're looking for additional help on how to optimize your SQL Servers, visit Dell Software's SQL Server Resource Center for videos, guides and other useful content.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – How to use Procedures sp_user_counter1 to sp_user_counter10

There are many performance counters available in SQL Server which can be used to monitor various parameters of the SQL Server engine. Have you ever been in a situation where you wanted to see, as a performance counter, a value produced by a query you have written? Imagine a situation where you want to look at fragmentation or the number of rows in a table over a period of time, and you want to plot those values against the standard performance counters available.

SQL Server has had this feature for a long time. I noticed it recently while working with one of my friends. He was showing me the impact of the number of databases versus the number of threads in an AlwaysOn availability group. He showed me a graph, and my first question was – how did you plot the number of databases against that counter? The answer was – using a user counter. I did further research and learned something new, so I am sharing it with you in the form of this blog.

There are ten system procedures available in SQL Server which allow users to set specific counter values. They are sp_user_counter1, sp_user_counter2, and so on through sp_user_counter10. That's why they are called user settable counters. Below is a screenshot of Performance Monitor. (Start > Run > PerfMon)

These counters are populated using stored procedures. The counter object names and stored procedure names match. This means the value for "User counter 1" can be set using sp_user_counter1, the value for "User counter 2" can be set using sp_user_counter2, and so on.

Here is a quick example to demonstrate the usage.
-- Drop database, if already present
USE master
GO
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'SQLAuthority')
BEGIN
    ALTER DATABASE SQLAuthority SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE SQLAuthority;
END
GO
-- Create new database
CREATE DATABASE SQLAuthority
GO
-- Create objects needed for demo
USE SQLAuthority
GO
CREATE TABLE TrackMe (i INT)
GO
-- WHILE loop to populate the data and the counters
SET NOCOUNT ON
DECLARE @NumberOfRows INT = 0, @NumberOfRows_2 INT = 0
WHILE (@NumberOfRows < 25)
BEGIN
    INSERT INTO TrackMe VALUES (1), (2), (3)
    SELECT @NumberOfRows = COUNT(*) FROM TrackMe
    EXECUTE sp_user_counter1 @NumberOfRows      -- 3, 6, 9, 12...
    WAITFOR DELAY '00:00:02'
    SELECT @NumberOfRows_2 = @NumberOfRows * 2
    EXECUTE sp_user_counter2 @NumberOfRows_2    -- 6, 12, 18, 24...
    WAITFOR DELAY '00:00:02'
END
-- Loop finished; reset the counters
EXECUTE sp_user_counter1 0
EXECUTE sp_user_counter2 0
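While the loop is running, the same values can also be read back from inside SQL Server. Here is a minimal sketch using sys.dm_os_performance_counters; note that on a named instance the object_name carries an instance prefix (for example MSSQL$SQL2014:User Settable), which is why I filter with LIKE.

-- Read the user settable counter values from inside the engine
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%User Settable%'
AND instance_name IN ('User counter 1', 'User counter 2');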

Here is the Performance counter graph where I have captured the counters.

Let me explain a little of the logic in the script. The script plots the number of rows in the table as counter 1 and double that value as counter 2. There is a delay of 2 seconds between each counter update so that we can see the steps. Once the number of rows in the table crosses 25, the loop ends and the values are reset to zero. That's why you see a drop at the end.

If you look at the procedure itself, it is straightforward. Believe me, here is the entire code.

CREATE PROCEDURE sys.sp_user_counter1 @newvalue INT
AS
DBCC setinstance ('SQLServer:User Settable', 'Query', 'User counter 1', @newvalue)

This means we can also call the undocumented command DBCC SETINSTANCE directly to set values for the counters.
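Here is a minimal sketch of that direct call; since DBCC SETINSTANCE is undocumented, I would still stick to the sp_user_counterN wrappers in any production script.

-- Set "User counter 3" to 42 without the wrapper procedure
DBCC setinstance ('SQLServer:User Settable', 'Query', 'User counter 3', 42);
-- The supported equivalent:
EXEC sp_user_counter3 42;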

Can you think of a more innovative use for plotting values in Performance Monitor?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Counter Missing: How to Get Them Back?

Of the thousands of mails I receive every day about SQL Server problems, I was recently pinged by a friend who reported a weird problem. He started with a simple question: he wanted to monitor SQL Server performance counters for a complete day to send a report back to his manager. That was a simple one, and I asked him to capture performance counter data using the Performance Monitor tool (PerfMon.exe). I thought the matter was settled and was about to close the chat window when the second question arrived. This was a tough one, I thought – "I am not seeing any performance counters for my instance."

I asked him to send a screenshot and the instance details. Here is how his PerfMon counters screen looked. (Start > Run > PerfMon.exe, then right-click and choose "Add Counters…")

Since we are dealing with a default instance of SQL Server, we should see "SQLServer:Access Methods" among the first counters (they are alphabetical). This got me curious, and I wanted to really understand why this could ever happen. I politely asked my friend if he was ready to do some screen sharing at a later date.

I looked at the SQL Server ERRORLOG file first, and there was nothing interesting there. I then asked him to query sys.dm_os_performance_counters to check whether we had counter values there. Here is the screenshot.

We can see that the counters are available in the SQL Server engine but not shown in the Performance Monitor tool.
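If you run into the same symptom, a couple of quick checks along these lines (a sketch, assuming a default instance) can confirm the engine side is healthy before you suspect the operating system counters:

-- How many counter rows is the engine exposing internally?
SELECT COUNT(*) AS counter_rows
FROM sys.dm_os_performance_counters;

-- The first object we expect to see in PerfMon on a default instance
SELECT DISTINCT object_name
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Access Methods%';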

I then asked him to look into the below registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSSQLSERVER\Performance

We saw far fewer registry keys there as compared to my system.

Knowing something had gone terribly wrong, the only option left at this point was to reload the counters. Here are the commands.

To unload the counters:

Default Instance: unlodctr MSSQLSERVER

Named Instance: unlodctr MSSQL$<InstanceName>

To load the counters, we can look at the same registry key and at the value of "PerfIniFile", which is "perf-MSSQLSERVERsqlctr.ini" in the above screenshot. The file is located under the BINN folder. On my machine, it is "E:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn"

To load the counters:

Default Instance:

lodctr "E:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\perf-MSSQLSERVERsqlctr.ini"

For a named instance, we need to check the file and path and run the below (my machine has a named instance of SQL Server 2014 called SQL2014):

lodctr "E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\Binn\perf-MSSQL$SQL2014sqlctr.ini"

Once that is done, we should be able to see the counters (shown below).

A big sigh of relief, as this was a great learning and sharing experience for me, and I was able to help my friend. I am sure this will be helpful to you too if you ever encounter this situation. Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Search Records with Single Quotes – SQL in Sixty Seconds #075

Earlier I wrote two blog posts, in two parts, about searching records with single quotes. Refer to the blog posts over here.

Though I had clarified that all the methods displayed in these two blog posts have exactly the same performance, I kept on getting questions on this subject, again and again! Well, as there are so many questions, I have decided to create a quick video which demonstrates that there is no performance difference among the four methods I displayed earlier.

Action Item

Here are the blog posts I have previously written. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Do You Know Your Data’s Classification? – Notes from the Field #050

[Notes from Pinal]: Data is a very simple word, yet it is very powerful. There is a famous saying – know your data. I have quite often found that developers do not know their data; they are often confused by it and not sure how to answer questions about it. My friend Tim Radney is an amazing person who usually has an answer to each of these questions which look simple but are complicated in reality.

Linchpin People are database coaches and wellness experts for a data driven world. In this 50th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains the topic in very simple words. Please follow Tim on Twitter at @tradney.


Do you know your data’s classification?

As data professionals, we have many responsibilities. We could be responsible for backing up and restoring data, writing reports, building queries, writing stored procedures, tuning workloads, or any vast number of responsibilities.

If you are in the business of granting access to the data either through reports/queries or provisioning login access, you should be aware of the type of data you are granting access to. Your company likely has policies in place that should guide how access to certain classifications of data should be handled.

Some of the more common types of data that your company would have stricter controls over would be related to PII, PCI, SOX, GLBA, or HIPAA. You should know what the guidelines are within your company for access to this data and help make sure that those standards are being upheld.

These data assets may require additional auditing of who has access to view, print, or export them. When the data is viewed, it may require watermarks or headers/footers on any reports.

Your organization may require certain types of data, such as PCI data, to be audited to track who is viewing certain elements of that data.

Worse, your organization may need to be doing these things but is not. If that is the case, you should be asking why and helping to implement a data governance plan.

If you want me to take a look at your server and its settings, or if your server is facing any issues, we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL backup and recovery, a must-have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Row Offset in SQL Server For Different Version

A very common question is how to find the row offset in SQL Server. Here are a few examples based on different versions of SQL Server. I have included SQL Server 2000 as well, even though it is an almost 15-year-old product that I encounter less and less every day on production servers.

I have used the AdventureWorks database for the examples.

USE AdventureWorks2014
GO
--------------------------------------------------
-- SQL Server 2012/2014
--------------------------------------------------
DECLARE @RowsPerPage INT = 10, @PageNumber INT = 6
SELECT SalesOrderDetailID, SalesOrderID, ProductID
FROM Sales.SalesOrderDetail
ORDER BY SalesOrderDetailID
OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
GO
--------------------------------------------------
-- SQL Server 2008 / R2
-- SQL Server 2005
-- (On SQL Server 2005, declare the variables and assign them with SET
-- separately; inline DECLARE initialization arrived in SQL Server 2008.)
--------------------------------------------------
DECLARE @RowsPerPage INT = 10, @PageNumber INT = 6
SELECT SalesOrderDetailID, SalesOrderID, ProductID
FROM (
    SELECT SalesOrderDetailID, SalesOrderID, ProductID,
        ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNum
    FROM Sales.SalesOrderDetail
) AS SOD
WHERE SOD.RowNum BETWEEN ((@PageNumber - 1) * @RowsPerPage) + 1
    AND @RowsPerPage * @PageNumber
GO
--------------------------------------------------
-- SQL Server 2000
-- (Note: SQL Server 2000 itself does not accept a variable with TOP;
-- substitute literal values, or use SET ROWCOUNT, on a real 2000 instance.)
--------------------------------------------------
DECLARE @RowsPerPage INT
DECLARE @PageNumber INT
SET @RowsPerPage = 10
SET @PageNumber = 6
SELECT SalesOrderDetailID, SalesOrderID, ProductID
FROM (
    SELECT TOP (@RowsPerPage)
        SalesOrderDetailID, SalesOrderID, ProductID
    FROM (
        SELECT TOP (@PageNumber * @RowsPerPage)
            SalesOrderDetailID, SalesOrderID, ProductID
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
    ) AS SOD
    ORDER BY SalesOrderDetailID DESC
) AS SOD2
ORDER BY SalesOrderDetailID ASC
GO

The results of the above queries are identical to each other.

This blog post is based on an earlier blog post; a few errors from that post have been corrected here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Dashboard – Historic Information

There are a lot of games I play with my daughter in my spare time. Some of them are just for fun, and a lot of them are fun mixed with some learning. As she started to grow up, it became important for me to weave learning into her day-to-day activities. So as soon as she was born, one of the biggest decisions I took was to travel and show her various places in India and even abroad. The idea is to visit these places for fun but also for their historic importance. Learning about different cultures, people, places, food habits and so on, she gets to learn a lot in the process. I personally feel it is up to each parent to find their own sweet spot for making learning fun for their kids while balancing everything else they do. It is a long process, and a lot of planning goes into making them realize what they see in the books is what they are visiting. I have huge admiration for the culture and history each country brings, and we can always learn a lot from our ancestors.

Talking about history brings us to the next set of reports from the Performance Dashboard. They are grouped under Historical Information and Miscellaneous Information, and are logically organized around waits and expensive queries. We can look at expensive queries based on:

  1. CPU
  2. Logical Reads
  3. Logical Writes
  4. Duration
  5. Physical Reads
  6. CLR Time

Apart from this, the historical information also gives the waits and IO statistics across each of the databases. We can find the same in the image below.

Historical Waits

This report has two sections, and both come from the same dataset. The top section is a bar chart, and the bottom section shows the details for each wait.

The dataset for this report comes from the msdb database.

EXEC msdb.MS_PerfDashboard.usp_DmOsWaitStats

Executing this gives a table of all the waits and the amount of wait time for each of them. Since I had just started the server before executing this report, we can see it has mostly been idle, which is quite possible. Having said that, on a busy system these numbers can be completely different.

I have blogged about the Wait Stats extensively over this blog as a 28 part series – if you are interested, feel free to read the same from SQL SERVER – 28 Links for Learning SQL Wait Stats from Beginning.
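If the Performance Dashboard is not installed, a raw query against the same DMV gives a comparable picture. This is only a sketch; the short exclusion list is my own simplification, and real-world scripts filter out many more benign wait types.

-- Top waits since the last restart (or manual clear) of wait statistics
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;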

Historical IO Report

This report is an aggregation of IO operations for each database since the server was started. Since this is IO, it compares the databases relative to one another by reads and writes. To get this information, the report calls the following SP in the msdb database.

EXEC msdb.MS_PerfDashboard.usp_DBFileIO

As we can see in the figure below, this is a great indicator that on my server AdventureWorks2012 has taken the maximum number of reads while my TempDB has suffered the maximum writes.

The patterns that emerge from this tell us stories that are normally difficult to get at. Based on this analysis, we can assume that a lot of reports or operations are being run against AdventureWorks2012 and working their intermediate results into TempDB tables on our server. I am sure you will be able to run a similar analysis on your installation too.
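This kind of per-database read/write rollup can also be approximated straight from the DMVs. Here is a minimal sketch over sys.dm_io_virtual_file_stats; the dashboard SP certainly does more, so treat this only as a way to sanity-check its numbers.

-- Cumulative reads and writes per database since the server started
SELECT DB_NAME(vfs.database_id) AS database_name,
       SUM(vfs.num_of_reads)  AS total_reads,
       SUM(vfs.num_of_writes) AS total_writes,
       SUM(vfs.num_of_bytes_read)    / 1048576 AS mb_read,
       SUM(vfs.num_of_bytes_written) / 1048576 AS mb_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY total_reads DESC;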

The second section of the report shows the details of the top 20 objects for each database based on IO. For simplicity's sake, I have expanded the AdventureWorks2012 database for reference here.

Expensive Queries section

The next logical section in the report is around expensive queries. Each report shows the same data, logically sorted based on CPU, IO, reads, writes, etc. If we run Profiler, we can find the below query hitting our msdb database.

EXEC sp_executesql @stmt=N'exec msdb.MS_PerfDashboard.usp_QueryStatsTopN1 @OrderBy_Criteria',@params=N'@OrderBy_Criteria NVarChar(max)',@OrderBy_Criteria=N'CPU'

As mentioned above, the same query gets fired with a different sort order for each report. The top section shows a bar chart of the top CPU-consuming queries.

And as usual, the second section outlines the details. In each case, the details of CPU, IO, reads etc. are also shown. This is marked in the figure below.

On expanding each of the columns, we can also check the total, max, min and average for CPU, duration, physical reads, logical writes and so on.
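Without the dashboard, sys.dm_exec_query_stats can be sorted by any of these same dimensions. Below is a sketch for the CPU ordering; swap the ORDER BY column (total_logical_reads, total_logical_writes, total_elapsed_time, total_physical_reads) for the other flavors of the report.

-- Top 10 cached queries by total CPU (worker) time
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;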

Miscellaneous Reports

This contains a bunch of reports that act as an addendum to the reports already available. The one that is additional and worth a mention is the XEvents Session report.

XEvents are a deeper topic to understand. You can find a primer at SQL SERVER – Introduction to Extended Events – Finding Long Running Queries. Fundamentally, the report shows the Extended Events sessions active on the server at this moment in time. As you might have guessed by now, the report calls an SP from the msdb database to fill up the contents.

EXEC msdb.MS_PerfDashboard.usp_XEventSessions

At the end of the day, these procedures join various DMVs to get the output. In this case, the report uses sys.dm_xe_sessions, sys.dm_xe_session_targets and sys.dm_xe_session_event_actions to get its values. As we have said a number of times before, the power of DMVs is often underestimated when working with SQL Server.
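A bare-bones version of that lookup, joining two of the DMVs the report itself uses, could look like this sketch:

-- Extended Events sessions currently running, with their targets
SELECT s.name AS session_name,
       s.create_time,
       t.target_name
FROM sys.dm_xe_sessions AS s
LEFT JOIN sys.dm_xe_session_targets AS t
       ON t.event_session_address = s.address
ORDER BY s.name;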

Another very important link is about Active Traces. To demonstrate the power of this report, I started a Profiler trace on my SQL instance. Trace ID 1 is the default trace which runs with every default installation of SQL Server; many standard reports depend on it.

Trace ID 2 in the below report has some warnings:

  • Rowset trace: client/GUI traces can negatively impact server performance.
  • This trace is configured to capture one or more frequently occurring events or events which typically produce a large amount of trace data; tracing these events may negatively impact system performance. The expensive events are highlighted in yellow in the table below.

As we can see below, this report displays a warning for any trace that is configured to capture events which occur frequently or typically produce a large amount of trace data. It is advisable to avoid capturing these events unless strictly required, to prevent potential performance problems on the system; capture them generally only on a non-production server when you are troubleshooting a particular issue. If we click the (+) symbol, we can see the events captured by the trace, and it would also highlight the expensive events.

Here is the list of events which are “expensive” and would be highlighted in Yellow automatically.

Lock:Released
Lock:Acquired
SQL:StmtStarting
SQL:StmtCompleted
SP:StmtStarting
SP:StmtCompleted
Scan:Started
Scan:Stopped
TransactionLog
Showplan Text (Unencoded)
Showplan Text
Showplan All
Showplan Statistics Profile
Audit Statement Permission Event
Audit Schema Object Access Event
Showplan XML
Showplan XML Statistics Profile
Audit Database Object Access Event

Ideally, we should use server-side traces. I have seen a common misconception about client-side and server-side traces: people think that if they run the Profiler UI on the server itself, it is a server-side trace, which is WRONG. A trace using profiler.exe is called a rowset-based trace or client-side trace, and it consumes more resources than tracing directly to a file. I would recommend my readers use a server-side trace writing directly to a fast disk to minimize the performance impact. We can use the Profiler user interface to configure the events and columns we want to capture, and save those settings to a script by choosing Export – Script Trace Definition under the File menu.
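For completeness, here is a minimal sketch of what a scripted server-side trace looks like. The file path is purely illustrative, and the definition Profiler exports for you will be far more complete; this just shows the shape of the sp_trace_* calls involved.

-- Minimal server-side trace: SQL:BatchCompleted (event 12) written to a file
DECLARE @TraceID INT, @MaxFileSize BIGINT = 100, @On BIT = 1;
-- Path is an example; SQL Server appends .trc automatically
EXEC sp_trace_create @TraceID OUTPUT, 0, N'E:\Traces\MySlowServer', @MaxFileSize;
EXEC sp_trace_setevent @TraceID, 12, 1,  @On;  -- TextData
EXEC sp_trace_setevent @TraceID, 12, 12, @On;  -- SPID
EXEC sp_trace_setevent @TraceID, 12, 13, @On;  -- Duration
EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace
-- Later: EXEC sp_trace_setstatus @TraceID, 0;  -- stop
--        EXEC sp_trace_setstatus @TraceID, 2;  -- close and remove the definition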

Reference: Pinal Dave (http://blog.sqlauthority.com)