SQL SERVER – Performance Dashboard: Historic Information

There are a lot of games that I play with my daughter in my spare time. Some of them are just for fun, and a lot of them are fun mixed with some learning. As she started to grow up, it was important for me to weave learning into her day-to-day activities. So, soon after she was born, one of the biggest decisions I took was to travel and show her various places in India and even abroad. The idea is to visit these places for fun, but also for their historic importance. Learning about different cultures, people, places, food habits and so on, she gets to absorb a lot in the process. I personally feel it is up to each parent to find their own sweet spot for making learning fun for their kids while balancing everything else they do. It is a long process, and a lot of planning goes into making them realize that what they see in books is what they are visiting. I have huge admiration for the culture and history each country brings, and we can always learn a lot from our ancestors.

Talking about history brings us to the next set of reports in the Performance Dashboard. They are grouped under Historical Information and Miscellaneous Information, and are logically organized around Waits and Expensive Queries. We will be looking at expensive queries based on:

  1. CPU
  2. Logical Reads
  3. Logical Writes
  4. Duration
  5. Physical Reads
  6. CLR Time

Apart from this, the historical information also gives the Waits and IO Statistics across each of the databases. We can find the same in the image below.

Historical Waits

This report has two sections, and both come from the same dataset. The top section is a bar chart and the bottom section shows the details for each wait.

The dataset for this report comes from the msdb database.

EXEC msdb.MS_PerfDashboard.usp_DmOsWaitStats

Executing this gives a table of all the wait types and the amount of time waited on each of them. Since I had only just started the server before executing this report, we can see it has been idle most of the time, which is quite possible. Having said that, on a busy system these numbers can look completely different.
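Under the covers, this procedure presumably reads from the sys.dm_os_wait_stats DMV. Here is a minimal sketch (an assumption, not the procedure's exact query) that returns similar raw numbers:

-- Top waits by total wait time since server start (a hedged sketch)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;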

I have blogged about Wait Stats extensively on this blog as a 28 part series – if you are interested, feel free to read it starting from SQL SERVER – 28 Links for Learning SQL Wait Stats from Beginning.

Historical IO Report

This report is an aggregation of IO operations for each database since the server was started. Since this is IO, it compares the databases relative to each other by Reads and Writes. To get this information, the report calls the following stored procedure in the msdb database.

EXEC msdb.MS_PerfDashboard.usp_DBFileIO

As we can see in the figure below, this is a great indicator that on my server AdventureWorks2012 has taken the maximum amount of Reads while my TempDB has suffered the maximum Writes.

The patterns that emerge out of this tell stories that we would normally find difficult to piece together. Based on this analysis, we can assume that a lot of reports or operations are being run against AdventureWorks2012 and worked into TempDB tables on our server. I am sure you will be able to do a similar analysis on your installation too.
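If you want to approximate this picture outside the dashboard, sys.dm_io_virtual_file_stats exposes the same cumulative counters. Here is a minimal sketch (an assumption, not usp_DBFileIO's exact query):

-- Cumulative reads/writes per database since the server started (a hedged sketch)
SELECT DB_NAME(vfs.database_id) AS database_name,
       SUM(vfs.num_of_reads)         AS total_reads,
       SUM(vfs.num_of_writes)        AS total_writes,
       SUM(vfs.num_of_bytes_read)    AS bytes_read,
       SUM(vfs.num_of_bytes_written) AS bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY total_reads DESC;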

The second section of the report shows the details of the Top 20 objects for each database based on IO. For simplicity's sake, I have expanded the AdventureWorks2012 database for reference here.

Expensive Queries section

The next logical sections we can see in the report are around expensive queries. They show the same data in each report, just sorted differently by CPU, IO, Reads, Writes, etc. If we run Profiler, we can find the query below hitting our msdb database.

EXEC sp_executesql @stmt=N'exec msdb.MS_PerfDashboard.usp_QueryStatsTopN1 @OrderBy_Criteria',@params=N'@OrderBy_Criteria NVarChar(max)',@OrderBy_Criteria=N'CPU'

As mentioned above, the same query gets fired with a different sort order for each report. The top section shows a bar chart of the top CPU consuming queries.

And as usual, the second section outlines the details. In each case the details of CPU, IO, Reads, etc. are also shown. This is marked in the figure below.

On expanding each of the columns, we can also check the Total, Max, Min, and Avg for CPU, Duration, Physical Reads, Logical Writes, and so on.
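For readers who want the same picture without the dashboard, the plan cache can be queried directly. Below is a minimal sketch (an assumption, not usp_QueryStatsTopN1's exact logic) of a top-CPU query against sys.dm_exec_query_stats:

-- Top 20 statements by total CPU from the plan cache (a hedged sketch)
SELECT TOP (20)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_logical_writes,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                   WHEN -1 THEN DATALENGTH(st.text)
                   ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

Changing the ORDER BY column to total_logical_reads, total_logical_writes, total_elapsed_time, total_physical_reads or total_clr_time gives the other sort orders the reports offer.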

Miscellaneous Reports

This contains a bunch of reports that act as an addendum to the already available reports. The one that is new and worth a mention is the XEvents Session report.

XEvents are a deeper topic to understand. You can get a primer from SQL SERVER – Introduction to Extended Events – Finding Long Running Queries. Fundamentally, the report shows the XEvent sessions active on the server at this moment in time. As you might have guessed by now, the report calls an SP from the msdb database to fill in the contents.

EXEC msdb.MS_PerfDashboard.usp_XEventSessions

At the end of the day, these procedures join various DMVs to get their output. In this case the report uses sys.dm_xe_sessions, sys.dm_xe_session_targets and sys.dm_xe_session_event_actions to get the values. As we have said a number of times before, the power of DMVs is always underestimated when working with SQL Server.
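To see those DMVs in action, here is a minimal sketch (an assumption, not the procedure's exact query) joining two of them:

-- Active XEvent sessions and their targets (a hedged sketch)
SELECT s.name AS session_name,
       s.create_time,
       t.target_name,
       t.execution_count
FROM sys.dm_xe_sessions AS s
INNER JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address;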

Another very important link is about Active Traces. To demonstrate the power of this report, I have started a Profiler trace on the SQL instance. Trace ID 1 is the default trace, which runs with every default installation of SQL Server, and many standard reports depend on it.

Trace ID 2 in the report below has some warnings:

  • Rowset trace: client/GUI traces can negatively impact server performance.
  • This trace is configured to capture one or more frequently occurring events or events which typically produce a large amount of trace data; tracing these events may negatively impact system performance. The expensive events are highlighted in yellow in the table below.

As we can see below, this report displays a warning for any trace that is configured to capture events which occur frequently or which typically produce a large amount of trace data. It is advisable to avoid capturing these events unless strictly required, to prevent potential performance problems; capture them generally only on a non-production server when troubleshooting a particular issue. If we click on the (+) symbols, we can see the events captured by the trace, with the expensive events highlighted.

Here is the list of events which are considered “expensive” and would be highlighted in yellow automatically.

Lock:Released
Lock:Acquired
SQL:StmtStarting
SQL:StmtCompleted
SP:StmtStarting
SP:StmtCompleted
Scan:Started
Scan:Stopped
TransactionLog
Showplan Text (Unencoded)
Showplan Text
Showplan All
Showplan Statistics Profile
Audit Statement Permission Event
Audit Schema Object Access Event
Showplan XML
Showplan XML Statistics Profile
Audit Database Object Access Event

Ideally, we should use server-side traces. I have seen a common misconception about client-side and server-side traces: people think that if they run the Profiler UI on the server itself, it is a server-side trace, which is WRONG. A trace using profiler.exe is called a rowset-based trace, or client-side trace, and it consumes more resources than tracing directly to a file. I would recommend my readers use a server-side trace writing directly to a fast disk to minimize the performance impact. We can use the Profiler user interface to configure the events and columns we want to capture, and save those settings to a script by choosing Export – Script Trace Definition under the File menu.
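For completeness, here is a hedged sketch of what such a scripted server-side trace looks like; the file path and the single event/column pair are illustrative assumptions only:

-- A minimal server-side trace writing directly to a file (a sketch, not a full definition)
DECLARE @TraceID INT;
DECLARE @maxfilesize BIGINT = 50;  -- max size in MB per trace file
DECLARE @on BIT = 1;

-- N'E:\Traces\MyTrace' is an assumed path; SQL Server appends .trc automatically
EXEC sp_trace_create @TraceID OUTPUT, 0, N'E:\Traces\MyTrace', @maxfilesize, NULL;

-- Capture RPC:Completed (event 10): TextData (column 1) and Duration (column 13)
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;

-- Start the trace (status 1 = start, 0 = stop, 2 = close and delete the definition)
EXEC sp_trace_setstatus @TraceID, 1;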

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Performance Dashboard: Current Activity Section Reports

You are driving on the highway and you want to know your speed because there are generally cops in that area – you watch the dashboard. Your kid seems to be fiddling with your cell phone – you want to see what they are actually doing and make sure they are not accidentally calling your office colleagues. You walk into your hall and see an ad playing on the TV – you want to know which channel it is. All these are classic examples of understanding the current state of the activity at hand. There are surely a number of ways to achieve the same. I am sure you get the drift.

This section of the Performance Dashboard is no different. It gives you a glimpse of the current activity happening on the system. At a high level it is divided into User Requests and User Sessions along with their various vital parameters. They are taken from the DMVs sys.dm_exec_requests and sys.dm_exec_sessions respectively.

There are two hyperlinks that give us even more detailed information about the current system. As discussed in our previous post, it is vital to track the Cache Hit Ratio, and for an OLTP system it needs to stay in the high 90s as much as possible.
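As an aside, one common way to compute the cache hit ratio yourself is from the performance counter DMV; this sketch is an assumption for illustration, not the dashboard's exact query:

-- Buffer cache hit ratio from sys.dm_os_performance_counters (a hedged sketch)
SELECT CAST(a.cntr_value AS FLOAT) / b.cntr_value * 100.0 AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
INNER JOIN sys.dm_os_performance_counters AS b
        ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';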

Clicking on “User Requests”, we will be presented with the “Current Requests” report. This is a live report on the requests queued in our system at this very moment.

There are a number of columns that get reported in this single output. Due to image size restrictions, only a part is shown above. The complete list of columns available is:

  1. Session ID: Request ID
  2. Start Time
  3. Query Text
  4. Status
  5. Wait Type
  6. Wait Time (ms)
  7. Wait Resource
  8. Blocking Session ID
  9. CPU Time (ms)
  10. Total Elapsed Time
  11. Logical Reads
  12. Physical Reads
  13. Writes
  14. Transaction Isolation Level
  15. Granted Query Memory (KB)
  16. Executing Managed Code

We have the capability to sort the result set by any column by clicking on the column name itself. Under the covers, this report is fetched using a stored procedure in the msdb database (created while deploying the Dashboard Reports). Here is what you can run:

EXEC msdb.MS_PerfDashboard.usp_RequestDetails 0

The parameter passed is @include_system_processes; a value of 0 means we don't want to see system processes.

Each of the queries in the “Query Text” column is a hyperlink and gives us more detail when clicked. Below is the output wherein we get the Query Text, Query Plan, Showplan XML and some vital parameters for the query. These can also be obtained from the “Execution Plan” and the “Properties” pane for a query.

We can also get this information using DMVs; for example, the plan can be obtained from sys.dm_exec_text_query_plan() by passing the plan handle.
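Here is a minimal sketch (not the report's exact query) that stitches those DMVs together for the currently running requests:

-- Query text and plan for each running request (a hedged sketch)
SELECT r.session_id,
       st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
CROSS APPLY sys.dm_exec_text_query_plan(r.plan_handle,
                                        r.statement_start_offset,
                                        r.statement_end_offset) AS qp
WHERE r.session_id <> @@SPID;  -- exclude this query itself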

The next section to look at is “User Sessions”. If we click on it, we will be presented with all the sessions currently active on our live system. This report is fetched by running the procedure below:

EXEC msdb.MS_PerfDashboard.usp_SessionDetails @include_system_processes = 0

Remember, Performance Dashboard Reports are great for understanding what is happening in our system at that moment in time. They are not to be seen as proactive reports. As a DBA, if you want to quickly and interactively check the vital parameters of the system, this is the place to go.

As you can see in the output, the “Sessions Overview” report gives us the active sessions with CPU, memory usage, Logical Reads, Physical Reads, Writes, the program running on that session, NT User Name and more. Part of this information is already available in Activity Monitor, but I rather like this output because it is not continuously running or refreshing.

Clicking on the Session ID hyperlink – or the “Session ID: Request ID” hyperlink in the “User Requests” report which we discussed earlier in this blog post – leads us to the “Session Details” page.

For the given session and the query in question, we get to know the session's ANSI settings, Login Name, Login Time, NT User, CPU and memory usage, when the last request was fired, and much more. I am sure you will not be able to get this information so easily from the other standard reports, so try to use it wherever it might help you.

As discussed above, we can get the top section of the values using a DMV query such as:

SELECT session_id, login_time, host_name, program_name, login_name,
       nt_domain, nt_user_name, status, cpu_time, memory_usage,
       total_scheduled_time, total_elapsed_time, last_request_start_time,
       last_request_end_time, reads, writes, logical_reads,
       is_user_process, text_size, language, date_format, date_first,
       quoted_identifier, arithabort, ansi_null_dflt_on,
       ansi_defaults, ansi_warnings, ansi_padding,
       ansi_nulls, concat_null_yields_null,
       transaction_isolation_level, lock_timeout,
       deadlock_priority, row_count, prev_error
FROM sys.dm_exec_sessions

Quick Tips about Dashboard Reports

  1. We can always use the back button available at the top of each report (highlighted below).

  2. The client report viewer control used by SSMS does not support copy (Ctrl+C / Ctrl+V). The workaround is to right-click on the report and choose “Export to Excel”; once the exported file is open, we can copy the text.

This is just a sample of how powerful some of the DMVs are. They have tons of information that goes unnoticed by us. I hope you will start using these DMVs effectively in the future.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Performance Dashboard Reports – Home Page details

Earlier we discussed in detail what the SSMS Server Dashboard Report looks like when working with SQL Server Management Studio standard reports. These dashboard reports pack an amazing wealth of information into one single view. The Dashboard home page for the Performance Dashboard reports is no less in that sense. The main page has tons of useful information which complements the other reports we have seen in the past, in a new, visually appealing format.

The Performance Dashboard starts with six sections which we will discuss one after another. I hope you got a chance to read how the installation and configuration needs to be done for the Performance Dashboard. The default landing place is this Dashboard main page.

Section 1: The Dashboard opens with some vital information about the edition and the local server time at the top.

Then we have a bar chart which shows the current CPU utilization on the server for SQL and other processes. This is a quick way to understand who is consuming CPU cycles on our SQL Server box.

There is an interesting warning note we get if SQL Server has been configured to run with a smaller number of CPUs via configuration. In the image below we can see a note stating the server is using 5 out of the 8 CPUs available in the system. As a DBA this is great information, because on a dedicated SQL Server box you don't want a given instance running sub-optimally.

At this point, I would also call out that similar information is presented in the Error Log when the instance starts. On my local server you can see this information shown as 8 logical processors being used by my SQL Server instance.

You might want to know how this CPU history information is retrieved. Under the hood, this report calls the stored procedure MS_PerfDashboard.usp_Main_GetCPUHistory in the msdb database, which presents readable information from the query below. It shows the last 15 snapshots.

SELECT TIMESTAMP, CONVERT(XML, record) AS record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
AND record LIKE '%<SystemHealth>%'
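The interesting values live inside that XML. Here is a hedged sketch (assuming the standard scheduler-monitor record shape) of how the SQL and idle CPU percentages can be shredded out:

-- Extract SQL vs. idle CPU from the scheduler monitor ring buffer (a hedged sketch)
SELECT rb.record.value('(Record/@id)[1]', 'int') AS record_id,
       rb.record.value('(Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       rb.record.value('(Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct
FROM (SELECT CONVERT(XML, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE '%<SystemHealth>%') AS rb;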


Section 2: This is the second section, on the top right pane. Generally this section is empty if there are no waits in the system. In my case, I have mimicked a dummy workload to get a few waits in our database. The warning note at the top mentions that excessive waits can degrade the performance of the system.

Waits are sometimes inevitable in a highly active transactional OLTP system. The deal is not to have excessive or long waits, since these bring down the overall efficiency of the system. If you are interested in learning about Wait Stats, look at the month-long series I wrote a while back. The same content, with a bunch of additional information, got published as a book too, so feel free to grab that if you are interested – SQL Wait Stats Kindle.

This section fetches information from the DMVs sys.dm_exec_requests and sys.dm_exec_sessions. Below is the outline of the query under MS_PerfDashboard.usp_Main_GetRequestWaits:

SELECT r.session_id
,      r.wait_type
,      r.wait_time
FROM sys.dm_exec_requests AS r
INNER JOIN sys.dm_exec_sessions AS s ON r.session_id = s.session_id
WHERE r.wait_type IS NOT NULL
AND s.is_user_process = 0x1

Back in our Performance Dashboard, if we click on any of the wait types in the above graph, we will be presented with more details about which queries are waiting, what they are waiting on, their session IDs, and so on.

Feel free to click the back button and we will get back to the main dashboard.

Section 3: This section is a summary of User Requests and Sessions by count, CPU time, wait time and the overall cache hit ratio. As we are on the topic of cache, I want to recall a post from Jonathan Kehayias – though it is about a wait type, there is a plethora of links and resources pointing to cache, and it is worth a read for sure. User Requests fetches information from sys.dm_exec_requests whereas User Sessions gets its information from sys.dm_exec_sessions.

Section 4 & 5: The next section is all about a bunch of links which will lead us to a number of other reports. They range from CPU, Logical Reads, Logical Writes, IO statistics, Physical reads etc. The miscellaneous information is about additional information around Active Traces, Extended Event Sessions, Databases and more.

If you are a seasoned DBA, it would be easy for you to guess that “Active Traces” can be seen via sys.traces, “Active XEvent Sessions” via sys.dm_xe_sessions, and “Databases” via the sys.databases catalog view. Based on usage of the system, sometimes we might also see a missing indexes hyperlink, which is picked up from the sys.dm_db_missing_index_group_stats DMV.
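If you want to peek at those sources directly, here is a minimal sketch of each (the columns are chosen for illustration):

-- The views behind the miscellaneous hyperlinks (a hedged sketch)
SELECT id, path, is_rowset, is_default FROM sys.traces;   -- active traces
SELECT name, create_time FROM sys.dm_xe_sessions;         -- active XEvent sessions
SELECT name, state_desc FROM sys.databases;               -- databases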

In future posts, we will look at some of the reports carved out from these links. Do let me know if you have started using these Performance Dashboard reports. I am eager to know your experiences for sure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Performance Dashboard Installation and Configuration

If the standard set of reports was not enough, in this blog let me introduce an interesting addition to the reports, called the Performance Dashboard reports. This is a set of predefined additional custom reports, built by Microsoft SQL Server support team members, which are worth a look and a great addition to the SQL Server Management Studio reports. In this blog, let me take you through the installation steps for the same.

The set of reports can be downloaded from the Microsoft Downloads site:

  1. Microsoft® SQL Server® 2012 Performance Dashboard Reports – Used in this blog post.
  2. SQL Server 2005 Performance Dashboard Reports

For SQL Server Management Studio version 2008 and above, use the first link. It works even with SQL Server 2014 without any problem. In my opinion, these reports complement the already existing reports with additional information which any Developer / DBA would want to explore. I must point out that the standard reports are not drill-down reports, which means you can only look at the data shown in the report. In the dashboard, you can keep clicking on hyperlinks available in the report itself and keep going to other reports. We can explore the health of the server by clicking on the various charts and hyperlinks in the report.

Once you download it, go ahead and double-click setup.exe, and the wizard starts as shown below:

I generally prefer the defaults, and one of the important steps is to keep a note of the default installation location as marked below. Please take special note of it, because we will be using this path later in our configuration.

So go through the rest of the steps and finish. Then open the folder noted above. We will be presented with the readme.txt automatically; feel free to follow the steps there.

The initial step is to run the “setup.sql” file from this folder. An important point to note is that the reports will not work for SQL Azure databases. Though the folder appears to have only one .rdl file, in reality there is a bunch of 21 report files which are hidden from the end user. If you enable viewing hidden files, the complete list can be viewed here.

Now that we have got all the installation done, it is time to visualize the reports. To invoke the report, right-click the Server node -> Reports and select “Custom Reports…”. This brings up the file open dialog to select the file. Point it to the same folder as discussed above.

Be careful in this step, because if we select any other .rdl apart from “performance_dashboard_main.rdl”, we will get an error like below.

The 'version string' parameter is missing a value
OR
The 'plan handle' parameter is missing a value

The errors are caused because all the other reports are drill-down reports, and a value is passed from the parent report to the child report. Hence it is good, in a way, to have the other .rdl files hidden, because we will never run into these problems. If we point the dialog to “performance_dashboard_main.rdl” and select it, the Dashboard opens as shown below.

This is the high-level report and the starting point that feeds the other reports. In future blogs, we will look into each of the sections and how to land at each of these reports.

Do let me know if you have ever used these “Performance Reports” in your environment. They are worth a try for sure, and let me know your experience in using them.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Index Usage Statistics, Physical Statistics & User Statistics

When I used to be in the consulting business, the most requested sessions or topics always revolved around performance tuning. And whenever I talk to folks about what they want covered, they often get to the point of when and how to create efficient indexes in a database. The concept of indexes and how they function has been in the industry for more than two decades, and it still baffles me how they remain a topic of debate and interest. The core fundamentals of indexes have surely stood the challenges of time and are still a great starting point for building highly performant SQL Server, or for that matter any data-centric, applications. I consider creating indexes a science rather than a rule of thumb. SQL Server has a cost-based optimizer and does a pretty good job of identifying, evaluating and using the right index based on your query needs. Having said that, it is also up to the administrator to monitor how the indexes are being used in a given database – which are used the most and which the least. It is a fine line to walk to understand and eliminate unwanted and duplicate indexes created by developers from time to time. I do have a starter script for duplicate indexes which is worth a look.

In this blog, we would take a quick look at three reports which are available under Database node.

  1. Index Usage Statistics
  2. Index Physical Statistics
  3. User Statistics

Index Usage Statistics

This report shows information about index operational statistics and usage statistics. By usage I mean statistics about scans, seeks, when the index was last accessed, how many user scans, and so on. The queries used in the background are pretty straightforward. The Index Operational Statistics section shows data using the DMV sys.dm_db_index_operational_stats(db_id(), null, null, null) along with other catalog views to convert object IDs to names. The next section, Index Usage Statistics, shows data using the DMV sys.dm_db_index_usage_stats, which is well documented in Books Online.

Using the above report, we can see which index is used by various queries and whether they are doing seeks or scans. If it is a small table then it might be OK to see a scan, but in general I don't get a good feeling when I see a scan of an index on a huge table. Sometimes I get questions about which indexes are not being used by SQL Server. This report can give us the indexes which are used; if an index is not shown here, it means no query has fired which could utilize that index. So please don't drop indexes on the assumption that they are not getting used. Also, if the server was restarted recently, there wouldn't be enough statistics to play with or show in this report.
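Here is a minimal sketch (an assumption, not the report's exact query) of the usage-statistics portion for the current database:

-- Seeks, scans, lookups and updates per index (a hedged sketch)
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
       us.last_user_seek, us.last_user_scan
FROM sys.dm_db_index_usage_stats AS us
INNER JOIN sys.indexes AS i
        ON us.object_id = i.object_id AND us.index_id = i.index_id
WHERE us.database_id = DB_ID();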

Index Physical Statistics

As the report name says, it shows how the index is physically laid out in the data files. Here is how the report looks:

Under the hood, this report uses the query below.

SELECT t4.name AS [schema_name]
,      t3.name AS table_name
,      t2.name AS index_name
,      t1.object_id
,      t1.index_id
,      t1.partition_number
,      t1.index_type_desc
,      t1.index_depth
,      t1.avg_fragmentation_in_percent
,      t1.fragment_count
,      t1.avg_fragment_size_in_pages
,      t1.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') t1
INNER JOIN sys.objects t3 ON (t1.object_id = t3.object_id)
INNER JOIN sys.schemas t4 ON (t3.schema_id = t4.schema_id)
INNER JOIN sys.indexes t2 ON (t1.object_id = t2.object_id AND t1.index_id = t2.index_id)
WHERE index_type_desc <> 'HEAP'
ORDER BY t4.name, t3.name, t2.name, partition_number

Looking at the report, it gives a recommendation under “Operation Recommended” – Rebuild or Reorganize. The recommendation depends upon the fragmentation in the index. I must point out that the report does NOT consider the size of the table. For example, if I have an index with only 10 pages, even a rebuild or reorganize might not remove the fragmentation. So you should click on the (+) symbol and see how big the table is. To demonstrate this, I have selected the first table.

Even if I rebuild the index, it would still recommend Rebuild. Notice that this index has just 2 pages, so we don't have to worry about fragmentation.
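Acting on the recommendation boils down to two commands. The index and table names below are hypothetical placeholders, and the thresholds in the comment are a common rule of thumb rather than anything this report enforces:

-- REORGANIZE is usually suggested for light fragmentation (roughly 5-30%),
-- REBUILD for heavier fragmentation; tiny indexes are best left alone.
ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD;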

User Statistics

This report has no relation to indexes, but since it is a small report, I will club it into this blog post. It shows information about the users who are connected to the database for which the report is launched. This report is useful in situations where multiple applications use different logins to connect to the SQL Server instance.

Here is the query used under the hood.

SELECT (dense_rank() OVER (ORDER BY login_name, nt_user_name)) % 2 AS row_num
,      (row_number() OVER (ORDER BY login_name, nt_user_name, sessions.session_id)) % 2 AS session_num
,      login_name
,      nt_user_name
,      sessions.session_id AS session_id
,      COUNT(DISTINCT connections.connection_id) AS connection_count
,      COUNT(DISTINCT CONVERT(CHAR, sessions.session_id) + '_' + CONVERT(CHAR, requests.request_id)) AS request_count
,      COUNT(DISTINCT cursors.cursor_id) AS cursor_count
,      CASE WHEN SUM(requests.open_tran) IS NULL THEN 0 ELSE SUM(requests.open_tran) END AS transaction_count
,      sessions.cpu_time + 0.0 AS cpu_time
,      sessions.memory_usage * 8 AS memory_usage
,      sessions.reads AS reads
,      sessions.writes AS writes
,      sessions.last_request_start_time AS last_request_start_time
,      sessions.last_request_end_time AS last_request_end_time
FROM sys.dm_exec_sessions sessions
LEFT JOIN sys.dm_exec_connections connections ON sessions.session_id = connections.session_id
LEFT JOIN master..sysprocesses requests ON sessions.session_id = requests.spid
LEFT JOIN sys.dm_exec_cursors(NULL) cursors ON sessions.session_id = cursors.session_id
WHERE sessions.is_user_process = 1
AND requests.dbid = DB_ID()
GROUP BY sessions.login_name
,        sessions.nt_user_name
,        sessions.session_id
,        sessions.cpu_time
,        sessions.memory_usage
,        sessions.reads
,        sessions.writes
,        sessions.last_request_start_time
,        sessions.last_request_end_time
All you need to notice is that the information is fetched from sys.dm_exec_sessions, sys.dm_exec_connections and master..sysprocesses. You can also observe that a filter has been added on DB_ID(), which gives the database ID for the connection running the query. Here is the report.

In the “Active Sessions” column there is a (+) symbol for each row which can be used to get details about the sessions used by that particular login.

This has been a long and exhaustive list of SSMS reports covered as part of the series. Let us see how this learning can be put to use in our environments, and let me know if you have learnt something new in this series so far.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Resource Locking and Object Execution Statistics Report

As a DBA or SQL developer, the amount of time we spend in SQL Server Management Studio can never be measured, and these hidden gems of SQL Server should never be underestimated. With each release the tool improves with a new set of features. I have seen multiple blogs where the Product Managers constantly ask for feedback and look at making enhancements to the tool. If you go to the Connect site, the votes for a feature request decide if it will ever make it into the next release of SSMS. If you get the drift, knowing the statistics on a given request decides how the feature will get added; these statistics give the product team confidence in people's requests. Similarly, when working with SQL Server objects, it is useful to know how many times a procedure was called and how many CPU and IO cycles were spent.

In this blog we will talk about two reports from the database-level standard reports, as shown in the diagram.

Resource Locking Statistics by Objects

This report is similar to the earlier blocking report which is available at the server level. Since this report is at the database level, it shows information about blocking within the database for which we launched the report. If there is no blocking in the database, the report launches and shows the message below:

“Currently, this graph does not have any data to show”. This is shown in the diagram below.

For demonstration purposes I have created two blocking chains, one on table t1 and another on table t2, and have launched the report again. If we compare this with the earlier report, it's clearly visible that the warning is gone and, as highlighted in the image, we can see a non-zero value under “# Waiting Transactions”. Without clicking on the (+) it would not be possible to see who is blocked.

Once we expand Object No. 1, below is what we get. We can clearly see that there are 5 locks which are granted and 1 lock on a RID in WAIT status. The SPID is shown at the same level.

The majority of the blocking information in this report is picked from the DMVs sys.dm_tran_locks and sys.dm_exec_sessions. Other information about object metadata comes from catalog views – sys.partitions, sys.objects and sys.schemas.

Here is the list of possible locks shown in the report: METADATA, DATABASE, FILE, TABLE, HOBT, EXTENT, PAGE, KEY, RID, ALLOCATION_UNIT and APPLICATION.
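A minimal sketch (an assumption, not the report's exact query) of pulling the waiting lock requests from that DMV:

-- Lock requests currently waiting in this database (a hedged sketch)
SELECT tl.request_session_id,
       tl.resource_type,
       tl.request_mode,
       tl.request_status
FROM sys.dm_tran_locks AS tl
WHERE tl.resource_database_id = DB_ID()
  AND tl.request_status = 'WAIT';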

Object Execution Statistics

This is one of the more interesting reports; it shows information about the execution of objects in the database. If we launch this report for a database which is not in use, so that no plans for any object (stored procedure, function, etc.) are available in the plan cache, then we see an “empty” report like below.

There are five graphs available in the report. All of them are taken from information available in the DMV sys.dm_exec_query_stats, which contains historical information about query executions since the server was started. If a plan is evicted from the cache, the report will not show that query, which might be a problem.

To show some data in the report, I have executed some stored procedures and functions in the AdventureWorks2014 sample database. Here is the report when we have plans available in cache. This is what you would see on a production server.

You will notice that out of the 5 graphs only 3 are shown. That's due to the fact that the others are not significantly high and would be shown as zero.

Under the covers, it runs the query below. I have modified the query a little to show meaningful column names.

SELECT  CASE WHEN sch.name IS NULL THEN '' ELSE sch.name END AS schema_name
,       dense_rank() OVER (ORDER BY s2.objectid) AS SPRank
,       s3.name AS [Obj Name]
,       s3.TYPE AS [Obj Type]
,       (SELECT TOP 1 SUBSTRING(TEXT,(s1.statement_start_offset+2)/2, (CASE WHEN s1.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),TEXT))*2 ELSE s1.statement_end_offset END - s1.statement_start_offset) /2  ) FROM sys.dm_exec_sql_text(s1.sql_handle)) AS [SQL Statement]
,       execution_count
,       plan_generation_num
,       last_execution_time
,       ((total_worker_time+0.0)/execution_count)/1000 AS [avg_worker_time]
,       total_worker_time/1000.0 'total_worker_time'
,       last_worker_time/1000.0 'last_worker_time'
,       min_worker_time/1000.0 'min_worker_time'
,       max_worker_time/1000.0 'max_worker_time'
,       ((total_logical_reads+0.0)/execution_count) AS [avg_logical_reads]
,       total_logical_reads
,       last_logical_reads
,       min_logical_reads
,       max_logical_reads
,       ((total_logical_writes+0.0)/execution_count) AS [avg_logical_writes]
,       total_logical_writes
,       last_logical_writes
,       min_logical_writes
,       max_logical_writes
,       ((total_logical_writes+0.0)/execution_count + (total_logical_reads+0.0)/execution_count) AS [avg_logical_IO]
,       total_logical_writes + total_logical_reads 'Total IO'
,       last_logical_writes +last_logical_reads 'Last IO'
,       min_logical_writes +min_logical_reads 'Min IO'
,       max_logical_writes + max_logical_reads  'MAX IO'
FROM    sys.dm_exec_query_stats s1
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
INNER JOIN sys.objects s3  ON ( s2.objectid = s3.OBJECT_ID )
LEFT OUTER JOIN sys.schemas sch ON(s3.schema_id = sch.schema_id)
WHERE s2.dbid = DB_ID()
ORDER BY  s3.name;

This query gives the results for the database in which it is executed; notice that there is a filter for db_id() in the WHERE clause. In the graph, we can observe the object # assigned, which is detailed in the table shown below the graph under the heading “All Executable Objects”.

If we click on the (+) sign near the Object No column, we can see more details about the individual statements in the object. In the image below, I have clicked on the (+) symbol near 2, and we can see the statements within the function. You would see more rows if there were more statements in the procedure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

At conferences and speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked: “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 this got even more complicated. SQL Server 2014 introduces the option of In-Memory OLTP, a completely new concept, and our dependency on memory has increased multifold. In reality not much changes, but we additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get the hang of all the innovations and new features introduced in each new version. So today's blog is about the report added to SSMS which gives a high-level view of this new feature.

This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list.

If we try to launch the report on a database which does not have an In-Memory filegroup defined, then we see a message in the report. To demonstrate, I have created a fresh new database called MemoryOptimizedDB with no special filegroup.

Here is the query used to identify whether a database has a memory-optimized filegroup or not.

SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'

Once we add a filegroup using the command below, we see a different version of the report.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO

The report is still empty because we have not defined any memory-optimized table in the database. The total allocated size is shown as 0 MB. Now, let's add the folder location into the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “In-Memory Optimized” objects.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB]
ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO

You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

USE MemoryOptimizedDB
GO
--Drop table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
ID INT IDENTITY NOT NULL,
Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

As soon as above script is executed, table and index both are created. If we run the report again, we would see something like below.

Notice that the table memory is zero but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets it creates; so even if the table is empty, the index consumes memory. More about the internals of how in-memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows.

INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000

Here is the same report after inserting 10,000 rows into our in-memory table.

There are three sections in the whole report:

  1. Total Memory consumed by In-Memory Objects
  2. Pie chart showing memory distribution based on type of consumer – table, index and system.
  3. Details of memory usage by each table.

The information for all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.

USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO

This report helps the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer for designing a solution. I am sure in the future we will discuss the whole concept of in-memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP series at Balmukund's blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)