SQL SERVER – Performance Dashboard: Current Activity Section Reports

You are driving on the highway and you want to know your speed because there are generally cops in the area – you glance at the dashboard. Your kid seems to be fiddling with your cell phone – you want to see what they are actually doing and make sure they are not accidentally calling your office colleagues. You walk into your hall, see an ad playing on the TV, and want to know which channel it is. All of these are classic examples of understanding the current state of the activity at hand. There are surely a number of ways to achieve the same. I am sure you get the drift.

This section of the Performance Dashboard is no different. It gives you a glimpse of the current activity happening on the system. At a high level, it is divided into User Requests and User Sessions along with their various vital parameters. These are taken from the DMVs sys.dm_exec_requests and sys.dm_exec_sessions respectively.

As discussed in our previous post, it is vital to track the Cache Hit Ratio, and for an OLTP system this needs to stay in the high 90s as much as possible. There are two hyperlinks that give us even more detailed information about the current system.

Clicking on “User Requests”, we are presented with the “Current Requests” report. This is a live report on the requests queued in our system at this very moment in time.

There are a number of columns that get reported in this one output. Due to image restrictions, only a part is shown above. The complete list of available columns is:

  1. Session ID: Request ID
  2. Start Time
  3. Query Text
  4. Status
  5. Wait Type
  6. Wait Time (ms)
  7. Wait Resource
  8. Blocking Session ID
  9. CPU Time (ms)
  10. Total Elapsed Time
  11. Logical Reads
  12. Physical Reads
  13. Writes
  14. Transaction Isolation Level
  15. Granted Query Memory (KB)
  16. Executing Managed Code

We have the capability to sort the result set by column by clicking on the column name itself. Under the covers, this report is fetched using a stored procedure in the msdb database (created while deploying the Dashboard Reports). Here is what you can run:

EXEC msdb.MS_PerfDashboard.usp_RequestDetails 0

The parameter @include_system_processes set to 0 means we don’t want to see system processes.
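If you prefer raw DMVs over the report, below is a minimal sketch of my own (an approximation, not the dashboard’s exact procedure body) that pulls roughly the same columns straight from the DMVs, for user requests only:

-- Approximation of the "Current Requests" report output (user requests only)
SELECT r.session_id, r.request_id, r.start_time, r.status,
r.wait_type, r.wait_time, r.wait_resource, r.blocking_session_id,
r.cpu_time, r.total_elapsed_time, r.logical_reads,
r.reads AS physical_reads, r.writes,
r.transaction_isolation_level,
r.granted_query_memory * 8 AS granted_query_memory_kb, -- 8 KB pages to KB
r.executing_managed_code,
st.text AS query_text
FROM sys.dm_exec_requests AS r
INNER JOIN sys.dm_exec_sessions AS s ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE s.is_user_process = 1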

Each of the queries in the “Query Text” column is a hyperlink and gives us more detail when clicked. Below is the output, wherein we get the Query Text, Query Plan, Showplan XML and some vital parameters for the query. These can also be obtained from the “Execution Plan” and the “Properties” pane for a query.

We can also get this information using DMVs; for example, the plan can be obtained from sys.dm_exec_text_query_plan() by passing the plan handle.
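Here is a small sketch of that approach for currently executing requests (my own illustration; the function takes the plan handle plus the statement offsets):

-- Get the text and XML plan for currently executing requests
SELECT r.session_id,
st.text AS query_text,
qp.query_plan AS showplan_xml
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
CROSS APPLY sys.dm_exec_text_query_plan(r.plan_handle,
r.statement_start_offset, r.statement_end_offset) AS qp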

The next section to look at is “User Sessions”. If we click on it, we are presented with all the sessions currently active on our live system. This report is fetched by running the procedure below:

EXEC msdb.MS_PerfDashboard.usp_SessionDetails @include_system_processes = 0

Remember, Performance Dashboard Reports are great for understanding what is happening in our system at that moment in time. They are not to be seen as proactive reports. As a DBA, if you want to quickly and interactively check the vital parameters of the system, this is the place to go.

As you can see in the output, the “Sessions Overview” report gives us the active sessions with CPU, memory usage, Logical Reads, Physical Reads, Writes, the program running on that session, NT User Name and more. Part of this information is already available in Activity Monitor, but I rather like this output because it is not continuously running or refreshing.

This report can be invoked by clicking on the Session ID hyperlink here, or by clicking on the “Session ID: Request ID” hyperlink in the “User Requests” report which we discussed earlier in this blog post. Both lead us to the “Session Details” page.

For the given session and the query in question, we get to know the session’s ANSI settings, Login Name, Login Time, NT User, CPU and memory usage, when the last request was fired and much more. I am sure you will not be able to get this information so easily from the other standard reports, so try to use it wherever it might help you.

As discussed above, we can get the top section of these values using a DMV query like this:

SELECT session_id, login_time, host_name, program_name,
login_name, nt_domain, nt_user_name, status,
cpu_time, memory_usage, total_scheduled_time, total_elapsed_time,
last_request_start_time, last_request_end_time,
reads, writes, logical_reads,
is_user_process, text_size, language, date_format, date_first,
quoted_identifier, arithabort, ansi_null_dflt_on,
ansi_defaults, ansi_warnings, ansi_padding,
ansi_nulls, concat_null_yields_null,
transaction_isolation_level, lock_timeout,
deadlock_priority, row_count, prev_error
FROM sys.dm_exec_sessions

Quick Tips about Dashboard Reports

  1. We can always use the back button available at the top of each report (highlighted below).

  2. The client report viewer control used by SSMS does not support copy and paste (Ctrl+C / Ctrl+V). The workaround is to right-click on the report and choose “Export to Excel”. Once you open the exported file, you can copy the text.

This is just a sample of how powerful some of the DMVs are. They hold tons of information that goes unnoticed by us. I hope you will start using these DMVs effectively in the future.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – SSMS: Performance Dashboard Reports – Home Page details

Earlier we discussed in detail what the SSMS Server Dashboard Report looks like when working with SQL Server Management Studio standard reports. Those dashboard reports pack an amazing wealth of information into one single view. The home page for the Performance Dashboard reports is no less in that sense. The main page report has tons of useful information which complements the other reports we have seen in the past, in a new, visually appealing format.

The Performance Dashboard starts with six sections, which we will discuss one after another. I hope you got a chance to read how the installation and configuration needs to be done for the Performance Dashboard. The default landing place is this Dashboard main page.

Section 1: The Dashboard opens with some vital information about the edition and the local server time at the top.

Then we have a bar chart which shows the current CPU utilization on the server for SQL Server and other processes. This is a quick way to understand who is consuming CPU cycles on our SQL Server box.

There is an interesting warning note we get if SQL Server has been configured to run with fewer CPUs than are available. In the image below we can see that I have a note stating the server is using 5 out of the 8 CPUs available in the system. As a DBA this is great information, because on a dedicated SQL Server box you don’t want a given instance running sub-optimally.

At this point, I would also like to call out that similar information is presented in our Error Logs when the instance is started. On my local server you can see this information shown as 8 logical processors being used by my SQL Server instance.

You might want to know how this CPU history information is retrieved. Under the hood, this report calls the stored procedure MS_PerfDashboard.usp_Main_GetCPUHistory from the msdb database, which presents readable information from the query below. It shows the last 15 snapshots.

SELECT [timestamp], CONVERT(XML, record) AS record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
AND record LIKE '%<SystemHealth>%'
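If you want to shred those XML snapshots yourself, here is a hedged sketch (my own query, based on the commonly seen ring buffer format; the XML node names are assumptions that hold on recent versions):

-- Shred the scheduler monitor ring buffer into CPU utilization numbers
SELECT x.record.value('(./Record/@id)[1]', 'int') AS record_id,
x.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
x.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct
FROM (
SELECT CONVERT(XML, record) AS record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
AND record LIKE '%<SystemHealth>%'
) AS x
-- 100 - sql_cpu_pct - system_idle_pct approximates "other processes"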


Section 2: This is the second section, on the top right pane. Generally this section is empty if there are no waits in the system. In my case, I have mimicked a dummy workload to get a few waits in our database. The warning note at the top mentions that excessive waits can degrade the performance of the system.

Waits are sometimes inevitable in a highly active transactional OLTP system. The deal is not to have excessive or long waits, which bring down the overall efficiency of the system. If you are interested in learning about wait stats, look at the month-long series I wrote a while back. The same material, with a bunch of additional information, was also published as a book, so feel free to grab that if you are interested – SQL Wait Stats Kindle.

This section fetches information from the DMVs sys.dm_exec_requests and sys.dm_exec_sessions. Below is the outline of the query under MS_PerfDashboard.usp_Main_GetRequestWaits:

SELECT r.session_id
,r.wait_type
,r.wait_time
FROM sys.dm_exec_requests AS r
INNER JOIN sys.dm_exec_sessions AS s ON r.session_id = s.session_id
WHERE r.wait_type IS NOT NULL
AND s.is_user_process = 0x1

Back in our Performance Dashboard, if we click on any of the wait stats in the above graph, we are presented with more details: which queries are waiting, what they are waiting on, their session IDs, and so on.

Feel free to click the back button to get back to the main dashboard.

Section 3: This section is a summary of User Requests and Sessions by count, CPU Time, Wait Time and the overall cache hit ratio. As we are on the topic of cache, I want to recall a post from Jonathan Kehayias – though it is about a wait type, there is a plethora of links and resources pointing to the cache. It is worth a read for sure. User Requests fetches information from sys.dm_exec_requests, whereas User Sessions gets its information from sys.dm_exec_sessions.
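While the dashboard computes its ratio internally, one common way to eyeball the buffer cache hit ratio yourself is via performance counters (a sketch of mine; the counter names assume an English build of SQL Server):

-- Buffer cache hit ratio from performance counters
SELECT CAST(100.0 * a.cntr_value / b.cntr_value AS DECIMAL(5, 2)) AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
INNER JOIN sys.dm_os_performance_counters AS b
ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
AND b.counter_name = 'Buffer cache hit ratio base'
AND a.[object_name] LIKE '%Buffer Manager%'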

Section 4 & 5: The next section is all about a bunch of links which will lead us to a number of other reports. They range from CPU, Logical Reads, Logical Writes, IO statistics, Physical reads etc. The miscellaneous information is about additional information around Active Traces, Extended Event Sessions, Databases and more.

If you are a seasoned DBA, it would be easy for you to guess that “Active Traces” can be seen via sys.traces, “Active XEvent Sessions” via sys.dm_xe_sessions and “Databases” via the sys.databases catalog view. Based on usage of the system, sometimes we might also see a missing indexes hyperlink, which is picked up from the sys.dm_db_missing_index_group_stats DMV.
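For the curious, below is a sketch of the kind of query such a link is built on (my own version, joining the missing-index DMVs and ranking by a rough impact estimate):

-- Top missing index suggestions by estimated impact
SELECT TOP (10)
mid.statement AS table_name,
mid.equality_columns,
mid.inequality_columns,
mid.included_columns,
migs.user_seeks,
migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC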

In future posts, we will look at some of the reports carved out from these links. Do let me know if you have started using these Performance Dashboard reports. I am eager to know your experiences for sure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Performance Dashboard Installation and Configuration

If the standard set of reports was not enough, in this blog let me introduce an interesting addition to the reports, called the Performance Dashboard reports. This is a set of predefined additional custom reports, built by members of the Microsoft SQL Server Support team, which are worth a look and a great addition to the SQL Server Management Studio reports. In this blog, let me take you through the installation steps for the same.

The set of reports can be obtained from the Microsoft Download Center:

  1. Microsoft® SQL Server® 2012 Performance Dashboard Reports – Used in this blog post.
  2. SQL Server 2005 Performance Dashboard Reports

For SQL Server Management Studio 2008 and above, use the first link. It works even with SQL Server 2014 without any problem. In my opinion, these reports certainly complement the already existing reports with additional information which any Developer / DBA would want to explore. I must point out that the standard reports are not drill-down reports, which means you can only look at the data shown in the report. In the dashboard, you can keep clicking on the hyperlinks available in the report itself and keep going to other reports. We can explore the health of the server by clicking on the various charts and hyperlinks in the report.

Once you download it, go ahead and double-click setup.exe, and the wizard starts as shown below:

I generally prefer the defaults, and one of the important steps is to make a note of the default installation location as marked below. Please take special note of it, because we will be using this path later in our configuration.

Go through the rest of the steps and finish. Then open the folder in question, as described above. We will be presented with the readme.txt automatically; feel free to follow the steps there.

The initial step is to run the “setup.sql” file from this folder. An important point to note is that the reports will not work for SQL Azure databases. Though the folder appears to have only one .rdl file, in reality there is a bunch of 21 report files which are hidden from the end user. If you enable viewing hidden files, the complete list can be seen.
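Once setup.sql has run, a quick way to sanity-check the deployment is to look for the helper objects it creates in msdb (a sketch of mine, assuming the objects land under the MS_PerfDashboard schema, which is the schema the report procedures live under):

-- Verify the dashboard's helper objects landed in msdb
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM msdb.sys.objects AS o
INNER JOIN msdb.sys.schemas AS s ON o.schema_id = s.schema_id
WHERE s.name = 'MS_PerfDashboard'
ORDER BY o.type_desc, o.name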

Now that we have got all the installation done, it is time to visualize the reports. To invoke the report, right-click the Server node -> Reports and select “Custom Reports…”. This brings up the file open dialog to select the file. Point it to the same folder as discussed above.

Be careful in this step, because if we select any .rdl other than “performance_dashboard_main.rdl”, we will get an error like the ones below.

The 'version string' parameter is missing a value
OR
The 'plan handle' parameter is missing a value

The errors are caused because all the other reports are drill-down reports, and a value is passed from the parent report to the child report. Hence it is good, in a way, to have the other .rdl files hidden, because then we never get into these problems. If we point the dialog to “performance_dashboard_main.rdl” and select it, the Dashboard opens as shown below.

This is the high-level report and the starting point from which the other reports are fed. In future blogs, we will look into each of the sections and how to land in each of these reports.

Do let me know if you have ever used these Performance Dashboard reports in your environment. They are worth a try for sure, and do let me know your experience in using the same.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Question on SSMS Prompt about Connect to Server

Today is Sunday, so let us have something fun to discuss and think about. If you have not read my earlier blog post SQL SERVER – Connecting to Azure Storage with SSMS, I discussed there how to connect to Azure Storage via SSMS. Here is a follow-up question I have on the same subject.

Question: When we connect to Object Explorer via the main toolbar menu, we get a prompt where we can connect to Azure Storage; but when we open Object Explorer first, why do we see the option for Azure Storage?

To better explain my question, please see the image attached herewith.

Do you have any answer for the same? If yes, please share.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Connecting to Azure Storage with SSMS

SQL Server Management Studio can now connect to Azure Storage.

It is very simple to connect to Azure Storage. First, open the Object Explorer in SQL Server Management Studio (SSMS), then click on Connect. Clicking Connect brings up five options, as displayed in the image below.

Click on “Azure Storage” and it brings up another screen asking for the storage account and account key.

Now go to your Azure account and get the name of the storage account and the account key.

It will bring up the following screen.

Enter the name of the account and account key at the prompt in SSMS.

Once you enter the details, it will immediately connect you to your Azure Storage. Next, expand the node and you will notice your containers.

The same containers will be visible in your Azure account online.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Index Usage Statistics, Physical Statistics & User Statistics

When I used to be in the consulting business, the most requested sessions or topics always revolved around performance tuning. And whenever I talk to folks about what they want covered, they often get to the point of when and how to create efficient indexes in their databases. The concept of indexes and how they function has been in the industry for more than two decades, and it still baffles me that they remain a topic of debate and interest. The core fundamentals of indexes have surely stood the test of time and are still a great starting point for building highly performant SQL Server – or, for that matter, any data-centric – applications.

I consider creating indexes a science rather than a rule of thumb. SQL Server has a cost-based optimizer and does a pretty good job of identifying, evaluating and using the right index based on your query needs. Having said that, it is also up to the administrator to monitor how the indexes are being used in a given database – which are used the most and which the least. It is a fine line to walk to understand and eliminate, from time to time, the unwanted and duplicate indexes created by developers. I do have a starter script for duplicate indexes which is worth a look.

In this blog, we will take a quick look at three reports which are available under the Database node.

  1. Index Usage Statistics
  2. Index Physical Statistics
  3. User Statistics

Index Usage Statistics

This report shows information about index operational statistics and usage statistics. When I say usage, I mean statistics about scans, seeks, when the index was last accessed, how many user scans there were, and so on. The queries used in the background are pretty straightforward. The Index Operational Statistics section shows data using the DMV sys.dm_db_index_operational_stats(db_id(), null, null, null), along with other catalog views to convert object IDs to names. The next section, Index Usage Statistics, shows data using the DMV sys.dm_db_index_usage_stats, which is well documented in Books Online.
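As a quick illustration, here is a sketch along the lines of what the usage section shows (my own simplified query, resolving index names via sys.indexes):

-- Simplified view of index usage for the current database
SELECT OBJECT_NAME(us.[object_id]) AS table_name,
i.name AS index_name,
us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
us.last_user_seek, us.last_user_scan
FROM sys.dm_db_index_usage_stats AS us
INNER JOIN sys.indexes AS i
ON us.[object_id] = i.[object_id] AND us.index_id = i.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_seeks + us.user_scans + us.user_lookups DESC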

Using the above report, we can see which indexes are used by various queries and whether they are being used for seeks or scans. If it is a small table then it might be OK to see a scan, but in general I don’t get a good feeling when I see a scan of an index on a huge table. Sometimes I get questions about which indexes are not being used by SQL Server. This report shows us the indexes which are used; if an index is not shown here, it means that no query has fired which could utilize that index. So please don’t drop indexes on the assumption that they are not getting used. Also, if the server was restarted recently, there won’t be many statistics to play with or show in this report.

Index Physical Statistics

As the report name says, it shows how the index is physically laid out in the data files. Here is how the report looks:

Under the hood, this report uses the objects below.

SELECT t4.name AS [schema_name],
t3.name AS table_name,
t2.name AS index_name,
t1.object_id,
t1.index_id,
t1.partition_number,
t1.index_type_desc,
t1.index_depth,
t1.avg_fragmentation_in_percent,
t1.fragment_count,
t1.avg_fragment_size_in_pages,
t1.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') t1
INNER JOIN sys.objects t3 ON (t1.object_id = t3.object_id)
INNER JOIN sys.schemas t4 ON (t3.schema_id = t4.schema_id)
INNER JOIN sys.indexes t2 ON (t1.object_id = t2.object_id AND t1.index_id = t2.index_id)
WHERE index_type_desc <> 'HEAP'
ORDER BY t4.name, t3.name, t2.name, partition_number

If we look at the report, it gives a recommendation in “Operation Recommended” – Rebuild or Reorganize. The recommendation depends upon the fragmentation in the index. I must point out that the report does NOT consider the size of the table. For example, if I have an index which has only 10 pages, even a rebuild or reorganize might not remove the fragmentation. So you should click on the (+) symbol and see how big the table is. To demonstrate this, I have selected the first table.

Even if I rebuild the index, it will still recommend Rebuild. Notice that this index has just 2 pages, so we don’t have to worry about its fragmentation.
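To put that page-count caveat into practice, here is a sketch that applies the widely used 5/30 percent rule of thumb with a page-count floor (the thresholds are my assumptions, not what the report itself uses):

-- Fragmentation recommendations with a floor for tiny indexes
SELECT OBJECT_NAME(ps.[object_id]) AS table_name,
i.name AS index_name,
ps.avg_fragmentation_in_percent,
ps.page_count,
CASE
WHEN ps.page_count < 1000 THEN 'Ignore (too small to matter)'
WHEN ps.avg_fragmentation_in_percent > 30 THEN 'Rebuild'
WHEN ps.avg_fragmentation_in_percent > 5 THEN 'Reorganize'
ELSE 'None'
END AS operation_recommended
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i
ON ps.[object_id] = i.[object_id] AND ps.index_id = i.index_id
WHERE ps.index_type_desc <> 'HEAP'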

User Statistics

This report has no relation to indexes, but since it is a small report, I will club it into this blog. It shows information about the users who are connected to the database for which the report is launched. This report is useful in situations where multiple applications use different logins to connect to the SQL Server instance.

Here is the query used under the hood.

SELECT (DENSE_RANK() OVER (ORDER BY login_name, nt_user_name)) % 2 AS row_num,
(ROW_NUMBER() OVER (ORDER BY login_name, nt_user_name, sessions.session_id)) % 2 AS session_num,
login_name,
nt_user_name,
sessions.session_id AS session_id,
COUNT(DISTINCT connections.connection_id) AS connection_count,
COUNT(DISTINCT CONVERT(CHAR, sessions.session_id) + '_' + CONVERT(CHAR, requests.request_id)) AS request_count,
COUNT(DISTINCT cursors.cursor_id) AS cursor_count,
CASE
WHEN SUM(requests.open_tran) IS NULL THEN 0
ELSE SUM(requests.open_tran)
END AS transaction_count,
sessions.cpu_time + 0.0 AS cpu_time,
sessions.memory_usage * 8 AS memory_usage,
sessions.reads AS reads,
sessions.writes AS writes,
sessions.last_request_start_time AS last_request_start_time,
sessions.last_request_end_time AS last_request_end_time
FROM sys.dm_exec_sessions sessions
LEFT JOIN sys.dm_exec_connections connections ON sessions.session_id = connections.session_id
LEFT JOIN master..sysprocesses requests ON sessions.session_id = requests.spid
LEFT JOIN sys.dm_exec_cursors(NULL) cursors ON sessions.session_id = cursors.session_id
WHERE sessions.is_user_process = 1
AND requests.dbid = DB_ID()
GROUP BY sessions.login_name,
sessions.nt_user_name,
sessions.session_id,
sessions.cpu_time,
sessions.memory_usage,
sessions.reads,
sessions.writes,
sessions.last_request_start_time,
sessions.last_request_end_time

All you need to notice is that the information is fetched from sys.dm_exec_sessions, sys.dm_exec_connections and master..sysprocesses. You can also observe that a filter has been added on DB_ID(), which gives the database ID of the connection running the query. Here is the report.

In the “Active Sessions” column there is a (+) symbol for each row, which can be used to get details about the sessions used by that particular login.

This wraps up the long and exhaustive list of SSMS reports that we covered as part of this series. Let us see how this learning can be put to use in our environments, and let me know if you have learnt something new in this series so far.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using the SSIS Term Extraction for Data Exploration – Notes from the Field #046

[Notes from Pinal]: SSIS is a very well-explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is using the SSIS Term Extraction for data exploration.

Linchpin People are database coaches and wellness experts for a data-driven world. In this 46th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation on how to understand the SSIS Term Extraction for data exploration.


Data exploration is an essential piece of any new ETL (extraction-transformation-load) process.  Knowing the structure, type, and even the semantics of data sources can help the ETL developer or architect to make better decisions on data type and length settings, transformation logic, and error handling procedures.  Additionally, there are situations in which the exploration of the data is the principal purpose of the ETL, such as text mining and other pattern analysis.  In most scenarios, SQL Server Integration Services is used as a traditional ETL tool and not necessarily for data exploration or text mining.  However, there are some tools built into SSIS that are ideally suited for exploring a source feed to expose patterns in the data.  In this brief post, I’ll cover one such tool: the term extraction component.

The term extraction component is a data flow transformation that will aggregate the frequency of words found in a specified column supplied by an upstream data source.  This component expects a Unicode text or text stream field as an input, and calculates as an output the frequency of the nouns and/or noun phrases in the specified source column.  As shown below on the data flow surface, the term extraction component (circled) is always used as a transformation, accepting exactly one input and expecting either one or two outputs (the second being an optional error output).

Configuring the term extraction component can be a relatively simple exercise.  As mentioned, you’ll need to specify the upstream column that will be used as an input, and this column must be Unicode text (DT_WSTR) or stream (DT_NTEXT).

Purposefully skipping over the Exclusion tab for just a moment, we’ll review the Advanced tab.  This pane exposes several of the key configuration elements of the term extraction component.  In particular, we can use this page to set the output for noun and/or noun phrase, set the score type to frequency (raw count) or TFIDF (a calculation based both on the frequency of the term and the ratio of the analyzed terms to the entire document), and set the minimum score threshold and maximum term length.  We can also set the case-sensitive flag, in case our analytics need to be case sensitive.
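For reference, as I understand the documentation for this component, the TFIDF score for a term T is computed roughly as:

TFIDF(T) = (frequency of T) * log( (#rows in input) / (#rows containing T) )

so a term that appears often but only in a few rows scores higher than one spread evenly across the whole input.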

Additionally, the exclusion list (on the Exclusion tab) can help to refine your calculation results.  This feature allows the ETL developer to specify a known list of terms that will be excluded from the output.  This is useful in cases where industry or common language terms might occur so frequently that including them in the output simply dilutes the rest of the data.

Although the term extraction component does not have a lot of moving parts, it does provide a quick way to perform data exploration and simple text analysis.

Conclusion

In this post, I have briefly demonstrated the attributes of the term extraction transformation, an SSIS data flow tool that can be used for data exploration.  This tool is a quick and easy way to perform triage analysis and mining of raw text data.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)