SQL SERVER – SSMS: Index Usage Statistics, Physical Statistics & User Statistics

When I used to be in the consulting business, the most requested sessions and topics always revolved around Performance Tuning. Whenever I talked to folks about what they wanted covered, they often got to the point of when and how to create efficient indexes in their databases. The concepts of indexes and how they function have been in the industry for more than two decades, and it still baffles me that they are still a topic of debate and interest. The core fundamentals of indexes have surely stood the test of time and are still a great starting point for building highly performant SQL Server, or for that matter data-centric, applications. I consider creating indexes a science rather than a rule of thumb. SQL Server has a cost-based optimizer and does a pretty good job of identifying, evaluating and using the right index based on your query needs. Having said that, it is also up to an administrator to monitor how the indexes are being used in a given database: which ones are used the most and which the least. It is a fine line to walk to understand and eliminate unwanted and duplicate indexes created by developers from time to time. I do have a starter script for duplicate indexes which is worth a look.

In this blog, we will take a quick look at three reports which are available under the Database node.

  1. Index Usage Statistics
  2. Index Physical Statistics
  3. User Statistics

Index Usage Statistics

This report shows information about index operational statistics and usage statistics. When I say usage, I mean statistics about scans, seeks, when the index was last accessed, how many user scans occurred and so on. The queries used in the background are pretty straightforward. The Index Operational Statistics section shows data using the DMV sys.dm_db_index_operational_stats(db_id(),null,null,null) along with other catalog views to convert object IDs to names. The next section, Index Usage Statistics, shows data using the DMV sys.dm_db_index_usage_stats, which is well documented in Books Online.

Using the above report, we can see which index is used by various queries and whether they are doing a seek or a scan. If it is a small table then it might be OK to see a scan, but in general I don’t get a good feeling when I see a scan of an index on a huge table. Sometimes I get questions about which indexes are not being used by SQL Server. This report can only show the indexes which are used. If an index is not shown here, it means that no query has fired which could utilize that index. So please don’t drop indexes on the assumption that they are not getting used. Also, if the server was restarted recently, there wouldn’t be much in the statistics for us to play with or show in this report.
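For reference, here is a minimal sketch (not the report's exact query) that pulls the seek, scan and lookup counters for the current database from sys.dm_db_index_usage_stats.

-- Seeks, scans, lookups and last access per index in the current database.
SELECT OBJECT_NAME(us.OBJECT_ID) AS table_name
,      i.name                    AS index_name
,      us.user_seeks
,      us.user_scans
,      us.user_lookups
,      us.user_updates
,      us.last_user_seek
,      us.last_user_scan
FROM sys.dm_db_index_usage_stats us
INNER JOIN sys.indexes i ON (us.OBJECT_ID = i.OBJECT_ID AND us.index_id = i.index_id)
WHERE us.database_id = DB_ID()
ORDER BY us.user_seeks + us.user_scans + us.user_lookups DESC;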

Index Physical Statistics

As the report name says, it shows how the index is physically laid out in the data files. Here is how the report looks:

Under the hood, this report uses the query below.

SELECT t4.name AS [schema_name]
,       t3.name AS table_name
,       t2.name AS index_name
,      t1.OBJECT_ID
,      t1.index_id
,      t1.partition_number
,      t1.index_type_desc
,      t1.index_depth
,      t1.avg_fragmentation_in_percent
,      t1.fragment_count
,      t1.avg_fragment_size_in_pages
,      t1.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(),NULL,NULL,NULL,'LIMITED') t1
INNER JOIN sys.objects t3 ON (t1.OBJECT_ID = t3.OBJECT_ID)
INNER JOIN sys.schemas t4 ON (t3.schema_id = t4.schema_id)
INNER JOIN sys.indexes t2 ON (t1.OBJECT_ID = t2.OBJECT_ID AND  t1.index_id = t2.index_id)
WHERE index_type_desc <> 'HEAP'
ORDER BY t4.name,t3.name,t2.name,partition_number

If we look at the report, it gives a recommendation in the “Operation Recommended” column – Rebuild or Reorganize. The recommendation depends upon the fragmentation in the index. I must point out that the report does NOT consider the size of the table. For example, if I have an index which has only 10 pages, even a rebuild or reorganize might not remove the fragmentation. So, you should click on the (+) symbol and see how big the table is. To demonstrate this, I have selected the first table.

Even if I rebuild the index, it would still recommend Rebuild. Notice that this index has just 2 pages so we don’t have to worry about fragmentation.
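If you want to apply the same idea in a query, here is a hedged sketch that combines fragmentation with page_count so tiny indexes are ignored. The 5%/30% thresholds and the 100-page cutoff are just the commonly quoted rules of thumb, not what the report itself uses.

-- Fragmentation plus size, with a rule-of-thumb suggestion; thresholds are assumptions.
SELECT OBJECT_NAME(ps.OBJECT_ID)       AS table_name
,      i.name                          AS index_name
,      ps.avg_fragmentation_in_percent
,      ps.page_count
,      CASE
           WHEN ps.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
           WHEN ps.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
           ELSE 'NONE'
       END AS suggested_operation
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
INNER JOIN sys.indexes i ON (ps.OBJECT_ID = i.OBJECT_ID AND ps.index_id = i.index_id)
WHERE ps.index_id > 0        -- skip heaps
AND   ps.page_count > 100    -- ignore tiny indexes, as discussed above
ORDER BY ps.avg_fragmentation_in_percent DESC;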

User Statistics

This report has no relation to indexes, but since it is a small report, I will club it into this blog. This particular report shows information about the users who are connected to the database for which the report is launched. This report would be useful in situations where multiple applications use different logins to connect to the SQL Server instance.

Here is the query used under the hood.

SELECT (
dense_rank() OVER (
ORDER BY login_name
,nt_user_name
)
) %
2 AS row_num
,(
row_number() OVER (
ORDER BY login_name
,nt_user_name
,sessions.session_id
)
) %
2 AS session_num
,login_name
,nt_user_name
,sessions.session_id AS session_id
,COUNT(DISTINCT connections.connection_id) AS connection_count
,COUNT(DISTINCT CONVERT(CHAR, sessions.session_id) + '_' + CONVERT(CHAR, requests.request_id)) AS request_count
,COUNT(DISTINCT cursors.cursor_id) AS cursor_count
,CASE
WHEN SUM(requests.open_tran) IS NULL
THEN 0
ELSE SUM(requests.open_tran)
END AS transaction_count
,sessions.cpu_time + 0.0 AS cpu_time
,sessions.memory_usage * 8 AS memory_usage
,sessions.reads AS reads
,sessions.writes AS writes
,sessions.last_request_start_time AS last_request_start_time
,sessions.last_request_end_time AS last_request_end_time
FROM sys.dm_exec_sessions sessions
LEFT JOIN sys.dm_exec_connections connections ON sessions.session_id = connections.session_id
LEFT JOIN MASTER..sysprocesses requests ON sessions.session_id = requests.spid
LEFT JOIN sys.dm_exec_cursors(NULL) cursors ON sessions.session_id = cursors.session_id
WHERE (
sessions.is_user_process = 1
AND requests.dbid = DB_ID()
)
GROUP BY sessions.login_name
,sessions.nt_user_name
,sessions.session_id
,sessions.cpu_time
,sessions.memory_usage
,sessions.reads
,sessions.writes
,sessions.last_request_start_time
,sessions.last_request_end_time

All you need to notice is that the information is fetched from sys.dm_exec_sessions, sys.dm_exec_connections and master..sysprocesses. You can also observe that a filter has been added for DB_ID(), which gives the database ID for the connection running the query. Here is the report.

In the column “Active Sessions” there is a (+) symbol for each row which can be used to get details about the sessions used by that particular login.

This is a long and exhaustive list of SSMS reports that we have covered as part of the series. Let us see how this learning can be put to use in our environments, and let me know if you have learnt something new in this series so far.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – SSMS: Resource Locking and Object Execution Statistics Report

As a DBA or SQL developer, the amount of time we spend in SQL Server Management Studio can hardly be measured. As database people, we can never underestimate these hidden gems of SQL Server. With each release the tool improves to give the next set of features. I have seen in multiple blogs the Product Managers constantly asking for feedback and looking to make enhancements to the tool. If you go to the Connect site, the votes for a feature request decide whether it will ever make it into the next release of SSMS. If you get the drift, knowing the statistics on a given request decides how the feature will get added. These statistics and questions give the product team confidence in people’s requests. Similarly, when working with SQL Server objects, it is useful to know how many times a procedure was called and how many CPU and IO cycles were spent.

In this part of the blog series we will talk about two reports from the database level standard reports, as shown in the diagram.

Resource Locking Statistics by Objects

This report is similar to the earlier blocking report which is available at the server level. Since this report is database level, it shows information about blocking within the database for which we launched the report. If there is no blocking in the database, the report would launch and show the message below:

“Currently, this graph does not have any data to show”. This is shown in the diagram below.

For demonstration purposes I have created two blocking chains, one for table t1 and another for table t2, and have launched the report again. If we compare this with the earlier report, it is clearly visible that the warning is not there and, as highlighted in the image, we can see a non-zero value under “# Waiting Transactions”. Without clicking on (+) it would not be possible to see who is blocked.

Once we expand Object No. 1, below is what we get. We can clearly see that there are 5 locks which are granted and 1 lock on RID is in WAIT status. The SPID is shown at the same level.

Most of the blocking information in this report is picked from the DMVs sys.dm_tran_locks and sys.dm_exec_sessions. Other information about object metadata comes from catalog views: sys.partitions, sys.objects and sys.schemas.

Here is the list of possible locks shown in the report: METADATA, DATABASE, FILE, TABLE, HOBT, EXTENT, PAGE, KEY, RID, ALLOCATION_UNIT and APPLICATION.
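If you want to see the waiting requests without the report, here is a minimal sketch (not the report's exact query) against sys.dm_tran_locks for the current database.

-- Lock requests in WAIT status for the current database, joined to session details.
SELECT tl.resource_type
,      tl.request_mode
,      tl.request_status
,      tl.request_session_id
,      es.login_name
,      es.program_name
FROM sys.dm_tran_locks tl
LEFT OUTER JOIN sys.dm_exec_sessions es ON (tl.request_session_id = es.session_id)
WHERE tl.resource_database_id = DB_ID()
AND   tl.request_status = 'WAIT'
ORDER BY tl.request_session_id;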

Object Execution Statistics

This is one of the more interesting reports; it shows information about the execution of objects in the database. If we launch this report for a database which is not in use, and plans for its objects (stored procedures, functions, etc.) are not available in the plan cache, we would see an “empty” report like the one below.

There are five graphs available in the report. All of them are built from the information available in the DMV sys.dm_exec_query_stats, which contains historical information about query executions since SQL Server was started. If a plan is evicted from the cache, the report will not show that query, which might be a problem.

To show some data in the report, I have executed some stored procedures and functions in the AdventureWorks2014 sample database. Here is the report when we have plans available in the cache. This is what you would see on a production server.

You would notice that out of the 5 graphs only 3 are shown. That is because the remaining values are not significantly high and would be shown as zero.

Under the covers, it runs the query below. I have modified the query a little to show meaningful column names.

SELECT  CASE WHEN sch.name IS NULL THEN '' ELSE sch.name END AS schema_name
,       dense_rank() OVER (ORDER BY s2.objectid) AS SPRank
,       s3.name AS [Obj Name]
,       s3.TYPE AS [Obj Type]
,       (SELECT TOP 1 SUBSTRING(TEXT,(s1.statement_start_offset+2)/2, (CASE WHEN s1.statement_end_offset = -1  THEN LEN(CONVERT(NVARCHAR(MAX),TEXT))*2 ELSE s1.statement_end_offset END - s1.statement_start_offset) /2  ) FROM sys.dm_exec_sql_text(s1.sql_handle)) AS [SQL Statement]
,       execution_count
,       plan_generation_num
,       last_execution_time
,       ((total_worker_time+0.0)/execution_count)/1000 AS [avg_worker_time]
,       total_worker_time/1000.0 'total_worker_time'
,       last_worker_time/1000.0 'last_worker_time'
,       min_worker_time/1000.0 'min_worker_time'
,       max_worker_time/1000.0 'max_worker_time'
,       ((total_logical_reads+0.0)/execution_count) AS [avg_logical_reads]
,       total_logical_reads
,       last_logical_reads
,       min_logical_reads
,       max_logical_reads
,       ((total_logical_writes+0.0)/execution_count) AS [avg_logical_writes]
,       total_logical_writes
,       last_logical_writes
,       min_logical_writes
,       max_logical_writes
,       ((total_logical_writes+0.0)/execution_count + (total_logical_reads+0.0)/execution_count) AS [avg_logical_IO]
,       total_logical_writes + total_logical_reads 'Total IO'
,       last_logical_writes +last_logical_reads 'Last IO'
,       min_logical_writes +min_logical_reads 'Min IO'
,       max_logical_writes + max_logical_reads  'MAX IO'
FROM    sys.dm_exec_query_stats s1
CROSS apply sys.dm_exec_sql_text(sql_handle) AS  s2
INNER JOIN sys.objects s3  ON ( s2.objectid = s3.OBJECT_ID )
LEFT OUTER JOIN sys.schemas sch ON(s3.schema_id = sch.schema_id)
WHERE s2.dbid = DB_ID()
ORDER BY  s3.name;

This query gives the results for the database in which it is executed. Notice that there is a filter for db_id() in the WHERE clause. In the graph, we can observe the object # assigned, which is detailed in the table shown below the graph under the heading “All Executable Objects”.

If we click on the (+) sign near the Object No column, we can see more details about the individual statements in the object. In the image below, I have clicked on the (+) symbol near 2 and we can see the statements within the function. You would see more rows if there are more statements in the procedure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

At conferences and speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question is, “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 this got even more complicated. SQL Server 2014 introduces In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality, nothing much changes, but we additionally have memory optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get a hang of all the innovations and new features introduced in each new version. So today’s blog is about the report added to SSMS which gives a high level view of this new feature.

This report is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list.

If we try to launch the report on a database which does not have an In-Memory filegroup defined, we would see a message in the report. To demonstrate, I have created a fresh database called MemoryOptimizedDB with no special filegroup.

Here is the query used to identify whether a database has memory-optimized file group or not.

SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'

Once we add the filegroup using the command below, we would see a different version of the report.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
GO

The report is still empty because we have not defined any memory optimized table in the database. Total allocated size is shown as 0 MB. Now, let’s add the folder location into the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote “In-Memory Optimized” objects.

USE [master]
GO
ALTER DATABASE [MemoryOptimizedDB]
ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
TO FILEGROUP [IMO_FG]
GO

You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

USE MemoryOptimizedDB
GO
--Drop table if it already exists.
IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
DROP TABLE dbo.SQLAuthority
GO
CREATE TABLE dbo.SQLAuthority
(
ID INT IDENTITY NOT NULL,
Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

As soon as the above script is executed, the table and the index are both created. If we run the report again, we would see something like below.

Notice that table memory is zero but the index is using memory. This is because a hash index needs memory to manage the buckets it creates. So even if the table is empty, the index consumes memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows.

INSERT INTO SQLAuthority VALUES (DEFAULT)
GO 10000

Here is the same report after inserting 10,000 rows into our in-memory table.

 

There are three sections in the report.

  1. Total Memory consumed by In-Memory Objects
  2. Pie chart showing memory distribution based on type of consumer – table, index and system.
  3. Details of memory usage by each table.

The information for all three is taken from a single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query.

USE MemoryOptimizedDB
GO
SELECT OBJECT_NAME(OBJECT_ID), *
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID > 0
GO
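Since hash indexes keep memory in their buckets even for an empty table (as noted above), a hedged sketch to look at bucket usage is to query sys.dm_db_xtp_hash_index_stats (SQL Server 2014 onwards):

USE MemoryOptimizedDB
GO
-- Bucket counts and average chain length for hash indexes in this database.
SELECT OBJECT_NAME(his.OBJECT_ID) AS table_name
,      i.name                     AS index_name
,      his.total_bucket_count
,      his.empty_bucket_count
,      his.avg_chain_length
FROM sys.dm_db_xtp_hash_index_stats his
INNER JOIN sys.indexes i ON (his.OBJECT_ID = i.OBJECT_ID AND his.index_id = i.index_id)
GO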

This report helps a DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer for designing a solution. I am sure in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP series at Balmukund’s blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Database Consistency History Report

Doctor and Database

The last place I like to visit is a hospital. With the monsoon season starting and intermittent rains, it has become almost routine to catch a fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting how he quizzes me. The routine questions: “How many days have you had this?”, “Is there any pattern?”, “Did you get drenched in the rain?”, “Do you have any other symptoms?” and so on. The idea is that the doctor wants to find an anomaly or a pattern that will point him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution. SQL Server has its own way to find out whether the server data / files are in a consistent state, using the DBCC commands.

Back to SQL Server

In real life, a database consistency check is one of the critical operations a DBA generally doesn’t give much priority to. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not?

My common answer to all of them is to look at the bottom of the CHECKDB (or CHECKTABLE) output for the line below.

CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’.

The above is a “good sign” because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. A sample output is shown below:

CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is a risk involved with the above option (repair_allow_data_loss), namely that we would lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

Among the standard reports, there is a report which can show the history of CHECKDB executions for the selected database. Since this is a database level report, we need to right click on the database, click Reports, click Standard Reports and then choose the “Database Consistency History” report.

The information in this report is picked from the default trace. If the default trace is disabled, or there has been no CHECKDB run, or the information is no longer in the default trace (because it has rolled over), we would get a report like the one below.

As we can see, the report says it very clearly: “Currently, no execution history of CHECKDB is available or default trace is not enabled.”

To demonstrate, I have caused corruption in one of my databases and performed the steps below (a T-SQL sketch of these steps follows the list).

  1. Run CHECKDB so that errors are reported.
  2. Fix the corruption (losing data) using the repair option.
  3. Run CHECKDB again to check that the corruption is cleared.
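Here is a hedged T-SQL sketch of those three steps; the database name is the demo one used later in this post, the repair step requires single-user mode, and REPAIR_ALLOW_DATA_LOSS can remove data, so run it only against a throw-away copy or as a last resort.

DBCC CHECKDB ('CorruptedDatabase');                            -- step 1: report the errors
GO
ALTER DATABASE CorruptedDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('CorruptedDatabase', REPAIR_ALLOW_DATA_LOSS);    -- step 2: repair, possibly losing data
ALTER DATABASE CorruptedDatabase SET MULTI_USER;
GO
DBCC CHECKDB ('CorruptedDatabase');                            -- step 3: confirm the errors are cleared
GO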

After that I have launched the report and below is what we would see.

If you are lazy like me and don’t want to run the report manually for each database, then the query below is handy for providing the same information for all databases. This is the query run behind the scenes by the report; all I have done is remove the filter for database name (at the end – highlighted).

DECLARE @curr_tracefilename VARCHAR(500);
DECLARE @base_tracefilename VARCHAR(500);
DECLARE @indx INT;
SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SELECT @indx  = PATINDEX('%\%', @curr_tracefilename) ;
SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SET @base_tracefilename = LEFT( @curr_tracefilename,LEN(@curr_tracefilename) - @indx) + '\log.trc';
SELECT  SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),36, PATINDEX('%executed%',TEXTData)-36) AS command
,       LoginName
,       StartTime
,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%found%',TEXTData) +6,PATINDEX('%errors %',TEXTData)-PATINDEX('%found%',TEXTData)-6)) AS errors
,       CONVERT(INT,SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%repaired%',TEXTData) +9,PATINDEX('%errors.%',TEXTData)-PATINDEX('%repaired%',TEXTData)-9)) repaired
,       SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%time:%',TEXTData)+6,PATINDEX('%hours%',TEXTData)-PATINDEX('%time:%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%hours%',TEXTData) +6,PATINDEX('%minutes%',TEXTData)-PATINDEX('%hours%',TEXTData)-6)+':'+SUBSTRING(CONVERT(NVARCHAR(MAX),TEXTData),PATINDEX('%minutes%',TEXTData) +8,PATINDEX('%seconds.%',TEXTData)-PATINDEX('%minutes%',TEXTData)-8) AS time
FROM ::fn_trace_gettable( @base_tracefilename, DEFAULT)
WHERE EventClass = 22 AND SUBSTRING(TEXTData,36,12) = 'DBCC CHECKDB'
-- AND DatabaseName = @DatabaseName;

Don’t get worried about the logic above. All it is doing is reading the trace files, parsing the entry below and pulling out the underlined pieces of information.

DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.  Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it on your production environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Backup and Restore Events Report

A DBA wears multiple hats and in fact does more than what the eye can see. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world because, even if they seem cool with all the scripting, automation and maintenance work round the clock to keep the business running almost 365 days 24×7, their worth is truly known on the day the systems / HDD crash and you have an important delivery to make. Then those backup tasks / maintenance jobs come in handy and are no longer as trivial as many consider them to be. So important questions like “When was the last backup taken?”, “How much time did the last backup take?”, “What type of backup was taken last?” are tricky questions, and this report answers them in a jiffy.

The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report utilizes that information and provides details about the size, the time taken and the file location for those operations.

Here is how this report can be launched.

 

Once we launch this report, we can see 4 major sections shown as listed below.

  • Average Time Taken For Backup Operations
  • Successful Backup Operations
  • Backup Operation Errors
  • Successful Restore Operations

Let us look at each section next.

Average Time Taken For Backup Operations

The information shown in the “Average Time Taken For Backup Operations” section is taken from the backupset table in the msdb database. Here is the query, along with the expanded version of that particular section.

USE msdb;
SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1
,       1 AS l2
,       1 AS l3
,       t1.TYPE AS [type]
,       (AVG(DATEDIFF(ss,backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration
FROM backupset t1
INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name)
WHERE t3.name = N'AdventureWorks2014'
GROUP BY t1.TYPE
ORDER BY
t1.TYPE

On my small database the time taken for a differential backup was less than a minute, hence the value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows.

Successful Backup Operations

Here is the expanded version of this section.

 

This information is derived from various backup tracking tables in the msdb database. Here is a simplified version of the query, which can be used separately as well.

SELECT *
FROM sys.databases t1
INNER JOIN backupset t3 ON (t3.database_name = t1.name)
LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id)
LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id)
WHERE (t1.name = N'AdventureWorks2014')
ORDER BY backup_start_date DESC,t3.backup_set_id,t6.physical_device_name;

The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I have expanded the first row by clicking on (+) in the “Device type” column. That shows the path of the physical backup file.

Personally looking at this section, the Backup Size, Device Type and Backup Name are critical and are worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it.

Backup Operation Errors

This section of the report gets data from the default trace. You might wonder how. One of the events tracked by the default trace is “ErrorLog”. This means that whatever message is written to the errorlog gets written to the default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that and shows the information. We can read the message below under this section, which confirms the above logic.

No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled.
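If you want to pull the same raw information yourself, here is a hedged sketch. EventClass 22 is the ErrorLog event in the default trace (the same class used by the consistency history query earlier in this series); the LIKE pattern for the failure message is an assumption, so adjust it to match the text you actually see in your errorlog.

DECLARE @trace_path VARCHAR(500);
-- Derive the rollover-independent log.trc path from the current default trace file.
SELECT @trace_path = REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)), 500)) + 'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT StartTime
,      LoginName
,      TEXTData
FROM sys.fn_trace_gettable(@trace_path, DEFAULT)
WHERE EventClass = 22                      -- ErrorLog event class
AND   TEXTData LIKE '%BACKUP failed%'      -- assumed failure message pattern
ORDER BY StartTime DESC;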

Successful Restore Operations

This section may not be very useful on a production server (how often do you restore a database there?), but it might be useful in development and log shipping secondary environments, where we might be interested in the restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I have restored the same backups which were taken to populate the earlier sections.

Here is the simplified version of the query used to populate this output.

USE msdb;
SELECT *
FROM restorehistory t1
LEFT OUTER JOIN restorefile t2 ON ( t1.restore_history_id = t2.restore_history_id)
LEFT OUTER JOIN backupset t3 ON ( t1.backup_set_id = t3.backup_set_id)
WHERE t1.destination_database_name = N'AdventureWorks2014'
ORDER BY restore_date DESC,  t1.restore_history_id,t2.destination_phys_name

Have you ever looked at the backup strategy of your key databases? Are they in sync, and is there scope for improvement? Then this is the report to analyze after a week or month of maintenance plans running in your database. Do chime in with the strategies you are using in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Disk Usage Report

Let us start with humor! 

I think with this series on various reports, we have come to a logical point. We have covered all the reports at the server level. This means the reports we saw were targeted towards activities related to instance level operations. These are much like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:

Patient: Doc, It hurts when I touch my head.

Doc: Ok, go on. What else have you experienced?

Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.

Doc: Hmmm

Patient: I feel it hurts when I touch anywhere in my body.

Doc: Ahh, now I get it. You need a plaster on your finger, John.

Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So this is the first blog in the series where we start discussing database level reports. To launch database level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database level reports.

In this blog, we would talk about four “disk” reports because they are similar:

  1. Disk Usage
  2. Disk Usage by Top Tables
  3. Disk Usage by Table
  4. Disk Usage by Partition

Disk Usage

This report shows a lot of information about the database. Let us discuss it section by section; we have divided the output into 5 different sections.

Section 1 shows the high level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands; among them are sys.data_spaces and DBCC SHOWFILESTATS.

Sections 2 and 3 are pie charts: one for data file allocation and another for the transaction log file. The pie chart for “Data Files Space Usage (%)” shows the space consumed by data, the space consumed by indexes, and unallocated space which is allocated to the SQL Server database files but not yet filled with anything. “Transaction Log Space Usage (%)” uses DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file.
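If you want the same log-space number for just the current database, one hedged approach is the usual temp-table capture trick around that DBCC command; the column names below are just local names for its documented result set.

-- Capture DBCC SQLPERF (LOGSPACE) output and keep only the current database.
CREATE TABLE #logspace
(
    database_name      SYSNAME,
    log_size_mb        FLOAT,
    log_space_used_pct FLOAT,
    status             INT
);
INSERT INTO #logspace
EXEC ('DBCC SQLPERF (LOGSPACE)');

SELECT * FROM #logspace WHERE database_name = DB_NAME();
DROP TABLE #logspace;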

Section 4 shows data from the default trace and looks at Event IDs 92, 93, 94 and 95, which are for “Data File Auto Grow”, “Log File Auto Grow”, “Data File Auto Shrink” and “Log File Auto Shrink” respectively. Here is an expanded view of that section.

If default trace is not enabled, then this section would be replaced by the message “Trace Log is disabled” as highlighted below.

Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section. This shows the physical layout of the file.

In case you have In-Memory objects in the database (from SQL Server 2014 onwards), the report shows information about those as well. Here is a screenshot taken for a different database, which has an In-Memory table. I have highlighted the new sections which are only shown for an in-memory database.

The new sections highlighted above use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is ‘FX’ in sys.data_spaces.
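If you want to check this for your own database, a small sketch (not part of the report itself) is simply to list the data spaces; a memory-optimized filegroup shows up with type ‘FX’.

-- List filegroups/data spaces of the current database; 'FX' marks memory-optimized data.
SELECT name, type, type_desc
FROM sys.data_spaces;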

The next set of reports is targeted to get information about a table and its storage. These reports can answer questions like:

  1. Which is the biggest table in the database?
  2. How many rows do we have in each table?
  3. Is there any table which has a lot of reserved space but it is unused?
  4. Which partition of the table has more data?

Disk Usage by Top Tables

This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory optimized tables.
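The exact query behind this report is longer, but a hedged sketch in the same spirit, ranking tables by reserved space using sys.dm_db_partition_stats, would look like this:

-- Top 10 tables by reserved space in the current database.
SELECT TOP (10)
       s.name AS [schema]
,      t.name AS table_name
,      SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows]
,      SUM(ps.reserved_page_count) * 8 AS reserved_kb
,      SUM(ps.used_page_count) * 8     AS used_kb
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t ON (ps.OBJECT_ID = t.OBJECT_ID)
INNER JOIN sys.schemas s ON (t.schema_id = s.schema_id)
GROUP BY s.name, t.name
ORDER BY SUM(ps.reserved_page_count) DESC;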

Disk Usage by Table

This report is the same as the earlier report, with a few differences.

  1. The first report shows only the top 1000 rows.
  2. The first report orders by values in the DMV sys.dm_db_partition_stats, whereas the second one orders by the name of the table.

Both reports have an interactive sort facility: we can click on any column header and change the sorting order of the data.

Disk Usage by Partition

This report shows the distribution of the data in table based on partition in the table.

This is very similar to the previous output, but now with partition details. Here is the query taken from Profiler.

SELECT
row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number
,      (dense_rank() OVER (ORDER BY a5.name, a2.name))%2 AS l1
,      a1.OBJECT_ID
,      a5.name AS [schema]
,       a2.name
,       a1.index_id
,       a3.name AS index_name
,       a3.type_desc
,       a1.partition_number
,       a1.used_page_count * 8 AS total_used_pages
,       a1.reserved_page_count * 8 AS total_reserved_pages
,       a1.row_count
FROM sys.dm_db_partition_stats a1
INNER JOIN sys.all_objects a2  ON ( a1.OBJECT_ID = a2.OBJECT_ID) AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id)
LEFT OUTER JOIN  sys.indexes a3  ON ( (a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id) )
WHERE (SELECT MAX(DISTINCT partition_number)
FROM sys.dm_db_partition_stats a4
WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
AND a2.TYPE <> N'S'
AND  a2.TYPE <> N'IT'
ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

Using all of the above reports, you should be able to get the usage of database files and also space used by tables.

I think this is too much disk information for a single blog and I hope you have used them in the past to get data. Do let me know if you found anything interesting using these reports in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

The month of June till the middle of July has been full of sports fever. First it was Wimbledon tennis, and then the soccer fever was all over. There is a huge number of fan followers, and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the later part of July. Following these sports, as the events unfold towards the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is a team’s performance against another team, and then there is an individual member’s performance against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past, head-to-head stats during the World Cup and during other neutral venue games.

All these statistics are just pointers. In reality, they don’t reflect the calibre of the current team, because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and getting the next level of information.

Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, such as a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As discussed in the example, the team scorecard is like the Batch Execution Statistics and the individual stats are like the Object Level Statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports. It is like how I think about diet all the time except while I am eating.

Performance – Batch Execution Statistics

Let us view the first report which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics.

Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle).

This report contains 3 distinct sections, as outlined below.

 

Section 1: This is a graphical bar representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of individual batches are available in section 3 (detailed below).

Section 2: This is a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is still available in the cache.

Section 3: This is the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the Average CPU, Average Logical Reads and Average Logical Writes for a given batch, along with the object details. Expanding the rows, I also get the # Executions and # Plans Generated for each of the queries.
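The batch-level numbers come from rolling statement statistics up to the batch. A hedged sketch of that idea (not the report's exact query) is to group sys.dm_exec_query_stats by sql_handle:

-- Top 10 batches by total CPU, aggregating statement stats per sql_handle.
SELECT TOP (10)
       qs.sql_handle
,      SUM(qs.execution_count)                               AS executions
,      SUM(qs.total_worker_time) / 1000.0                    AS total_cpu_ms
,      SUM(qs.total_logical_reads + qs.total_logical_writes) AS total_logical_io
,      MAX(qs.last_execution_time)                           AS last_execution_time
FROM sys.dm_exec_query_stats qs
GROUP BY qs.sql_handle
ORDER BY SUM(qs.total_worker_time) DESC;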

Performance – Object Execution Statistics

The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head to look at SQL Server objects.

The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects.

The final section lists the various objects with the Avg. CPU, IO and other columns, which are self-explanatory.

At a high level, both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and build their values from calculations on columns in them:

SELECT *
FROM    sys.dm_exec_query_stats s1
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
WHERE   s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL
ORDER BY  s1.sql_handle;

This is one of the simplest forms of report, and in future blogs we will look at more complex reports.

I truly hope these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is no longer in the cache, none of the above reports will show that query.

Reference: Pinal Dave (http://blog.sqlauthority.com)