SQL SERVER – Is tempDB behaving like a Normal DB?

The more I work with SQL Server, the more I am baffled by the way it works. In all these interactions, I have seen DBAs and developers think of TempDB as something totally different from a usage point of view and treat it differently. Yes, I do agree TempDB is a special-purpose database and needs special attention because it is shared across the whole SQL Server instance. As the name suggests, it is the temporary working area of SQL Server. In this blog, I am not going to rehash the hundreds of blogs / whitepapers on how important tempDB is and which processes utilize it.

In this blog I will take the middle ground and describe how I used to work with and coach DBAs on performance tuning activities during my consulting days. This is an interesting way of looking at TempDB because, in my opinion, tempDB is similar to a normal DB in multiple ways, with a twist. Whenever I make this statement, many question how I can say so: if tempDB is like a normal database, then it should also function like a normal database.

I generally take up this challenge and say the twist is there for a reason. But the fundamentals of databases, pages, allocations, etc. are all the same, no matter which database we are talking about. Hence, in my opinion, TempDB is similar to a normal database in many ways. To illustrate this, I generally use the following explanation.

USE tempdb
GO
CHECKPOINT
GO
SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)
GO

Since TempDB always uses the SIMPLE recovery model, the CHECKPOINT will flush dirty pages and truncate the log, and we will normally be left with just 3 rows. To drive the concept home, we will create a simple table and insert a row, as shown below:

CREATE TABLE #Understanding_temp_table
(id INT,
col1 CHAR(20),
col2 DATETIME,
col3 CHAR(50));
INSERT INTO #Understanding_temp_table
VALUES (1, 'Pinal', '01-Dec-2014 07:01:01', '* * DUMMY * *')
SELECT * FROM #Understanding_temp_table

Now, if we go ahead and execute the fn_dblog() query below again, we will get 150+ rows, including the PFS page allocation, IAM allocation, data page allocation and many more entries. The values and the way to read fn_dblog() are reserved for another blog post, but let me show you some interesting things to look out for.

SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)
GO

In the output of the above query, we can see two entries in the Description column which are interesting to look at. In my example, I have picked out these two rows for reference.

Changed type HOBT_FIRST_IAM (2) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000008e

Changed type HOBT_FIRST (1) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000007f

If we take the pages 0000008e and 0000007f and convert them from hex to decimal, the values are 142 and 127 respectively.
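If you prefer to let SQL Server do the hex-to-decimal conversion, a quick sketch:

SELECT CONVERT(INT, 0x0000008e) AS IAM_Page, -- returns 142
CONVERT(INT, 0x0000007f) AS Data_Page -- returns 127

Now let us try to map these pages to the allocations inside our TempDB database.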

SELECT allocated_page_file_id, allocated_page_page_id, allocated_page_iam_page_id, *
FROM sys.dm_db_database_page_allocations(2, NULL , NULL, NULL, 'DETAILED')
WHERE OBJECT_ID NOT IN (SELECT ID FROM sysobjects WHERE TYPE IN ('S','IT','SQ'))

From the query output, we can see that page 142 is allocated as the IAM page and page 127 as a data page. In essence, this is almost exactly how a normal database would function too; the allocation of pages is the same. The only catch here is that the object ID is negative: these are user-defined temp tables inside tempDB, and that is the only difference. I remember reading an article from my good friend Vinod Kumar around Database Page Basics if you want a quick refresher. We can take a look at the object name using the query below.
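A minimal sketch of such a query (temp table names in tempdb.sys.objects carry a system-generated padded suffix, hence the LIKE):

SELECT name, object_id, create_date
FROM tempdb.sys.objects
WHERE name LIKE '#Understanding_temp_table%'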

As noted before, we know the data page is 127. Let us use the DBCC command to dump the page contents and check what is on our page. When I execute the command, you can see the single row we inserted at the start of this article. This is very much in line with our understanding of inserting into a table.
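For reference, the command in question is the undocumented DBCC PAGE; a minimal sketch, assuming tempdb's database id of 2 and file id 1:

DBCC TRACEON(3604); -- route DBCC output to the client
DBCC PAGE(2, 1, 127, 3); -- (database id, file id, page id, print option)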

The twist is simple: if we go ahead and close the session that created the temp table (#Understanding_temp_table), SQL Server cleans up the pages it had allocated. This can be confirmed by running the page allocation query again. The typical output is shown below.

I hope this blog gives you a feel for how allocations happen to tables, irrespective of whether they are temp or normal. Secondly, we can also see that once the session is closed, the allocations for temp tables are released. In future blogs, we will take a deeper look into tempDB and learn how it differs from normal databases.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQLCMD to Build Comma Separated String

We have an active user group in Bangalore called SQLBangalore. Whenever I get a chance to attend the local user group sessions (which happens often, trust me), I learn something every single time. In one of the recent user group meetings I had the opportunity to attend a tips-and-tricks session by my good friends Balmukund and Vinod Kumar. Having friends who can stump you every now and then is quite an experience.

Now, the scenario we are talking about is simple: how can I create comma-separated strings of a table's data? This is like exporting a table's data using SQLCMD. At first this looks simple, but it turns out to be a great lesson in how SQLCMD works behind the scenes.

We will start with the input file whose output we want to shape. In my example, I have created a file called Contacts-People.sql which contains the following statement.

SET NOCOUNT ON
SELECT FirstName, LastName
FROM AdventureWorks2012.[Person].[Person]
WHERE LastName = 'Ferrier'

This returns about 16 rows on my database. Now the requirement is to get the output as comma-separated strings. We can achieve this in steps.

Step 1: Give the input file to SQLCMD command

C:\Temp> sqlcmd -i Contacts-People.sql

The -i option can be used to define the input file.

Step 2: Make the column widths narrow to get a concise output

C:\Temp> sqlcmd -i Contacts-People.sql -W

Step 3: Add the comma separator between the columns.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s,

This adds the separator.

Step 4: This is the icing on the cake. We want to remove the header so that we get ONLY the column values as part of our output.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1

There you have it. This is an easy and quick way to generate comma-separated strings from a table's columns. I am sure there is more than one way to do this. Feel free to send the output to a flat file and name it .csv. Do let me know if you are aware of any other method.
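For instance, a quick sketch that lands the output straight in a CSV file using the -o option (the file name here is just an example):

C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1 -o Contacts-People.csv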

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Restricting Access to Contained Databases using Logon Triggers

When I wrote the article “A Simple Example of Contained Databases”, little did I know how people might plan to use this capability. Having said that, there are environments where people want to use the concept of contained databases yet keep the flexibility of restricting users. Recently, a DBA friend of mine came to me with an interesting requirement, and I was pleasantly surprised by the question:

Pinal: Hi Buddy.

DBA: Hi Pinal, Sorry to disturb you in the middle of the night.

Pinal: No problems, please go ahead.

DBA: I have a query around Contained Databases.

Pinal: Sure, let me see if I can help you. Not sure though.

DBA: Well, a contained database creates users inside the database, and the database itself is used for authentication, right?

Pinal: You are correct in your understanding. Now is there a problem?

DBA: No, there is no problem. I am worried about contained database implementation.

Pinal: Hmmm, I don’t quite understand. Did you get a chance to search my site for a simple sample implementation?

DBA: I did read the article on SQLAuthority; that is not the problem. It is about the auditing and security processes inside my company.

Pinal: That is an interesting point of view. Please tell me more.

DBA: In our company, we are very strict and want to track every login which gets authenticated inside SQL Server. With contained databases, I want to build a different process.

Pinal: Go ahead, I am all ears.

DBA: We have enabled the creation of Contained Databases on the server. I want to restrict the databases which can be created as Contained Databases.

Pinal: Wow, why would you want to do that?

DBA: Simple: as a DBA and server administrator, I want to make sure only authorized users and databases can use the contained database capability. We don’t want random users to create contained databases on the server.

Pinal: Interesting. Yes, it is quite possible, and the idea here is to use Logon Triggers. Let me make sure I blog about this someday.

This blog is completely inspired by that conversation, so let us go through the implementation. Here are the steps:

  1. Create a table in master to hold databases that can be used for authentication.
  2. Insert the list of databases into the created table.
  3. Create the Logon Trigger to restrict which logins are allowed.
  4. Try to log on to the DB as a normal user and as a contained database user.
  5. Delete an entry from our table and check by logging in.
  6. Clean up the script.

So you might ask: why the master database and not any other database? Since logon triggers are server-scoped objects, we will create any necessary additional objects in master.

Note: Please DO NOT use the script as-is, because we are creating a logon trigger. Please test and verify before using the script below.

Let me walk you through the script:

-- Step 1 - Create a table in Master to hold databases that can be used for authentication.
USE MASTER
GO
CREATE TABLE [dbo].[authenticated_DBs](dbs INT PRIMARY KEY);
GO
-- We want anyone to be able to access this data for read-only purposes
GRANT SELECT ON [dbo].[authenticated_DBs] TO PUBLIC;
GO

The first step is to create a table which will host all the authenticated DBs we want for this particular instance that the DBA is monitoring. We have created the same in the master so that only authenticated and authorized users can change this.

-- Step 2 - Insert the list of databases into the created table.
-- Add the DB id for all of the DBs authorized to authenticate
-- including/excluding master DB
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('master'));
GO
-- Used the script from previous blog to create the Contained DB
-- http://blog.sqlauthority.com/2011/03/31/sql-server-denali-a-simple-example-of-contained-databases/
-- Insert database id for ContainedDatabase
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));
GO

The next step is to identify which databases are allowed to authenticate logins. In our example, make sure we have entries for “master” and the database called “ContainedDatabase”, based on the script from the other blog post.

-- Step 3 - Create the Logon Trigger to restrict the login available
-- This logon trigger will verify the current logon matches with one of the authorized DBs.
-- If it does, it allows the logon process to continue,
-- otherwise it will rollback, causing the session to terminate
CREATE TRIGGER Logon_authenticated_dbs
ON ALL SERVER FOR LOGON
AS
BEGIN
IF NOT EXISTS (SELECT * FROM [dbo].[authenticated_DBs]
               WHERE dbs IN (SELECT authenticating_database_id
                             FROM sys.dm_exec_sessions
                             WHERE session_id = @@SPID))
ROLLBACK;
END;
GO

The Logon Trigger is the tricky part of this whole blog. You can be innovative with other parameters too. But be careful when playing with logon triggers, as they can lock you out of your SQL Server instance. Please use them carefully and cautiously.

SELECT * FROM [dbo].[authenticated_DBs]

I am showing the two entries that are on my local machine at the moment. We will try to mimic the login process and see what happens.
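To mimic it from the command line, a sketch using SQLCMD (the user name and password come from the contained database article and are placeholders here; note the -d switch, since contained authentication needs the database name):

C:\> sqlcmd -S localhost -U ContainedUser -P YourSecurePassword -d ContainedDatabase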

-- Step 5 - Delete an entry from our table and check by logging in.
-- Your [ContainedDatabase] database id might not be 7. Change appropriately
DELETE [dbo].[authenticated_DBs] WHERE dbs = 7
-- Try to connect again using ContainedUser
-- ContainedUser fails to connect due to trigger
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));
GO
-- Try to connect again as ContainedUser. Now it should work.

If you have an entry in our authenticated table and are a valid user in the “ContainedDatabase”, the login process will succeed, as shown below.

If the entry into the table [dbo].[authenticated_DBs] is removed, we will be presented with an error message as shown below.

We are likely to get a similar error message if we connect to a contained database which was created by a user but for which the DBA has not given explicit rights by adding an entry at the server level.

USE MASTER
GO
-- CAUTION: If you drop the table authenticated_DBs before dropping this trigger, the trigger will lock everyone out (including from master) until it is manually deleted, so drop the trigger first
DROP TRIGGER Logon_authenticated_dbs
ON ALL SERVER
GO
DROP TABLE authenticated_DBs
GO

If you write a bad trigger which always fails, you will be locked out of the server. In such situations, a DAC (Dedicated Administrator Connection) can help. I have written about this earlier:

SQL SERVER – Fix : Error : 17892 Logon failed for login due to trigger execution. Changed database context to ‘master’.
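For quick reference, a DAC connection from the command line looks something like this (assuming a default local instance and a sysadmin Windows account):

C:\> sqlcmd -S admin:localhost -E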

Please make sure to do the cleanup, or else you are likely to get errors if you are using this script for testing purposes and are a user of contained databases in your environment.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Change Audit Location?

The auditing capability inside SQL Server is a hidden gem and, in my opinion, not known to many. If you have a requirement to audit your SQL Server environment, activities such as DBCC commands, backup, restore, failed logins, login creations, database access, database permission changes, user password changes, trace changes and much more can be audited at the server node. A similarly long list is available at the database audit node too. In this blog post, let me talk about the simple task of changing the location of the audit log after it has been created.

This post is inspired by one of the sessions I attended, where an attendee asked: if we audit and store the audit information on a network share, what will happen when the share becomes unavailable because of network issues? Is there a seamless way to switch or change the network location? This was an interesting question, and I do know that from SQL Server 2012 onwards the concept of audit resilience is available. Moreover, we can also change the share which holds the audit information using T-SQL.

In this blog post we will go in sequence below:

  1. Create an Audit on a network share
  2. Enable the Audit
  3. Create a Server Audit Specification
  4. Enable the Server Audit Specification
  5. Look at the metadata and access the logs
  6. Create a failure by unsharing the network drive
  7. Change the Audit location to a local drive
  8. Access the Audit logs

So let us go through the scripts for the above steps:

-- Step 1 - Create an Audit on a network share
CREATE SERVER AUDIT [NetworkAudit]
TO FILE(FILEPATH='\\SQLAuthority\Audit\Network') /* substitute in here network drive */
WITH (ON_FAILURE=FAIL_OPERATION, QUEUE_DELAY=0);

-- Step 2 - Enable the Audit
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=ON);

-- Step 3 - Create a Server Audit Specification
CREATE SERVER AUDIT SPECIFICATION [complianceServerSpec] FOR SERVER AUDIT [NetworkAudit]
ADD (SCHEMA_OBJECT_ACCESS_GROUP);

-- Step 4 - Enable the Server Audit Specification
ALTER SERVER AUDIT SPECIFICATION [complianceServerSpec] WITH (STATE=ON);

If you ask me, these are the most common steps we take to create ANY audit. As outlined above, we have created a server audit which audits any schema object access.

-- Step 5 - Look at the metadata and access the Logs
SELECT * FROM sys.server_file_audits
SELECT * FROM sys.fn_get_audit_file('\\SQLAuthority\Audit\Network\*', NULL, NULL);
SELECT * FROM sys.dm_server_audit_status

Let us simulate an error on the network path where the audit files are located. Check the metadata where the file size is 0; that indicates we have lost the network location.

-- Step 6 - Create a failure by unsharing the network drive
SELECT * FROM sys.dm_server_audit_status

If we try to read the file location we will get the following error:

SELECT * FROM sys.fn_get_audit_file('\\SQLAuthority\Audit\Network\*', NULL, NULL);

Msg 33224, Level 16, State 4, Line 20
The specified pattern did not return any files or does not represent a valid file share. Verify the pattern parameter and rerun the command. 

Since this network drive is unavailable, we need to move the auditing to a local drive. Here is how we can do that.

-- Step 7 - Change the Audit location to a local drive
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=OFF);
ALTER SERVER AUDIT [NetworkAudit] TO FILE(FILEPATH='D:\Audit');
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=ON);

In this example I am moving the audit to the local D:\ drive. It is quite possible we might not have permission to write to a local folder on the server. If that is the case, we are likely to get the following error when we set STATE=ON.

Msg 33222, Level 16, State 1, Line 29
Audit 'NetworkAudit' failed to start. For more information, see the SQL Server error log. You can also query sys.dm_os_ring_buffers where ring_buffer_type = 'RING_BUFFER_XE_LOG'.

If we query dm_os_ring_buffers, it will mention the file creation error. This is because we do not have enough permissions to write to the folder. Go ahead and change the permissions appropriately and rerun the command.
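A minimal sketch of that ring buffer query:

SELECT record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_XE_LOG'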

<Record id="443" type="RING_BUFFER_XE_LOG" time="4084184"><XE_LogRecord message="file: file create or open failed (last error: 5)"></XE_LogRecord></Record>

There is yet another error that can occur when configuring the location on a local server. If the folder specified in the ALTER SERVER AUDIT is not accessible or does not exist, SQL Server will raise this error:

Msg 33072, Level 16, State 1, Line 2
The audit log file path is invalid.
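If the folder might be missing, one option is to create it from T-SQL before altering the audit; a hedged sketch using xp_create_subdir (it ships with SQL Server, though it is lightly documented):

-- creates the folder if it does not already exist
EXEC master.dbo.xp_create_subdir N'D:\Audit'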

Either way, make sure the specified folder exists before issuing the command. Let us check the metadata to see if the changes have taken effect:

-- Step 8 - Access the Audit logs
SELECT * FROM sys.dm_server_audit_status
SELECT * FROM sys.fn_get_audit_file('D:\Audit\*', NULL, NULL);

The output would look like the following; take note that the file size is non-zero:

This concludes the logical steps one needs to follow to set up and change the audit file location inside SQL Server. As a final step, here are the commands to clean up the objects created in this blog post.

-- Cleanup
ALTER SERVER AUDIT SPECIFICATION [complianceServerSpec] WITH (STATE=OFF);
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=OFF);
DROP SERVER AUDIT SPECIFICATION [complianceServerSpec];
DROP SERVER AUDIT [NetworkAudit];

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix – Msg 230, Level 14, State 1 – The SELECT permission was denied on the column of the object, database, schema

Being a DBA is one of the most rewarding experiences, because you control the server as well as the way permissions are given to each individual user. In my career I have often seen people underestimate the use of GRANT, DENY and REVOKE. Not so much the third one, but the first two are super important and super critical.

In this blog post, let me take you through a common requirement that I hear from application developers and business users about securing the environment. Let us take a simple scenario to illustrate the basics of GRANT and DENY.

Assume you have a table inside your database which holds sensitive personally identifiable information like SSN. Your organization mandates that no one should be able to read the SSN column, but reading the other columns is absolutely fine. These kinds of requirements are all over the place, and I have seen developers struggle to implement them.

Here is my simple shot at the scenario. Let us create our database and the table having the SSN field:

CREATE DATABASE PermissionsDB
GO
USE PermissionsDB
GO
CREATE USER Pinal WITHOUT LOGIN;
GO
CREATE TABLE SecureTbl (ID INT, Name VARCHAR(50), SSN VARCHAR(20))
GO
INSERT INTO SecureTbl VALUES (1, 'Pinal', '111-22-3333')
GO

Additionally, I have created a user called ‘Pinal’ who needs to be denied permission. Now that we have our object in question, I am going to do two simple steps: a) first, GRANT SELECT rights on the table to everyone using this database (this can be restricted to specific user accounts too); b) explicitly DENY access to the SSN column.

-- Explicitly grant at object level
GRANT SELECT ON SecureTbl TO PUBLIC
-- DENY at the column level
DENY SELECT (SSN) ON SecureTbl TO Pinal
GO

The next step is to check whether the user ‘Pinal’ has SELECT privileges on the table. To illustrate this, we change the user context using the EXECUTE AS command and try to access the table.

-- Switch the context
EXECUTE AS USER='Pinal'
-- below would give Error as we are selecting SSN too
SELECT * FROM SecureTbl
-- No Error from below as SSN column is not in select list
SELECT ID, Name FROM SecureTbl
REVERT

In the above query, the first SELECT statement will result in the error below. This is because we have explicitly DENIED the user ‘Pinal’ permission to access the SSN field. The error message mentions this clearly.

Msg 230, Level 14, State 1, Line 21
The SELECT permission was denied on the column 'SSN' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

Hence, for all practical purposes, even though we have explicitly given permission to read the table object, the DENY makes sure we cannot read the column values. This is one of the simplest ways to look at GRANT and DENY inside SQL Server.
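As a side note, the reverse approach also works: instead of a broad GRANT followed by a column-level DENY, we could grant only the non-sensitive columns. A minimal sketch:

-- Grant only the columns the user is allowed to read
GRANT SELECT (ID, Name) ON SecureTbl TO Pinal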

Do let me know if you have used this technique inside your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – User Defined Audit with SQL Server

Talk to any financial institution or bank and they will be super paranoid when it comes to the security and auditing policies applied to their organization. On a recent visit to one of our customers, I had to go through a number of screenings just to enter their premises. From the entrance, car parking, reception and elevators, everywhere there was someone asking us for some information. The more I thought about it, the more frustrated I became. After returning home, I talked through the complete incident with my family, and I was pleasantly surprised by the calmness with which they discussed the process with me.

It was a great way to learn patience and why such measures are important. Organizations have a reason: they need their data secure, and they want procedures so that there is no data loss, no data theft and more. In this competitive world that is super critical, and hence these procedures are important for survival. This blog is therefore inspired by the auditing capability of SQL Server 2012 and what I found interesting about it.

Creating our Audit

Creating an audit is as simple as going through a wizard and supplying some basic data. Go to SQL Server Management Studio -> Security -> Audits and create a new audit. Go ahead and add the file path; in our example I have used “C:\Audit”.

The T-SQL equivalent for the same is:

CREATE SERVER AUDIT [Audit-20141115-213944]
TO FILE
(  FILEPATH = N'C:\Audit'
,MAXSIZE = 0 MB
,MAX_ROLLOVER_FILES = 2147483647
,RESERVE_DISK_SPACE = OFF)
WITH
(  QUEUE_DELAY = 1000
,ON_FAILURE = CONTINUE)
GO

After creating it, we will make sure this audit is enabled. The audit target determines where our auditing data will go; in our example it is configured to a file, as shown above.

Once we enable the Audit node, the next step is to add the audit operation. Here we are going to add the “User Defined Audit Group”.

CREATE SERVER AUDIT SPECIFICATION [ServerAuditSpecification-20141115-214433]
FOR SERVER AUDIT [Audit-20141115-213944]
ADD (USER_DEFINED_AUDIT_GROUP)
GO

Creating User defined Audit entries

This is a capability that was introduced with SQL Server 2012, and I personally feel it is quite powerful for a couple of reasons. Apart from the standard auditing capability, applications can now raise specific audit events into the audit log, so that we can track or audit logically from the application’s point of view.

The simplest way to raise an audit record from an application is the sys.sp_audit_write stored procedure. A typical call looks like:

EXEC sys.sp_audit_write 1, 0, N'This is an audit from Pinal'

This command puts an entry into the audit we just defined in the previous step. To view the entry, select the “View Audit Log” option on the audit we created. The log viewer looks like the one below:

In the example above, we can see the details. Please note the audit shows “Succeeded as False” because we passed the second parameter of sys.sp_audit_write as 0, hence it is False. If you want it recorded as True, pass the value as 1. Also, the statement that got us this information is available as part of the “Statements” line item.
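For completeness, a call that records Succeeded as True simply passes 1 as the second parameter:

EXEC sys.sp_audit_write 1, 1, N'This is an audit from Pinal'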

If you want to view all the entries into our audit file, we can use the sys.fn_get_audit_file function.

SELECT * FROM sys.fn_get_audit_file('c:\Audit\*', NULL, NULL)

I hope you got a flavor of how custom user-defined audits can be defined. This is a great way to use the auditing feature from your application’s point of view. Do let me know if you found this useful and whether you plan to use it in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Filtering CPU Bound Execution Plans with Extended Events

Let me set the scene for what we are talking about here. Prior to SQL Server 2012, tracing the execution plan was an all-or-nothing approach. In Profiler, if we select execution plans, we get every occurrence of every execution plan for all the statements executed inside SQL Server captured by the trace. This bloats the trace file very quickly and makes it nearly impossible to do any sort of analysis. So the general performance templates used inside Profiler do not capture execution plans, and rightly so.

I have been waiting for the ability to capture an execution plan only when a query executes longer than a certain amount of time. Yes, I could add a few filters in Profiler, but that wasn’t the most efficient way to work. In this blog we will look at the Extended Event query_post_execution_showplan (similar to Showplan XML Statistics Profile) in SQL Server 2012, which has the interesting addition of cpu_time and duration for filtering. Personally, I think this is a great addition to Extended Events.

Why use this?

We want to use this to monitor overall server performance and to troubleshoot when things go wrong. When server responses are slow and we have not identified any particular query, we can capture all the execution plans whose cpu_time or duration exceeds a certain threshold. This is one of the ways this feature can be used effectively.

Extended Event UI

If you are not familiar with the Extended Events UI, here is a great start for this task. Go to SQL Server Management Studio -> Management node -> Extended Events -> Sessions -> right-click and select “New Session…”.

This starts a wizard which makes our configuration quite easy. In the event library, search using the keyword “showplan”. query_post_execution_showplan will show up as a search result. Assuming you are on SQL Server 2012 or above, note that cpu_time and duration are among the available event fields.

Next, select this event and move the entry to the “Selected events:” list. This enables the “Configure” button at the top of the dialog box.

Click the Filter (Predicate) tab. You can choose cpu_time, an operator and a value. Note that cpu_time is in microseconds. For example, if you want to capture execution plans for all queries taking 10 seconds of CPU, you would enter 10000000. The same can be done for queries running longer than 10 seconds by selecting the “duration” field instead.

Extended Event T-SQL

Instead of showing the UI version of the command, I have scripted out the same extended event session using the duration field. The command is:

CREATE EVENT SESSION [CPU_XEvent] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan(
    ACTION(package0.process_id, sqlserver.database_name, sqlserver.nt_username, sqlserver.sql_text)
    WHERE ([duration] >= (10000000)))
GO

In addition to the filter, I have gone ahead and added a few actions (global fields) we might be interested in as part of the collection process. Use this with caution, and be careful about collecting every single event using Extended Events. I was pleasantly surprised to see cpu_time and duration in SQL Server 2014 too, and hence thought it worth a mention.

Finally, if you are doing this on a live server, make sure to enable the Extended Event session first (a sketch follows below); we can then use the “Watch Live Data” option in SSMS, which I found very useful.
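A minimal sketch to switch the session we scripted above on, and off again once the investigation is done:

-- Start collecting; right-click the session in SSMS and choose "Watch Live Data" to view events
ALTER EVENT SESSION [CPU_XEvent] ON SERVER STATE = START
GO
-- Stop collecting when done
ALTER EVENT SESSION [CPU_XEvent] ON SERVER STATE = STOP
GO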

If you have used other Extended Events in your environment, I would like to learn from you how you used them and which scenarios they enabled.

Reference: Pinal Dave (http://blog.sqlauthority.com)