SQL SERVER – Collect and Analyze SQL Server Data Efficiently

dbForge Event Profiler is one of the most useful profiling tools for SQL Server. The Profiler records data associated with various SQL Server events for further analysis. This data is stored in a trace file that can later be analyzed or used to replay a specific series of steps when diagnosing SQL Server-related problems. The tool allows you to view communications between a client and SQL Server, and gives you insight into its internal performance. To take full advantage of its potential, download dbForge Event Profiler for SQL Server for free now.

The tool offers a large spectrum of features that can assist you in:

  • analyzing trace results by grouping or aggregating them;
  • auditing user activity;
  • creating your own custom traces and saving them for future use;
  • debugging T-SQL code and stored procedures;
  • executing quality assurance checks;
  • identifying performance-related problems with front-end applications, queries, T-SQL, transactions, and so forth;
  • performing stress testing;
  • performing query analysis of execution plans;
  • viewing SQL Server performance when interacting with a client.

Essentially, the Event Profiler is designed to quickly and efficiently track down and fix many SQL Server-related problems, such as poorly performing queries, locking and blocking, excessive table/index scanning, and a lot more. For example, you can monitor the execution of a stored procedure to see whether it hampers SQL Server performance.

Using the Profiler, you can monitor the events that you are interested in. For example, you may want to capture events from a specific user or a given database. Smart filters allow you to collect only the events that you want, filtering out those of no interest. This reduces the amount of data that is stored in your trace.

dbForge Event Profiler provides a rich graphical user interface that can be used to create, analyze, and replay trace results. As the trace data is being collected, you can stop or pause the trace at a certain point and store the trace results in a physical file on a local hard disk. The saved SQL Server Profiler document has the “.ssp” extension. This file can then be opened to analyze the captured data, share it with others, or compare the trace results against traces performed later.

Powerful Tool in Action

Below is an example of how you create a new trace using dbForge Event Profiler.

To create a new trace, you follow these steps:

  1. On the Start page, click Profile Server Events. The Profile Server Events wizard appears.
  2. Specify the connection.
  3. Optionally, select a profiling template, then modify the trace file settings and the data storage settings. Click Next.
  4. On the Events to Capture page, select the events you want to capture. Click Next.
  5. Optionally, on the Actions page, select the actions you want to capture in the current events session. Click Next.
  6. Optionally, on the Event Filters page, specify filter options to limit the tracing data.
  7. Click Execute. The trace will start and the Server Event Profiler document opens.

Once you have collected enough data, stop the trace by clicking the Stop Trace button on the toolbar. At this point, you may review the collected data or save it to a file for future use.

Now that you have learned about the many advantages of this smart tool, you can start mastering and making practical use of dbForge Event Profiler for SQL Server by downloading it now for FREE.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using “High Performance” Power Plan for SQL Server

A lot of times, I have seen DBAs and administrators with their own startup tasks that they perform when rebuilding or bringing a new server online. Some of these practices are powerful and are meant to enhance the performance of the box that you have bought. Recently, I was at a customer location looking at what a typical admin was doing, and one step of his caught my attention. I couldn’t stop myself from blogging about it because it is a simple, powerful, and yet underappreciated setting available on the server.

On Windows Server 2008 and above, set the “High Performance” power plan under Control Panel -> Power Options. By default, Windows Server uses the “Balanced” power plan, which enables energy conservation by scaling processor performance based on current CPU utilization. Starting with the Intel Xeon 5500 series and other recent CPUs, the clock is throttled down to save power (processor P-states) and only increases when CPU utilization reaches a certain point. The Minimum and Maximum Processor Performance State parameters are expressed as a percentage of maximum processor frequency, with a value in the range 0 to 100.

If a server requires ultra-low latency, invariant CPU frequency, or the very highest performance levels, as database servers like SQL Server typically do, it is not helpful for the processors to keep switching to lower-performance states. For this reason, the High Performance power plan pins the minimum processor performance state at 100 percent.
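If you want to verify the active plan without leaving Management Studio, here is a minimal sketch that reads it from the registry. It relies on the undocumented xp_regread procedure and the well-known Windows power plan GUIDs, so treat it as a convenience check rather than a supported API.

-- Read the active Windows power scheme GUID (xp_regread is undocumented;
-- this assumes sufficient permissions and a default registry layout).
DECLARE @scheme VARCHAR(64);
EXEC master.dbo.xp_regread
    @rootkey    = 'HKEY_LOCAL_MACHINE',
    @key        = 'SYSTEM\CurrentControlSet\Control\Power\User\PowerSchemes',
    @value_name = 'ActivePowerScheme',
    @value      = @scheme OUTPUT;

SELECT CASE @scheme
           WHEN '8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c' THEN 'High Performance'
           WHEN '381b4222-f694-41f0-9685-ff5bb260df2e' THEN 'Balanced'
           WHEN 'a1841308-3541-4fab-bc81-f71556f20b4a' THEN 'Power Saver'
           ELSE 'Custom or unknown plan: ' + ISNULL(@scheme, 'N/A')
       END AS active_power_plan;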

The typical setting looks like:

At this point, I thought it would be helpful to explain to my readers what these plans are for. These are the built-in power plans and their common use case scenarios:

Plan: Balanced
Description: Default setting. Highest energy efficiency with minimum performance impact.
Common applicable scenarios: General computing.
Implementation highlights: Matches capacity to demand. Energy-saving features balance power and performance.

Plan: High Performance
Description: Increases performance at the cost of high energy consumption. Should not be used unless absolutely necessary.
Common applicable scenarios: Low latency; application code sensitive to processor frequency changes.
Implementation highlights: Processors are always locked at the highest performance state.

I hope most of you are aware of these settings and are using them on your servers too. Do let me know some of the settings that you use as part of your daily environment. I am sure there will be a few things I can learn from you too. Do drop a line in the comments for the benefit of all.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #017 – Performance Comparison of Union vs Union All

Here is something I just learned from an email. A very prestigious organization asked one of its candidates the following question:

Question: If you have the option to use either UNION or UNION ALL, which one would you use, keeping performance as the topmost requirement for the query?

Answer: Though many of you may be surprised that this kind of question even exists, I am personally not surprised to see it in an interview. Here is my answer: UNION and UNION ALL can’t be compared, as they are absolutely different things; they are like apples and oranges. Just as apples and oranges are both fruits, UNION and UNION ALL are both operators, but they exist for very different purposes.

UNION
The UNION operator is used to combine related information from two tables, much like the JOIN command. However, when using UNION, all corresponding columns need to be of compatible data types. With UNION, only distinct rows are returned; duplicates are removed.

UNION ALL
The UNION ALL operator is equivalent to UNION, except that UNION ALL returns all rows, including duplicates, and therefore skips the duplicate-removal work.

You can’t compare their performance, as they perform absolutely different tasks.
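Here is a minimal sketch that makes the difference visible; the temp tables and values are hypothetical, chosen only to produce an overlapping row.

-- Two small temp tables with one overlapping value.
CREATE TABLE #a (val INT);
CREATE TABLE #b (val INT);
INSERT INTO #a VALUES (1), (2);
INSERT INTO #b VALUES (2), (3);

-- UNION removes duplicates: returns 1, 2, 3 (extra sort/hash work).
SELECT val FROM #a
UNION
SELECT val FROM #b;

-- UNION ALL keeps duplicates: returns 1, 2, 2, 3 (no duplicate removal).
SELECT val FROM #a
UNION ALL
SELECT val FROM #b;

DROP TABLE #a;
DROP TABLE #b;

If the inputs are known to be disjoint, UNION ALL returns the same rows without the duplicate-removal step, which is where the usual performance advice comes from.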

Here are articles you can read to further understand this issue.

Performance comparison: SQL SERVER – Difference Between Union vs. Union All – Optimal Performance Comparison

Difference between Union and Union All: SQL SERVER – UNION ALL and UNION are Different Operation

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Generate Different Random Number for Each Group Using RAND() Function

You know that the RAND() function in SQL Server generates a random float value between 0 and 1 (exclusive).

Let us create the following dataset

CREATE TABLE #random(no INT)
INSERT INTO #random(no)
SELECT 1 UNION ALL
SELECT 1 UNION ALL
SELECT 1 UNION ALL
SELECT 2 UNION ALL
SELECT 3 UNION ALL
SELECT 3

If you want to generate a random value, you can use the following code

SELECT no, RAND() AS random_number FROM #random

which results in

no random_number
 1 0.370366365964781
 1 0.370366365964781
 1 0.370366365964781
 2 0.370366365964781
 3 0.370366365964781
 3 0.370366365964781

Please note that when you execute it, you may get a different random number than what I got for column 2, but the value will be the same for all rows.

What if you want to generate a random number in such a way that it resets for each group value (in this case, the grouping column is no)?
Did you know that RAND() accepts a seed value as well?

If you execute the following code,

SELECT no, RAND() AS random_number, RAND(no) AS random_number_reset FROM #random

the result is

no random_number random_number_reset
 1 0.58334760467751 0.713591993212924
 1 0.58334760467751 0.713591993212924
 1 0.58334760467751 0.713591993212924
 2 0.58334760467751 0.713610626184182
 3 0.58334760467751 0.71362925915544
 3 0.58334760467751 0.71362925915544

Please note that when you execute it, you may get different random numbers than what I got for columns 2 and 3. If you look at the result, the values of the second column are the same for all rows, whereas the values of the third column are the same within a group (column no) and different across groups. So if you want to generate a random number differently for each group, you can make use of this method.
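As a follow-up sketch, the seeded per-group value can be scaled into a friendlier range, for example an integer from 1 to 100; the scaling factor here is just an illustration, not part of the original technique.

-- Scale the seeded per-group random float to an integer from 1 to 100.
SELECT no,
       CAST(RAND(no) * 100 AS INT) + 1 AS group_random_int
FROM #random;

-- Clean up the temporary table when done.
DROP TABLE #random;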

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Different Methods to Know Parameters of Stored Procedure

Suppose you have a stored procedure with some input parameters. There are two methods to know the list of parameters defined in a stored procedure.

Let us create the following stored procedure

CREATE PROCEDURE TEST_PROCEDURE
(
    @CUST_ID INT,
    @YEAR INT
)
AS
SELECT @CUST_ID, @YEAR

Now, to find the parameters used in the TEST_PROCEDURE stored procedure, you can use the following methods.

1. Use the sp_help system stored procedure

EXEC sp_help 'TEST_PROCEDURE'

When you execute the above, the parameter information appears in the second result set:

Parameter_name  Type  Length  Prec  Scale  Param_order  Collation
--------------  ----  ------  ----  -----  -----------  ---------
@cust_id        int   4       10    0      1            NULL
@year           int   4       10    0      2            NULL

2. Use the INFORMATION_SCHEMA.PARAMETERS system view

SELECT PARAMETER_NAME, DATA_TYPE, ORDINAL_POSITION
FROM INFORMATION_SCHEMA.PARAMETERS
WHERE SPECIFIC_NAME = 'TEST_PROCEDURE'

The result is

PARAMETER_NAME DATA_TYPE ORDINAL_POSITION
------------------------ ------------ ----------------
@cust_id int 1
@year int 2

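A third handy option is to query the sys.parameters catalog view directly; the sketch below is an addition of mine rather than one of the two methods above.

-- List parameters from the catalog view, resolving the type name.
SELECT p.name AS parameter_name,
       TYPE_NAME(p.user_type_id) AS data_type,
       p.parameter_id AS ordinal_position
FROM sys.parameters p
WHERE p.object_id = OBJECT_ID('TEST_PROCEDURE')
ORDER BY p.parameter_id;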
If you know of any other methods, feel free to post them as a comment.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Server – Error While Enabling CDC on a Table – The specified ‘@server’ is invalid (valid values are returned by sp_helpserver)

Every environment is different, and no doubt some of these environmental differences can give you unexpected results. Recently one of my blog readers followed the blogs below:

and reported that when he tried to enable CDC for a table, he got the error below:

Msg 22832, Level 16, State 1, Procedure sp_cdc_enable_table_internal, Line 623
Could not update the metadata that indicates table [HumanResources].[Shift] is enabled for Change Data Capture. The failure occurred when executing the command '[sys].[sp_cdc_add_job] @job_type = N'capture''. The error returned was 22836: 'Could not update the metadata for database AdventureWorks2014 to indicate that a Change Data Capture job has been added. The failure occurred when executing the command 'sp_add_jobstep_internal'. The error returned was 14234: 'The specified '@server' is invalid (valid values are returned by sp_helpserver).'. Use the action and error to determine the cause of the failure and resubmit the request.'. Use the action and error to determine the cause of the failure and resubmit the request.

The best way to learn new things like this is by retracing the events to see how it happened. Below is the script he was using (taken from my blog):

-- You can run this stored procedure in the context of each database to enable CDC at database level.
-- (The following script will enable CDC in AdventureWorks2014 database)
USE AdventureWorks2014
GO
EXEC sys.sp_cdc_enable_db
GO
-- Following script will enable CDC on HumanResources.Shift table.
USE AdventureWorks2014
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'HumanResources',
@source_name   = N'Shift',
@role_name     = NULL
GO

The first command worked fine; the error was raised by the second one. I asked him to capture a Profiler trace and share it with me. I found that SQL Server first gets the instance name using the query below:

SELECT @server = CONVERT(SYSNAME,SERVERPROPERTY('ServerName'))

The value is then passed in to create the job for CDC. In the procedure sp_verify_jobstep, the condition below is the one that was failing:

IF (@server IS NOT NULL) AND (NOT EXISTS (SELECT *
        FROM master.dbo.sysservers
        WHERE (UPPER(srvname) = UPPER(@server))))
    RAISERROR(14234, -1, -1, '@server', 'sp_helpserver')

Notice that this raises the same error number we saw in the error message. I asked him to check, and he verified that the SQL Server name had been changed, but sp_dropserver and sp_addserver were never executed. Here is the command to fix the issue. Please change the parameter values as per your SQL instance.

sp_dropserver 'HostName\InstanceName_incorrect'
GO
sp_addserver 'HostName\InstanceName', 'local'
GO

In short, the error is caused by a mismatch between the value of SERVERPROPERTY('ServerName') and the local server entry in master.dbo.sysservers.
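A quick way to spot this mismatch up front is to compare the two values directly; since @@SERVERNAME is resolved from the sysservers entry, the minimal sketch below should be enough to confirm the problem.

-- If these two values differ, the server was renamed without
-- running sp_dropserver / sp_addserver.
SELECT CONVERT(SYSNAME, SERVERPROPERTY('ServerName')) AS server_property_name,
       @@SERVERNAME AS sysservers_name;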

Have you ever done similar troubleshooting in your environment using Profiler? Do let me know what you did and what you learned. I am sure a lot of readers are going to benefit from your stories and scenarios.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Watch Your Vendors! Hey Vendors – Try a Little Harder! – Notes from the Field #077

[Note from Pinal]: This is the 77th episode of the Notes from the Field series. Every week, I personally wait for the notes from the field from Mike because they contain life lessons which directly impact DBAs’ and developers’ lives positively. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human. He is always looking for an opportunity to improve the lives of DBAs and developers.

Everybody uses third-party software and third-party vendors in their organization. Our ecosystem is built around that concept, and it is practically impossible to avoid this situation. When two different parties depend on each other to succeed, there should be some basic ground rules to follow for mutual success. I asked the very same question, “What should the relationship between vendors and DBAs be?”, and database expert Mike Walsh decided to guide me with the answer.

Read the entire story in his own words.


It seems I rant about independent software vendors about once every 6-8 months. I’ve not done it on a post here on Pinal’s fine blog, though. Why do I do this? Because it really matters. Because it really should be different. Because I’ve seen the difference when DBAs and Vendors come together, plan properly and have a working relationship. Finally – because I’ve seen environments brought down hard by a failure in this important relationship and in this important “unwritten contract” between the parties. So, I’m continuing with the theme of the “Hey DBA!” type posts with advice we can all use, but I’m talking to vendors, too.

If your company buys software that your business runs on – don’t get stuck in this trap and learn some questions to ask the vendors.

If your company sells software to other companies? Make sure these things are clear for all of your customers and people know what they own and don’t own.

What Does Failure Look Like Here?

It takes different shapes. In most cases, many of the following symptoms are present (not necessarily all of them):

  • No maintenance is being done (no index rebuilds, no statistics updates, no DBCC CHECKDB jobs)
  • Sometimes an incorrect backup strategy exists
  • The database for an application is on an undersized, underperforming SQL Server
  • SQL Server is missing a lot of best-practice configurations; instead, it feels like it was installed by clicking Next a bunch of times and Finish once, and never touched again.

But these symptoms can lead to downtime. They can lead to unresolved and undetected corruption. They can lead to a restore that doesn’t go exactly as the business planned or expected, or as the application vendor wants.

In many environments, a company will buy software that runs on SQL Server and not even fully realize or understand that it is running on an enterprise database platform that needs attention, care and regular maintenance. Oftentimes the client realizes it only when they’ve had a disaster and are trying to pick up the pieces and get their critical application back online.

DBAs – What can you do?

Be on the lookout for applications coming into the company. This is sometimes easier said than done – but trying to get yourself into meetings before an application is purchased to understand the database needs can make a huge difference. Let it be known that you are there to help and you want to make a difference in the stability, performance and uptime of any new applications that come in and have a database backend. Let your managers speak on your behalf, let the project managers know, be vigilant and get involved when applications are brought in. And earlier is better. If a vendor is doing something dangerous, against best practices, or in violation of a security audit you have to go through – it is easier to get them to make those changes in the sales process than after you’ve purchased the application.

Be proactive – in so many of my posts here and on my own blog at http://www.straightpathsql.com and over at http://www.linchpinpeople.com, I focus on being proactive. Get to know your environments. Run checks on them, understand the uses, and understand the maintenance. Over at Linchpin People we have a free WellDBA™ Checklist that you can use to go through and check the most common mistakes we find in production environments. Grab that checklist and check your own environment. Dig in where there are issues. You can get that free checklist here (http://www.linchpinpeople.com/landing/well-dba-sql-server-checklist/)

Be Inquisitive – On my blog, I have a list of some sample questions you can ask a vendor. This checklist is a bit older now and I should probably update it – but the point of the questions should become clear as you look at them. You want to ask the vendor these questions to know what challenges you’ll face as a DBA. You want to gauge who is responsible for which pieces, you want to get to know and trust your vendor, and you want to understand their expectations of you, what they are great at, and where you may need to spend more time looking. Create your own vendor interview checklist and make sure any new software vendors bringing applications into your environment give you the information you need. You can see an example checklist on my personal blog – http://www.straightpathsql.com/archives/2009/01/dba-questions-to-ask-a-vendor/

Vendors – What can you do?

It’s pretty simple – be clear, be informative and be prescriptive…

Be Clear – Your customers may or may not understand what they are getting into with a database. They may have a great DBA, no DBA or a mediocre DBA. Be clear on what you expect from them and what they have to worry about. You can’t just say, “they’ll figure it out… backups are really important so they’ll understand this and they’ll start doing them…” You’ve made some big assumptions there. The same is true about maintenance, knowing what to do when there are issues, etc. Be clear on what you are responsible for, what you are not responsible for, and what they should be worried about and take care of on their own.

Be Informative – Don’t just be clear – but get to know your customers. If you are deploying a SQL server application – you should explain that you like to work with a DBA during the pre-sales and installation process. Get to know the DBA team. Talk about challenges that folks face – check out that set of questions above and think of other questions you’d want to ask if you were a DBA. Have answers to them and supply them even if the client doesn’t ask.

Be Prescriptive – I love it when I see a vendor with best practice guides, deploy guides, maintenance guides, SQL Server configuration guides, etc. These things are gold to your customers. But they are also gold to your image, your respect and your referrals. If you prescribe what the environment should generally look like, what kind of maintenance should be done, what kind of recovery typically works best, and give advice on how to configure SQL Server for success – you are equipping your customers for greater success. You don’t need to get specific about everything; general prescriptions, with latitude for companies whose DBA teams understand how to turn your guidelines into reality within their existing policies, are great. But you should consider being specific enough to show the smaller shops, where a system administrator plays the role of DBA, what they should be learning about and either figuring out or bringing in consulting or mentoring help to get right.

Doing these things puts you ahead of the curve among software vendors and in a great spot – and it probably saves your support budget down the road by avoiding calls caused by customers not paying attention to their SQL environment.

Everyone – What can you do?

Talk… Seriously – many of these issues come down to communication. A lot of DBAs complain about software vendors. A lot of software vendors probably secretly complain about their customers. If we all talked just a bit more about these things we’d be in a better spot and our environments would be better for it.

If you want to get started with performance analytics and Database Healthcheck of SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Identify a DB is Using Cross Database Transactions?

I cannot say this often enough: some of the best learnings come at times when you least expect a question. Recently, when I was at a conference in Delhi, one of the attendees came to me to ask a question. The conversation went as below:

Attendee: Hi Pinal.

Pinal: Hi to you too.

Attendee: Thanks for the blogs on SQLAuthority. I am a regular reader of the same.

Pinal: Thanks for your kind words.

Attendee: I have a quick question for you and hope you are fine.

Pinal: Sure, shoot it. I would love to help you buddy.

Attendee: Pinal, I work in a financial institution and I manage a number of databases in production.

Pinal: Sure, go on. Isn’t that a good thing?

Attendee: I know it is a good thing, but recently the management was paranoid of running different application databases on the same massive production box. So they asked me a question.

Pinal: Great, so you have done some consolidation of your environment.

Attendee: Yes, indeed – we consolidated our servers and a few DBs into one box, and that is where the management is paranoid about security. They asked me if there is any way to know whether a database is involved in cross-database transactions, because it is important to know. They don’t want databases to be involved in such calls on this new server.

Pinal: Hmmm, that is interesting.

Attendee: To start with, they want to know if any of the databases are involved – then they will look at the next steps of hardening the server.

Pinal: I am sure there is some way. Meet me tomorrow morning and I will try to get you something. And BTW, people like you who ask me such questions are the real heroes of this blog. I am glad you walked up to ask this question. Thanks again for your time.

After this question, I was not able to sleep, and after some quick digging into how distributed transactions work, I gave him a simple script that, when run on a DB, will let him know if it was involved in cross-database calls.

IF EXISTS (SELECT 1 FROM fn_dblog(NULL, NULL)
           WHERE Operation = 'LOP_PREP_XACT' AND [Master DBID] <> 0)
    PRINT 'Based on the active part of the transaction log read, there is evidence that this database has participated in cross-database transactions.'
ELSE
    PRINT 'Based on the active part of the transaction log read, there is no evidence of this database having participated in cross-database transactions.'

This solution is based on the undocumented function fn_dblog(), which reads transaction log records. It has been safe in my experience, and I have seen many people use it to great effect. You can also use another predicate to perform the same check.

-- Or We can use this alternative predicate:
SELECT 1 FROM fn_dblog(NULL, NULL) WHERE Operation = 'LOP_PREP_XACT' AND [Master XDESID] <> '0000:00000000'

Both these options are based on the fn_dblog() function and the active part of your transaction log. I am sure this is not a foolproof mechanism, but it is a good start for folks who want this information.

Having said that, I am keen on knowing whether any of you have ever encountered such requirements, and what your solution was. Do drop a line in the comments; I would love to learn from you too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Split Comma Separated List Without Using a Function

The best way to learn something is to revisit some of the older blogs and then look for a more efficient way to work on the same problem. Recently I was looking at the forums and found a question come up again and again about making a comma-separated list. This is not new on this blog, and if you search you will find a few posts. A long time ago I wrote the blogs below to show the values stored in a table as a comma-separated list. This post is not a rehash of what was written before, but a newer way to solve the problem again.

SQL SERVER – Creating Comma Separate Values List from Table – UDF – SP
SQL SERVER – Comma Separated Values (CSV) from Table Column

Recently someone asked me the reverse of that. They were storing comma-separated values in a column of a table. This table was getting populated from some source via an SSIS package. Here is a simplified version of the problem I was presented with.

DECLARE @t TABLE
(
EmployeeID INT,
Certs VARCHAR(8000)
)
INSERT @t VALUES (1,'B.E.,MCA, MCDBA, PGDCA'), (2,'M.Com.,B.Sc.'), (3,'M.Sc.,M.Tech.')

This is how the data looks in the table.

With the above data available, they wanted it split into individual values. After searching for possible solutions, here is the query I came up with:

SELECT EmployeeID,
       LTRIM(RTRIM(m.n.value('.[1]', 'varchar(8000)'))) AS Certs
FROM
(
    SELECT EmployeeID,
           CAST('<XMLRoot><RowData>' + REPLACE(Certs, ',', '</RowData><RowData>') + '</RowData></XMLRoot>' AS XML) AS x
    FROM @t
) t
CROSS APPLY x.nodes('/XMLRoot/RowData') m(n)

Output looks like below.

Hope this helps you in your project. Let me know if you are able to find a different solution to the same problem. The best way to learn is to learn from each other.
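As an aside, the “without using a function” constraint here refers to user-defined functions. If you are on SQL Server 2016 or later, the built-in STRING_SPLIT function offers a simpler alternative; the sketch below runs against the same @t table variable (in the same batch). Note that STRING_SPLIT does not guarantee the order of the returned values.

-- Built-in splitter, available from SQL Server 2016 (compatibility level 130).
SELECT EmployeeID,
       LTRIM(RTRIM(value)) AS Certs
FROM @t
CROSS APPLY STRING_SPLIT(Certs, ',');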

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to View the Dirty Pages In Memory of a Database?

I always get an opportunity to talk on topics that are basic from time to time. As I was preparing for GIDS 2015 (Great Indian Developer Summit) this year, I vividly remembered one of the conversations I had with an attendee last year at our Pluralsight booth. Most of these conversations are forgotten, but this one was unique. During this conversation, the attendee was trying to learn the basics and was curious to listen to me. I always get carried away and spend quite some time with folks who are eager to learn. In this context, I was talking about how RDBMSs use the concept of a write-ahead transaction log for maintaining ACID properties. The response I got was interesting – “Hey Pinal, is there a way to know which of the pages in memory are currently dirty? Do you know how to get this?”

As I was preparing for this year’s session, this question was on top of my mind, and I thought let me take a moment in this blog to show you how this can be achieved. Again, almost everything that happens inside SQL Server can be observed through the dynamic management views, and it is important for us to know which ones to use.

T-SQL Scripts

Create a dummy database for the test; we will also enable a trace flag for this demo.

CREATE DATABASE DirtyPagesDB
GO
USE DirtyPagesDB
GO
-- Disable automatic checkpoints so that dirty data isn't flushed
DBCC TRACEON(3505, -1);
DBCC TRACESTATUS();
GO

Let us create a table and insert some values. We will also flush the pages using CHECKPOINT.

CREATE TABLE t1 (Speaker_Bio CHAR(8000))
GO
INSERT INTO t1 VALUES ('SQL'),('Authority')
GO
CHECKPOINT

Let us check whether any pages are currently dirty in our buffer pool. Since we have just issued a checkpoint, the query below will return no rows.

-- Get the rows of dirtied pages
SELECT database_name = d.name,
       object_name =
           CASE au.type
               WHEN 1 THEN o1.name
               WHEN 2 THEN o2.name
               WHEN 3 THEN o1.name
           END,
       object_id =
           CASE au.type
               WHEN 1 THEN p1.object_id
               WHEN 2 THEN p2.object_id
               WHEN 3 THEN p1.object_id
           END,
       index_id =
           CASE au.type
               WHEN 1 THEN p1.index_id
               WHEN 2 THEN p2.index_id
               WHEN 3 THEN p1.index_id
           END,
       bd.file_id,
       bd.page_id,
       bd.page_type,
       bd.page_level
FROM sys.dm_os_buffer_descriptors bd
INNER JOIN sys.databases d
    ON bd.database_id = d.database_id
INNER JOIN sys.allocation_units au
    ON bd.allocation_unit_id = au.allocation_unit_id
LEFT JOIN sys.partitions p1
    ON au.container_id = p1.hobt_id
LEFT JOIN sys.partitions p2
    ON au.container_id = p2.partition_id
LEFT JOIN sys.objects o1
    ON p1.object_id = o1.object_id
LEFT JOIN sys.objects o2
    ON p2.object_id = o2.object_id
WHERE bd.is_modified = 1
  AND d.name = 'DirtyPagesDB'
  AND (o1.name = 't1' OR o2.name = 't1');
GO

Since we got no rows, let us insert a row and check again.

INSERT INTO t1 VALUES ('Pinal')
GO

Go ahead and run the big query above again. You are likely to see two rows: one for the IAM page and one for the data page. If you run CHECKPOINT again, these will also get flushed to the data file.
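If you just want a quick count rather than the page-level detail, a smaller sketch like the one below aggregates dirty pages per database; this is an extra convenience query of mine, not part of the original demo.

-- Dirty page count per database, from the same buffer descriptors DMV.
SELECT d.name AS database_name,
       COUNT(*) AS dirty_pages
FROM sys.dm_os_buffer_descriptors bd
INNER JOIN sys.databases d
    ON bd.database_id = d.database_id
WHERE bd.is_modified = 1
GROUP BY d.name
ORDER BY dirty_pages DESC;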

-- Clean up
USE MASTER
GO
DROP DATABASE DirtyPagesDB
GO

This topic was on my mind for a long time to research and write about. I am sure this will be good learning for my blog readers too. Do let me know if this was an interesting learning for you today.

Reference: Pinal Dave (http://blog.sqlauthority.com)