SQL SERVER – Performance counter missing! DMV sys.dm_os_performance_counters is Empty

Performance tuning is the most searched-for topic on this blog. Almost every single day, the top searches that land here are around performance or around some random error one might receive from an application. If you are a DBA, the first high-level tool you might use to watch performance data is PerfMon. Interestingly, this PerfMon data is also exposed via DMVs.

One uncommon issue, which I found on one of my client's SQL Server boxes, was getting zero rows from the sys.dm_os_performance_counters dynamic management view.
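
Here is a quick way to confirm the symptom; on a healthy instance this query returns hundreds or thousands of rows, while on the affected server it returned zero:

-- On a healthy instance this returns hundreds or thousands of counter rows;
-- zero rows means the SQL Server performance counters are not loaded
SELECT COUNT(*) AS counter_rows
FROM sys.dm_os_performance_counters;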

My first question was – “did it ever show any values?” and the answer was “yes, it was working, and it broke when we changed the service account”.

My next question was – how did you change the service account? And as expected, the answer was – we used services.msc. Once again, I wanted to pull my hair out at seeing the same mistake again. There is plenty of documentation from Microsoft suggesting the use of SQL Server Configuration Manager for this. I asked them to show me the ERRORLOG.

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here is what pointed to the issue.

Error: 8319, Severity: 16, State: 1.
Windows kernel object ‘Global\SQL_90_MEMOBJ_MSSQLSERVER_0’ already exists. It’s not owned by the SQL Server service account. SQL Server performance counters are disabled.
Error: 3409, Severity: 16, State: 1.

Performance counter shared memory setup failed with error -1. Reinstall sqlctr.ini for this instance, and ensure that the instance login account has correct registry permissions.

Below are the steps that were taken to fix the problem.

  • Change the account that starts the SQL Server service and SQL Server Agent service to the Local System account.
  • Restart the SQL Server service and SQL Server Agent service to apply the change.
  • Grant the following permissions to the SQL Server account under Local Security Policy > User Rights Assignment:
    • Adjust memory quotas for a process
    • Log on as a service
    • Log on as a batch job
    • Lock pages in memory
    • Act as part of the operating system
    • Bypass traverse checking
    • Replace a process level token
  • Add the SQL Server domain account to the SQL Server groups on the server.
  • Change the account that starts the SQL Server service and SQL Server Agent service back to the SQL Server domain account.
  • Restart the SQL Server service and SQL Server Agent service.
  • Check your ERRORLOG and performance counters.

This is one of the reasons I strongly recommend changing the service account from SQL Server Configuration Manager. The above worked for my friend. Have you faced a similar issue?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Monitoring and Troubleshooting SQL Server Got Easy with Diagnostics Tool

Blogging has never been an easy task. Close to 9 years of non-stop learning and sharing have got me into a rhythm where I rarely miss a single day without writing a post. Over these years, one topic area has drawn the most hits by far: performance and troubleshooting. Even though these topics have matured and been well covered over the years, people are still searching for them.

This access pattern always baffles me, and I wondered what could be done to answer such questions from budding new-age DBAs. At the recent SQLPass conference, I met a number of them who came to me asking how to become an effective DBA. I generally try to understand their situations and look for solutions. One of the DBAs told me he was under constant pressure because his SQL Server would run fine at the start of the week, yet end users would complain that the servers were unresponsive or the performance was unacceptable as the week went on. He had no clue why this behavior was happening. He wanted some help here, and at the time I had little to suggest.

Getting started with SQL Diagnostic Manager 10.0

I was recently playing with a number of tools in the market and I bumped into IDERA's SQL Diagnostic Manager. They were releasing a new version, and I wanted to see how it was different. The first thing that struck me was the ability for DBAs and developers to create automated baselines. This took me by surprise because the requirement was fresh in my mind from the recent trip.

After installation, go to the properties page of the server and open the “Baseline Configuration” tab. This gives you the ability to set automated baselines through the week. In the case below, I have gone ahead and created one through the week from 8 AM – 6 PM. This is exactly the timeframe when production is at maximum stress levels.


What to monitor?

The next phase is to decide what we are going to monitor. The “Analysis Configuration” section already has pre-set configurations available. In the example below, I have used “Online Transaction Processing” and selected some of the readily available “Categories”. I personally felt such jump-start configurations can surely help and guide new DBAs who are unsure about what is happening in the system.


What I would have loved to see is a few tooltips while selecting each of the categories, because it would make choosing easier. Having said that, it was pretty easy for me to make the selections, as they were almost self-explanatory.

Under “Advanced settings”, I found further settings that were interesting: some of the rules can run by default or can be excluded from analysis. Some of these rules are pretty advanced and interesting because they can easily be missed in a normal run. Here, some of the tempdb rules caught my eye, and I have shown them below for reference.


If that was not enough, in “Filter Settings” we can remove unnecessary databases or add multiple databases of our choice. This is powerful on consolidated servers: we don’t need to waste effort analyzing the smaller databases and can focus on the databases of interest for a given application.


I just wish there were some way to customize some of the rules rather than having them fixed. But the hundreds of rules already available are pretty exhaustive to start with, and I am sure an advanced user will build on some of them. Cases like the number of VLFs created differ between versions and are sometimes obsolete in newer versions.

Web is the new SSMS

Though the desktop version is powerful and has tons of reports to show by default, I think the web interface has improved by leaps and bounds in this release of SQL Diagnostic Manager. The web interface to the monitoring server can reveal insights remotely, with no installation required. The initial install of a 220+ MB file and its configuration can sometimes be daunting; once configured, we can use the friendly web interface to visualize what is happening on the server.

Below is a typical workload view: you can see that my server suddenly peaked in the late evening when a batch service was initiated. This is typically how people would check their server’s vital stats remotely to see if something is going wrong.


When such anomalies are detected, we can get into the “Top N” queries to see what is going wrong: which databases are causing the heavy workload, and which queries are consuming the most reads/writes in the system. I felt this snapshot was very useful and made a lot of sense.


Attention to detail, like “when the last autogrowth happened”, is something I felt was quite useful. From the above, I can also see that there is a lot of tempdb activity currently in the system, which helps me understand how the developers have written their code and where I need to optimize.

Self-service is the new silver bullet

Though I was planning to wrap up this blog, I found a unique customization feature that caught my eye and needed a mention here. You can build your very own dashboards using the metrics that have already been collected. The usual performance counters are available from the web version, and you can build your own dashboard through widget configuration. As you can see, I am keeping my dashboard simple, but these can show the health of multiple instances across the network, depending on configuration.


This ability to have a central web view of all the servers a DBA needs to monitor is critical in large installations, where DBAs watch at least 8-10 servers at any moment.


Above is a classic example of how I created my own SQLAuthority dashboard based on the servers that are of interest to me. I am sure your views will be more complex and complete.

As I conclude, I will let you try the analysis tool, which gives ready solutions to some of the most common problems; many more options are being added. Since I started with baselines, let me close by seeing how the baselines have been performing over the week using the “Baseline Visualizer”.


I am sure you will find some of these interesting and will share your experience as you play around with the tool. Do let me know via comments what you found interesting.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #051 – Actual Execution Plan vs. Estimated Execution Plan

In a recent interview, I met quite a few interesting candidates. It is common for interviewers to ask many questions during an interview, but there are times when candidates ask questions back as well. I was asked the following question recently.

Question: Do you use Actual Execution plan or Estimated Execution plan?

Answer: I always use the Actual Execution Plan as it is relatively accurate.

Why not the Estimated Execution Plan? Because it is not as accurate. Sometimes it is easier or useful to know the plan without running the query, but whenever possible I just run the query and get the correct, accurate execution plan.

Shortcut for Display Estimated Execution Plan: CTRL + L
Shortcut for Include Actual Execution Plan: CTRL + M

You can use SSMS to include the execution plan as well.
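
If you prefer T-SQL over the SSMS toolbar, here is a minimal sketch of the equivalent session options; the query against AdventureWorks is just an illustrative example.

-- Estimated plan: returns the plan XML without executing the query
SET SHOWPLAN_XML ON;
GO
SELECT City FROM Person.Address WHERE PostalCode = '98052';
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: executes the query and returns the plan with runtime values
SET STATISTICS XML ON;
GO
SELECT City FROM Person.Address WHERE PostalCode = '98052';
GO
SET STATISTICS XML OFF;
GO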


Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER 2016 – Comparing Execution Plans

New tools bring new capabilities, and sometimes these are ingrained so deep in the product that we can easily miss them. SQL Server 2016 is no different. After installing the CTP, I was pleasantly surprised to see this addition to help DBAs who are doing performance tuning and query tuning activities.

So what am I talking about? Think of a scenario wherein you are doing some testing in your QA environment. You are troubleshooting a production issue and find that the very same scenario works very well in your test environment. Even though you copied the data from the production database to the QA server less than a month ago, you are still baffled by the question – why the difference?

The next thing everyone wants is to compare the execution plans for the same query between the production and test environments. Execution plans on a production server can sometimes run to pages, and it is difficult to spot the differences visually.

SQL Server 2016 – Plan comparison

If you have the execution plan from your production environment, go ahead and open it in SQL Server 2016 Management Studio and right-click to get the following option:


You can see an interesting addition here: select “Compare Showplan”. Now, choose the plan that was generated from your test environment. This will bring both execution plans into a side-by-side view, as shown below. Look at the purple color on the nodes.


This indicates that the nodes are the same but differ in some properties. Now it is up to us to right-click and bring up the Properties tab.


Here we can go ahead and compare each of the node properties to find where the plans differ. This is an awesome capability added in the new version.

I am sure you will find this feature of great use in query tuning exercises in your environments. Having said that, one interesting thing I have seen is that if you take a live “Actual Execution Plan” and then try to compare it with a saved .sqlplan file, it will come up with an error as shown below.

Do let me know if you would use this feature, and if you have already used it – what are some of the enhancements you wish this comparison capability had? I would like to learn from you for sure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Save and Send Execution Plan Via Email

Here is an interesting question I received the other day.

Hi Pinal,

I have a very big graphical execution plan and I want to send it to my friend. My first thought was to send him the query to execute on his own machine, but he does not have the same database I am using. I also do not want to send him my large database.

How can I save the execution plan to my computer and send it to him via email? Additionally, my email supports a maximum of only 20 MB, so please also guide me on what to do if the size goes over 20 MB.

Thanks!

It is indeed a very interesting question. Here is the simplest way to do what the user is asking in the email.

Here is a quick tutorial.

Step 1: Write any query and click on the Include Actual Execution Plan icon on the top toolbar, or use the keyboard shortcut CTRL+M.


Step 2: Click on Execute Query


Step 3: It will execute the query and bring up the Execution Plan tab. Click on the Execution Plan tab.


Step 4: Clicking on the Execution Plan tab brings up the graphical query execution plan. Right-click anywhere on it and a menu comes up. In this menu, click on Save Execution Plan As…


Step 5: Save your file with the extension .sqlplan.


That’s it! You are done. Now you can send the .sqlplan file via email to anyone. When someone opens the file, it opens in SQL Server Management Studio as displayed in the following image. The recipient does not need to have any database on their system; they can view the plan just fine from the .sqlplan file. And because the file is plain XML, it compresses very well – if it ever exceeds the 20 MB email limit, zip it before sending.


You can also open the .sqlplan file in Notepad, and you will see the XML details of the plan.
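
As a side note, if the query has already executed on the server, its plan XML can also be pulled from the plan cache and saved with a .sqlplan extension. Here is a small sketch; the LIKE filter is just a hypothetical way to locate your query.

-- Sketch: locate a cached plan by its query text and return the plan XML
SELECT TOP 10 st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%Person.Address%';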


Reference : Pinal Dave (http://blog.SQLAuthority.com)

Interview Question of the Week #039 – What is Included Column Index in SQL Server 2005 and Onwards?

You may find it interesting that many people still do not know what an included column index is in SQL Server. I have seen this personally among the people around me.

Question: What is Included Column Index?

Answer: 

In SQL Server 2005, the functionality of non-clustered indexes was extended by allowing non-key columns to be added to the leaf level of the index. Non-key columns help create covering indexes: by including them, you can create non-clustered indexes that cover more queries. The Database Engine does not consider non-key columns when calculating the number of index key columns or the index key size, so non-key columns can be included to avoid exceeding the index limitations of a maximum of 16 key columns and a maximum index key size of 900 bytes. Another advantage is that a non-key column can use data types that are not generally allowed as index key columns.

In the following example, the column FileName is varchar(400), which would push the index key size beyond what is allowed. If we still want to include it in our covering index to gain performance, we can do so using the keyword INCLUDE.

USE AdventureWorks
GO
CREATE INDEX IX_Document_Title
ON Production.Document (Title, Revision)
INCLUDE (FileName)

Non-key columns can be included only in non-clustered indexes. A column can’t appear both as a key column and in the INCLUDE list, and column names can’t be repeated in the INCLUDE list. A non-key column can be dropped from a table only after the index that includes it is dropped first. For an included column index to exist, there must be at least one key column, with a maximum of 16 key columns and 1023 included columns.

Avoid adding unnecessary columns. Adding too many index columns, key or non-key, negatively affects performance: fewer index rows fit on a page, which can increase I/O and reduce cache efficiency; more disk space is required to store the index; and index maintenance may increase the time it takes to perform inserts, updates, or deletes against the underlying table or indexed view.

Another example to test: create the following index on the AdventureWorks database in SQL Server 2005.

USE AdventureWorks
GO
CREATE NONCLUSTERED INDEX IX_Address_PostalCode
ON Person.Address (PostalCode)
INCLUDE (AddressLine1, AddressLine2, City, StateProvinceID)
GO

Test the performance of the following query before and after creating the index. The performance improvement is significant.

SELECT AddressLine1, AddressLine2, City, StateProvinceID, PostalCode
FROM Person.Address
WHERE PostalCode BETWEEN '98000'
AND '99999';
GO
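
One simple way to measure the before/after difference is to compare the logical reads reported by SET STATISTICS IO:

-- Compare logical reads with and without the covering index
SET STATISTICS IO ON;
SELECT AddressLine1, AddressLine2, City, StateProvinceID, PostalCode
FROM Person.Address
WHERE PostalCode BETWEEN '98000' AND '99999';
SET STATISTICS IO OFF;
-- Before the index: a clustered index scan with many logical reads;
-- after: a seek on IX_Address_PostalCode with far fewer reads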

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script: Finding queries without JOIN Predicates

I was in a developer’s shoes for close to five years before reinventing myself to do things that made me even better at coding. Being a developer is a tough challenge these days: life and technology change so rapidly that we seldom get time to pause from our busy schedules and learn what is new. This constant effort to build new competencies and read the documentation of newly released products is something I do religiously.

This blog could never have come this far if I had not kept doing this to date. The more I read and learn, the better the opportunities to share become. Carrying some legacy baggage from time to time is inevitable, and I see folks in the industry hold onto certain practices without even knowing why they do so.

I am writing this script blog because, during a code review at a software company, I saw code that was not adhering to ANSI join standards. I quickly jumped on it and asked – why? The developer gave me the usual long answer: no time, release pressures, and lastly – what is wrong, it works, right?

I have respect for such pressures as you build your product, but writing code that performs well and is easy to maintain is one of the critical steps. After explaining this, I showed the warning in the execution plan indicating we had done something that can cause poor performance.

I quickly came up with this script to help the team identify all the queries written in this fashion and requested that they change them at the earliest.

WITH XMLNAMESPACES('http://schemas.microsoft.com/sqlserver/2004/07/showplan' AS p)
SELECT st.text,
       qp.query_plan
FROM (SELECT TOP 50 *
      FROM sys.dm_exec_query_stats
      ORDER BY total_worker_time DESC) AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//p:RelOp/p:Warnings[(@NoJoinPredicate[.="1"])]') AS q(n);

The warning appears because poorly performing queries contain operators without join predicates. That means somewhere in your query you have logically written a CROSS JOIN without realizing it. Maybe you are using old-style join syntax (SELECT * FROM a, b, c), or maybe you really did intend to write this code. Either way, this operation can produce a huge number of rows, so it’s best to make sure it was your intent.
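
To make the difference concrete, here is a sketch with hypothetical Orders and Customers tables; the first query silently produces a cross product, while the second makes the join predicate explicit.

-- Old-style syntax with the join predicate forgotten:
-- every order is matched with every customer (a cross product)
SELECT o.OrderID, c.CustomerName
FROM Orders o, Customers c;

-- ANSI join syntax makes the predicate explicit and hard to forget
SELECT o.OrderID, c.CustomerName
FROM Orders o
INNER JOIN Customers c ON c.CustomerID = o.CustomerID;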

The only way out is to rewrite your query so that you always apply a join predicate. This is a script blog – if you run the script in your environment, do you get anything? Find out and keep me posted.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #028 – Few SQL Best Practices

Just last week, I moderated a SQL Server interview and heard a very interesting conversation.

Question – What best practices are you following for SQL Server?

When I heard this question, I thought the candidate would answer with many different things, but he was stunned and had little to add to the conversation. That reminded me of one of my older articles, which I believe still adds quite a lot of value here. Here is the article as an answer to the question.

Answer – 

1. Know your data and business application well.
Familiarize yourself with these sources; you must be aware of the data volume and distribution in your database.

2. Test your queries with realistic data.
A SQL statement tested with unrealistic data may behave differently when used in production. To ensure rigorous testing, the data distribution in the test environment must also closely resemble that in the production environment.

3. Write identical SQL statements in your applications.
Take full advantage of stored procedures and functions wherever possible. The benefit is a performance gain, since their execution plans are cached and reused.

4. Use indexes on the tables carefully.
Be sure to create all the necessary indexes on the tables. However, too many of them can degrade performance.

5. Make an indexed path available.
To take advantage of indexes, write your SQL in such a manner that an indexed path is available to it. Using SQL hints is one of the ways to ensure the index is used.

6. Understand the Optimizer.
Understand how the optimizer uses indexes and handles the WHERE, ORDER BY, and HAVING clauses, etc.

7. Think globally when acting locally.
Any changes you make in the database to tune one SQL statement may affect the performance of other statements used by applications and users.

8. The WHERE clause is crucial.
The following WHERE clauses would not use the index access path even if an index is available.
E.g. Table1.Col1 (comparison operator such as >, >=, <=) Table1.Col2; Table1.Col1 IS (NOT) NULL; Table1.Col1 NOT IN (value1, value2); Table1.Col1 != expression; Table1.Col1 LIKE '%pattern%'; NOT EXISTS subquery.
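
For example, with the Person.Address table used elsewhere in this article, a leading wildcard blocks the index access path while a trailing wildcard keeps it available:

-- A leading wildcard can't use the index access path (see item 8 above)
SELECT AddressID FROM Person.Address WHERE PostalCode LIKE '%980%';

-- A trailing wildcard keeps the predicate index-friendly
SELECT AddressID FROM Person.Address WHERE PostalCode LIKE '980%';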

9. Use WHERE instead of HAVING for record filtering.
Avoid using the HAVING clause along with GROUP BY on an indexed column.
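
A quick sketch of the same filter written both ways; both return the same result, but the WHERE version filters rows before grouping:

-- HAVING filters logically after grouping: every row is aggregated first
SELECT City, COUNT(*) AS Addresses
FROM Person.Address
GROUP BY City
HAVING City = 'Seattle';

-- WHERE filters before grouping: less work for the aggregate
SELECT City, COUNT(*) AS Addresses
FROM Person.Address
WHERE City = 'Seattle'
GROUP BY City;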

10. Specify the leading index columns in WHERE clauses.
For a composite index, the query would use the index as long as the leading column of the index is specified in the WHERE clause.
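
For instance, assuming a hypothetical Employees table with a composite index on (LastName, FirstName):

-- The index can be used: the leading column appears in the WHERE clause
SELECT LastName, FirstName FROM Employees WHERE LastName = 'Smith';

-- An index seek is not possible: only the second column is filtered
SELECT LastName, FirstName FROM Employees WHERE FirstName = 'John';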

11. Evaluate index scan vs. full table scan. (Index Only Searches Vs Large Table Scan, Minimize Table Passes)
If selecting more than 15 percent of the rows from a table, full table scan is usually faster than an index access path. An index is also not used if SQL Server has to perform implicit data conversion. When the percentage of table rows accessed is 15 percent or less, an index scan will work better because it results in multiple logical reads per row accessed, whereas a full table scan can read all the rows in a block in one logical read.

12. Use ORDER BY for index scan.
The SQL Server optimizer will use an index scan if the ORDER BY clause is on an indexed column. The following query illustrates this point.
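
A minimal sketch, assuming the IX_Address_PostalCode index created earlier in this article:

-- With an index on PostalCode, the ordered output can come straight
-- from an index scan instead of requiring a separate sort step
SELECT PostalCode
FROM Person.Address
ORDER BY PostalCode;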

13. Minimize table passes.
Usually, reducing the number of table passes in a SQL query results in better performance. Queries with fewer table passes mean faster queries.

14. Join tables in the proper order.
Always perform the most restrictive search first to filter out the maximum number of rows in the early phases of a multiple table join. This way, the optimizer will have to work with fewer rows in the subsequent phases of join, improving performance.

15. Redundancy is good in the WHERE clause.
Provide as much information as possible in the WHERE clause; it helps the optimizer infer conditions clearly.

16. Keep it simple, stupid.
Very complex SQL statements can overwhelm the optimizer; sometimes writing multiple, simpler SQL will yield better performance than a single complex SQL statement.

17. You can reach the same destination in different ways.
Each SQL may use a different access path and may perform differently.

18. Reduce network traffic and increase throughput.
Using T-SQL blocks instead of multiple separate SQL statements can achieve better performance and reduce network traffic. Stored procedures are better still, as they are stored in SQL Server and their execution plans are cached.

19. Better Hardware.
Better hardware always helps performance. SCSI drives, a RAID 10 array, multi-processor CPUs, and a 64-bit operating system improve performance by a great amount.

20. Avoid Cursors.
Using SQL Server cursors can result in some performance degradation in comparison with SELECT statements. Try to use a correlated subquery or derived tables if you need to perform row-by-row operations.
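
As an illustrative sketch against a hypothetical Products table, compare a row-by-row cursor with its set-based equivalent:

-- Row-by-row cursor: one UPDATE statement per row
DECLARE @ProductID INT;
DECLARE product_cursor CURSOR FOR SELECT ProductID FROM Products;
OPEN product_cursor;
FETCH NEXT FROM product_cursor INTO @ProductID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Products SET Price = Price * 1.10 WHERE ProductID = @ProductID;
    FETCH NEXT FROM product_cursor INTO @ProductID;
END
CLOSE product_cursor;
DEALLOCATE product_cursor;

-- Set-based equivalent: one statement does the same work
UPDATE Products SET Price = Price * 1.10;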

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Querying Performance Counters from SQL Server

Troubleshooting SQL Server is something almost every DBA has to go through in their career. It sounds simple, yet often we are clueless about where to start. Here is a scenario: you have been told that SQL Server is slow or unresponsive, and now you need to troubleshoot and figure out what is happening. Sounds simple, right? What would you do in this situation?

When I put this scenario to one of my DBA friends, their answer was simple: “Pinal, the first thing is to log onto the server and open Task Manager. Next, I will look at the basic parameters of how the CPU, memory, and IO are doing. Based on what I see at a macro level, we will troubleshoot accordingly.”

This seemed a simple yet logical way to see things. On further questioning, they mentioned using PerfMon counters as an ideal way to capture some of the current problems running inside SQL Server: capture the numbers, then troubleshoot later with other tools like Extended Events, Profiler, Activity Monitor, etc. This conversation interested me big time because I wanted to see how people query performance counters.

The basic approach is to launch Perfmon.exe and look at the various counters. We can also create a data collection template and use that. I am sure most of you are aware of this.

Let me take an unconventional approach to a simple requirement – querying Memory Grants Pending on a SQL Server box using other techniques.

SQL to Query Performance Counter

In the script below, we use two methods: one queries the performance counters, and the other uses a DMV.

-- Get memory grants pending (perfmon counter)
SELECT cntr_value AS memory_grants_pending
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE '%Memory%Grants%Pending%';
-- Get memory grants pending (DMV)
SELECT COUNT(*) AS memory_grants_pending
FROM sys.dm_exec_query_memory_grants
WHERE wait_time_ms IS NOT NULL;

This metric is the total number of memory grants waiting for workspace memory. If the value is consistently above zero, it can be a sign of memory pressure that is negatively impacting query processing. In that case, it is recommended to reevaluate memory allocations and/or available physical memory to relieve memory pressure on the instance.

If there are memory-related errors, we are likely to see error 8645 in our SQL Server ERRORLOG. How to troubleshoot SQL Server error 8645 is explained in the Microsoft KB.

Query Counter Using PowerShell

As we have done in various blog posts, we can also use PowerShell to query counters. A simple script I wrote looks like this:

Get-Counter -Counter "\SQLServer:Memory Manager\Memory Grants Pending" | Select-Object @{Name = "Memory Grants Pending"; Expression = {$_.CounterSamples.CookedValue}}

Please let me know if you have ever used any of these techniques in your environment to automate querying values from performance counters. How do you use these counter values to proactively mitigate problems? What are some of the techniques you have been using?

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #021 – Difference Between Index Seek and Index Scan (Table Scan)

Question: What is the difference between Index Seek and Index Scan?

Answer: Index Scan retrieves all the rows from the table. Index Seek retrieves selective rows from the table.

Index Scan:
Since a scan touches every row in the table whether or not it qualifies, the cost is proportional to the total number of rows in the table. Thus, a scan is an efficient strategy if the table is small or if most of the rows qualify for the predicate.

Index Seek:
Since a seek only touches rows that qualify and pages that contain these qualifying rows, the cost is proportional to the number of qualifying rows and pages rather than to the total number of rows in the table.

An index scan is nothing but scanning the data pages from the first page to the last page. If there is an index on a table but the query touches a large amount of data – say, retrieving more than 50 or 90 percent of the rows – then the optimizer will simply scan all the data pages to retrieve the rows. If there is no index at all, you will see a Table Scan in the execution plan instead.
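
To see both operators side by side, here is a small sketch against AdventureWorks Person.Address, assuming an index exists on PostalCode (such as the IX_Address_PostalCode index created earlier in this article):

-- Index Seek: a selective predicate on the indexed column
SELECT PostalCode FROM Person.Address WHERE PostalCode = '98052';

-- Index Scan: no predicate, so every row of the index is read
SELECT PostalCode FROM Person.Address;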


Reference: Pinal Dave (http://blog.sqlauthority.com)