SQL SERVER – Introduction of Showplan Warning

Vinod Kumar M is my very good friend and a renowned SQL Server expert. Vinod has worked with SQL Server extensively since joining the industry over a decade ago. Before joining Microsoft, he was a Microsoft MVP for SQL Server for more than 3 years. He now works with MTC as a Technology Architect. He is a well-known speaker at all major Microsoft and third-party technical conferences. Here is a very interesting blog post he sent on the subject of Execution Plans. Don’t forget to visit Vinod’s blog and follow him on Twitter.


SQL Server has a cost-based optimizer and can guide us at every step. Now assume you wanted to learn about performance tuning and you had a teacher to teach it: it would be much easier to learn, and most importantly, we would pick up the concepts and behavior much more quickly. On the same lines, in this post we are using SQL Server itself as our teacher and guide.

Previous Warnings

In versions prior to SQL Server 2012 we already had a few warnings in the plan that can help us diagnose slow query issues. Primarily there are two types of warnings: query-level warnings and operator-level warnings. In this blog post we will discuss some of the enhancements available with SQL Server 2012.

A typical example of a query-level warning is the missing index warning we are used to seeing inside Management Studio after executing a query.

Operator-level warnings are raised on individual operators. When SSMS sees this type of warning, it puts a yellow exclamation mark on the operator icon. The figure below shows this type of warning.

Prior to 2012, there were two warnings at the operator level: “No Join Predicates” and “Missing Column Statistics”. If we enable SET STATISTICS XML ON, we can get the “Missing Column Statistics” warning as:

<Warnings>
 <ColumnsWithNoStatistics>
 <ColumnReference Database="[missingstats]" Schema="[dbo]"
 Table="[missing_stats]" Column="name" />
 </ColumnsWithNoStatistics>
</Warnings>
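
To reproduce this warning end to end, here is a minimal sketch, assuming a database and table matching the XML above (both names are illustrative) and that auto-create statistics has been turned off so the optimizer has no statistics on the name column:

USE missingstats
GO
-- Assumption: with auto-create statistics disabled, no statistics
-- exist on the "name" column when the query below is compiled.
ALTER DATABASE missingstats SET AUTO_CREATE_STATISTICS OFF
GO
SET STATISTICS XML ON
GO
SELECT * FROM dbo.missing_stats WHERE name = 'SomeValue'
GO
SET STATISTICS XML OFF
GO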

New SQL Server 2012 Warnings

When we troubleshoot or tune a single query because it is slow, we normally just get the query plan. However, there are several trace events that have been very helpful in determining why queries are slow; in particular, the “Hash Warning” and “Sort Warnings” trace events.

In SQL Server 2012, the showplan will produce warnings if a hash join or a sort spills to tempdb because of low memory conditions. In this case, we don’t have to capture trace events just to see if there are any sort or hash warnings on the query. To understand this warning in a typical setup, here is the script. First let us create the objects required for this demo.

USE tempdb
GO
-- Preparing data
CREATE TABLE Plan_Warning
(id INT PRIMARY KEY CLUSTERED,
name VARCHAR(25), website CHAR(50))
GO
SET NOCOUNT ON
GO
-- Pumping in dummy Data
BEGIN TRAN
DECLARE @counter INT
SET @counter = 1
WHILE @counter <= 100000
BEGIN
    INSERT INTO Plan_Warning (id, name, website)
    VALUES (@counter, 'Vinod Kumar' + CONVERT(VARCHAR(10), @counter),
        'blogs.ExtremeExperts.com')
    SET @counter = @counter + 1
END
COMMIT TRAN
PRINT 'Load completed ...'
GO

Now execute the next query to look at the execution plan.

SET STATISTICS XML ON
GO
SELECT * FROM Plan_Warning
WHERE id >= 7000
ORDER BY name DESC OPTION (MAXDOP 1)
GO
SET STATISTICS XML OFF
GO

This executes the query and shows us the execution plan below. Click on the Showplan resultset. Alternatively, we can use Ctrl+M to include the actual execution plan if required.

The execution plan reads like this, and we can see a small warning symbol on the Sort operator. In the warning section we can see that the spill did happen to tempdb. This is fundamentally because we queried more than 93,000 rows and the sort could not fit in the memory available on our laptop.

If we open the plan XML we can find the node below. This is the same warning shown visually.

<Warnings>
 <SpillToTempDb SpillLevel="1" />
</Warnings>

This can happen for hash spills too. In that case we will see the warning on the Hash Match (join) node with the same warning about data spilling to tempdb, and the SpillLevel for hash joins will be 2 in the XML.
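
Here is a hedged sketch of a query against our demo table that can surface the hash spill warning. Whether the Hash Match operator actually spills depends on the memory available on your machine, so treat this as illustrative rather than guaranteed:

SET STATISTICS XML ON
GO
-- A hash join on an unindexed VARCHAR column over 100,000 rows;
-- under a constrained memory grant the Hash Match operator may
-- spill and show the SpillToTempDb warning with SpillLevel="2".
SELECT p1.id, p1.name
FROM Plan_Warning p1
INNER JOIN Plan_Warning p2 ON p1.name = p2.name
OPTION (MAXDOP 1)
GO
SET STATISTICS XML OFF
GO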

The new warnings don’t get populated in legacy execution plans. In other words, if we use SET STATISTICS PROFILE ON, we won’t get a plan with these details.

Final words

Though SQL Server shows the warnings, we have not yet explained what to do if you get them:

  • Spills happen because we have selected more data in a single query than expected; this is often the case for reporting queries.
  • Try to add an additional WHERE clause to reduce the data read from the big table.
  • When it is not possible to add an additional WHERE clause, we highly recommend tuning the query and monitoring tempdb growth and contention.

Hope you had fun learning something that SQL Server taught us. It is critical that we keep exploring and looking for this fine print with every single release of SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

This guest post is by Vinod Kumar. Vinod has worked with SQL Server extensively since joining the industry over a decade ago, working on various versions from SQL Server 7.0 and Oracle 7.3 to other database technologies. He now works with the Microsoft Technology Center (MTC) as a Technology Architect.

Let us read the blog post in Vinod’s own voice.


Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I have written about transaction logs and recovery on my blog, and simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support, etc. – there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using mechanisms like locking and blocking, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

  • Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
  • The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
  • Concurrency is reduced when a process that is changing data prevents other processes from reading that data or when a process that is reading data prevents other processes from changing that data.
  • Concurrency is also affected when multiple processes are attempting to change the same data simultaneously.
  • Two approaches to managing concurrent data access:
    • Optimistic Concurrency Model
    • Pessimistic Concurrency Model

Concurrency Models

  • Pessimistic Concurrency
    • Default behavior: acquire locks to block access to data that another process is using.
    • Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    • Avoids conflicts by acquiring a lock on data being read so no other processes can modify that data.
    • Also acquires locks on data being modified so no other processes can access the data for either reading or modifying.
    • Readers block writers; writers block readers and writers.
  • Optimistic Concurrency
    • Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    • Default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    • Older versions of the data are saved so a process reading data can see the data as it was when the process started reading and not affected by any changes being made to that data.
    • Processes modifying the data are unaffected by processes reading the data because the reader is accessing a saved version of the data rows.
    • Readers do not block writers and writers do not block readers, but writers can and will block writers. (The database setting that enables this behavior for read committed is sketched after this list.)
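
For reference, the optimistic flavor of read committed is a per-database setting. Here is a minimal sketch, assuming an illustrative database name; the option needs exclusive access to the database, hence the rollback clause:

-- Turn on row versioning for READ COMMITTED readers.
-- WITH ROLLBACK IMMEDIATE disconnects other sessions so the option can be set.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE
GO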

Transaction Processing

  • A transaction is the basic unit of work in SQL Server.
  • A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: the start is marked with BEGIN TRAN and the end by COMMIT TRAN or ROLLBACK TRAN – see the sketch after this list).
  • Transactions must exhibit all the ACID properties of a transaction.
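
Here is a minimal sketch of an explicit transaction; the table and column names are purely illustrative:

-- Assumed illustrative table: dbo.Accounts (AccountId INT, Balance MONEY).
-- Neither UPDATE is final until COMMIT TRAN; ROLLBACK TRAN would undo both.
BEGIN TRAN
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1
UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2
COMMIT TRAN
GO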

ACID Properties

  • Transaction processing must guarantee the consistency and recoverability of SQL Server databases.
  • Ensures all transactions are performed as a single unit of work regardless of hardware or system failure.
  • A – Atomicity, C – Consistency, I – Isolation, D – Durability
    • Atomicity: Each transaction is treated as all or nothing – it either commits or aborts.
    • Consistency: ensures that a transaction won’t allow the system to arrive at an incorrect logical state – the data must always be logically correct.  Consistency is honored even in the event of a system failure.
    • Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    • Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data.

Transaction Dependencies

  • In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems).
    • Lost Updates: Occur when two processes read the same data, both manipulate the data, changing its value, and then both try to write the new value back. The second process might overwrite the first update completely.
    • Dirty Reads: Occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state. (A two-session sketch follows this list.)
    • Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    • Phantoms: Occur when membership in a set changes. They occur if two SELECT operations using the same predicate in the same transaction return a different number of rows.
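
As a hedged two-session sketch of a dirty read (using the same illustrative dbo.Accounts table as earlier):

-- Session 1: change data but do not commit yet.
BEGIN TRAN
UPDATE dbo.Accounts SET Balance = 0 WHERE AccountId = 1
-- (transaction intentionally left open)

-- Session 2: read the uncommitted change.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1
-- returns 0: a dirty read

-- Session 1: undo the change; session 2 saw data that logically never existed.
ROLLBACK TRAN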

Isolation Levels

  • SQL Server supports 5 isolation levels that control the behavior of read operations (a short script for switching between them follows this list).
  • Read Uncommitted
    • All behaviors except for lost updates are possible.
    • Implemented by allowing the read operations to not take any locks, and because of this, it won’t be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    • When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice.
    • This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number, which your scan encounters later.
    • Performing an allocation order scan under read uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn’t been read yet is updated and moved to a lower page number that has already been read.
  • Read Committed
    • Two varieties of read committed isolation: optimistic and pessimistic (default).
    • Ensures that a read never reads data that another application hasn’t committed.
    • If another transaction is updating data and has exclusive locks on data, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that are visited, which means that data might be unavailable for others to use. A share lock doesn’t prevent others from reading but prevents them from updating.
    • Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. Data being changed is still locked but other processes can see the previous versions of the data as it was before the update operation began.
  • Repeatable Read
    • This is a Pessimistic isolation level.
    • Ensures that if a transaction revisits data or a query is reissued the data doesn’t change.
    • That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user’s transaction, because no changes can be made by other transactions. However, this does allow phantom rows to appear.
    • Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.
  • Snapshot
    • Snapshot Isolation (SI) is an optimistic isolation level.
    • Allows for processes to read older versions of committed data if the current version is locked.
    • Difference between snapshot and read committed has to do with how old the older versions have to be.
    • It’s possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.
  • Serializable
    • This is the strongest of the pessimistic isolation level.
    • Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim, i.e., phantoms do not appear.
    • Preventing phantoms is another desirable safeguard, but cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes.
    • In addition, the serializable isolation level requires that you lock not only data that has been read but also data that doesn’t exist. For example, if a SELECT returned no rows, you want it to still return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock.
    • Key-range locks require that there be an index on the column that defines the range of values. If there is no index on the column, serializable isolation requires a table lock.
    • Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.
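
Here is a minimal sketch of switching a session between these levels (database and table names are illustrative; SNAPSHOT additionally requires the database option shown first):

-- Snapshot isolation must be allowed at the database level before use.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
GO
BEGIN TRAN
-- Reads the last committed version of the row, even if another
-- session currently holds an exclusive lock on it.
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1
COMMIT TRAN
GO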

Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental building blocks that make concurrency possible.

Now if you are with me – let us continue learning with SQL Server Locking Basics.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 3 Online SQL Courses at Pluralsight and Free Learning Resources

Usain Bolt is an inspiration for all. He broke his own record multiple times because he wanted to do better! Read more about him on Wikipedia. He is great and indeed the fastest man on the planet.

Usain Bolt – World’s Fastest Man

“Can you teach me SQL Server Performance Tuning?”

This is one of the most popular questions which I receive all the time. The answer is YES.

I would love to do performance tuning training for anyone, anywhere.  It is my favorite thing to do, and it is my favorite thing to train others in.  If possible, I would love to do training 24 hours a day, 7 days a week, 365 days a year.  To me, it doesn’t feel like a job.

Of course, as much as I would love to do performance tuning 24/7/365, obviously I am just one human being and can only be in one place at one time.  It is also very difficult to train more than one person at a time, especially when the people are at different levels.  I am also limited by geography.  I live in India, and adjust to my own time zone.  Trying to teach a live course from India to someone whose time zone is 12 or more hours off of mine is very difficult.  If I am trying to teach at 2 am, I am sure I am not at my best!

There was only one solution to scale – online trainings. I have built 3 different courses on SQL Server Performance Tuning with Pluralsight. Now I have no problem – I am 100% scalable and available 24/7, 365 days a year. You can make me say the same things again and again till you get it right. I am on your mobile and PC, as well as on Xbox. This is why I am such a big fan of online courses.  I have recorded many performance tuning classes and you can easily access them online, at your own time.  And don’t think that just because these aren’t live classes you won’t be able to get any feedback from me.  I encourage all my viewers to go ahead and ask me questions by e-mail, Twitter, Facebook, or whatever way you can get a hold of me.

Here are the details of three of my courses with Pluralsight. I suggest you go over the description of each course. As the author of the courses, I have a few FREE codes for watching them. Please leave a comment with your valid email address, and I will send a few of them to random winners.

SQL Server Performance: Introduction to Query Tuning 

SQL Server performance tuning is an art to master – for developers and DBAs alike. This course takes a systematic approach to planning, analyzing, debugging and troubleshooting common query-related performance problems. This includes an introduction to understanding execution plans inside SQL Server.

In this almost four-hour course, we cover the following important concepts.

  • Introduction 10:22
  • Execution Plan Basics 45:59
  • Essential Indexing Techniques 20:19
  • Query Design for Performance 50:16
  • Performance Tuning Tools 01:15:14
  • Tips and Tricks 25:53
  • Checklist: Performance Tuning 07:13

The duration of each module is mentioned beside the name of the module.

SQL Server Performance: Indexing Basics

This course teaches you how to master the art of performance tuning SQL Server by better understanding indexes.

In this almost two-hour course, we cover the following important concepts.

  • Introduction 02:03
  • Fundamentals of Indexing 22:21
  • Practical Indexing Implementation Techniques 37:25
  • Index Maintenance 16:33
  • Introduction to ColumnstoreIndex 08:06
  • Indexing Practical Performance Tips and Tricks 24:56
  • Checklist: Index and Performance 07:29

The duration of each module is mentioned beside the name of the module.

SQL Server Questions and Answers

This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works.

In this course of almost 2 hours and 15 minutes, we cover the following important concepts.

  • Introduction 00:54
  • Retrieving IDENTITY value using @@IDENTITY 08:38
  • Concepts Related to Identity Values 04:15
  • Difference between WHERE and HAVING 05:52
  • Order in WHERE clause 07:29
  • Concepts Around Temporary Tables and Table Variables 09:03
  • Are stored procedures pre-compiled? 05:09
  • UNIQUE INDEX and NULLs problem 06:40
  • DELETE VS TRUNCATE 06:07
  • Locks and Duration of Transactions 15:11
  • Nested Transaction and Rollback 09:16
  • Understanding Date/Time Datatypes 07:40
  • Differences between VARCHAR and NVARCHAR datatypes 06:38
  • Precedence of DENY and GRANT security permissions 05:29
  • Identify Blocking Process 06:37
  • NULLS usage with Dynamic SQL 08:03
  • Appendix Tips and Tricks with Tools 20:44

The duration of each module is mentioned beside the name of the module.

SQL in Sixty Seconds

You will have to log in and subscribe to the courses to view them. Here are my free video learning resources: SQL in Sixty Seconds. These are 60-second videos which I have built on various subjects related to SQL Server. Do let me know what you think about them.

Here are three of my latest videos:

You can watch and learn at your own pace.  Then you can easily ask me any questions you have.  E-mail is easiest, but for really tough questions I’m willing to talk on Skype, Gtalk, or even Facebook chat.  Please do watch and then talk with me – I am always available on the internet!

Here is the video of the world’s fastest man. Usain St. Leo Bolt inspires us all to do better than our best. We can take our own records to the next level. We can all improve if we have the will and dedication.  Watch the video from the 5:00 mark.

http://youtu.be/2O7K-8G2nwU?t=5m3s

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – NTFS File System Performance for SQL Server

Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying a suggestion without proper testing can damage your system.

Question: “Pinal, we have 80 GB of data, including all the database files, on an NTFS file system. We have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise.”

When I receive questions like the one listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built with the community knowledge base. Today I need you to help me complete this list. I will start the list and you help me complete it.

NTFS File System Performance Best Practices for SQL Server

  • Disable indexing on disk volumes
  • Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
  • Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
  • Keep some space empty (let us say 15% for reference) on the drive if possible
  • (Only on the filestream data storage volume) Defragment the volume
  • Add your suggestions here…

The one topic on which I often get a pretty big debate is the NTFS allocation unit size. I have seen that on a disk volume which stores filestream data, increasing the allocation unit size from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. This is where I would like to request you to share your experience related to NTFS allocation unit size.
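
As a hedged sketch, here is how the current values can be checked from an elevated command prompt before changing anything (the drive letter is illustrative; as noted above, test on a non-production box first):

REM Check the current 8.3 name and last-access settings
FSUTIL BEHAVIOR QUERY DISABLE8DOT3
FSUTIL BEHAVIOR QUERY DISABLELASTACCESS
REM Check the current allocation unit size ("Bytes Per Cluster") of a volume
FSUTIL FSINFO NTFSINFO D: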

If you do not agree with any of the above suggestions, leave a comment with reference and I will modify it. Please note that above list prepared assuming the SQL Server application is only running on the computer system.

The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Video – Beginning Performance Tuning with SQL Server Execution Plan

Traveling can be the most interesting or the most exhausting experience. However, traveling is always the most enlightening experience one can have. While going on a long journey, one has to prepare a lot of things: pack the necessary travel gear, clothes and medicines. However, the most essential part of travel is the journey to the destination. There are many variations one may prefer, but the ultimate goal is to have a delightful experience during the journey.

Here is the video available which explains how to begin with SQL Server Execution plans.

Performance Tuning is a Journey

Performance tuning is just like a long journey. The goal of performance tuning is query execution that is efficient, consumes the fewest resources, and produces accurate results. Just as maps are the most essential aspect of a journey, execution plans are essentially maps for SQL Server to reach the resultset. The goal of the execution plan is to find the most efficient path, which translates to the least usage of resources (CPU, memory, IO, etc.).

Execution Plans are like Maps

When online maps were invented (e.g. Bing, Google, MapQuest, etc.), initially it was not possible to customize them. They gave a single route to reach the destination. As time evolved, it is now possible to give various hints to the maps, for example ‘via public transport’, ‘walking’, ‘fastest route’, ‘shortest route’, ‘avoid highway’. There are places where we manually drag the route and make it appropriate to our needs. The same situation exists with SQL Server execution plans: if we want to tune the queries, we need to understand the execution plans and execution plan internals. We need to understand the smallest details related to the execution plan when our destination is optimal queries.

Understanding Execution Plans

The biggest challenge with maps is figuring out the optimal path. In the same way, the most common challenge with execution plans is where to start and which precise route to take. Here is a quick list of the frequently asked questions related to execution plans:

  • Should I read execution plans bottom up or top down?
  • Are execution plans read left to right or right to left?
  • What is the relationship between the actual execution plan and the estimated execution plan?
  • When I mouse over an operator I see CPU and IO but not memory. Why?
  • Sometimes I run the same query multiple times and get different execution plans. Why?
  • How are the query execution plan and data cached?
  • I created an optimal index but the query is not using it. What should I change – the query, the index, or should I provide hints?
  • What tools are available to help quickly debug performance problems?
  • Etc…

Honestly, the list is quite big, and it is humanly impossible to put everything into words.

SQL Server Performance:  Introduction to Query Tuning

My friend Vinod Kumar and I have created a video learning course for beginning performance tuning, covering exactly these questions. We have covered a plethora of subjects in the course. Here is a quick list:

  • Execution Plan Basics
  • Essential Indexing Techniques
  • Query Design for Performance
  • Performance Tuning Tools
  • Tips and Tricks
  • Checklist: Performance Tuning

We believe we have covered a lot in this four-hour course, and we encourage you to go over the video course if you are interested in beginning SQL Server performance tuning and query tuning.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing

This is the second part of this series; the first is Performance Tuning – Part 1 of 2 – Getting Started and Configuration. I suggest you read the first part before continuing with this second part.

Analysis and Detection

You may have noticed that configuration of the data source and profile is a very easy task; if you are familiar with the tool, it can be done in less than 2 minutes. However, while configuration is an important aspect, appropriate analysis of the data is more important, since that is what leads us to appropriate results.

Once configuration is over, the screen shows the results of the profiling session. Now the screen will start returning information from various sources in detail. After a few minutes it will display the exact information regarding what is going on under the hood of SQL Server.

Let us see each of the sections in detail. The leftmost part shows the SQL statement. The SQL statement section is extremely important; once expanded, it shows exactly what is going on with respect to the SQL Server load on the system.

Identifying the Most Critical Query

A frequently asked question by developers is: in real systems there are millions of queries executing in a given time period. If I have to begin tuning queries, where should I begin, and which query should I tune to give me the maximum return with respect to performance? Well, this is the precise question the SQL Statement area attempts to answer. It displays the queries in terms of the percentage of DB activity. One can easily narrow down the most expensive query and focus on tuning it.
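
Outside of the tool, a similar “which query should I tune first” list can be pulled from SQL Server’s own DMVs. Here is a hedged sketch using the standard sys.dm_exec_query_stats and sys.dm_exec_sql_text views; ranking by average worker time is just one reasonable choice:

-- Top cached statements by average CPU (worker) time.
SELECT TOP (10)
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY avg_cpu_time DESC
GO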

Identifying Wait Statistics

This is another very popular aspect covered in the tool. If you know me, I am a big believer in SQL wait stats. When SQL Server is executing any task, and if for any reason it has to wait for resources to execute the task, this wait is recorded by SQL Server along with the reason for the delay. Later on, we can analyze these wait stats to understand why the task was delayed, and maybe we can eliminate the wait for SQL Server. It is not always possible to remove a wait type 100%, but there are a few suggestions that can help. Here is the SQL Server Wait Stats (http://wp.me/p2NUQ-38d) series which I have written, which explains all the major wait stats in detail.
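
For reference, the accumulated waits that such tools visualize can also be inspected directly. Here is a minimal sketch against the standard sys.dm_os_wait_stats DMV; the short exclusion list of benign waits is illustrative, not exhaustive:

-- Top waits since the last SQL Server restart (or stats clear).
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
    'BROKER_TASK_STOP', 'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC
GO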

Profiling Active Sessions with Resources

Well, personally I found this section the most useful and insightful. The biggest challenge is to identify which resource is bottlenecked in SQL Server. It is quite possible that all the queries are running at their best and there is nothing left to tune with respect to the queries, yet performance is still very slow. The reason can be that resources on the system are bottlenecked: the scarcity of resources is creating the performance degradation. The top horizontal section of the window clearly maps which resource was utilized at exactly what time. The same colors are also used in the SQL Statement, Events and Sessions sections, which helps visually map queries, resources and events.

I believe once the query profiling is done, keeping resources in mind, the performance tuning task is much easier. Here are the steps a developer should take when trying to optimize queries using DB Optimizer:

  1. Identify slow query or offending query
  2. Identify the resource bottleneck using active sessions and events
  3. Tune the offending query

It is equally important to tune the offending query as it is to identify it. In the next section we will see how DB Optimizer can help optimize queries in a few clicks.

Tuning and Optimizing

When we identify the offending query, our next intention is to optimize it. To tune any query we need to understand all the details surrounding the query. What impressed me the most about this tool was its instant database-level analysis. Once we select the query to tune, we can also see various deep-down profiling details for individual queries.

First go to the SQL statement and click on SQL Text. It will give the exact statement which ran, with all the parameters. The next tabs explain the events, sessions and procedures associated with the SQL statement.

Once the profiling details associated with a query are understood, the next important task is to tune the query and understand the execution plan. If you right-click the query in the SQL Statement section, two options are visible: 1) Explain Plan and 2) Tune.

Explain Plan

You can see that now we are entering into specific details related to the query.

In this section you can see a very clear, tabular explanation of the execution plan. There is a variety of information associated with the execution plan. Personally, I prefer to see the execution plan in XML format, as it explains various details. However, when in a hurry it is not always possible to spend time on XML; there are cases when an organization does not have the time and bandwidth to do this.

The Execution Plan section is not only important for understanding queries but it is a great learning tool as well. I personally spend quite a lot of time on the execution plan. I spend time even on good queries as it is important to know what made the query so good. I think the Explain Plan can aid in learning this technique. Pay attention to I/O Cost, CPU Cost, Row size, Rows and Execution count.

Tuning Execution Plan

I think this is the most interesting part of the DB Optimizer tool. It is called TUNE! In one simple word it says what it does, and when I saw this for the first time, I just felt like clicking on it. There are cases when a developer or DBA does not want to dig deeper into the scenario, or cases when they just want to try alternate scenarios. I believe this is a good place to start: just hit Tune and let DB Optimizer suggest the best plan.

Though I was intrigued to hit Tune, I was a bit wary to click it, as I was not sure if the tool would then execute any T-SQL statements or tuning scripts against the database. I am the guy who trusts anybody and everybody, but even then, I always like to at least know what is going to happen when it concerns my most critical database. Fortunately, when I clicked on Tune, it showed me an overview screen where two options were provided: 1) Generate Cases and 2) Perform Detail Analysis.

This was a very clean and easy-to-understand interface. The part I liked was generating cases. Often when I am developing T-SQL I want to try out various options, but there is always a deadline to finish the project, and it makes sense to spend a limited amount of time on any query, as Return on Time Invested is very important as well. I think this tool can automate this mundane task and help figure out what the optimal solution is.

Clicking on the analysis tab also provides various index options. Additionally, it provides the cost for each index so you know which will be most efficient for a specific query.

If you click on Create Index it also generates the index for you, which you can apply to your database. There are many other features we could discuss here if I had the time. Additionally, please note that creating an index may or may not be the optimal solution. It is quite possible that a query rewrite will solve your problem, or that dropping unused indexes is the solution. Anyway, this tool gives lots of suggestions along with their impact, which makes decisions much easier.

Caution

I have always believed in the philosophy that there is no replacement for the human brain. No matter the advancements in artificial intelligence, there are places and cases where the human brain makes better decisions than any machine or tool. Though I have found DB Optimizer very well balanced and organized, I will also watch its decisions. It is a great tool which so far has proved itself, but just like any other tool, it should be used with human intelligence. Just as the missing index DMVs suggest plenty of indexes to create, but the DBA usually selects and creates only the most useful ones, this tool will suggest many solutions along with their impact. It is our responsibility to select the most powerful solution, keeping overall server health in focus.

Top 5 things I like the most

There are quite a few things which I really liked, but I decided to list the top 5 here to keep it sharp and short.

  1. Clear and easy to navigate interface
  2. Index analysis is fairly accurate and very informative
  3. Execution plan layout and arrangement is very well organized
  4. Painless, simple installation
  5. Well documented help and community support

Top 3 things on my wish list for the next version

Nobody is perfect. After using this tool for a long time, I have a wish list for this product. I am listing the top 3 items from this list.

  1. Further explanation of the wait stats
  2. An option to not generate cases – currently it always generates various cases, and there are times when I do not want cases to be generated. I think this feature is a bit over-ambitious; the basic users this tool targets may never need it
  3. Additional reference materials for the execution plans

Summary

Overall, DB Optimizer XE is indeed a performance tuning flagship product from Embarcadero. As I said earlier, the right tool can make life easier for experts as well as remove much of the uncertainty from a performance tuning project. I also found this product extremely easy to use, with a very clear user interface and even some fun options (like Tune).  I think anyone who is interested in performance tuning should at least give DB Optimizer a try to see if they like it.

DB Optimizer XE has a lot of great features, but it’s not perfect.  Keep in mind that you have to be smarter than the tools you use, so you can’t rely on one program alone to solve all your problems.  However, with that said, I highly recommend this program to anyone who is facing performance tuning now or in the future.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Tuning – Part 1 of 2 – Getting Started and Configuration

Performance tuning is always a complex subject to deal with. When I was beginning with SQL Server, this was the most difficult area for me. However, there is a saying that to overcome a fear one has to face it first. So I did exactly that: I started to practice performance tuning.

Early in my career I often failed when I had to deal with performance tuning tasks. However, each failure taught me something. It took quite a while and about 100+ projects before I started to consider myself a guy who knows something about performance tuning.

Every performance tuning exercise gave me new experience. I firmly believe in “practice makes perfect” when I am involved in performance tuning projects. There were times when I could figure out problems just by looking at a few configurations and settings. When asked how I knew this so quickly, my usual answer was “experience.”

Quite often I get requests to share my expertise in performance tuning. This is when I wish it were easier. If I had to go back and do one thing during my whole career of performance tuning, I would document every single performance tuning rule that I have come across in my life. Once everything is documented, the expertise of performance tuning can be measured, and the results of these rules can give additional confidence while doing performance tuning.

Performance tuning used to be the responsibility of the DBA and developers who built the code. However, recently I have seen in many organizations that there are dedicated experts who are responsible for performance tuning of the server. Often they have to manage multiple servers and heterogeneous RDBMS. The job is more stressful and more challenging than we know.

I constantly try to test out various aspects of performance tuning, as well as all the available tools which can help me select the optimal solution. I recently tested DB Optimizer from Embarcadero, and here is a very short tutorial on how this tool can be used, as well as a few of its pros and cons.

Let us divide this article into the following subjects.

  1. Getting Started
  2. Analysis and Detection
  3. Tuning and Optimizing

Getting Started

In this section we will go over the basic steps to install DB Optimizer XE2 Version 3.0. First of all, go to DB-Optimizer-XE and download the trial version of DB Optimizer. The installation is very quick and simple – pretty much clicking NEXT … NEXT and Finish!

After you install and load DB Optimizer, right after the splash screen it will ask for the workspace where all the files of this optimization project will be saved. I think the most difficult part is over by now. From here the tool is very easy to use and self-explanatory. The very first screen has three sections: overview, tutorial and workbench.

I strongly suggest that if you are new to the tool then go over the tutorial. The tutorial is well organized and very detailed. It not only explains how one can do various performance tuning tasks but it often explains why they are done a certain way.

Once the tutorial is over or if you are an experienced user, head to the workbench directly. Workbench is the area where we will do various performance tuning related tasks.

Configuration

Here is one thing I have to point out: even though there is no wizard or step-by-step setup guide, the interface is very intuitive. For example, as soon as you start the workbench, the very first screen is the data source. Once a data source is configured, a simple right-click on it is all one needs to work with the performance tuning aspects.

This tool lets you connect to multiple data sources. We will select SQL Server as the data source and give it an easy name so we can refer to it easily in the future. One specific thing worth noting is that while connecting to the data source, it lets you specify what kind of data source you are connecting to: Development, Test, QA or Production. This is essential so that we do not deploy changes to the incorrect server while performing various tasks with DB Optimizer.

The next step is to provide details of the connection string and check if the connection is successful by clicking on the Test Connection button at the bottom. If the host address along with username and password is correct, the connection will pop up with a “Connection was successful” message.

Click on Finish after the connection is successfully made to the data source. It will register the data source and bring up the same screen where you started.

Expand the data source which we have registered. Once you expand you will see every single object listed under the data source.

After configuring the data source, the next task is to configure the profile.

Profile configuration is a very easy process: we define where the captured data will be stored in the database, as well as which data source will be captured.

Once the data profile is configured, go ahead and click on Profile. We are now 100% done configuring the setup. The next step is to analyze our profiled data.

In the next blog post we will continue with the second part: Analysis, Detection, Tuning and Optimizing. Till then, you can download and configure DB Optimizer XE.

Reference: Pinal Dave (http://blog.sqlauthority.com)