SQL SERVER – SafePeak SQL Server Acceleration Software Gets an Upgrade

SafePeak, the SQL Server application acceleration software, gets a new version upgrade and launches on Amazon Cloud Marketplace.

SafePeak’s idea in a nutshell is simple.

SafePeak is query-level automated caching middleware for SQL Server-based operational applications. SafePeak accelerates apps by caching SQL queries while maintaining 100% ACID compliance and data integrity. SafePeak answers queries in sub-millisecond time (<0.001 sec), consistently and reliably. The solution requires no code changes to your apps, is based on smart auto-learning, is auto-adaptive, and contains safety mechanisms. The company reports that its newest version (2.1.123) is more automated, performs a deeper analysis and is easier to use – resulting in even better performance improvements.

Caching for SQL? Who does that?

Today’s database servers are very busy keeping up with demands from applications and web servers. Data volumes continue to grow and workloads are increasingly complex. All this adds up to more strain on a database. When application performance suffers, IT managers, Dev managers and Architects hear about it and need to find solutions.

A widely used approach to enhance database performance is called caching. Caching is usually applied in one of three ways. Software-level caching can be applied at either the application-level (like Memcached) or at the data-management level (as with in-memory databases or data grids). The third approach is hardware-level caching and involves bigger servers (adding CPU, RAM) and faster IOPS storage (using I/O caching like Fusion-IO, or not-so-cheap Flash storage).

Many application managers don’t have the resources for a lengthy do-it-yourself caching project. DIY caching can be technically challenging, and it can require significant changes to applications – changes that are simply impossible to make in commercial 3rd-party apps.

Pressed for time, many people may be forced into “throwing more hardware” at the problem. It’s a classic approach, although an expensive one that sometimes comes with a different set of complications (downtime, upgrades, etc.).

SafePeak – Easier, Better, Faster

SafePeak offers an easier and better way to accelerate SQL Server performance. SafePeak’s software solution gives all of the benefits of caching, but without any app re-programming. It works for your custom apps and 3rd-party apps. And its acceleration covers all of an app’s query and stored-procedure traffic at the same time.

SafePeak is a modern SQL Server caching alternative: a smart, automated and comprehensive data caching layer – kind of a combination of Memcached caching and an In-Memory database, but enhanced with machine learning intelligence.

Compared to the alternatives, SafePeak’s best advantage is probably its time-to-deployment. It’s fast to get up and running. Deployed mostly on virtual machines, it takes only a few hours of initial configuration and almost no downtime (a connection-string IP change). And that’s it. Custom-built and 3rd-party apps using SQL Server, like SharePoint or CRM Dynamics, all experience dramatic performance improvements.

If your SQL Server-based apps are starting to slow down, SafePeak may be a simple solution. SafePeak offers a trial (www.safepeak.com/download) with technical training & assistance support (recommended, and they’re super nice guys, too!).

Life is good!

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Monitoring Your Server at a Glance – A Open Conversation with DBAs

A few days ago I was visiting a large database shop and happened to meet quite a few DBAs there. After my session was over we all went to the company’s cafe for a quick coffee. While having coffee in that relaxed environment, I shared details about my recent vacation with my family. After sharing a quick story of my vacation, I asked the DBAs around me about their own vacations. To my surprise, many of the DBAs had not taken a holiday for quite some time. When I asked why, the answer was simple (and as expected) – the job.

Excuses for No Vacation

Here are a few of the reasons they gave for not taking days off from their jobs.

“I do not want to leave, I do not know the guy who will manage the server after me. He may not know where to look when things go wrong.”

“Forget about days off, almost every other night I have to wake up to fix something on my server, I do not remember when I slept 6 hours without waking up to look at the server.”

“I am new in the job and I have lots to learn. Honestly, when something goes wrong, I have no clue where to look so I take lots of time to figure things out.”

There were quite a few responses along the same lines. I realized that DBAs do want to take a break from their work and enjoy time with their friends and family; however, for various reasons (insecurity, possessiveness, inexperience or just pure love for the job) they are not able to take breaks.

A Key Question and Honest Answer

Here is the key question I asked them in return:

“What is the one thing you need in your job to be able to take a much-needed vacation without any stress or worry?”

I got a very interesting answer to my question:

“We need a single screen for all of our systems, which can help us know exactly what is going on on our servers at all times.”

A very interesting answer – what every DBA wants is a tool, feature, or utility that can tell them exactly what is going on in their server at all times, consistently and accurately. I think this is a fair request. I am a DBA and developer, and I know there are challenges in our job. We are often so familiar with our application that it is not possible for other DBAs to know what is going on in our system. When something goes wrong, they all have to run to us, as we have the solution from our experience. Sometimes I am worried to leave my server alone, thinking some developer will deploy code that makes the server behave slowly or sluggishly.

Honestly, this is not a healthy environment. You need a transparent system where you can see what is going on in your server at a single glance, and if you want more information, you should be able to get it with a double click. Sometimes there is no out-of-the-box solution in the product itself, and we need to depend on third-party solutions or experts who know what they are doing.

Here is what I do

As soon as we were done with coffee, I immediately opened my laptop and brought up Spotlight on SQL Server Enterprise. By the time I was done showing the various features of the tool, I could see a ray of hope in the eyes of the DBAs. I could sense that they did not know about this product or had not seen anything like it. Here is the home screen of Spotlight on SQL Server Enterprise.

You can see in the above image there are many sections for various important monitoring areas of SQL Server.

Sessions: This section shows various details about your sessions: how many sessions are open, how many different computers are connected, and how many active sessions there are. The one that attracted my attention was the detail about response time, which shows the full round-trip response time of a query representative of the general workload.

SQL Processes: This section shows how many total, system, and user processes there are, as well as any blocked processes. The best part is that we can right-click on any area of this home screen and see further details. I right-clicked on User Processes and selected Session Details.

It immediately displayed the various user sessions and the script that was running in each session.
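If you want a quick approximation of this information without any tool, standard DMVs can get you part of the way. Here is a rough sketch of my own (not the query Spotlight itself runs) that lists active requests, any blocking session and the script being executed:

SELECT r.session_id
, r.blocking_session_id
, r.status
, r.wait_type
, t.[text] AS running_script
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50; -- skip most system sessions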

SQL Memory: This section displays vital information about memory – like how big the buffer is and how much of the cache is being used. I also love the procedure cache section, because we do not often track something this important when we are talking about SQL Server and memory. Here is the image of my computer’s Cache Size by Object Type, which displays the various object types and the space they take in my memory cache.
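If you would like to approximate that Cache Size by Object Type view yourself, a standard DMV query along these lines works (again, a sketch of my own rather than the tool’s internal query):

SELECT cp.objtype
, COUNT(*) AS plan_count
, SUM(CAST(cp.size_in_bytes AS BIGINT)) / 1048576 AS cache_size_mb
FROM sys.dm_exec_cached_plans cp
GROUP BY cp.objtype
ORDER BY cache_size_mb DESC;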

Disk Storage: This section covers I/O activity in the context of the database workload. We can see details about the various data files, filegroups, database count and log files, but my favorite part is MAX IO/Wait at the bottom of the section. We can see the various wait types, and upon clicking for details we can see Diagnose I/O Pressure as well.
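For readers who want to poke at the same wait information by hand, SQL Server exposes cumulative waits in a DMV; a minimal sketch for the I/O-related page latch waits might look like this:

SELECT TOP (5)
wait_type
, waiting_tasks_count
, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%' -- I/O-related page latch waits
ORDER BY wait_time_ms DESC;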

There are plenty of things to discuss about this tool. There is animated movement of packets, batches, logical reads, checkpoints, etc. – it gives us everything we want to know at a glance. If we want any further details, we can just right-click and see more about that particular object or activity.

Here is the best part: if there is anything we need to focus on or be alerted about, Spotlight on SQL Server Enterprise will turn it red and send email alerts.

Moreover, Spotlight on SQL Server Enterprise makes it even easier for DBAs to go on vacation… and still keep an eye on their servers! By using Spotlight Mobile – which, by the way, is FREE for Spotlight users – you can monitor the health of your SQL Server environment via the Heatmap, view an alarms list, get alarm details, and snooze or acknowledge alarms anywhere, anytime, on any device!

Well, I will stop here today, as this is what I demonstrated in the cafe to my DBA friends. After watching the demos, they promised me that they would download the trial and give this product a shot to see if it can earn them a vacation.

I would like to know your opinion about this product. You can download the fully functional 30-day free version and install it on your machine.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Big Data – Real-Time Analytics Performance with ClustrixDB

Note: The product used in comparison is ClustrixDB. It is available to download for FREE.

NewSQL databases provide the scale-out of NoSQL without giving up SQL or ACID transactions. While most NewSQL databases focus only on transactions, ClustrixDB also provides fast real-time analytics, which are becoming increasingly important to many businesses. ClustrixDB does this by bringing the Massively Parallel Processing (MPP) used in data warehouses to the primary database.

So I decided to get a workload and try it out to see what kind of performance improvements one can get, if any. Since joins and aggregates are the workhorses of real-time analytics processing, they are a good place to start.

Configuration

I built a simple dataset with three tables: USERS (100K rows), USER_ADDRESSES (200K rows) and BIDS (10M rows), so the dataset has 2GB of data (mysqldump). For the platform I used AWS and got ClustrixDB from the AWS Marketplace. For comparison, I decided to use MySQL 5.6, since the exact same data and queries can be run on both databases. For both databases, the instance type is m1.xlarge.
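To give a feel for the workload, here is a minimal sketch of the schema and of the kind of join-plus-aggregate query involved. The column names are my own assumptions for illustration; they are not part of the published benchmark:

CREATE TABLE USERS (
user_id INT PRIMARY KEY,
name VARCHAR(100)
);

CREATE TABLE BIDS (
bid_id BIGINT PRIMARY KEY,
user_id INT,
amount DECIMAL(10,2)
);

-- a representative real-time analytics query: join 10M bids to 100K users,
-- then aggregate per user
SELECT u.user_id, u.name, COUNT(*) AS bid_count, AVG(b.amount) AS avg_bid
FROM USERS u
JOIN BIDS b ON b.user_id = u.user_id
GROUP BY u.user_id, u.name
ORDER BY bid_count DESC
LIMIT 10;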

MySQL does not scale beyond a single server and is usually deployed with a master and two read slaves. Since ClustrixDB provides horizontal scale-out within one cluster, rather than master-slave replication (with multiple copies of the data), the equivalent configuration is 3 servers. ClustrixDB’s horizontal scaling allows all nodes to participate in all query types. For measuring performance, a single MySQL server is enough, because performance for one query will be the same whether we use the master or a read slave.

For ClustrixDB, I also tried out 6 servers to see if analytics get faster as you add servers.

Here is the resulting table:

Results

We see that some queries get significantly faster; however, one query showed no performance improvement. The count query on USERS is only counting 100K rows, so it is likely not enough work. The count query on the BIDS table (counting 10M rows) shows a speedup with 3 nodes, but with 6 nodes we don’t get much additional improvement – it is still a very simple query. The queries with aggregates and joins get significantly faster (23x and 8.79x) on 3 nodes. These queries also get nearly twice as fast again as you go from 3-node ClustrixDB to 6-node ClustrixDB; this is because of MPP in ClustrixDB.

Overall, we see that ClustrixDB gains a significant advantage on more complex analytical queries, which means reports will get much faster with ClustrixDB. For some other queries, there is not enough work, or distribution does not offer much of an advantage, and there the performance is about the same. For real-time analytics requirements, ClustrixDB seems like a good solution.

Note: The product used in comparison is ClustrixDB. It is available to download for FREE.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Monitor Database via a Heatmap, Alarms List and Realtime Diagnostics for FREE

Note: This review is based on the performance monitoring and tuning free product Spotlight on SQL Server Freemium.

If you think of a DBA’s life, it has one very simple goal: the servers should never go down and all queries should always perform excellently. However, just like any other life goal, it is not easy to achieve. I have often seen DBAs continuously watching their monitors to make sure their servers are running fine. Another habit of most DBAs is to continuously check their mobile phones for alerts. Nowadays we get so many alerts that it is getting harder to keep watch on the ones most important for the health of our servers. The habit of looking at the phone and computer monitor is so rooted in a DBA’s mind that they keep on looking at their phone at home to catch a suspicious alert.

Earlier this year, when I attended SQL PASS 2013, I stopped by the Dell Software booth to see what they had new in the SQL Server world. I noticed Spotlight on SQL Server Freemium running on their monitor. When I inquired about the price, I was happy – in fact, very happy – as it was totally FREE! After returning home I got much too busy with my day job, but recently I got some time and downloaded the Spotlight on SQL Server Freemium tool. The installation was straightforward and easy; it took me less than 10 seconds to install the tool. Just make sure that SSMS is closed when you install Spotlight on SQL Server Freemium, otherwise it will show you a warning to turn off SSMS.

Once I installed the plug-in, it was very easy to use, as it becomes an integral part of SQL Server Management Studio, and the interface is very user friendly.

There are three distinct options in the Spotlight on SQL Server Freemium toolbar menu. Once you click on Monitoring, it gives you three options: 1) Heatmap 2) Alarms 3) Connections. Let’s look at them very quickly over here.

1) Heatmap

If our server is down we want to know right away, but if everything is smooth we do not want constant reminders of that. For that reason the Heatmap is a very essential part of Spotlight on SQL Server Freemium. It gives an ‘at-a-glance’ picture of the state of all the servers DBAs have in their environment. Colors communicate all the information about what is going on with your server. The heatmap takes this a step further by displaying each server as a tile, then aggregating all of the statuses of a server and assigning a size to that tile. It also displays alarms for the connection when touched.

2) Alarms

The Alarms view is just an alternate way to view the Heatmap. It displays the alarms on each server, ordered by severity, and you can configure and sort alarms the way you prefer. Once an alarm fires, an experienced user can take either of two actions: a) acknowledge the alarm and solve the issue, or b) snooze it to be reminded in the future.

3) Connections

This area displays the various connections used to diagnose a server as well as the servers you are monitoring. You can make adjustments to your server connections in this section.

System Health Check

One of the biggest features of Spotlight on SQL Server Freemium is the health check, which provides a prioritized list of key system health issues. Users can pinpoint various issues with the help of this list and resolve SQL Server issues. There are five major categories this tool checks: Security, Disaster Recovery, Index Optimization, Memory and SQL Best Practice.

In future blog posts we will cover each of these topics in depth. Meanwhile, I strongly suggest you download Spotlight on SQL Server Freemium and make sure your servers are healthy. Additionally, visit www.SpotlightEssentials.com, the one-stop shop for all things Spotlight.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Detecting Potential Bottlenecks with the help of Profiler

Probably everyone who writes SQL code has used the INSERT statement, which, alongside SELECT, UPDATE and DELETE, is part of the basic set of data manipulation operations.

At first sight, the syntax of the INSERT statement may seem trivial, especially when inserting one record: INSERT INTO … VALUES … .

This is not surprising: in SQL Server 2005, the VALUES keyword was used only in the context of inserting records with the INSERT statement. With the release of SQL Server 2008, however, the basic syntax of T-SQL was considerably expanded, and the multi-row VALUES clause became usable for more than record insertion.
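For instance – a small illustration of my own, not one of the article’s examples below – VALUES can now act as a standalone table value constructor in a FROM clause:

SELECT v.n, v.word
FROM (
VALUES
(1, 'one')
, (2, 'two')
, (3, 'three')
) v (n, word)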

This article will focus on several possible uses of the VALUES statement. To present an unbiased assessment of its advantages, we will use the built-in Profiler of dbForge Studio for SQL Server to look at the query plan for each particular example.

We have a table with the following structure, and we need to find the maximum number of badges (disregarding their type) for each player:

IF OBJECT_ID ('dbo.Players') IS NOT NULL
DROP TABLE dbo.Players;

CREATE TABLE dbo.Players
(
PlayerID INT
, GoldBadge SMALLINT
, SilverBadge SMALLINT
, BronzeBadge SMALLINT
);

INSERT INTO dbo.Players (PlayerID, GoldBadge, SilverBadge, BronzeBadge)
VALUES
(1, 5, 4, 0)
, (2, 0, 9, 1)
, (3, 2, 4, 10);

To accomplish the task we need to convert columns into rows, so that afterwards we are able to apply the MAX aggregation function.

First, let’s look at an approach that is very common among beginners:

SELECT
t.PlayerID
, MaxBadgeCount = MAX(t.Badge)
FROM (
SELECT PlayerID, Badge = GoldBadge
FROM dbo.Players
UNION ALL
SELECT PlayerID, SilverBadge
FROM dbo.Players
UNION ALL
SELECT PlayerID, BronzeBadge
FROM dbo.Players
) t
GROUP BY t.PlayerID

Note that instead of reading the data once, we are reading it from the table three times.

Starting from SQL Server 2005, the UNPIVOT operator is generally used to convert columns into rows:

SELECT
t.PlayerID
, MaxBadgeCount = MAX(t.BadgeCount)
FROM (
SELECT *
FROM dbo.Players
UNPIVOT
(
BadgeCount FOR Badge IN (
GoldBadge
, SilverBadge
, BronzeBadge
)
) unpvt
) t
GROUP BY t.PlayerID

Let’s look at the plan:

Now the data from the table is read only once, which is what we’ve been trying to achieve. However, there is still one bottleneck left in our query (the most resource-consuming operation): the Sort operation, which the server uses to find the maximum value.

Let’s try to avoid the row sorting by using the VALUES statement.

SELECT
p.PlayerID
, MaxBadgeCount = (
SELECT MAX(BadgeCount)
FROM (
VALUES
(p.GoldBadge)
, (p.SilverBadge)
, (p.BronzeBadge)
) t (BadgeCount)
)
FROM dbo.Players p

As we can see, sorting is not applied here.

Let’s compare our queries and the results they return:

Let’s examine the next example, in which we need to format row-wise data output from a grouped table.

IF OBJECT_ID ('dbo.Students') IS NOT NULL
DROP TABLE dbo.Students;
CREATE TABLE dbo.Students
(
StudentID INT
, Subject1 SMALLINT
, Subject2 SMALLINT
, Subject3 SMALLINT
, Subject4 SMALLINT
, Total SMALLINT
);
INSERT INTO dbo.Students (StudentID, Subject1, Subject2, Subject3, Subject4, Total)
VALUES
(1, 85, 94, 78, 90, 347), (2, 75, 88, 91, 78, 332)

We will get the following result:

We can solve this task by reading the data several times using UNION ALL with sorting:

SELECT
t.StudentID
, t.[Subject]
, t.Total
FROM (
SELECT ID = StudentID, StudentID, [Subject] = Subject1, Total, rn = 1
FROM dbo.Students
UNION ALL
SELECT StudentID, NULL, Subject2, NULL, 2
FROM dbo.Students
UNION ALL
SELECT StudentID, NULL, Subject3, NULL, 3
FROM dbo.Students
UNION ALL
SELECT StudentID, NULL, Subject4, NULL, 4
FROM dbo.Students
) t
ORDER BY t.ID, t.rn

We can also go back to the UNPIVOT statement, adding row numbers to check:

SELECT
Student_Name = CASE WHEN rn = 1 THEN t.StudentID END
, t.[Subject]
, Total = CASE WHEN rn = 1 THEN t.Total END
FROM
(
SELECT
StudentID
, [Subject]
, Total
, rn = ROW_NUMBER() OVER (PARTITION BY StudentID ORDER BY 1/0)
FROM dbo.Students s
UNPIVOT
(
[Subject] FOR tt IN (
Subject1,
Subject2,
Subject3,
Subject4
)
)
unpvt
) t

However, the VALUES statement allows us to write a more elegant query:

SELECT t.*
FROM dbo.Students
OUTER APPLY (
VALUES
(StudentID, Subject1, Total)
, (NULL, Subject2, NULL)
, (NULL, Subject3, NULL)
, (NULL, Subject4, NULL)
) t (Student_Name, [Subject], Total)

Let’s take a look at the plan and compare the results:

The VALUES statement cannot be considered a full substitute for the UNPIVOT statement, nor a savior in every other case. It all comes down to the query plan, and analyzing the plan with the help of the Profiler available in dbForge Studio for SQL Server allows you to detect potential bottlenecks in query performance.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Learn MySQL Indexing in 99 Minutes – MySQL Indexing for Performance – Video Course

Every year around September 1st, I decide to do something cool. This is because September 1st is my daughter Shaivi’s birthday. In previous years, I have released new books as well as new courses on this day. This year I have done something similar for her. I like to work hard and dedicate my efforts to my family, as they are the ones who provide me unconditional support to do various activities for the community.

Journey to Learn MySQL Indexing in 99 Minutes

Indexes are considered a sure solution for performance tuning, but there are two sides to that story: a good index can improve the performance of your query, and a bad index can degrade it. I have built a course around MySQL indexing where I explain how to create indexes that improve the performance of MySQL queries many times over. Indexing is such a subject that I could talk about it for days and still have more material to share. However, I decided to build a course that is sharp and to the point. I wanted to make sure that with the help of this course one can learn all the major points of MySQL indexes in a very short period of time.
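As a tiny taste of the idea (my own illustration with a hypothetical table and column, not a demo taken from the course):

-- without an index on the filter column, MySQL scans the whole table
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- a B-Tree index on that column lets MySQL seek directly to the matching rows
CREATE INDEX idx_orders_customer ON orders (customer_id);
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;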

While I was building this course, I set myself a threshold: I would not build a long, boring course. I wanted to build a crisp course that addresses the need of the hour and teaches the most efficient methods to improve query performance. It took me more than 3 weeks to build all the material I wanted to cover. Once I had built the material, the real challenge was on: I wanted the course to be basic enough for a novice to understand, but advanced enough that an expert could learn various tricks as well. I wanted to balance the course for every level, and to keep the pace good enough for everyone to keep up with it.

Finally, after long hours and many days of work, I finalized content that is just the right fit for everybody – a novice who does not know anything about indexes as well as an expert who already has a clear idea about them. I had pretty much mapped out every single minute of the course with demos and slides. By my estimate, the course should not have gone over 100 minutes. When I was done, I was extremely delighted: the entire course was recorded in 99 minutes – YES, only 99 minutes. I have recorded many courses before, but this one was timed almost to the second.

You can learn MySQL Performance Tuning with the help of Indexing in 99 minutes.

Course Content

Here is the course outline I built for learning MySQL indexing. You can watch this course by creating a free trial at Pluralsight. If you do not see the play button to watch the course, please log in to Pluralsight with your valid credentials.

  • MySQL Indexing for Performance
    • Introduction
  • Types of Indexes
    • Introduction
    • InnoDB vs MyISAM
    • B-Tree Index
    • Advantages of B-Tree Index
    • Clustered Index
    • Advantages of Clustered Index
    • Disadvantages of Clustered Index
    • Secondary Index
    • Building Clustered B-Tree
    • Building Secondary Index B-Tree
    • InnoDB Clustered Table
    • Hash Index
    • Limitation of Hash Index
    • Adaptive Hash Index
    • Building Hash Index
    • Other Indexes
    • Summary in Sixty Seconds
  • Indexing Strategies for High Performance
    • Introduction
    • Effectiveness of Index
    • Demo: List Indexes
    • Demo: Basics of Indexes
    • Demo: Order of Columns in Index
    • Demo: Optimize OR Conditions
    • Demo: Optimize OR Conditions – Advanced
    • Demo: Optimize AND Conditions
    • Demo: Optimize AND Conditions – Advanced
    • Demo: Cover Index
    • Demo: Clustered Index
    • Demo: Index Hints
    • Summary in Sixty Seconds
  • Index Maintenance
    • Introduction
    • MySQL Query Optimizer
    • Statistics
    • InnoDB Statistics
    • Data and Index Fragmentation
    • Index Maintenance
    • Other Index Maintenance Tips
    • Summary in Sixty Seconds
  • Checklists
    • Checklists

Watch Video Course

Click here to watch the MySQL Indexing for Performance video course.

As the author of this course, I have a few free codes for watching it. Please leave a comment if you would like to watch the course, and I will send a few readers a free code. You can also request a free code on my Facebook page, as I am very active there.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tips for SQL Query Optimization by Analyzing Query Plan

Update: You can download dbForge Studio for SQL Server for a free trial.

One of the most exciting periods of my life involved the maintenance and optimization of an ERP system for a large manufacturing company. The problem was that the system was initially created for a limited range of tasks, which over time grew much bigger than expected.

When multiple users used the system simultaneously, working in it was close to impossible, and increasing the server’s capacity could not completely solve the problem. So it was decided to revise the current business functionality and optimize the most resource-consuming SQL queries.

Before I go on with the article, it’s worth mentioning that SQL Server generates an execution plan for each query. The plan describes the algorithm that will lead to the desired result.
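As an aside – plain T-SQL here, not a dbForge feature – you can also ask SQL Server itself for an estimated plan before running a query:

SET SHOWPLAN_XML ON;
GO
SELECT name FROM sys.tables;
GO
SET SHOWPLAN_XML OFF;
GO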

We should also keep in mind how execution plans are generated. To reach maximal query execution speed, the query optimizer (a separate component of the SQL Server core) always attempts to generate a plan with the sequence of actions that consumes the minimum of server resources.

When evaluating any execution plan, the query optimizer takes multiple factors into account: the database objects involved, the conditions for joining them, the list of returned columns, the presence of usable indexes, the availability of actual statistics, etc.

However, sometimes the optimizer cannot operate on actual data, and that is why it can over- or underestimate the cost of a SQL query under different plans. That is why there is always a chance that a non-optimal plan will be chosen.

The SSMS environment allows viewing the execution plan with the help of Show Execution Plan; however, analyzing complicated queries in it is far from easy. That is why I use the more functional query profiler available in dbForge Studio for SQL Server.

We can get an execution plan each time SQL query execution is started. To do so, we need to switch to the profiling mode.

Or we can use the Generate Execution Plan command without running the query.

Several examples will let me demonstrate some tips for optimizing SQL queries using the profiler:

1. Minimizing the number of implicit joins

For the most complicated SQL queries, I use views only depending on the situation, since their declarations can be bulky and the server spends extra time reading and analyzing that data. I do not intend to say that they should not be used at all, but they should be used wisely.

For example, suppose we need to get a list of user tables. All the necessary object data can be obtained from the system views:

SELECT *
FROM sys.tables t

We will get the following plan for this query:

In the profiler, the most resource-consuming operations are highlighted in red and yellow respectively, representing the percentage of resources spent on each operation.

Let’s modify our SQL query – now we shall select only table names:

SELECT t.name
FROM sys.tables t

Now our execution plan has been simplified, and it is obvious that the query execution will require fewer resources:

How did we achieve that? Let’s take a look at the contents of sys.tables. Unfortunately, SSMS does not allow viewing the DDL of system views, so we will use a small trick:

DECLARE @SQL VARCHAR(MAX)
SELECT @SQL = OBJECT_DEFINITION(OBJECT_ID('sys.tables'))
PRINT @SQL

We receive the following results:

CREATE VIEW sys.tables
AS
SELECT
o.name, o.OBJECT_ID, o.principal_id, o.schema_id, o.parent_object_id,
o.TYPE, o.type_desc, o.create_date, o.modify_date,
o.is_ms_shipped, o.is_published, o.is_schema_published,
ISNULL(ds.indepid, 0) AS lob_data_space_id,
rfs.indepid AS filestream_data_space_id,
...
ts.name
AS lock_escalation_desc,
o.is_filetable
FROM sys.objects$ o
LEFT JOIN sys.sysidxstats lob ON lob.id = o.OBJECT_ID AND lob.indid <= 1
LEFT JOIN sys.syssingleobjrefs ds ON ds.depid = o.OBJECT_ID AND ...
LEFT JOIN sys.syssingleobjrefs rfs ON rfs.depid = o.OBJECT_ID AND ...
LEFT JOIN sys.syspalvalues ts ON ts.class = 'LEOP' AND ...
WHERE o.TYPE = 'U'

The query optimizer looks at the columns that are actually used in our SQL query and removes excess joins. Now you can see why using an asterisk in a query can reduce query efficiency.

However, the same data can be retrieved in a more efficient way. All tables are objects and are contained in sys.objects:

DECLARE @SQL VARCHAR(MAX)
SELECT @SQL = OBJECT_DEFINITION(OBJECT_ID('sys.objects'))
PRINT @SQL

We receive the following in Output:

CREATE VIEW sys.objects AS
SELECT
name,
OBJECT_ID,
principal_id,
schema_id,
...
FROM sys.objects$

It’s obvious that this view is much simpler and thus faster than sys.tables. All we have to do is select the tables from among all the objects by adding a filtering condition (U – USER_TABLE, V – VIEW):

SELECT t.name
FROM sys.objects t
WHERE t.[type] = 'U'

To compare the queries, we need to select both execution plans and click Compare Selected Results in the context menu.

2. Elimination of data re-reading

Usually, data re-reading is one of the most resource-demanding operations. That is why queries should be written in a way that reads the data only once.
For example:

SELECT
(
SELECT COUNT(*)
FROM sys.objects o
WHERE o.[type] = 'U'
)
, (
SELECT COUNT(*)
FROM sys.objects o
WHERE o.[type] = 'V'
)

When you look through the plan, you can notice that the data from the table is being read twice:

The example below shows how this problem can be resolved:

;WITH cte AS
(
SELECT o.[type], [count] = COUNT(*)
FROM sys.objects o
WHERE o.[type] IN ('U', 'V')
GROUP BY o.[type]
)
SELECT (
SELECT [count]
FROM cte
WHERE [type] = 'U'
)
, (
SELECT [count]
FROM cte
WHERE [type] = 'V'
)

Keep in mind that a CTE is just a code generalization – it will not eliminate the data re-read:

Let’s try to rewrite the SQL query using aggregation:

SELECT
COUNT(CASE WHEN o.[type] = 'U' THEN 1 END)
, COUNT(CASE WHEN o.[type] = 'V' THEN 1 END)
FROM sys.objects o
WHERE o.[type] IN ('U', 'V')

The progress is evident:

But let’s imagine that we need more than two aggregations; in that case, query execution will consume more time. In fact, though, we can simplify the query even further:

SELECT *
FROM (
SELECT o.OBJECT_ID, o.[type]
FROM sys.objects o
WHERE o.[type] IN ('U', 'V')
) t
PIVOT
(
COUNT(t.OBJECT_ID) FOR [type] IN (U, V)
) p

Let’s compare the queries:

I suppose no comments are needed.

To sum up, I would like to point out that simplifying SQL query logic is the key to query optimization.
When analyzing execution plans with the help of the query profiler, you will be able to detect bottlenecks in database performance and improve the efficiency of your SQL queries.

I suggest you download dbForge Studio for SQL Server and try this out. Let me know what you think about the same.

Reference: Pinal Dave (http://blog.sqlauthority.com)