SQL SERVER – Video – Beginning Performance Tuning with SQL Server Execution Plan

Traveling can be a most interesting or a most exhausting experience; however, it is always an enlightening one. Before going on a long journey one has to prepare a lot of things: pack the necessary travel gear, clothes and medicines. The most essential part of travel, though, is the journey to the destination. There are many variations one may prefer, but the ultimate goal is to have a delightful experience along the way.

Here is a video which explains how to begin with SQL Server execution plans.

Performance Tuning is a Journey

Performance tuning is just like a long journey. The goal of performance tuning is query execution that is efficient, consumes the least resources, and returns accurate results. Just as maps are the most essential aspect of a journey, execution plans are essentially maps for SQL Server to reach the resultset. The goal of the execution plan is to find the most efficient path, which translates to the least usage of resources (CPU, memory, IO, etc.).

Execution Plans are like Maps

When online maps were introduced (e.g. Bing, Google, MapQuest, etc.) it was initially not possible to customize them. They gave a single route to reach the destination. As time evolved, it became possible to give various hints to the maps, for example ‘via public transport’, ‘walking’, ‘fastest route’, ‘shortest route’, ‘avoid highway’. There are places where we manually drag the route and adapt it to our needs. The same situation exists with SQL Server execution plans: if we want to tune queries, we need to understand execution plans and their internals. We need to understand the smallest details related to the execution plan when our destination is optimal queries.

Understanding Execution Plans

The biggest challenge with maps is figuring out the optimal path. In the same way, the most common challenge with execution plans is where to start from and which precise route to take. Here is a quick list of frequently asked questions related to execution plans:

  • Should I read execution plans bottom up or top down?
  • Are execution plans read left to right or right to left?
  • What is the relationship between the actual execution plan and the estimated execution plan?
  • When I mouse over an operator I see CPU and IO but not memory. Why?
  • Sometimes I run the same query multiple times and get different execution plans. Why?
  • How are the query execution plan and data cached?
  • I created an optimal index but the query is not using it. What should I change – the query, the index, or should I provide hints?
  • What tools are available to help quickly debug performance problems?
  • Etc…

Honestly, the list is quite big, and it is humanly impossible to put everything into words.
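Since a couple of the questions above concern estimated versus actual execution plans, here is a minimal sketch of how to retrieve both from T-SQL (the sample query against sys.databases is arbitrary, just for illustration): SHOWPLAN_XML returns the estimated plan without executing the statement, while STATISTICS XML executes the statement and returns the actual plan.

-- Estimated execution plan (the query is NOT executed)
SET SHOWPLAN_XML ON
GO
SELECT name FROM sys.databases
GO
SET SHOWPLAN_XML OFF
GO
-- Actual execution plan (the query IS executed)
SET STATISTICS XML ON
GO
SELECT name FROM sys.databases
GO
SET STATISTICS XML OFF
GO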

SQL Server Performance: Introduction to Query Tuning

For exactly this reason, my friend Vinod Kumar and I have created a video learning course on beginning performance tuning. We have covered a plethora of subjects in the course. Here is a quick list:

  • Execution Plan Basics
  • Essential Indexing Techniques
  • Query Design for Performance
  • Performance Tuning Tools
  • Tips and Tricks
  • Checklist: Performance Tuning

We believe we have covered a lot in this four-hour course, and we encourage you to go over the video course if you are interested in beginning SQL Server performance tuning and query tuning.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – A Quick Look at Logging and Ideas around Logging

This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, I personally get lots of ideas about it. I have seen logging used as a very generic term. Let me ask you this question first, before I continue writing about logging.

What is the first thing that comes to your mind when you hear the word “Logging”?

Now ask the same question to the person standing next to you. I am pretty confident that you will get a different answer from different people. I decided to do this activity and asked five SQL Server people the same question.

Question: What is the first thing that comes to your mind when you hear the word “Logging”?

Strangely enough, I got a different answer every single time. Let me list the answers I got from my friends, and let us go over them one by one.

Output Clause

The very first person replied “OUTPUT clause”. A pretty interesting answer to start with, and I see exactly what he was thinking. SQL Server 2005 introduced the OUTPUT clause. The OUTPUT clause has access to the inserted and deleted (virtual) tables, just like triggers. It can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements.
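Here is a minimal sketch of the OUTPUT clause in action (the table and data are hypothetical, just for illustration):

USE tempdb
GO
CREATE TABLE TestTable (ID INT, Value VARCHAR(100))
GO
-- INSERT with OUTPUT: the inserted virtual table returns the new rows to the client
INSERT INTO TestTable (ID, Value)
OUTPUT inserted.ID, inserted.Value
VALUES (1, 'First Row')
GO
-- DELETE with OUTPUT: the deleted virtual table identifies the rows actually removed
DELETE FROM TestTable
OUTPUT deleted.ID, deleted.Value
WHERE ID = 1
GO
DROP TABLE TestTable
GO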

Error Logs

I was expecting someone to mention error logs when the subject is logging. The error log is the first place to look when there is any error, whether with the application or with the operating system. I keep the policy of checking my servers’ error logs every day. The reason is simple: often enough in my career I have found that when I look at error logs I find something I was not expecting. There are cases when I noticed errors in the error log and fixed them before the end user noticed them. Another common practice I always suggest to my DBA friends is that when any error happens, they should find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and will be able to fix it based on the knowledge base they have created. There are many different kinds of error log files in SQL Server as well: 1) SQL Server Error Log, 2) Windows Event Log, 3) SQL Server Agent Log, 4) SQL Server Profiler Log, 5) SQL Server Setup Log, etc.
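If you want to search the error log from T-SQL instead of opening it in Management Studio, here is a minimal sketch using sp_readerrorlog (an undocumented but widely used system procedure; the parameter values shown reflect my assumption of typical usage):

-- Read the current SQL Server error log (0 = current log, 1 = SQL Server log)
-- and filter entries containing the word 'Error'
EXEC sp_readerrorlog 0, 1, 'Error'
GO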

Change Data Capture

I was surprised by this answer; actually, more than the answer, I was surprised by the person who gave it. I had always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational ‘change tables’ rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
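For those who want to try it, here is a minimal sketch of enabling Change Data Capture (available in SQL Server 2008 and later; the schema and table names are hypothetical, and SQL Server Agent must be running for the capture job to collect changes):

USE AdventureWorks
GO
-- Enable CDC at the database level
EXEC sys.sp_cdc_enable_db
GO
-- Enable CDC for a single table; a capture instance and change table are created
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MySalesOrderDetail',
@role_name = NULL
GO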

Dynamic Management View (DMV)

I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. DMVs log, store, or record various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs: high availability status, query execution details, SQL Server resource status, etc.
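As a quick illustration, here is a minimal sketch of a DMV query that lists the top 5 queries by total CPU time from the plan cache (this is just one of many diagnostics one can build on DMVs):

-- Top 5 queries by total CPU time since they entered the plan cache
SELECT TOP 5
qs.total_worker_time AS TotalCPUTime,
qs.execution_count AS ExecutionCount,
SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(st.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC
GO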

Log Files

I almost flipped at this final answer from my friend. This should probably have been the first answer. Yes, indeed, the log file logs SQL Server activities. One could write infinite things about the log file. SQL Server uses a log file with the extension .ldf to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of a full backup file the database can be brought to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models: 1) Simple, 2) Full and 3) Bulk Logged. Each of the models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a log file backup will come into action and save the day!
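To show the idea, here is a minimal sketch of taking a full backup followed by a log backup (the database name and file paths are hypothetical; the database must be in the Full or Bulk Logged recovery model for log backups to be possible):

-- Full backup first, then a transaction log backup
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backup\MyDatabase_Full.bak'
GO
BACKUP LOG MyDatabase
TO DISK = 'D:\Backup\MyDatabase_Log.trn'
GO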

Can I just say I loved this month’s T-SQL Tuesday question? It provoked very interesting conversations around me.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Using SafePeak to Accelerate Performance of 3rd Party Applications

An exciting solution I found last year (2011) for SQL Server performance acceleration is SafePeak. It is designed specifically to accelerate and tune performance in cases where you have minimal control over the application, like 3rd party line-of-business applications. SafePeak performs automated caching of query and procedure results, returning results from memory with very high speed and reducing the SQL load by a factor of 10. No code changes are needed. And that is what makes it very interesting and appealing!

One of the questions I hear many times concerns performance acceleration of 3rd party applications that are critical to business function, the “line of business” applications. In most cases critical software remains intact for many years, growing and expanding but also losing its performance edge and affecting the business for the worse instead of for the good. The main problem with fixing and tuning 3rd party apps is that the code is closed, or it is someone else’s code. Those applications wait for an upgrade every few years, while the business is affected for the worse.

The SafePeak idea: a middleware software that studies the SQL Server schema (tables, views, stored procedures, functions, triggers) to learn about object dependencies, and then creates patterns of similar SQL query and procedure executions while understanding their direct and indirect object dependencies. It then starts actively caching results in RAM, while tracking update commands (DML, DDL and procs that do DML or DDL) so that they both affect the cache and reach the SQL Server. After downloading it, installation on a standard Windows 2008 R2 virtual machine or server (different from the SQL Server) takes several minutes. Connecting the application is done by pointing the connection string to SafePeak (easy). Configuration is done via their very nice looking GUI and focuses on things SafePeak didn’t understand, which are usually procedures that have some dynamic code inside.

Results can be very exciting: cached query response time is reduced to 0.25 millisecond or so. Yes, I meant millisecond; pretty impressive. So queries that are slow become extremely fast, and queries that are already fast become even faster.

Because no code change is involved, this fits 3rd party applications especially well: plug it in, play with it for a couple of hours of fine-tuning, and see the results.

SafePeak supports SQL 2000, 2005, 2008 and 2008 R2 and can run in a private cloud or the public cloud.

SafePeak provides a 14-day trial with all functions available for a single installation (they also have a SafePeak cluster installation, which I plan to review in the near future). The download is available here: www.safepeak.com/download – good luck, and tell me your stories.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Online Session Practical Tricks and Tips to Speed up Database Queries Today

I am presenting on the performance tuning topic again today at Virtual Tech Days. This time I am going to share lots of practical tips and will focus on what we can do immediately, right after the session is over.

During the session I have two things for you to spot.

  1. How many times do I say the word “performance”?
  2. How many times do I use the phrase “It is interesting to …”?

Let us see if you can tell me the count after the session. Trust me, I am not going to count while presenting, so I will let you come up with the answer to this fun game.

Sessions Details

Title: Practical Tricks and Tips to Speed up Database Queries
Timing: December 15, 2011 1:45pm – 2:45pm IST
In this session I am going to discuss various performance tuning related techniques. Here is the agenda of the session.

  • A Quick Start on Performance
  • Denali T-SQL Enhancements
  • Timeless Best Practices
  • The Superlative Performance

Each of the above topics will be accompanied by a very practical, solid demo. I am sure you will absolutely enjoy the session.

Giveaways

During my session I will ask a simple question. I will give away 5 copies of my SQL Server Interview Questions and Answers book to five random people who answer it correctly (more details in the session). If you already have this book, I strongly suggest you attend this session, as it will take the performance tuning concepts to the next level. I will make sure that I autograph the copies and send them your way.

Reference : Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Quick Look at Performance – A Quick Look at Configuration

This blog post is written in response to the T-SQL Tuesday post on Tips and Tricks. For me, this is a very interesting subject. I thoroughly enjoy a discussion when it is about performance tuning. I commonly get follow-up questions regarding this subject, but most of them do not give complete information about the environment.

Whenever I get a question which does not have complete information but is obviously requesting my help, my initial reaction is to ask more questions. When I ask for more details, I usually get more questions from them rather than the details I was asking for. Indeed, this is an endless loop. I prefer to resolve a query or a problem quickly, easily and accurately so that there is no more confusion or further problems in the future.

Here is the method I follow: I send people who request my help a couple of SQL Server scripts and ask them to run these scripts on their system. Once they send me the results, I have a good idea of what the status of their system is.

Here are a couple of scripts which I usually request them to run on their respective machines and get back to me with the results in Excel.

1) System Configuration

SELECT *
FROM sys.configurations
ORDER BY name OPTION (RECOMPILE);

2) File Names and Paths of Databases

SELECT DB_NAME([database_id]) AS [DBName],
name, physical_name, type_desc, state_desc,
CONVERT(BIGINT, size/128.0) AS [SizeinMB]
FROM sys.master_files
ORDER BY DB_NAME([database_id])

3) Capturing Wait Types and Wait Stats Information at Intervals

You can review the script mentioned in my earlier blog post over here.
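If you do not have that post handy, here is a minimal sketch of the core idea (not the full script referenced above): a snapshot of the top waits from sys.dm_os_wait_stats, which can be captured at intervals and compared.

-- Top 10 wait types by total wait time since the last restart (or stats clear)
SELECT TOP 10
wait_type,
waiting_tasks_count,
wait_time_ms,
signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC
GO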

Honestly, there is a lot of information one needs to solve a problem, but this is how I start, gathering all the basic information from the questioner. Once I get these, I review the results and either continue to ask more questions or help right away if I am able to reach the root cause of the issue.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – Virtual Presentation on Practical Tricks and Tips to Speed up Database Queries – December 15, 2011

Performance tuning has been my favorite subject, and any time I have to present on it gives me tremendous pleasure. I am always excited to present something new on this topic. Virtual Tech Days is just around the corner and I am going to present on the performance tuning subject once again. However, instead of theory, I am going to focus on the practical aspects of performance tuning and share tips which one can use right away.

Sessions Details

Title: Practical Tricks and Tips to Speed up Database Queries
Timing: December 15, 2011 1:45pm – 2:45pm IST
In this session I am going to discuss various performance tuning related techniques. Here is the agenda of the session.

  • A Quick Start on Performance
  • Denali T-SQL Enhancements
  • Timeless Best Practices
  • The Superlative Performance

Each of the above topics will be accompanied by a very practical, solid demo. I am sure you will absolutely enjoy the session.

Giveaways

During my session I will ask a simple question. I will give away 5 copies of my SQL Server Interview Questions and Answers book to five random people who answer it correctly (more details in the session). If you already have this book, I strongly suggest you attend this session, as it will take the performance tuning concepts to the next level. I will make sure that I autograph the copies and send them your way.

Reference : Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Ignore Columnstore Index Usage in Query

Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index, and the very first question I received in email was the following.

“We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created one non-clustered columnstore index on our large fact table. We have a very unique situation, but your article did not cover it. We are running a few queries on our fact table which work very efficiently, but there is one query which earlier ran very fine; after creating this non-clustered columnstore index, that query runs very slow. We dropped the columnstore index and suddenly this one query runs fast, but the other queries which benefited from the columnstore index run slow.

Any workaround in this situation?”

In summary, the question in simple words is: “How can we ignore the columnstore index in selective queries?”

Very interesting question. I can understand there may be cases when a columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index. The SQL Server engine will then use whichever other index is best.

Here is the quick script to prove the same.

We will first create a sample table and then create a columnstore index on it. Once the columnstore index is created, we will write a simple query that uses it. We will then show the usage of the query hint.

USE AdventureWorks
GO
-- Create New Table
CREATE TABLE [dbo].[MySalesOrderDetail](
[SalesOrderID] [int] NOT NULL,
[SalesOrderDetailID] [int] NOT NULL,
[CarrierTrackingNumber] [nvarchar](25) NULL,
[OrderQty] [smallint] NOT NULL,
[ProductID] [int] NOT NULL,
[SpecialOfferID] [int] NOT NULL,
[UnitPrice] [money] NOT NULL,
[UnitPriceDiscount] [money] NOT NULL,
[LineTotal] [numeric](38, 6) NOT NULL,
[rowguid] [uniqueidentifier] NOT NULL,
[ModifiedDate] [datetime] NOT NULL
)
ON [PRIMARY]
GO
-- Create clustered index
CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
( [SalesOrderDetailID])
GO
-- Create Sample Data Table
-- WARNING: This query may run up to 2-10 minutes based on your system's resources
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.*
FROM Sales.SalesOrderDetail S1
GO 100
-- Create ColumnStore Index
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
ON [MySalesOrderDetail]
(UnitPrice, OrderQty, ProductID)
GO

Now that we have created the columnstore index, if we run the following query it will surely use that index.

-- Select Table with Columnstore Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO

We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as in the following query, and it will not use the columnstore index.

-- Select Table ignoring Columnstore Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX)
GO

Let us clean up the database.

-- Cleanup
DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
GO
TRUNCATE TABLE dbo.MySalesOrderDetail
GO
DROP TABLE dbo.MySalesOrderDetail
GO

Again, make sure that you use this hint sparingly and understand its proper implications. Make sure that you test with and without the hint and select the best option after review with your administrator. Here is a question for you: have you started to use SQL Server 2012 for your validation and development (not on production)? It will be interesting to know the answer.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Fundamentals of Columnstore Index

There are two kinds of storage in databases: row store and column store. Row store does exactly what the name suggests – it stores rows of data on a page – and column store stores all the data of a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need to search only a much smaller number of columns. This means major increases in search speed and savings in hard drive use. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. Let us understand in this article what is different about the columnstore type of index.

Columnstore indexes are powered by Microsoft’s VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a columnstore index is very easy, and you don’t have to learn new syntax to create one. You just need to specify the keyword “COLUMNSTORE” and define the index as you normally would. Keep in mind, though, that once you add a columnstore index to a table, you cannot delete, insert or update the data – it is READ ONLY. However, since columnstore will be mainly used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between column store and row store approaches is illustrated below:

[Figure: column store vs. row store page layout]

In the case of row store indexes, multiple pages contain multiple rows, with the columns spanning across multiple pages. In the case of columnstore indexes, multiple pages contain multiple single columns. This means that only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps to compress the data; this has a positive effect on the buffer hit rate, as most of the data will be in memory and will not need to be retrieved repeatedly.

Let us see a small example of how a columnstore index improves the performance of a query on a large table.

As a first step, let us create a dataset which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on the resources.

USE AdventureWorks
GO
-- Create New Table
CREATE TABLE [dbo].[MySalesOrderDetail](
[SalesOrderID] [int] NOT NULL,
[SalesOrderDetailID] [int] NOT NULL,
[CarrierTrackingNumber] [nvarchar](25) NULL,
[OrderQty] [smallint] NOT NULL,
[ProductID] [int] NOT NULL,
[SpecialOfferID] [int] NOT NULL,
[UnitPrice] [money] NOT NULL,
[UnitPriceDiscount] [money] NOT NULL,
[LineTotal] [numeric](38, 6) NOT NULL,
[rowguid] [uniqueidentifier] NOT NULL,
[ModifiedDate] [datetime] NOT NULL
)
ON [PRIMARY]
GO
-- Create clustered index
CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
( [SalesOrderDetailID])
GO
-- Create Sample Data Table
-- WARNING: This query may run up to 2-10 minutes based on your system's resources
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.*
FROM Sales.SalesOrderDetail S1
GO 100

Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, first I will run the query which uses the regular index, and we will note its IO usage. After that we will create the columnstore index and measure the IO of the same query.

-- Performance Test
-- Comparing Regular Index with ColumnStore Index
USE AdventureWorks
GO
SET STATISTICS IO ON
GO
-- Select Table with regular Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO
-- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
-- Create ColumnStore Index
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
ON [MySalesOrderDetail]
(UnitPrice, OrderQty, ProductID)
GO
-- Select Table with Columnstore Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO

It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored in the same pages and the query does not have to go through every single page to read those columns.

If we enable the execution plan and compare, we can see that the columnstore index performs way better than the regular index in this case.

Let us clean up the database.

-- Cleanup
DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
GO
TRUNCATE TABLE dbo.MySalesOrderDetail
GO
DROP TABLE dbo.MySalesOrderDetail
GO

In future posts we will see cases where the columnstore index is not the appropriate solution, as well as a few other tricks and tips for the columnstore index.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution

Earlier this year I wrote for a whole month on SQL Server wait stats, and the series was one of the best reviewed I have ever written. The same series has been enhanced and compiled into a book, SQL Server Wait Stats [Amazon] | [Flipkart] | [Kindle]. The best part of this book is that it is an evolving book; I am planning to expand it at certain intervals.

Yesterday I came across a very interesting system where the topmost wait type was TRACEWRITE. The DBA of the system reached out to me asking what this wait type means and how it can be resolved. As I had not written about this in the book so far, this blog post is dedicated to his question. It will definitely be included in future versions of the book. For the moment let us go over it quickly and see what we can learn about TRACEWRITE.

Personally I consider TRACEWRITE a harmless and innocent wait type, and I recommend that you ignore it too. However, the DBA who ran the script mentioned in my very first blog post still found this wait type at the top, as I did not include it in the exclusion list. I think it should be there, and in future versions of the script I will include it.

From Book On-Line:

TRACEWRITE
Occurs when the SQL Trace rowset trace provider waits for either a free buffer or a buffer with events to process.

TRACEWRITE Explanation:

SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired, it goes through various stages as well as various routes. One of the routes is the trace I/O provider, which sends data to its final destination either as a file or a rowset. The rowset provider does not provide any guarantees for data: it stores the data in its internal buffers, and if the data from the internal buffers is not consumed quickly enough (within 20 seconds), the buffers start to drop events to free themselves up to handle future events. This is when it sends a special error message to the Profiler client. When more threads are waiting for free buffers, the TRACEWRITE wait type accrues. The higher this wait, the higher the number of threads waiting for free buffers, degrading performance in most systems.
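To see how much TRACEWRITE wait your own system has accumulated, here is a minimal sketch querying sys.dm_os_wait_stats directly (the values are cumulative since the last restart or since the wait stats were cleared):

-- Check accumulated TRACEWRITE wait
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'TRACEWRITE'
GO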

Reducing TRACEWRITE wait:

It is not necessary to consider this wait type bad or evil. In most systems it can be a perfectly normal wait type and you just need to ignore it. If you are convinced, you should stop reading this blog post here. However, if you still want to reduce this wait type, here is something you can experiment with on your development server (never on a production server).

Run the following query and see if it returns any rows. This query will list all the traces running on your system.

SELECT *
FROM sys.fn_trace_getinfo(0)

On most of the systems I have come across, I have noticed the default trace enabled on the system. I personally like to keep it on, as it helps the DBA diagnose problems the first time they occur. Additionally, it helps Microsoft diagnose your system if you request their support. (One more reason to ignore this wait type and not attempt to disable the default trace.) However, you can manually disable this trace with the following script.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'default trace enabled', 0;
GO
RECONFIGURE;
GO

Well, as this is not a harmful wait type, I had not included it in my book or the initial series. Anyway, now we have learned about this wait type, so I will include it in future versions.

Read all the posts in the Wait Types and Queue series.

Note: The information presented here is from my experience, and there is no way I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration

A couple of months ago I had the opportunity to share my first look at SafePeak, a new and unique software solution that improves SQL Server performance and solves bottlenecks by accelerating data access and cutting the CPU and IO load of your SQL Server. SafePeak’s unique approach not only tells you about the problems but actually resolves them automatically, improving SQL Server performance and the performance of applications dramatically.

It looks like the SafePeak team gave very serious thought to how to make things even easier during integration and fine-tuning, thus improving the short-term benefits, and also to providing valuable information about the usage of the database by the application, an important long-term benefit. Before I get into my key findings on the new features and improvements, let me give a short reminder of what SafePeak is.

SafePeak in brief:

SafePeak is a dynamic caching proxy that resides between the application and the database. It is unique in that it requires neither code changes nor database changes. The only change required is routing the application through SafePeak instead of the database (by changing the connection string, a DNS record, or simply adding a reroute line in the c:\windows\system32\drivers\etc\hosts file).

There is also a short video about SafePeak (thanks to the ApplicationPerformance.com team for the benchmark and the video).

SafePeak analyzes the database schema and the SQL traffic, prepares SQL patterns for the decision-making process, and applies those SQL patterns as caching rules. When data is served from cache, the result is immediate (tests show 100-400 microseconds per query – that is around 0.0001 to 0.0004 of a second).

The concept of “dynamic caching” is actually very interesting and unique: SafePeak caches the results of queries and stored procedures in RAM, but makes sure that data integrity is enforced in real time both in SafePeak and in SQL Server.

The benefits of such a solution are obvious: quick integration, immediate ROI, no complex projects and most importantly, minimal downtime for production systems.

To learn more about the concept and how SafePeak works see my previous post here.

New features in the SafePeak version 2.1

As this is my first semi-deep look at the new version, I’ll cover the top features that I found useful.

1. Improvement of the cache setup and fine-tuning.

The #1 obstacle in previous versions was understanding what actually needs to be configured to improve cache efficiency. While most of the setup is automated, some applications still demand an important fine-tuning process. You see, SafePeak takes data integrity as its primary concern, and that can reduce, even significantly, the cache effectiveness at the beginning. Two factors to consider from the beginning: A) “dynamic objects”; B) manual (or non-deterministic) patterns to be enabled for cache. Actually, a few of the comments on my previous review asked questions exactly about this.

The “dynamic objects” are usually stored procedures in your application that dynamically create SQL and then execute it (CLR objects and encrypted objects are also considered “dynamic”). SafePeak flags such procs as “dynamic objects”, and by default it takes the safest course of action to ensure data correctness: on each execution of such a proc, the ENTIRE cache is cleaned and SafePeak’s mode during the execution is “Disabled (DDL)” (visible on the dashboard screen).

Consider that you have 2-3 procedures with dynamic code that do not change any data and are executed every 5 seconds. This means that instead of caching them, SafePeak’s entire instance cache is cleaned every 5 seconds. This doesn’t really give you a chance for improvement, right?

While this was configurable in previous versions, the new version makes both the identification of such events and procedures and their configuration much easier: immediately on such an execution, the Dashboard shows the name of the procedure that needs to be configured.

Clicking on the line takes you to the Dynamic Objects configuration screen (Caching management → Instance → Database → Dynamic Object tab), where a filter is applied to show you only this procedure plus all of its dependent dynamic procedures. You will need to manually configure all these procedures.

Configuration became easier too, eliminating the previous requirement of running “sp_helptext” on your production SQL Server each time. The “Available Objects” popup (under the “Object Dependencies” tab) shows us the text of the procedure. All we are left to do is identify the tables that are read from (selects, joins), the tables that are updated (DMLs), and/or which other stored procedures it calls. Focus only on the SQL code that is dynamic (the regular code has already been learned by SafePeak automatically).

At the end, configure the checkboxes under “Object Information”: whether the object is deterministic (which makes the activation of cache automatic), whether it has transactions inside, DDL – whether it changes schema, and whether it should clear all of SafePeak’s cache (best not to do that!).

The new version makes it simple to know what needs to be configured (from the dashboard), and the configuration is simpler and no longer requires opening your database server in parallel. This is also nice because it can be done remotely.

2. Database templates

What happens if I have multiple similarly structured databases (like content databases in SharePoint)? Or if I have several servers with the same database structure?

Well, this is the next improvement in SafePeak: the creation of database templates.

After you configure one database, you can save it as a database template (in the Caching Management area). Then if you have additional similar databases, SafePeak will recognize their pattern and apply that configuration to those databases too. You also have control over this under the much enhanced “Settings → Instance Configuration” section. You can also export these settings to a file and then import them on another SafePeak installation. A very nice feature.

3. Controlling caching by database was also enhanced. In addition to controlling SQL patterns, in the “Settings → Instance Configuration” section you can enable and disable the cache for each database. Because SafePeak’s licensing model is based on cached databases, this makes it very simple to control.

4. SafePeak extends to near real-time analytics of SQL Server usage

SafePeak version 2.1 brings the DBA new abilities to see and analyze all the SQL queries that were executed at any time. While previously we could see only cacheable “read” SQL patterns, the new SafePeak allows investigation of all executed SQL patterns, including DMLs, DDLs, DCLs, etc. The new SafePeak also makes it possible to check exactly what happened by choosing a specific timeframe. The new version creates analytical statistics every 15 minutes (reduced from 1 hour), which allows better investigation of suspicious events. The SafePeak team also changed the columns list and added a “DB” column. This way you immediately see the queries’ response time as they run on SQL Server, their performance from cache, and the change in average performance with SafePeak.

Actually, this is interesting, as just a couple of weeks ago one of my customers had a sudden overload on the SQL Server during evening hours. It is not really clear when it started, when it ended, what happened there and why – it all remains a mystery to this day. SafePeak’s new analytical abilities could definitely unveil such a mystery and help the DBAs save their precious time and get back to normal work.

Summary

I was impressed by SafePeak’s first version, but the new features and components make SafePeak a very advanced and comprehensive SQL Server performance, acceleration and analytics solution. In this post I scanned only a small part of the overall changes and improvements. I would very much like to hear your findings and suggestions.

SafePeak is a great acceleration solution for users who want immediate results for sites with performance, scalability and peak spikes challenges.

SafePeak can significantly improve response times by reducing network round-trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access.

For all those SQL Server shops – it is definitely worth giving SafePeak a try. You can download a free, fully functional trial at www.safepeak.com/download.

Reference: Pinal Dave (http://blog.SQLAuthority.com)