SQL SERVER – 28 Links for Learning SQL Wait Stats from Beginning

It has been exactly two years since I wrote the series on SQL Wait Statistics and Queues. I still receive quite a few questions related to this subject. Here are my answers to those questions.

Q: The series you wrote two years ago – is it still relevant to the latest SQL Server?
A: Yes, absolutely. Everything I wrote earlier is still relevant to the latest version of SQL Server. As a matter of fact, most of it will remain relevant for a long time to come.

Q: Is there any way I can read everything together in an eBook format?
A: Yes, you can get SQL Wait Stats on Kindle over here.

Q: Is SQL Wait Stats a good logical starting point for SQL Performance Tuning?
A: I believe so. It gives you a good idea of where exactly the bottleneck is in your server.
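
If you want a quick, hands-on feel for the subject before starting the series, here is a minimal sketch of a query I like to run first (it is not part of the original 28-day series) that lists the top waits recorded on the instance since the last restart, using the sys.dm_os_wait_stats DMV covered on Day 3:

-- Quick look at the top waits on the instance since the last restart
-- (illustrative sketch only; each wait type is explained in the series below)
SELECT TOP (10) wait_type,
waiting_tasks_count,
wait_time_ms,
signal_wait_time_ms,
wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC
GO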

Q: I have not previously learned about SQL Wait Stats; can I start now?
A: Absolutely, yes. Here are the links:

SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28

SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28

SQL SERVER – DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28

SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28

SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28

SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28

SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28

SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

SQL SERVER – WRITELOG – Wait Type – Day 17 of 28

SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

SQL SERVER – PREEMPTIVE and Non-PREEMPTIVE – Wait Type – Day 19 of 28

SQL SERVER – MSQL_XP – Wait Type – Day 20 of 28

SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

SQL SERVER – Guest Post – Jacob Sebastian – Filestream – Wait Types – Wait Queues – Day 22 of 28

SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

SQL SERVER – 2000 – DBCC SQLPERF(waitstats) – Wait Type – Day 24 of 28

SQL SERVER – 2011 – Wait Type – Day 25 of 28

SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

SQL SERVER – Best Reference – Wait Type – Day 27 of 28

SQL SERVER – Summary of Month – Wait Type – Day 28 of 28

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Primary Key and NonClustered Index in Simple Words

I have been writing a weekly round-up on my blog where I go over the last six years of blog posts and pick the best posts from the past. While I do this, there are two major areas I focus on: 1) if a feature has changed, I re-blog about it with additional details, or 2) if I did not provide complete information six years ago, I try to fill in the gap now. Well, just like everything else, my knowledge and writing skills have evolved. Before continuing, please read my latest memory lane blog post, where in 2007 I wrote scripts for Primary Key and Unique Key.

In November 2006 I was starting to learn more about SQL and had been working with the product for only four months, so I was still exploring various subjects. I wrote a blog post describing how Primary Key and Unique Key are different. Everything I wrote there is correct; however, there are a few more details one should learn when it comes to Primary Key and Clustered Index.

Here is a common misconception prevailing in the industry:

A Primary Key has to be a Clustered Index.

In reality the statement should be corrected as follows:

A Primary Key can be Clustered or Non-clustered, but it is a common best practice to create the Primary Key as a Clustered Index.

Well, now that we have corrected the statement, let us understand it in a bit more detail. A Primary Key should uniquely identify every row in the table and must be NOT NULL. A good candidate for the clustered index key also (most of the time) uniquely identifies rows and is NOT NULL. That means it is usually a good idea to create the Primary Key as a Clustered Index, so it solves both problems together. Keeping these facts in mind, SQL Server automatically creates a Clustered Index on the Primary Key when the table is created. Developers often do not specify which column should have the clustered index, so by default the Primary Key becomes the Clustered Index. This practice is now extremely common, and a lot of people have forgotten that a Primary Key and a Clustered Index are two different things. They can be on the same column, but they do not have to be.

Here are four scenarios we will walk through to learn the behavior of SQL Server when it comes to Primary Key and Clustered Index.

  • Scenario 1: Primary Key will default to Clustered Index
  • Scenario 2: Primary Key is defined as a Non-clustered Index
  • Scenario 3: Primary Key defaults to Non-clustered Index with another column defined as a Clustered Index
  • Scenario 4: Primary Key defaults to Clustered Index with the other index defaulting to a Non-clustered Index

Now let us see each of the scenarios in detail.

Scenario 1: Primary Key will default to Clustered Index

In this case we will create only a Primary Key, and when we check the kind of index created on the table we will notice that SQL Server has automatically created a clustered index on it.

-- Case 1 Primary Key Defaults to Clustered Index
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY,
Col1 INT NOT NULL)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO

Scenario 2: Primary Key is defined as a Non-clustered Index

In this case we will explicitly define the Primary Key as a non-clustered index, and SQL Server will create it as a non-clustered index. This proves that a Primary Key can be a non-clustered index.

-- Case 2 Primary Key Non-clustered Index
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY NONCLUSTERED,
Col1 INT NOT NULL)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO

Scenario 3: Primary Key defaults to Non-Clustered Index with another column defined as a Clustered Index

In this case we will create the clustered index on another column. SQL Server will automatically create the Primary Key as a non-clustered index, since the clustered index is specified on another column.

-- Case 3 Primary Key Defaults to Non-clustered Index
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY,
Col1 INT NOT NULL UNIQUE CLUSTERED)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO

Scenario 4: Primary Key defaults to Clustered Index with the other index defaulting to a Non-clustered Index

In this case we will create two indexes on the table, but we will not specify the type of index for either column. When we check the results we will notice that the Primary Key has automatically defaulted to a Clustered Index and the unique constraint on the other column to a Non-clustered Index.

-- Case 4 Primary Key and Defaults
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY,
Col1 INT NOT NULL UNIQUE)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO

I think the above examples clarify any confusion related to Primary Key and Clustered Index.

Now here is a question I often get asked: what can be the reason for creating the Primary Key and the Clustered Index Key on different columns? Well, there are many scenarios where this makes sense. It is possible that you have an SSN column which you want to use as the Primary Key, but do not want to make the clustered index key, because you have a uniquely increasing identity column which better suits your needs for that table (again, this is just an example – you can argue exactly the opposite way as well). You are welcome to continue the discussion on this subject in the comments field or on a dedicated blog post I wrote about it years ago over here. There are a few really good comments there – I think that blog post is now a gold mine for understanding this concept.
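
To make the SSN example concrete, here is a small illustrative sketch in the same style as the scenarios above (the table and column names are hypothetical, not from the original post), where SSN carries the non-clustered Primary Key and the identity column carries the clustered index:

-- Illustration only: Primary Key (SSN) and Clustered Index (ID) on different columns
USE TempDB
GO
-- Create table
CREATE TABLE EmployeeTable
(ID INT IDENTITY(1,1) NOT NULL,
SSN CHAR(11) NOT NULL,
CONSTRAINT PK_EmployeeTable_SSN PRIMARY KEY NONCLUSTERED (SSN))
GO
-- Clustered index on the ever-increasing identity column
CREATE UNIQUE CLUSTERED INDEX CIX_EmployeeTable_ID ON EmployeeTable (ID)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'EmployeeTable'
GO
-- Clean up
DROP TABLE EmployeeTable
GO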

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Contest to Win Amazon Card – Experience Cloud Data Management System (CDMS) and NuoDB

The world of databases is changing. The traditional on-premises client-server installation still has a place in many organizations, but many of these organizations are facing challenges with this setup as their data grows exponentially every day. Well, as times change, innovation is required. If you are reading this blog, you might have attended the announcement of NuoDB's General Availability. The event was highly attended and there was a lot of interest in the product.

NuoDB has announced the general availability of the NuoDB Starlings Release (V 1.0) – the industry’s first and only Cloud Data Management System (CDMS). I have just downloaded the product and am trying out the various features they promised. I will be writing detailed blog posts on various aspects of the product as we go further.

There are two different editions of the product available – the Developer and Pro Editions.

Developer Edition (FREE)

This edition allows users to develop NuoDB apps at full throttle, without restrictions on data size or number of hosts. There is no charge associated with this edition. Additionally, it allows users to upgrade to the Pro Edition when they are ready for commercial deployment. Register & Download the Developer Edition today for FREE.

Pro Edition

This is a very robust edition that can handle millions of transactions per second, a huge number of concurrent users, and Big Data on commodity hardware. And it is free for apps that demand high transactional throughput with low storage requirements.

CONTEST

Here is the good news – you can join me in evaluating the product. I am going to download the product and test its various aspects, and you can do the same.

The good folks at NuoDB will give a USD 10 Amazon Gift Card to the first 25 people who download the product between now and Jan 31, 2013, 11:59 PM EST.

Additionally, here is a bonus question for you – what is the maximum data storage capacity of the Pro Edition? Leave a comment and you may win something really cool from me.

Here are a few reasons why I find this product interesting:

  • Elastic scalability built for the cloud.
    NuoDB scales elastically to improve transactions-per-second performance and handle both massive concurrency and data volume – out and in, simply by adding or removing nodes.
  • 100% SQL, 100% ACID, 0% Hassle.
    This brand new CDMS has all of the features you want (like elastically scalable in the cloud with 100% ACID guarantees and SQL compliance) and none of the things you don’t (like complex database administration tasks).
  • Blistering Performance.
    NuoDB exceeds 1 million transactions per second in the YCSB benchmark, all on inexpensive commodity hardware.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Year 2012 in Review – Perspective of Blog

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Crunchy numbers:

3,200,000 people watched the Mars Rover land. This blog was viewed about 19,000,000 times in 2012; if this blog were a video of the Rover landing, it would take about 6 days for that many people to see it. In 2012, there were 372 new posts, growing the total archive of this blog to 2,360 posts. The busiest day of the year was October 9th, with 80,693 views.

The top referring sites:

  1. facebook.com
  2. pinaldave.com
  3. stackoverflow.com
  4. codeproject.com
  5. twitter.com

Where did they come from?

That’s 234 countries in all!
Most visitors came from The United States. India & The United Kingdom were not far behind.

Click here to see the complete report.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Contest – Win USD 300 Worth Gift – Cartoon Contest is Back

There are two excellent contests, and there is lots of winning to do as the year ends.

1) Win USD 25 Amazon Gift Cards (10 Units)

This one is very simple – you just have to download SQL Server DB Optimizer. That’s it!

There are only two conditions:

  1. You must have a valid email address, as the USD 25 Amazon Gift Card will be sent to that address.
  2. Download DB Optimizer between today and Dec 8, 2012.

Link to Download DB Optimizer.

For the next 10 days, one winner will be notified each day about winning a USD 25 Amazon Gift Card.

2) Win Star Wars R2-D2 Inflatable R/C

This is the coolest thing to win. I personally want one, but as I am running the contest, I can’t participate.

You can get this cool remote-controlled device – you just have to answer the following cartoon contest. Read the complete story and think about what answer the smart employee would provide.

There are only two conditions:

  1. Leave your answer in the comment area of this blog post (every comment will be hidden till Dec 8, 2012).
  2. Please leave your answer in the comment area between today and Dec 8, 2012.

Remember, you can participate as many times as you want. Make sure that your answer is correct and creative. The most creative answer will be selected, and the decision of the contest owner will be final. We may have runner-up prizes, but for the moment let us try to win the R2-D2. Here is the cool video of R2-D2.

Now here is the cartoon story. Please follow the story and complete the very last cartoon template. Your answer should be correct and should be creative. However, the ideal answer will not be longer than one or two sentences.

Hint: (Hint)

Well, leave your answer in the comment area of this blog post. If you do not win the R2-D2, trust me, there is a chance you may win a surprise gift from me.

The last day to participate in both contests is Dec 8, 2012. We will announce the winners in the week of December 10.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal, and databases are often the cause of major headaches when applications need to scale to accommodate more connections, transactions, or both.

In order to overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications. Getting transactional consistency across multiple SQL database shards can prove to be very difficult.

Sharding requires developers to think about things like rollbacks, constraints, and referential integrity across tables within their applications when these types of concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself.

What if database sharding was no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta and they have recently released their first release candidate so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput.

Each NuoDB database consists of at least three processes that enable a single database to run across multiple hosts. These processes include a Broker, a Transaction Engine and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines and maintain a global view of the network to keep track of the multiple Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines if they don’t exist in memory.

The secret to NuoDB’s approach to solving the sharding problem is that it is a truly distributed, peer-to-peer, SQL database. Each of its processes can be deployed across multiple hosts. When client applications need to connect to a Transaction Engine, the Broker will automatically route the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires.

NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are making general administration of the database simpler as well. Their distributed database appears to you, the user, like a single logical SQL database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole. This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

 

SQL SERVER – Send Backups to a Network Folder, FTP Server, Dropbox, Google Drive or Amazon S3

Let me tell you about one of the most useful SQL tools that every DBA should use – SQLBackupAndFTP. I have been using this tool since 2009, and it is the first program I install on a SQL Server. Download the free version, spend a minute on configuration, and your daily backups are safe in the cloud.
In summary, SQLBackupAndFTP:

  • Creates SQL Server database and file backups on schedule
  • Compresses and encrypts the backups
  • Sends backups to a network folder, FTP Server, Dropbox, Google Drive or Amazon S3
  • Sends email notifications of job’s success or failure

SQLBackupAndFTP comes in Free and Paid versions (starting from $29) – see the version comparison. The free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases – it will be sufficient for many small customers.

What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1 – Main form above):

  1. Connect to your SQL Server and select the databases to be backed up
  2. Click “Add backup destination” to configure where backups should go (network folder, FTP Server, Dropbox, Google Drive or Amazon S3)
  3. Enter your email to receive email confirmations
  4. Set the time to start daily full backups (or go to Settings if you need Differential or Transaction Log backups on a flexible schedule)
  5. Press the “Run Now” button to test

You can get to this form by clicking the “Settings” button in the “Schedule” section. Select what types of backups you want and how often you want to run them, and you will see the scheduled backups in the “Estimated backup plan” list.

A detailed tutorial is available on the developer’s website.

The SQLBackupAndFTP setup also gives you the option to install “One-Click SQL Restore” (you can install it stand-alone too) – a basic tool for restoring just Full backups.

However basic, you can drag and drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server on startup, selects the right database, and is smart enough to restart the server to drop open connections if necessary – very handy for developers who need to restore databases often.
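
For comparison, here is roughly what the tool saves you from typing by hand – a minimal sketch of a manual full-backup restore in T-SQL (the database name and path are hypothetical, and this sketch uses SINGLE_USER to drop open connections rather than restarting the service):

-- Manual equivalent of restoring a full backup (hypothetical names and paths)
USE master
GO
-- Kick out open connections so the restore can proceed
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
RESTORE DATABASE MyDatabase
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH REPLACE
GO
ALTER DATABASE MyDatabase SET MULTI_USER
GO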

You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still way easier, and it integrates compression, encryption, FTP, cloud storage and email into one solution, which makes it superior to maintenance tasks in every aspect.

On the flip side, SQLBackupAndFTP is not the fanciest tool for managing backups or checking their health. It only works reliably on local SQL Server instances; in other words, it has to be installed on the SQL Server itself. For remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client, but on the server itself.
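
To illustrate that last point, here is the plain BACKUP DATABASE command (the database name and path are hypothetical); the TO DISK path is resolved on the machine running SQL Server, not on the machine that issues the command:

-- BACKUP DATABASE writes the .bak file to a path on the server itself
-- (hypothetical database name and path, for illustration only)
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Full.bak'
WITH INIT, STATS = 10
GO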

This tool is compatible with almost all known SQL Server versions. It works with SQL Server 2008 (all editions) and many previous versions. It is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for scheduling backups.

I strongly recommend this tool to all DBAs. You should absolutely try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here.

Please share your experience about using this tool. I am eager to receive your feedback regarding this article.

Reference: Pinal Dave (http://blog.SQLAuthority.com)