SQL SERVER – SQL Server Configuration Checking – A Must Do for Every DBA – Notes from the Field #004

[Note from Pinal]: This is the fourth episode of the Notes from the Field series. While installing SQL Server, many DBAs take pride in the fact that they were able to install it without any special configuration, or that Microsoft has built such a fantastic installer that all of the default values have worked since installation. Some defaults do work well; however, every application and business has its own suitable settings and configuration. If you are a DBA whose business runs on a default configuration, you must read this post, as it talks about how to understand your business and how to optimally configure your SQL Server.

In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue DBAs and Developers face in their career – how to perform a configuration check for your database. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Mike in his own words.


As a SQL Server Consultant, I have often performed a WellDBA™ Exam for a client who has installed SQL Server using the “Next-Next-Next-Finish” approach. They’ve accepted all of the defaults, thrown their production databases on the server and just assumed that all of their instance configuration options were happy and safe. I often rant about independent software vendors for various reasons. One of my biggest pet peeves is when they seem to allow their customers to just install with all defaults and not even consider ever changing them. Whether you are deploying your own databases, or those created by a vendor, you should look at your SQL Server configuration options for your instance.

How Do I Check My Configuration Options?

sp_configure is the SQL Server system stored procedure used to view and set your instance configurations. I’ve blogged about it and the sys.configurations catalog view before; in that post I go through the ways to use sp_configure and what the columns in sys.configurations mean. The options set by sp_configure control various instance-level run-time parameters: the maximum memory for SQL Server to use, whether to compress backups by default, and many others. You can also change many of these through the GUI in SSMS. Each configuration option is well documented in Books Online.
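
For example, here is a minimal sketch of both approaches (sp_configure and sys.configurations are standard system objects; the 'show advanced options' step is only needed so sp_configure lists every setting):

-- Show advanced options so sp_configure lists every setting
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- List every instance-level option with its configured and running values
EXEC sp_configure;

-- Or query the catalog view directly for the same information
SELECT name, value, value_in_use, description
FROM sys.configurations
ORDER BY name;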

You should check your configurations and review where they differ from the default settings. You should also review where you are using a problematic default setting, as discussed below. Every environment is different, but as you look at these options more frequently and learn about them, you’ll be well on your way to properly managing your environment into the future.

The only problem with simply selecting everything out of sys.configurations, as many scripts do, is that you are likely to miss a setting that differs from the default – not many people have memorized the default values of everything this view returns!
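
To illustrate the idea, here is a hedged sketch that flags a few well-known options that differ from their documented defaults. The default values in the table variable below are assumptions typed in for illustration and should be verified against Books Online for your SQL Server version:

-- Sketch: flag settings that differ from a small, hand-maintained default list
-- (default values are assumptions; verify them for your SQL Server version)
DECLARE @defaults TABLE (name NVARCHAR(128) PRIMARY KEY, default_value INT);
INSERT INTO @defaults (name, default_value) VALUES
    (N'max server memory (MB)', 2147483647),
    (N'cost threshold for parallelism', 5),
    (N'backup compression default', 0);

SELECT c.name, CONVERT(INT, c.value_in_use) AS value_in_use, d.default_value
FROM sys.configurations AS c
INNER JOIN @defaults AS d ON d.name = c.name
WHERE CONVERT(INT, c.value_in_use) <> d.default_value;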

A Script To Help

At Linchpin People, we’ve recently released a SQL Server configuration checking script to help look through your configurations. This free download helps you quickly get to the bottom line of your configuration options. Instead of having to look at all of them and review each one, our script shows you where you have a setting that is different from the default. It also shows you where you have kept a default that we have seen customers suffer from not changing. This script helps you zero in on your problem areas more quickly.

Summary

Rather than use this post to get into all of the configuration options you should change and why (there are plenty of posts here on Pinal’s blog and other blogs about that, as well as Microsoft resources), let me use these remaining few words as a reminder: check your configurations, get to know them, understand them, and consider changing them. There have been many engagements where simply changing common defaults like Max Server Memory or Cost Threshold for Parallelism has made a noticeable difference for our clients. Check out that script in the blog post on our site, review your environment, and investigate the “Potentially Bad Defaults” section. Review each setting, research the best answer for your environment, and get your environment on the right path.
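
For instance, here is a minimal sketch of adjusting those two commonly-changed defaults with sp_configure (the numbers below are placeholders for illustration only; research the right values for your hardware and workload):

-- Placeholder values for illustration only; size these for your environment
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 16384;        -- leave headroom for the OS
EXEC sp_configure 'cost threshold for parallelism', 50;   -- raise above the old default of 5
RECONFIGURE;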

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX : Error 3154: The backup set holds a backup of a database other than the existing database – SSMS

This is a follow-up to the blog post I wrote on error 3154 a few years ago. I have received quite a few emails asking how to fix the same error with the help of SQL Server Management Studio. In this blog post, I will explain how to do that with SSMS. Here is the error code:

Error 3154: The backup set holds a backup of a database other than the existing database.

You can fix the error with the following T-SQL command.

-- Restore over the existing database, overwriting it with the backup's contents
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\BackupAdventureworks.bak'
WITH REPLACE

If you want to do the same with SSMS, you can follow the steps here:

Step 1: Create a database with your preferred name. (In our case AdventureWorks)

Step 2: Right-click on the database and click on Tasks >> Restore >> Database…

Step 3: On the restore screen, go to the third selection, Options. Now select the checkbox “Overwrite the existing database (WITH REPLACE)”.

Step 4: Click OK. It should successfully restore the database.

Note: When you restore a database WITH REPLACE it will overwrite the old database.
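
If you are unsure which database a backup file actually contains before overwriting an existing one, you can inspect the backup set first. A quick sketch, reusing the sample path from above:

-- Show the original database name, backup type and dates for each backup set
RESTORE HEADERONLY
FROM DISK = 'C:\BackupAdventureworks.bak'

-- List the data and log files contained in the backup
RESTORE FILELISTONLY
FROM DISK = 'C:\BackupAdventureworks.bak'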

Relevant Blog Post:

FIX : Error 3154: The backup set holds a backup of a database other than the existing database

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – vCPUs – How Many Are Too Many CPUs for SQL Server Virtualization? – Notes from the Field #003

[Note from Pinal]: This is the third episode of the Notes from the Field series. Lots of organizations use virtualization, often for consolidation, performance, or simply easier configuration. Looking at its various advantages, nowadays I find lots of organizations have implemented virtualization. When they face problems with it, they often blame the technology instead of digging deeper to correct the problem. Virtualization can be a blessing for your servers; you just have to configure and tune it properly. Do not run away from virtualization; rather, implement it properly.

In this episode of the Notes from the Field series, database expert David Klee explains a very crucial issue DBAs and Developers face in their career – how over-allocating vCPUs silently hurts virtualized SQL Server performance. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


As a technologist, I primarily straddle two worlds, centered between the Microsoft SQL Server and private cloud Virtualization ecosystems. As a consultant, I spend a number of hours each week working with architecture and performance problems with virtualized SQL Server instances. After doing this for a number of years, quite a few common trends keep appearing at different projects. These trends almost always hurt performance and limit the effectiveness of the DBA to protect the core systems that their job depends on the most. Not all Virtualization infrastructures are created equal, and unless a specific focus on maintaining your SQL Server performance is present, the greatest silent performance killer is one that will silently attack and hinder your mission-critical systems. What is it, and what can you do about it?

Consolidation vs Performance

Most virtualized environments are designed as they always have been: focused on consolidation, but not necessarily performance. Unfortunately for you, the DBA, placing your SQL Servers in an environment historically designed for consolidation is guaranteed to hurt. The culprit is too many vCPUs. Why?

What is a Hypervisor?

Everything that occurs in a virtualized environment must be scheduled by the virtualization layer, also known as the hypervisor. This is the layer that sits directly on top of the hardware, and its job is to coordinate and prioritize the virtual machines’ requests for the physical resources. Between the time a CPU request is submitted and the time of actual execution, the request is placed in a ‘ready’ queue.

Importance of vCPUs

In an environment that is lightly taxed, the time spent in the queue is negligible, and the performance impact is so insignificant that usually only a benchmarking team can measure the virtualization overhead. However, as the consolidation ratios grow (meaning a greater number of virtual machines are placed on the physical machines), the time spent in these queues can grow to much higher values. The greater the value, the more substantial the impact on performance. High vCPU wait times can hurt a highly parallelized application such as SQL Server even more than a single-threaded task getting slightly delayed.

For an example of how severe the impact of these scheduling waits can be on a SQL Server’s performance, please visit this blog post. (link)

Unfortunately, these delays are completely transparent to the applications and the operating systems inside the virtual machines. No entries are logged in any error log, and no warnings are sent to the application owner. Some applications, such as file servers or print servers, are less sensitive to these vCPU performance impacts. However, resource-intensive applications such as SQL Server are greatly affected by these waits. All you, the DBA, know is that last night’s ETL process ran two hours longer than usual, yet none of the usual performance indicators are present.

Key Metrics for Virtualization

Now that I have thoroughly scared you away from virtualizing your SQL Servers, let’s add some transparency to this scenario so you can better identify and measure these wait times. Both VMware vSphere and Microsoft Hyper-V have metrics that you can sample to better gauge the time spent waiting on CPU resources. vSphere presents its metrics in the vCenter management interface, and among them is the VMware CPU wait metric called CPU Ready Time. More details on VMware’s CPU Ready Time can be found at this blog post. Hyper-V metrics are collected with SCOM and/or Perfmon, and the Perfmon counter you are looking for is Hyper-V Hypervisor Virtual Processor\CPU Wait Time Per Dispatch. Work with your virtualization administrator to get access to these tools. As a DBA responsible for your business-critical SQL Servers, it is imperative that you gain access to these metrics so you can break through the black box called virtualization and see the performance statistics underneath your systems. These metrics will help you quickly troubleshoot any performance issue that you may encounter.

Education is the Key

Sample and analyze these values very carefully. Work with your virtualization team to determine whether the current levels are acceptable and not impacting performance. Educate them on the resource consumption of SQL Server and your specific database workloads. Work with the virtualization administrators to reduce, and then maintain, acceptable resource timing levels so you can maximize the performance of your SQL Servers.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Root Cause to Performance Problems – Notes from the Field #002

[Note from Pinal]: This is the second episode of the Notes from the Field series. When there is a performance problem, we all work relentlessly to fix the issue; however, we hardly spend any time finding what created the issue in the first place. For example, it is easy to turn off a car alarm that goes off every hour, but it is equally important to find out what is causing it and prevent it from happening. In the same way, Root Cause Analysis (commonly known as RCA) is a very important element of performance troubleshooting.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their career – how to perform a performance-related root cause analysis. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


The first time completing a root cause analysis for a performance issue can be a stressful experience for a DBA who is new to performance root cause analysis or performance tuning. Many years ago during my first root cause analysis I honestly had no clue what I was doing. I had developers, managers and spectators over my shoulder pondering my every move. It was an experience I will never forget.  As an accidental DBA becoming a production DBA, I quickly discovered the difference between what I thought I knew and what I actually knew.

I never want to see you have a similar experience. The biggest performance tuning mistake I still see in the field is DBAs skipping the root cause analysis and making knee-jerk reactions based on something they see or are told. I have good news: a lot has changed in the last ten years with SQL Server. In SQL Server 2012 you can find the root cause of performance problems without writing a single line of code [YouTube]. Even if you are not using SQL Server 2012 today, a Jr. DBA, accidental DBA, or IT professional who wears multiple hats can easily do performance root cause analysis with SQL Server; all you need is a good checklist and some helpful scripts, and you are ready.

The following scripts and documentation will get you started doing your own performance root cause analysis:

  • Baseline SQL Server Wait Statistics
    • Capture and measure your server’s wait statistics, as this gives deep insight into your server’s query waits and resource bottlenecks (link; see the sketch after this list)
  • What SQL Statements are Causing Top Waits?
    • Identify queries offending server resources (link)
  • What is Running?
    • Understand the different activities going on in your server at any point in time. (link)
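
To give a flavor of the first item, here is a minimal wait statistics sketch built on the sys.dm_os_wait_stats DMV (the list of benign system waits filtered out below is abbreviated for illustration):

-- Top waits since the last service restart, excluding a few benign system waits
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_sec,
    waiting_tasks_count,
    wait_time_ms * 100.0 / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TO_FLUSH',
                        N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH',
                        N'CHECKPOINT_QUEUE')
ORDER BY wait_time_ms DESC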

In addition to the above three primary RCA practices, it is very important to understand your baseline for disk latency as well as your top query cache offenders. Hopefully, this will get you started with finding the root cause of your performance problems.

Are you doing Root Cause Analysis of your SQL Server Performance Problems? If not, you MUST do it. If you want to get started with the help of experts read more over here: Fix Your SQL Server.


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from the Field #001

I am very excited to announce a new series on this blog – Notes from the Field. I have been blogging for almost 7 years, and it has been a wonderful experience. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinion. Following the same thought process, I have started this new series, Notes from the Field. In this series we will have notes from various experts in the database world.

My friends at Linchpin People have graciously decided to support me in my new initiative. Linchpin People are database coaches and wellness experts for a data driven world. In this very first episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face in their career: a database log that fills up the hard drive, or a database log that is larger than the data file. Read the experience of Tim in his own words.


As a consultant, I encounter a number of common issues with clients. One of the more common is finding a user database in the FULL recovery model that does not have regular transaction log backups, or has never had a transaction log backup at all. When I find this, the transaction log is usually several times larger than the data file.

Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery? This conversation gets the client thinking about their disaster recovery and high availability solutions.

This issue is also very prominent on SQL Server forums and usually has the title of “Help, my transaction log has filled up my disk” or “Help, my transaction log is many times the size of my database”.

In cases where the client only needs the previous night’s full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2, 1), or by specifying the transaction log file name using DBCC SHRINKFILE (file_name, target_size).

When the client needs point-in-time recovery, in most cases I will still end up switching the client to the SIMPLE recovery model to truncate the transaction log, then switching back to FULL and taking a full backup. I will then schedule a SQL Agent job to take regular transaction log backups at an interval determined by the client to meet their service level agreements.
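
As a rough sketch of that sequence (the database, file and path names are placeholders; note that switching to SIMPLE breaks the existing log chain, which is why the full backup afterwards is essential):

-- Placeholder names; switching to SIMPLE breaks the existing log chain
USE MyDatabase
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
DBCC SHRINKFILE (MyDatabase_log, 1)        -- shrink the log file to ~1 MB
ALTER DATABASE MyDatabase SET RECOVERY FULL

-- Re-establish the backup chain with a full backup...
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak'
-- ...then schedule regular log backups at an interval that meets the SLA
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'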

It should also be noted that when I find an overgrown transaction log, the virtual log file count is typically also out of control. My cleanup will always take that into account as well. That is a subject for a future blog post.

If your SQL Server is facing any issue we can Fix Your SQL Server.


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Simple Trick to Backup Azure Database with SkyDrive

To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically, and it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free and Box offers 5GB free; both are reliable, and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added the option to back up to SkyDrive and Box (in addition to local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you’d like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail.

Select databases to backup

First connect to your SQL Server or Azure SQL Database. Then select the databases you’d like to back up.

Connect to SkyDrive or Box cloud

If you have the free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled, as it is available in the Standard version or above. Click “Try now” to get a 30-day trial of all options.

On the “SkyDrive Settings” form you’ll need to authorize SQLBackupAndFTP to access your SkyDrive. Click “Authorize…” to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account, and click “Allow”. On the next page you will see a field with an authorization code. Copy it to the clipboard. The Box operation is just the same.

After that, return to SQLBackupAndFTP, paste the authorization code, and click “OK”.

After you are authorized, you can enter the path to a backup folder. SQLBackupAndFTP will create the folder if it does not exist.

That’s all that has to be done to back up to the SkyDrive or Box cloud. You can now click the “Run Now” button to test this job.

Conclusion

Whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses:

  • 5–9 licenses: 20% off
  • 10–19 licenses: 35% off
  • More than 20 licenses: 50% off

Please let me know your favorite options for storing backups.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database in RESTORING State for Long Time

A very interesting question I received the other day.

“Our database has been in the restoring state for a long time. We have already restored all the necessary files. After restoring the files we expected the database to be in operational mode; however, it is continuously in the restoring mode. Any suggestions?”

This question is very common. I sent the user follow-up emails to understand what was actually going on. I realized that after restoring their .bak file and log files, their database was stuck in the restoring state because they had not restored the latest log file with the RECOVERY option.

As they had completed the entire restore sequence (the .bak file and the log files, in order), all they really needed was to recover the database out of the NORECOVERY state. You can keep restoring log files as long as the database is in NORECOVERY mode. Once the database is recovered, it becomes operational and can resume normal database activity, but no further log files can be restored: the log chain beyond the recovery point is meaningless. This is the reason the database has to stay in the NORECOVERY state while it is being restored.

There are three different ways to recover the database.

1) Recover the database manually with the following command.

RESTORE DATABASE database_name WITH RECOVERY

2) Recover the database with the last log file.

RESTORE LOG database_name FROM backup_device WITH RECOVERY

3) Recover the database when the .bak file is restored.

RESTORE DATABASE database_name FROM backup_device WITH RECOVERY
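
Putting it all together, a typical restore sequence looks like this sketch (the names and paths are placeholders): every step except the last uses NORECOVERY, and only the final restore uses RECOVERY to bring the database online.

-- Placeholder names and paths; all but the last step leave the database restoring
RESTORE DATABASE MyDatabase FROM DISK = 'D:\Backups\MyDatabase.bak' WITH NORECOVERY
RESTORE LOG MyDatabase FROM DISK = 'D:\Backups\MyDatabase_log1.trn' WITH NORECOVERY
-- The final log restore recovers the database and brings it online
RESTORE LOG MyDatabase FROM DISK = 'D:\Backups\MyDatabase_log2.trn' WITH RECOVERY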

To understand how the backup and restore timeline works, read Backup Timeline and Understanding of Database Restore Process in Full Recovery Model.

Reference: Pinal Dave (http://blog.sqlauthority.com)