SQL SERVER – Tools for Proactive DBAs – Central Management Server – Notes from the Field #009

[Note from Pinal]: This is the ninth episode of the Notes from the Field series. If you are a DBA or Developer, there is always a situation when you want to execute the same query on multiple servers. This works out well if you have a single-digit number of servers; however, there are cases when we have hundreds of servers and each requires multiple queries to be executed. In this blog post John explains how we can achieve this goal with the help of CMS.

In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very common need DBAs and Developers face in their career – how to run the same queries and policies against groups of servers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the mistakes I see repeated in the field is not leveraging the features included inside of SQL Server. Today, we’re going to talk about one of my favorite tools to help DBAs be proactive: the Central Management Server (CMS). In a nutshell, a central management server allows you to run queries or policies against a group of servers. CMS is included in Standard and Enterprise Editions and uses the msdb database to store the registered servers and server groups. CMS requires Windows Authentication, and the Windows user account connected to CMS is authenticated against all the instances in a group when you execute queries or policies.

In today’s example we will look at general database properties across many servers in a production group.

In order to take advantage of CMS you will need to designate a CMS server. Typically, I would use a non-production server for this task. To start, click View in the menu bar and select Registered Servers.

Once you have the Registered Servers window open, you can right-click on the Central Management Servers node to create your CMS.

In this example we are using a server named “PBMDEMO” as our central management server. Once the CMS is configured you can create folders to group servers. In this example we have a Development group and a Production group.

Now that you have imported your instances into your groups, you can right-click on a group and select New Query. This will create a connection to each instance using your Windows-authenticated account.

Now you can type a query. In this example we are using “SELECT * FROM dbo.sysdatabases.” You will see that it fired on each instance and returned the results in a single result set.
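
If you want something a little more targeted than SELECT *, here is a minimal sketch along the same lines (the column choice is just an illustration; dbo.sysdatabases is the legacy catalog view used above):

-- Runs once against every instance in the selected CMS group; SSMS adds
-- a server name column and merges the rows into one result set.
SELECT name, dbid, crdate
FROM dbo.sysdatabases;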


In a future tip from the field we will cover using Policy-Based Management with a Central Management Server.  In the meantime, you can watch a video showing you how to monitor missing backups or check out my several blog posts on Central Management Server.

Are your servers running at optimal speed, or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Overall SQL Server Performance – Not Just Query Performance – Notes from the Field #008

[Note from Pinal]: This is a new episode of the Notes from the Field series. When SQL Server is slow, we think we can improve the performance of the server just by speeding up queries. However, sometimes the queries are already performing well and the system gets no faster even though we tune them. It is very crucial to know where exactly the problem is in the system, as well as to understand how to improve the performance of the system overall. In this blog post David takes a very interesting approach, explaining how one can systematically work with the system and improve the performance of the overall server.

In this episode of the Notes from the Field series database expert David Klee explains a very crucial issue DBAs and Developers face in their career – how you can improve the performance of the overall server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


Too many times I see DBAs fighting to squeeze out performance from a poorly performing SQL Server database. Hit your favorite search engine and look for SQL Server performance, and the majority of the results talk about query tuning, execution plans, and index techniques. Occasionally, problems do exist outside of the database! I wholeheartedly agree that these topics are vitally important for any DBA to become proficient at, but they should not be the only layer of focus when tuning. Instead, step back a bit, approach the problem holistically, and at least spot-check the health of the various layers around your databases.

My analogy is simple. You are driving down the road and your car suddenly slows to a crawl. Taking the ‘start at the query’ approach is akin to immediately opening the engine bay and looking at the motor. The holistic approach is to first walk around the car and look for anomalies en route to the hood, where you might discover that you have a problem like a flat tire.

As a DBA, you do not have to be a master of each layer of the infrastructure stack. However, having a conceptual understanding of how each of these layers works, and of its impact on your SQL Servers, will help you triage performance concerns more quickly and more reliably. Just look at the individual components in this hypothetical infrastructure around your application!

Now, I am in no way advocating always throwing more hardware at any performance problem. That action always leads to bigger and more expensive problems down the road. I want you to help the business get the most out of the infrastructure that it already owns and manages.

Storage is the largest source of issues that I routinely encounter. Three main issues constantly pop up, all of which have a tremendous (and silent) impact on SQL Server’s performance: throughput, operations per second, and latency. The root causes of these issues vary widely, but all can be at least identified and measured with SQL Server metrics such as sys.dm_io_virtual_file_stats, and wonderful tools such as SQLIO and IOMeter. These tools can be used to establish your storage performance characteristics at all workload levels, and then Windows Perfmon or other third-party tools can be used to track your steady-state storage utilization.
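
As a starting point, here is a minimal sketch (my own illustration, not part of the tools above) that uses sys.dm_io_virtual_file_stats to compute average read and write latency per database file:

-- Average I/O latency per database file since the last restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
     ON vfs.database_id = mf.database_id
    AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;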

Virtualization can cause silent performance hits when it is designed for consolidation and not the performance of resource-hungry business-critical applications such as SQL Server. Key architectural items such as CPU overcommitment and VM specifications can be overlooked and cause wildly variable performance characteristics, none of which are good for your databases.

Even a topic as subtle as networking can be a huge performance drain. Ever notice the SQL Server wait statistic ASYNC_NETWORK_IO? It means that either your application cannot consume the data as fast as SQL Server is sending it, or that a network problem is slowing down the transmission of the data. A utility such as iperf can help you test your end-to-end network performance and help identify network issues.
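
A quick way to see whether this wait is even on your radar – a small sketch, assuming you have VIEW SERVER STATE permission – is to check the cumulative wait statistics:

-- Cumulative ASYNC_NETWORK_IO waits since the last restart
-- (or since the wait stats were last cleared).
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';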

No matter the tools or techniques used in the daily tradeoffs for performance, the key to understanding the impact of these problems is to establish a system performance baseline when everything is known to be working well. By establishing a baseline, and then periodically updating the baseline with the latest repeatable system benchmarks, you know if your systems are performing at their peak capability or if something is unhealthy. Quick metrics that benchmark the entire instance, such as DVDstore, can be used to quickly check the relative performance of an instance compared to its known optimal value.
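
Baselines do not need fancy tooling to get started. As one hypothetical example (the table name WaitStatsBaseline is my own invention for illustration), you could periodically snapshot wait statistics so that today’s profile can be compared against a known-good one:

-- Capture a point-in-time copy of the cumulative wait statistics.
IF OBJECT_ID('dbo.WaitStatsBaseline') IS NULL
    CREATE TABLE dbo.WaitStatsBaseline (
        capture_time DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
        wait_type NVARCHAR(60) NOT NULL,
        wait_time_ms BIGINT NOT NULL,
        waiting_tasks_count BIGINT NOT NULL
    );

INSERT INTO dbo.WaitStatsBaseline (wait_type, wait_time_ms, waiting_tasks_count)
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;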

Establish your baselines of each portion of the system stack that you have some level of dependency on. Understand the additional layers that indirectly impact your database performance. Put it all together and scrutinize each piece and you can help squeeze the most performance out of your existing infrastructure!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Optimal Memory Settings for SQL Server – Notes from the Field #006

[Notes from Pinal]: Quite often we hear that my SQL Server is taking too much memory, or that my server needs more memory because all the applications are running slow. The reality is that no matter how much memory you have, it is never going to be enough. We all want more memory in our server and we all want our server to run faster; however, the issue is that we usually miss setting the optimal memory settings.

Linchpin People are database coaches and wellness experts for a data driven world. In this sixth episode of the Notes from the Field series database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face when they are setting up a new server. Do not forget the most important setting for any system – Memory!


When I get brought in to evaluate a SQL Server installation or get asked to look into performance issues on a SQL Server, one of the first items I look at is memory consumption and how memory is configured on the instance.

Many times I find that the default memory values have not been changed: the minimum memory setting is set to 0 and the maximum memory setting is set to 2 PB. This can be a major source of performance issues if max memory hasn’t been configured and SQL Server is starving the operating system. Just the same, by not having the minimum memory setting set on a server that is shared with an application server, or that has multiple instances of SQL Server installed, SQL Server could be starved of needed memory.

You can view your minimum server memory and maximum server memory settings in Server Properties or by querying sys.configurations:

-- 1543 = 'min server memory (MB)', 1544 = 'max server memory (MB)'
SELECT *
FROM sys.configurations
WHERE configuration_id IN (1543, 1544);

If you are using SSMS, you can see these values in the Server Properties dialog.

Setting the minimum memory value for SQL Server usually isn’t a hard number to come up with. I usually pick a value that is fairly low just to make sure SQL Server does not get starved to the point of becoming unstable. I must say that I have rarely encountered an issue where the minimum value not being set has caused a problem; my experience has been with SQL Server starving the OS. I still make it a best practice to set the minimum value, just as a precaution.

Where things get a little more complicated is with the max memory value. I have a predetermined value I use as a starting point when the server is dedicated to SQL Server. For example, on a server with 16 GB of RAM I would set my maximum memory value to 12 GB, leaving 4 GB for the OS. I would set my minimum value to 4 GB and then monitor the Available MBytes memory counter to see how much memory is left for the operating system.
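
As a sketch of what that starting point looks like in T-SQL (the 12 GB / 4 GB values come from the 16 GB example above; adjust them for your own server):

-- Both memory settings are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;   -- 4 GB floor
EXEC sp_configure 'max server memory (MB)', 12288;  -- 12 GB ceiling
RECONFIGURE;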

I am a little more conservative than some of my peers, and I typically leave a little more for the OS. Some articles will recommend values in the 150-300 MB range for available memory; I usually like to leave around 1 GB, and if I see that 2 GB is still available for the OS, I will increase the maximum memory value to allow SQL Server to consume more. With SQL Servers today having 16, 32, 64, or 128 GB of RAM as standard builds, I think I can afford to give the OS more than the recommended 150-300 MB.

I recently blogged about how to get the Available MBytes memory counter from within SQL Server without having to use Perfmon. You can read more about that here: Get OS Counter Memory Available Using T-SQL.
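
One sketch of that idea, using sys.dm_os_sys_memory (this is my illustration rather than the exact query from that post, and it requires VIEW SERVER STATE):

-- Physical memory currently available to the OS, in MB, plus SQL Server's
-- own assessment of the memory state.
SELECT available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;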

If you want me to take a look at your server and its settings or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Frequently Running Query and Elapsed Time – Notes from the Field #005

[Note from Pinal]: This is the fifth episode of the Notes from the Field series. What happens when your server performs badly? The very first reaction is identifying what is running on the server. Once you identify what is running, you want two of the most crucial pieces of information about a running query – 1) how often the query runs and 2) what the duration of the query is. In this blog post Brian hits on a very important concept in performance tuning.

In this episode of the Notes from the Field series database expert Brian Kelley explains a very crucial issue DBAs and Developers face on their production servers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words.


While I primarily focus on security, as a DBA it’s important for me to be able to performance tune as well. As a consultant, one of the things I’ve seen that affects performance is queries that run often. If we just look at a given query running one time, it doesn’t set off our flags, because the query runs quickly and doesn’t take up a lot of resources. The problem is that the query in question runs often, and as a result the cumulative effect does cause a performance hit to the overall system.

Finding the Queries Running Frequently
Oftentimes, the client doesn’t realize that a particular query is running as often as it does. They may be using Profiler or a server-side trace and looking at the execution time for queries, and of course they’re focusing on long-running queries. As a result, they don’t see the queries executing thousands of times and causing the whole system to drag. A simple query such as the following will reveal what’s running often, based on a minimum count of executions.

DECLARE @MinCount BIGINT;
SET @MinCount = 5000;

SELECT st.[text], qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > @MinCount
ORDER BY qs.execution_count DESC;

This isn’t to say that every query running a lot is causing a significant performance issue. However, it helps to have a list to look at and investigate. Even a marginal improvement on a query that executes often can result in a noticeable improvement to the system.

Looking at Total Elapsed Time

One way to determine which queries to focus on first is to look at the total_elapsed_time column in sys.dm_exec_query_stats. What you may find is that the query that runs most often isn’t the one that has run the longest cumulatively. Let’s modify our query to include the total_elapsed_time column and to order by it in descending order.

DECLARE @MinCount BIGINT;
SET @MinCount = 5000;

SELECT st.[text], qs.execution_count, qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > @MinCount
ORDER BY qs.total_elapsed_time DESC;

High execution count and high total elapsed time are good indicators of queries to look at in addition to the infamous “long running queries.”

A Caveat

The sys.dm_exec_query_stats dynamic management view and the sys.dm_exec_sql_text dynamic management function are kept in memory, so their contents reset when SQL Server restarts. The execution count is therefore cumulative from the last time SQL Server restarted. If SQL Server has restarted recently, such as after patching, you’re not going to get stats that accurately reflect the usage of the application.
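
A quick sanity check before trusting the numbers – a small sketch of my own – is to see how long the instance has been up:

-- How long have the DMV counters been accumulating?
SELECT sqlserver_start_time,
       DATEDIFF(HOUR, sqlserver_start_time, SYSDATETIME()) AS hours_since_restart
FROM sys.dm_os_sys_info;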

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQL Server Configuration Checking – A Must Do for Every DBA – Notes from the Field #004

[Note from Pinal]: This is the fourth episode of the Notes from the Field series. While installing SQL Server, many DBAs take pride that they were able to install SQL Server without any special configuration, or that Microsoft has built such a fantastic installer that all the default values have worked since installation. Some default configurations do work well; however, every application and business has its own suitable settings and configuration. If you are a DBA whose business is running on the default configuration, you must read this post, as it talks about how to understand your business and how to optimally configure your SQL Server.

In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial task DBAs and Developers face in their career – how to perform a configuration check of your database server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Mike in his own words.


As a SQL Server Consultant, I have often performed a WellDBA™ Exam for a client who has installed SQL Server using the “Next-Next-Next-Finish” approach. They’ve accepted all of the defaults, thrown their production databases on the server and just assumed that all of their instance configuration options were happy and safe. I often rant about independent software vendors for various reasons. One of my biggest pet peeves is when they seem to allow their customers to just install with all defaults and not even consider ever changing them. Whether you are deploying your own databases, or those created by a vendor, you should look at your SQL Server configuration options for your instance.

How Do I Check My Configuration Options?

sp_configure is the SQL Server system stored procedure used to set your instance configurations. I’ve blogged about this and the sys.configurations view before here – in that post I go through the ways to use sp_configure and what the columns in sys.configurations mean. The options set by sp_configure control various instance-level run-time parameters: options like the maximum memory for SQL Server to use, whether to compress backups by default, and many others. You can also change many of these through the GUI in SSMS. Each configuration option is well documented in Books Online.
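
As a minimal sketch of the starting point, you can list every option and its running value (enabling ‘show advanced options’ exposes the full set):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- With no parameters, sp_configure lists all options along with their
-- configured and run values.
EXEC sp_configure;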

You should check your configurations and review to see where they differ from the default settings. You should also review to see where you are using a problematic default setting, as discussed below. Every environment is different, but as you look at these options more frequently and learn about them, you’ll be well on your way to properly managing your environment into the future.

The only problem with just selecting everything out of sys.configurations, as many scripts do, is that you are likely to miss a setting that differs from the default, because not many people have memorized the default value of every setting this view returns!
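
To make the problem concrete, here is a tiny hypothetical sketch of the idea – flagging a handful of settings whose running value differs from the usual out-of-the-box default (the defaults hard-coded below are my own notes; verify them against Books Online for your version):

-- Returns rows only for options that have drifted from these assumed defaults.
SELECT name, value_in_use
FROM sys.configurations
WHERE (name = 'cost threshold for parallelism' AND CAST(value_in_use AS INT) <> 5)
   OR (name = 'max server memory (MB)' AND CAST(value_in_use AS INT) <> 2147483647)
   OR (name = 'backup compression default' AND CAST(value_in_use AS INT) <> 0);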

A Script To Help

At Linchpin People, we’ve recently released a SQL Server configuration checking script to help look through your configurations. This free download helps you quickly get to the bottom-line of your configuration options. Instead of having to look at all of them, and review each one, our script shows you where you have a setting that is different than the default. It also shows you where you have kept a default that we have seen customers suffering from not changing. This script helps you zero in on your problem areas more quickly.

Summary

Rather than use this post to get into all of the configuration options you should change and why (there are plenty of posts here on Pinal’s blog and other blogs about that, as well as Microsoft resources), let me use these remaining few words as a reminder: check your configurations, get to know them, understand them, and consider changing them. There have been many engagements where simply changing common defaults like Max Server Memory or Cost Threshold for Parallelism has made a noticeable difference for our clients. Check out that script in the blog post on our site, review your environment, and investigate the “Potentially Bad Defaults” section. Review each setting, research the best answer for your environment, and get your environment on the right path.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX : Error 3154: The backup set holds a backup of a database other than the existing database – SSMS

This is a follow-up to the blog post I wrote on error 3154 a few years ago. I have received quite a few emails asking how we can fix the same error with the help of SQL Server Management Studio. In this blog post, I will explain how we can do that with SSMS. Here is the error message:

Error 3154: The backup set holds a backup of a database other than the existing database.

You can fix this with the help of T-SQL using the following command:

RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\BackupAdventureworks.bak'
WITH REPLACE

If you want to do the same with SSMS, you can follow the steps here:

Step 1: Create a database with your preferred name. (In our case AdventureWorks)

Step 2: Right-click on the database and click on Tasks >> Restore >> Database…

Step 3: On the restore screen, go to the third selection, Options. Now select the checkbox “Overwrite the existing database (WITH REPLACE)”.

Step 4: Click OK. It should successfully restore the database.

Note: When you restore a database WITH REPLACE it will overwrite the old database.

Relevant Blog Post:

FIX : Error 3154: The backup set holds a backup of a database other than the existing database

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – vCPUs – How Many Are Too Many CPUs for SQL Server Virtualization? – Notes from the Field #003

[Note from Pinal]: This is the third episode of the Notes from the Field series. Lots of organizations use virtualization. The purpose of virtualization is often consolidation, performance, or just ease of configuration. Looking at the various advantages of virtualization, nowadays I find lots of organizations have implemented it. When they face various problems with virtualization they often blame the technology instead of digging deeper to correct the problem. Virtualization can be a blessing for your servers; you just have to properly configure and tune them. Do not run away from virtualization – rather, implement it properly.

In this episode of the Notes from the Field series database expert David Klee explains a very crucial issue DBAs and Developers face in their career – how CPU overcommitment in a virtualized environment can silently hurt SQL Server performance. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


As a technologist, I primarily straddle two worlds, centered between the Microsoft SQL Server and private cloud virtualization ecosystems. As a consultant, I spend a number of hours each week working on architecture and performance problems with virtualized SQL Server instances. After doing this for a number of years, quite a few common trends keep appearing across different projects. These trends almost always hurt performance and limit the effectiveness of the DBA in protecting the core systems that their job depends on the most. Not all virtualization infrastructures are created equal, and unless there is a specific focus on maintaining your SQL Server performance, the greatest silent performance killer will quietly attack and hinder your mission-critical systems. What is it, and what can you do about it?

Consolidation vs Performance

Most virtualized environments are designed as they always have been – focused on consolidation, but not necessarily performance. Unfortunately for you, the DBA, placing your SQL Servers in an environment historically designed for consolidation is guaranteed to hurt. The answer is too many vCPUs. Why?

What is a Hypervisor?

Everything that occurs in a virtualized environment must be scheduled within the virtualization layer, also known as the hypervisor. This is the layer that sits directly on top of the hardware, and its job is to coordinate and prioritize the virtual machines’ requests to the physical resources. Between the time a CPU request is submitted and the time of actual execution, the request is placed in a ‘ready’ queue.

Importance of vCPUs

In an environment that is lightly taxed, this time spent in the queue is negligible, and the performance impact is so insignificant that usually only a benchmarking team can detect the virtualization overhead. However, as the consolidation ratios grow (meaning a greater number of virtual machines are placed on each physical machine), the times spent in the queues can grow much higher. The greater the value, the more substantial the impact on performance. High vCPU wait times for an application that is highly parallelized, such as SQL Server, can hurt performance even more than a single-threaded vCPU task getting slightly delayed.

For an example of how severe the impact of these scheduling waits can be on a SQL Server’s performance, please visit this blog post. (link)

Unfortunately, these delays are transparent to the applications and the operating systems inside the virtual machines. No entries are logged in any error log, and no warnings are sent to the application owner. Some applications are less sensitive to these vCPU performance impacts, such as file servers or print servers. However, resource-intensive applications, such as SQL Server, are greatly impacted by these waits. All you, the DBA, know is that last night’s ETL process ran two hours longer than usual, but none of the usual performance indicators are present.

Key Metrics for Virtualization

Now that I have thoroughly scared you away from virtualizing your SQL Servers, let’s add some transparency into this scenario so you can better identify and measure these wait times. Both VMware vSphere and Microsoft Hyper-V have metrics that you can sample to better gauge the time spent waiting on CPU resources. vSphere has metrics presented within the vCenter management interface, and among the performance metrics is the VMware CPU wait metric called CPU Ready Time. More details on VMware’s vCPU Ready Time can be found at this blog post. Hyper-V has metrics that are collected with SCOM and/or Perfmon, and the Perfmon metric that you are looking for is called Hyper-V Hypervisor Virtual Processor\CPU Wait Time Per Dispatch. Work with your Virtualization administrator to get access to these tools. As a DBA responsible for your business-critical SQL Servers, it is imperative that you gain access to these metrics so you can break through that black box called Virtualization and gain access to the performance statistics underneath your systems. These metrics will help you quickly troubleshoot any performance issue that you may encounter.

Education is the Key

Sample and analyze these values very carefully. Work with your Virtualization team to determine if the current levels are acceptable and not impacting performance. Educate them on the resource consumption of SQL Server and your specific database workloads. Work with the Virtualization administrators to reduce and then maintain acceptable resource timing levels so you can maximize the performance of your SQL Servers.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)