SQL SERVER – Tools for Proactive DBAs – Policy Based Management – Notes from the Field #012

[Note from Pinal]: This is the 12th episode of the Notes from the Field series. When taking a vacation, the biggest worry any DBA has is what will happen to their databases in their absence. They often worry that something will go wrong with the server, or that users will change something without their permission or knowledge. This worries them so much that even while on vacation they keep checking their phone and email. Here is a simple trick DBAs can implement so they can take a vacation without worrying about their databases.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common challenge DBAs and Developers face in their careers – how to be proactive and manage database policies before they are violated. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


In this week’s tip from the field we’re going to cover Policy Based Management. Policy Based Management (PBM) allows you to import and build policies that can be enforced or checked to validate that your SQL Server farm is compliant with best practices. Today, we’re going to show you how to import best practice policies. In a future tip, we will show you how you can leverage Central Management Server and Policy Based Management together to validate best practice settings against your SQL Server farm.

In order to use Policy Based Management you will need to enable it. In this case PBM is already enabled, but this is how you would enable it in case it is not already enabled on your instance.
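Before enabling it, you can confirm PBM’s current state with a quick query against msdb. This is a minimal sketch using the syspolicy_configuration view, which backs the Policy Management node in SSMS:

-- Sketch: check whether Policy Based Management is currently enabled.
SELECT name, current_value
FROM msdb.dbo.syspolicy_configuration
WHERE name = 'Enabled';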

For those of you who are familiar with the SQL Server Best Practices Analyzer, you will notice that several of those checks have been included as policies that can be imported. We are going to take a look at how you can import some of these policies.

Policy Based Management can be accessed via SSMS under the Management node for an instance, as shown below. Right click on Policies and select Import Policy.

Once you click on import you will see the following window. When you click on the ellipsis next to “Files to import” you will see that, by default, it points to the folder that includes the default best practice policies shown below.

You will notice that there are several best practices policies. Today, we are going to import just the Database Auto Close policy.

Once we import the policy, we will see it inside the Policies tree in SSMS.

You will notice that there is a red arrow on the Database Auto Close policy we just imported. This is because the policy evaluation isn’t scheduled. You can evaluate it manually or schedule a SQL Agent job to evaluate the policy on a recurring basis.

In this tip, we’re going to evaluate it manually. This is done by right clicking on the policy and selecting Evaluate. You will see that the “Database Auto Close” policy was evaluated for every user database on the instance.

If you enjoyed this tip from the field, check out several other posts on Policy Based Management.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Check for Database Integrity – Notes from the Field #011

[Notes from Pinal]: There is always a scenario when we suddenly end up with an error in a database query that is related to database integrity. We see errors where SQL queries cannot be executed because there is corruption in the database file. When we see this happening, in many cases it is already too late to correct the issue. It would be nice if we had been checking the database integrity frequently, making sure that our database is in the best state to return results anytime we want. Tim talks about database integrity in this episode of Notes from the Field.

Linchpin People are database coaches and wellness experts for a data driven world. In this 11th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face when they are dealing with database integrity. Do not forget the most important aspect of any system – Database Integrity!


When I get asked to review a database server environment, a very important check that I perform is to review when the last DBCC CHECKDB scan was run on each database. It is very important to check for integrity regularly on all production databases.

Scheduling DBCC CHECKDB to run on a regular basis is a really good practice. DBCC CHECKDB checks the logical and physical integrity of all the objects in the database in which it is run. I highly recommend reading a previous blog post that Pinal Dave wrote that gives a great introduction to and explanation of DBCC CHECKDB.
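If you just want the basic check in a scheduled job, a minimal sketch looks like this (“YourDatabase” is a placeholder for your own database name):

-- Sketch: integrity check suitable for a maintenance window.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;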

As a consultant and full time DBA I perform these checks very often and have a set of detailed scripts that I use. The basics of finding out when DBCC CHECKDB was last run can be obtained by running the following script.

EXEC ('DBCC DBINFO() WITH TABLERESULTS, NO_INFOMSGS');

The output will resemble the following:

You will want to review the output from this script and look for ‘dbi_dbccLastKnownGood’ in the Field column. This isn’t too difficult to do if you are just checking a few databases, but in my experience I have to run this against many instances with numerous databases on each instance. I recommend building a script that loops through each database, inserts the output from the above result set into a temporary table, and reports back only the rows where Field = ‘dbi_dbccLastKnownGood’. There are a number of these scripts already written and published on the web.
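As a rough illustration of that approach, here is a minimal sketch. It assumes the four-column layout of DBCC DBINFO’s TABLERESULTS output (ParentObject, Object, Field, Value) and skips edge cases such as unusual database names, so treat it as a starting point rather than a finished script:

-- Sketch: capture dbi_dbccLastKnownGood for every online database.
IF OBJECT_ID('tempdb..#DbccResults') IS NOT NULL
    DROP TABLE #DbccResults;

CREATE TABLE #DbccResults
(
    ParentObject VARCHAR(255),
    [Object]     VARCHAR(255),
    Field        VARCHAR(255),
    Value        VARCHAR(255),
    DatabaseName SYSNAME NULL
);

DECLARE @DbName SYSNAME;

DECLARE DbCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN DbCursor;
FETCH NEXT FROM DbCursor INTO @DbName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Capture the DBCC DBINFO table output for this database.
    INSERT INTO #DbccResults (ParentObject, [Object], Field, Value)
    EXEC ('DBCC DBINFO (''' + @DbName + ''') WITH TABLERESULTS, NO_INFOMSGS');

    -- Tag the rows we just captured with the database they came from.
    UPDATE #DbccResults SET DatabaseName = @DbName WHERE DatabaseName IS NULL;

    FETCH NEXT FROM DbCursor INTO @DbName;
END

CLOSE DbCursor;
DEALLOCATE DbCursor;

-- Report only the last-known-good CHECKDB date per database.
SELECT DatabaseName, Value AS LastKnownGoodCheckDb
FROM #DbccResults
WHERE Field = 'dbi_dbccLastKnownGood'
ORDER BY DatabaseName;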

What is crucial is that you are performing regular DBCC CHECKDB scans. If you are not proactively checking for integrity problems, you are putting yourself at risk of losing data. If data becomes corrupt and you do not have backups predating the corruption, then in most cases you have lost data. Even if you have a job scheduled to run regular checks, I recommend creating a report that scans each server to validate that the checks are actually being run.

It is worth noting that DBCC CHECKDB is a very IO intensive operation and should be run during maintenance windows or during nonpeak times. I know of several organizations that have a process in place to perform backup restore validations in a dedicated environment. They perform their integrity scans as part of that process, enabling them to offload the integrity checks to a non-production environment.

However you architect your integrity checks, just make sure you are performing them and that your backup process protects you in the event corruption is found. If you are prepared, then when corruption does happen, you will be ready.

If you want me to take a look at your server and its settings or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Auditors, Passwords and Exceptions – Notes from the Field #010

[Note from Pinal]: This is the 10th episode of the Notes from the Field series. Every other day I read online about various security breaches happening worldwide – some have lost passwords, some have lost phone numbers, and some have lost credit card information. Security is the most important aspect of an application. The worst scenario is when thieves come to your door and find the key you kept under your door-mat, because they were able to guess where it was. A password is just like a key to get into the database. If you make your password so easy that everyone can guess it, that is very bad for your system. You need to keep it complex and also follow your organization’s policy. In this blog post Brian hits on a very important concept in application security.

In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and Developers face on their production servers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words.


Auditors focus on logins, especially the password settings. Why? Because it’s easy to audit, and those audits find that too many organizations still get these settings wrong. With respect to SQL Server, you can’t control things like how many days before the password expires or what the complexity rules are. Those are controlled by the operating system, and SQL Server follows the operating system’s rules. What can be controlled is whether a login follows the login policy rules. There are actually three options here:

  • The login doesn’t follow any of the policy rules.
  • The login follows all of the policy rules.
  • The login follows all of the policy rules except for password expiration.

Auditors are interested in exceptions. When something doesn’t follow the rules or doesn’t follow all of the rules, an auditor gets interested. If you’re asked to show the auditors evidence of what a login follows, there are two ways to do this: via the GUI or via a script. In many cases auditors will ask for a screenshot that looks something like:

Auditing SQL Login (GUI)

Note the areas I’ve highlighted. This is the information most auditors are looking for with respect to logins that are created within SQL Server (not domain accounts like domain users or groups). If you only have a small number of logins, then using the GUI is not very time consuming. If you have a lot of SQL logins or SQL Servers to audit, however, you want to script this. While an auditor may ask for a screenshot, what an auditor really wants is the information contained in the screenshot. Clicking through every login and taking a screenshot is a waste of time. Thankfully, there is a particular catalog view, sys.sql_logins, that contains all the information the auditor will want.

The catalog view sys.sql_logins contains all the same information as sys.server_principals plus other information that is applicable only to SQL Server-based logins. What’s more, it only contains the SQL Server-based logins. It doesn’t contain any information on Windows users and groups. Therefore, it’s the right catalog view to use to satisfy the auditors asking about logins and passwords. This query will return the information they need:

SELECT name, is_disabled, is_policy_checked, is_expiration_checked
FROM sys.sql_logins;

You can still give them a screenshot; however, instead of giving them a screenshot of each and every login, you can give them a screenshot of the query and its results. In the following screenshot, I’ve flagged two logins that would be of interest to the auditors because they do not follow the password policy (is_policy_checked = 0).

SQL Login info by Query

If you have a lot of SQL Servers, you could simply automate this query with a PowerShell script that queries each server and dumps the results to a text file. That way you save a lot of time providing the results needed, freeing you up for other tasks. Remember, auditors need the information; they don’t necessarily need a screenshot of each and every login.
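As a small variation of my own (not something the auditors will hand you), you can filter the same catalog view down to just the exceptions they will ask about:

-- Sketch: list only the logins that deviate from the password policy.
SELECT name, is_disabled, is_policy_checked, is_expiration_checked
FROM sys.sql_logins
WHERE is_policy_checked = 0        -- ignores the policy entirely
   OR is_expiration_checked = 0;   -- skips password expiration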

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tools for Proactive DBAs – Central Management Server – Notes from the Field #009

[Note from Pinal]: This is the ninth episode of the Notes from the Field series. If you are a DBA or Developer, there is always a situation when you want to execute the same query on multiple servers. This works out fine if your servers number in the single digits; however, there are cases when we have hundreds of servers, each requiring multiple queries to be executed. In this blog post John explains how we can achieve this goal with the help of CMS.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their careers – how to manage and query many SQL Servers at once. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the mistakes I see repeated in the field is not leveraging the features included inside of SQL Server. Today, we’re going to talk about one of my favorite tools to help DBAs be proactive: Central Management Server (CMS). In a nutshell, a central management server allows you to run queries or policies against a group of servers. CMS is included in Standard and Enterprise Edition and uses the msdb database to store the configured servers and server groups. CMS requires Windows authentication, and the Windows user account connected to CMS is authenticated against all the instances in a group when you execute queries or policies.

In today’s example, we will look at general database properties across many servers in a production group.

In order to take advantage of CMS you will need to set up a CMS server. Typically, I would use a non-production server for this task. To start, click View in the menu bar and select Registered Servers.

Once you have the Registered Servers window open you can right click on the Central Management Server to create your CMS.

In this example we are using a server named “PBMDEMO” as our central management server. Once the CMS is configured you can create folders to group servers. In this example we have a Development group and a Production group.
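Once your groups are populated, you can verify what is registered by querying the CMS instance itself. This is a sketch against the msdb views that back CMS; double-check the view and column names on your version:

-- Sketch: list the instances registered under the CMS, by group.
SELECT g.name AS group_name, s.server_name
FROM msdb.dbo.sysmanagement_shared_registered_servers AS s
INNER JOIN msdb.dbo.sysmanagement_shared_server_groups AS g
    ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.server_name;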

Now that you have imported your instances into your groups, you can right click on a group and select New Query. This will create a connection to each instance using your Windows authenticated account.

Now you can type a query. In this example we are using “SELECT * FROM dbo.sysdatabases.” You will see that it fired on each instance and returned the results in a single result set.
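Any query works here. As a sketch, something like the following returns a more focused set of properties from each instance; when SSMS merges the results, it adds the originating server’s name to each row:

-- Sketch: check recovery model and state across every instance in the group.
SELECT name, recovery_model_desc, state_desc, compatibility_level
FROM sys.databases;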


In a future tip from the field we will cover using Policy-Based Management with a Central Management Server. In the meantime, you can watch a video showing you how to monitor missing backups, or check out several of my blog posts on Central Management Server.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Overall SQL Server Performance – Not Just Query Performance – Notes from the Field #008

[Note from Pinal]: This is a new episode of the Notes from the Field series. When SQL Server is slow, we think we can improve the performance of the server just by increasing the performance of queries. However, sometimes the queries are already doing well and there is no increase in speed even after we tune them. It is very crucial to know where exactly the problem is in the system, as well as to understand how to improve the performance of the system overall. In this blog post David takes a very interesting approach, explaining how one can systematically work with a system and improve the performance of the overall server.

In this episode of the Notes from the Field series, database expert David Klee explains a very crucial issue DBAs and Developers face in their careers – how to improve the performance of the overall server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


Too many times I see DBAs fighting to squeeze performance out of a poorly performing SQL Server database. Hit your favorite search engine and look for SQL Server performance, and the majority of the results talk about query tuning, execution plans, and index techniques. Occasionally, problems do exist outside of the database! I wholeheartedly agree that these topics are vitally important for any DBA to become proficient in, but they should not be the only layer of focus when tuning. Instead, step back a bit, approach the problem holistically, and at least spot-check the health of the various layers around your databases.

My analogy is simple. You are driving down the road and your car suddenly slows to a crawl. Taking the ‘start at the query’ approach is akin to immediately opening the engine bay and looking at the motor. The holistic approach is to first walk around the car and look for anomalies en route to the hood, where you might discover that you have a problem like a flat tire.

As a DBA, you do not have to be a master of each layer of the infrastructure stack. However, having a conceptual understanding of each of these layers and their impact on your SQL Servers will help you triage performance concerns more quickly and reliably. Just look at the individual components in this hypothetical infrastructure around your application!

Now, I am in no way advocating always throwing more hardware at a performance problem. That action always leads to bigger and more expensive problems down the road. I want you to help the business get the most out of the infrastructure that they already own and manage.

Storage is the largest source of issues that I routinely encounter. Three main issues constantly pop up, all of which have a tremendous (and silent) impact on SQL Server’s performance: throughput, operations per second, and latency. The root causes of these issues vary widely, but all can be at least identified and measured with SQL Server metrics such as sys.dm_io_virtual_file_stats, and wonderful tools such as SQLIO and IOMeter. These tools can be used to establish your storage performance characteristics at all workload levels, and then Windows Perfmon or other third-party tools can be used to track your steady-state storage utilization.
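For example, a quick per-file latency check might look like the following sketch. The DMV’s counters are cumulative since the last restart, so interpret the averages accordingly:

-- Sketch: average IO latency per database file since the last restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_latency_ms DESC;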

Virtualization can cause silent performance hits when it is designed for consolidation and not the performance of resource-hungry business-critical applications such as SQL Server. Key architectural items such as CPU overcommitment and VM specifications can be overlooked and cause wildly variable performance characteristics, none of which are good for your databases.

Even a topic as subtle as networking can be a huge performance drain. Ever notice the SQL Server wait statistic ASYNC_NETWORK_IO? It means that either your application cannot consume data as fast as SQL Server is sending it, or a network problem is slowing down the transmission. A utility such as iperf can help you test your end-to-end network performance and identify network issues.
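As a quick sketch, you can see how much of this wait type your instance has accumulated since the last restart:

-- Sketch: accumulated ASYNC_NETWORK_IO waits since the instance started.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';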

No matter the tools or techniques used in the daily tradeoffs for performance, the key to understanding the impact of these problems is to establish a system performance baseline when everything is known to be working well. By establishing a baseline, and then periodically updating it with the latest repeatable system benchmarks, you know whether your systems are performing at their peak capability or whether something is unhealthy. Quick metrics that benchmark the entire instance, such as DVDstore, can be used to quickly check the relative performance of an instance compared to its known optimal value.

Establish your baselines of each portion of the system stack that you have some level of dependency on. Understand the additional layers that indirectly impact your database performance. Put it all together and scrutinize each piece and you can help squeeze the most performance out of your existing infrastructure!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Optimal Memory Settings for SQL Server – Notes from the Field #006

[Notes from Pinal]: Quite often we hear that my SQL Server is taking too much memory, or that the server needs more memory because all the applications are running slow. The reality is that no matter how much memory you have, it is never going to be enough. We all want more memory in our servers and we all want our servers to run faster; however, the issue is that we usually fail to set the optimal memory settings.

Linchpin People are database coaches and wellness experts for a data driven world. In this sixth episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face when they are setting up a new server. Do not forget the most important setting for any system – Memory!


When I get brought in to evaluate a SQL Server installation or get asked to look into performance issues on a SQL Server, one of the first items I look at is memory consumption and how memory is configured on the instance.

Many times I find that the default memory values have not been changed: the minimum memory setting is set to 0 and the maximum memory setting is set to 2 PB. This can be a major source of performance issues, because if max memory hasn’t been configured, SQL Server can starve the operating system. Just the same, by not having the minimum memory setting set on a server that is shared with an application server, or that has multiple instances of SQL Server installed, SQL Server could be starved of needed memory.

You can view your Minimum server memory and Maximum server memory settings in Server Properties or by querying sys.configurations.

SELECT *
FROM sys.configurations
WHERE configuration_id IN (1543, 1544);

If you are using SSMS, you can see these values in the Server Properties dialog.

Setting the minimum memory setting for SQL Server usually isn’t a hard number to come up with. I usually pick a value that is fairly low just to make sure SQL Server will not get starved to the point of becoming unstable. I must say that I have rarely encountered an issue where the minimum value not being set has caused a problem; my experience has been with SQL Server starving the OS. I still make it a best practice to set the minimum value just as a precaution.

Where things get a little more complicated is with the max memory value. I have a predetermined value I use as a starting point when the server is dedicated to SQL Server. For example, on a server with 16 GB of RAM I would set my maximum memory value to 12 GB, leaving 4 GB for the OS. I would set my minimum value to 4 GB and then monitor the Available MBytes memory counter to see how much memory is left for the operating system.
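As a sketch, here is how the starting values from that 16 GB example could be applied with sp_configure (values are in MB; adjust them for your server):

-- Sketch: apply the 16 GB example's starting values (4 GB min, 12 GB max).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;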

I am a little more conservative than some of my peers and typically leave a little more for the OS. Some articles recommend leaving 150-300 MB available; I usually like to leave around 1 GB, so if I see that 2 GB is still available for the OS, I will increase the maximum memory value to allow SQL Server to consume more. With SQL Servers today having 16, 32, 64, or 128 GB of RAM as standard builds, I think I can afford to give the OS more than the recommended 150-300 MB.

I recently blogged about how to get the Available MBytes memory counter from within SQL Server without having to use Perfmon counters. You can read more about that here: Get OS Counter Memory Available Using T-SQL.
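As a sketch, one way to approximate that counter from T-SQL is the sys.dm_os_sys_memory DMV:

-- Sketch: approximate the Available MBytes Perfmon counter from T-SQL.
SELECT available_physical_memory_kb / 1024 AS available_mbytes,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;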

If you want me to take a look at your server and its settings or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Frequently Running Query and Elapsed Time – Notes from the Field #005

[Note from Pinal]: This is the fifth episode of the Notes from the Field series. What happens when your server performs badly? The very first reaction is to identify what is running on the server. Once you identify what is running, you want to find two of the most crucial pieces of information about a running query – 1) how often the query runs and 2) what the duration of the query is. In this blog post Brian hits on a very important concept in performance tuning.

In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and Developers face on their production servers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words.


While I primarily focus on security, as a DBA it’s important for me to be able to performance tune as well. As a consultant, one of the things I’ve seen affect performance is queries that run often. If we just look at a given query running one time, it doesn’t set off our flags because the query runs quickly and doesn’t take up a lot of resources. However, the problem is that the query in question runs often. As a result, the cumulative effect does cause a performance hit to the overall system.

Finding the Queries Running Frequently

Oftentimes, the client doesn’t realize that a particular query is running as often as it does. They may be using Profiler or a server-side trace, looking at the execution times for queries and, of course, focusing on long running queries. As a result, they don’t see the queries executing thousands of times and causing the whole system to drag. A simple query such as the following will reveal what’s running often, based on a minimum count of executions.

DECLARE @MinCount BIGINT;
SET @MinCount = 5000;

SELECT st.[text], qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > @MinCount
ORDER BY qs.execution_count DESC;

This isn’t to say that every query running a lot is causing a significant performance issue. However, it helps to have a list to look at and investigate. Even a marginal improvement to a query that executes often can result in a noticeable improvement of the system.

Looking at Total Elapsed Time

One way to determine which queries to focus on first is to look at the total_elapsed_time column in sys.dm_exec_query_stats. What you may find is that the query that runs most often isn’t the one that has run the longest cumulatively. Let’s modify our query to include the total_elapsed_time column and order by it in descending order.

DECLARE @MinCount BIGINT;
SET @MinCount = 5000;

SELECT st.[text], qs.execution_count, qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > @MinCount
ORDER BY qs.total_elapsed_time DESC;

High execution count and high total elapsed time are good indicators of queries to look at in addition to the infamous “long running queries.”
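As a small extension of these queries (my own variation, not from the original), dividing total elapsed time by execution count surfaces queries that are both frequent and individually slow. Note that total_elapsed_time is reported in microseconds:

-- Sketch: average elapsed time per execution, in microseconds.
SELECT st.[text], qs.execution_count, qs.total_elapsed_time,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time DESC;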

A Caveat

The sys.dm_exec_query_stats dynamic management view and the sys.dm_exec_sql_text dynamic management function reset when SQL Server restarts, as their data is kept in memory. The execution count is therefore cumulative only since the last time SQL Server restarted. If SQL Server has restarted recently, such as after patching, you’re not going to get stats that accurately reflect the application’s usage.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)