SQL SERVER – 9 Things You Should be Doing with Your Backups

SQL Server backups are essential for any business that stores its data in SQL Server. The following is a list of best practices you should be following if you are the person in charge of maintaining your organization's databases.

1. Backups should not be stored in the same physical location as database files

This very simple rule will save your business from a lot of difficult situations in the event that a physical drive becomes faulty.

If something like this occurs, you should be able to use other drives, or connect to a remote location that stores a backup of your database, so that you can restore the database from there.

2. Set up backup schedules

Another good safety precaution is to always accompany your backups with proper schedules, established to meet your application needs and your particular business requirements.

Creating backup schedules is very important because, as backups get older, the risk of data loss becomes higher unless you are protected and have a way to reestablish all the data up to the point of failure.

Backup schedules will provide you with a consistent history of your data, which will always be useful, not only in disaster situations.

3. Test the restore procedure of your backup on a test server

You should always try restoring your backups on a test server and make sure that you can recover all of the data, with all the options you will need to use during a restore on the main server.

Just because a backup process finished successfully does not guarantee that the backup can also be restored. You might not be aware that your backup was not created correctly, and when you try to restore it on the main server, the process might fail due to media corruption or other factors.
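If you script this step, a minimal sketch might look like the following. The database name, backup path, and logical file names here are hypothetical; run RESTORE FILELISTONLY against the backup file first to discover the actual logical names.

-- Restore the backup under a new name on the test server,
-- relocating the data and log files so nothing collides
RESTORE DATABASE MyDatabase_Test
FROM DISK = 'Z:\MyBackups\MyDatabase.bak'
WITH MOVE 'MyDatabase' TO 'D:\TestData\MyDatabase_Test.mdf',
     MOVE 'MyDatabase_log' TO 'D:\TestData\MyDatabase_Test.ldf',
     RECOVERY;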

4. Make use of all available verification options when doing backups

Another good practice is to use all options provided in the process of backing up your database in order to make sure that the end result is a transactionally consistent backup.

If you are using T-SQL scripts to back up your database, make sure that when using the BACKUP command you also add the CHECKSUM parameter in the WITH clause; this causes each page to be run through a checksum operation, verifying the consistency of your backup on the backup media.

BACKUP DATABASE MyDatabase
TO DISK = 'Z:\MyBackups\MyDatabase.bak'
WITH CHECKSUM;

If you prefer doing your backups manually through a visual interface like the one available in SQL Server Management Studio (SSMS), make sure to check the first two checkboxes in the Reliability section, on the Options page.

These two options, Verify backup when finished and Perform checksum before writing to media, add a level of verification to your backup process that ensures your backups are consistent.
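If you prefer to double-check from T-SQL, the Verify backup when finished option corresponds roughly to running RESTORE VERIFYONLY against the finished backup; a small sketch reusing the path from the example above:

-- Checks that the backup is readable and re-validates page checksums
-- without actually restoring the database
RESTORE VERIFYONLY
FROM DISK = 'Z:\MyBackups\MyDatabase.bak'
WITH CHECKSUM;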

If you choose a different wizard to do your backups, like the one available through SQL Server Maintenance Plans, make sure to tick the box that corresponds to Check Database Integrity option.

A different piece of software which I prefer to use for this operation, mainly because of its simplicity of use, is SQLBackupAndFTP. It has all of the most important options in an easy-to-access, intuitive interface.

5. Do a FULL backup daily

Based on the needs of your organization, you should choose a recovery model that allows you to protect your organization's data against loss. If your organization can afford to lose 15, 30, or 60 minutes' worth of data, the simple recovery model is the option you should go for.

In this case, a FULL database backup is the best method of protection against data loss in any data protection plan; consequently, most cases would require it to be performed daily, despite the overhead of the time required for such an operation.

If your organization’s databases are small with easily recoverable data, then a FULL backup should be the way to go.

As I am already using SQLBackupAndFTP for my backup process, I find it very convenient that, by default, its backup schedule performs a FULL backup every 24 hours.

This is very helpful, as it eliminates the risk of overlooking this most important detail.

The biggest advantage of this best practice is that recovery is easier: the backup is just one file, and no database log is needed. The downside is that the data that can be recovered is only up to the time of the backup, and depending on the size of your database, it can take up a lot of disk space.

6. Do differential backups more frequently

If you consider FULL backups too expensive for your organization, from a resource availability and data loss point of view, there is the option of doing differential backups of your databases, which can and should be done more frequently than FULL backups.

In terms of advantages, differential backups take less time to complete and use less disk space, as they contain just the data that has changed since the last FULL backup. No database log is needed in this case either, and restores are more precise in terms of data.

The disadvantages are that the data that can be recovered is still only up to the time of the backup, and the restore process is a bit more complicated, as two files are involved: the last FULL backup and the differential.
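In T-SQL, a differential backup is simply the BACKUP command with the DIFFERENTIAL option added; a sketch reusing the hypothetical path from section 4:

-- Backs up only the extents changed since the last FULL backup
BACKUP DATABASE MyDatabase
TO DISK = 'Z:\MyBackups\MyDatabase_diff.bak'
WITH DIFFERENTIAL, CHECKSUM;

To restore, you apply the last FULL backup WITH NORECOVERY, then the latest differential WITH RECOVERY.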

7. Do transaction log backups even more frequently

The next most important step in doing backups, after FULL and differential backups, is backing up the transaction log. As the transaction log contains all the recent activity in the database, it can be used to restore the database to a specific point in time, which is its biggest advantage.

Doing a transaction log backup also truncates the log, which keeps it from becoming full. Similar to database backups, transaction log backups can be done while the system is active.

If your organization has a high activity with some of its databases, doing a transaction log backup every 10 minutes is recommended, while other databases which are less active might have their transaction logs backed up every 30 minutes or 60 minutes.

The negative side of this type of backup is that the database must use a recovery model that fully logs transactions (FULL or BULK_LOGGED), which increases the size of the log and the effort required during the restore process.
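A sketch of a transaction log backup, again with hypothetical names (the .trn extension is a common convention for log backups):

-- Backs up the log and truncates the inactive portion;
-- requires the FULL or BULK_LOGGED recovery model
BACKUP LOG MyDatabase
TO DISK = 'Z:\MyBackups\MyDatabase_log.trn'
WITH CHECKSUM;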

8. Practice recovery operations

A successful company is flexible and adapts quickly to changes in the market. Business requirements can change unexpectedly, and when they do, your backup strategies can become obsolete.

Testing your backup strategies frequently, covering the different scenarios that might appear, including both full system and individual database restores, will ensure that your backup plans deliver the expected efficiency and work when they are needed.

9. Regularly back up system databases

Even though a backup strategy will save you from losing user data, it is incomplete without a backup plan for your SQL Server system databases: the master, model, and msdb databases.

These databases are essential because they contain system configuration along with SQL Server job information which has to be restored in case of a total system restore.

You should also keep a strict and frequent schedule for backing up your system databases, preferably daily, if your instance configuration changes frequently. For more stable installations, you can perform this backup operation less often.
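If you script these backups, a minimal sketch with hypothetical paths might be:

-- master supports only FULL backups; model and msdb are small, so
-- daily FULL backups are cheap insurance
BACKUP DATABASE master TO DISK = 'Z:\MyBackups\master.bak' WITH CHECKSUM;
BACKUP DATABASE model TO DISK = 'Z:\MyBackups\model.bak' WITH CHECKSUM;
BACKUP DATABASE msdb TO DISK = 'Z:\MyBackups\msdb.bak' WITH CHECKSUM;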

If you have already considered taking a look at SQLBackupAndFTP for your backup process, there is more good news regarding this software.

It also offers you the possibility to back up the system databases along with your user databases, with just one extra click. You still have the options to back these databases up on a schedule and to your favorite location, be it on the same disk or somewhere remote.

Conclusion

If you follow these best practices in your backup process, you will eventually find the optimal and most efficient combination of steps to manage your organization's databases. Software is here to help us and make our lives easier, while making us more efficient. This is why, for some of the steps above, I recommend using SQLBackupAndFTP, which in my opinion has the most complete and easy-to-use set of tools necessary to manage your databases.

Also, by incorporating any of the steps above into your backup practices, you are going to improve your organization's resilience against data loss and speed up the recovery of your data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Virtualized SQL Server Performance and Storage System – Notes from the Field #013

[Note from Pinal]: This is a new episode of the Notes from the Field series. A common comment I often hear from developers is – "I have a virtual environment and I have followed all the tricks and tips which I should apply to SQL Server, but I still do not see a performance improvement on my server. I believe virtualization is not a good thing." The fact of the matter is that we do not know how to tune a virtualized server. A regular SQL Server, where we have a dedicated server, and a virtualized SQL Server each have a few different tricks associated with them.

In this episode of the Notes from the Field series, database expert David Klee explains a very crucial issue DBAs and Developers face in their career – how you can improve the performance of the overall server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


Out in the field, the sheer number of trouble scenarios that you run into is mind boggling. But, over time, you start to see a number of patterns appear, and you start to build a list of cause-and-effect relationships in your mind. That list of common patterns helps you be more efficient at troubleshooting different problems. Having virtualized SQL Server and other mission-critical workloads for quite some time now, storage performance is the number one recurring item on that list. Let's discuss why this is, and how to get underneath the symptoms and into the root causes of some of these common themes.

Performance Collection

Paramount to any discussion on storage performance, in both the physical and virtual worlds, is performance statistic collection. What amount of I/O performance do your SQL Servers consume throughout the day or week? You cannot begin a conversation about good or bad storage performance without having a usage baseline to compare the current system state against. Use your favorite performance statistic collector (Perfmon is mine) and collect data around the clock. Use this data to start to trend your usage. Common Windows performance counters that you should sample for disk include:

Counter Group: PhysicalDisk
Counters:
% Idle Time
Average Disk Bytes / Read
Average Disk Bytes / Write
Average Disk Read Queue Length
Average Disk sec / Read
Average Disk sec / Write
Average Disk Transfer Bytes / sec
Average Disk Write Queue Length
Current Disk Queue Length
Disk Read Bytes / sec
Disk Reads / sec
Disk Write Bytes / sec
Disk Writes / sec

Remember to use the option to sample all of the drives individually instead of just the total aggregate counter.

Scenario

Take this scenario as an example. You currently have 20 mission-critical SQL Servers that you are looking to virtualize. Each one exhibits steady-state business-day storage utilization of 8,000 IOPs (with bursts to 15,000) and 45 MB/s (with bursts to 275 MB/s). Given the state of your virtualization environment regarding CPU and memory consumption, you feel comfortable with six SQL Server virtual machines per physical host. Do the math: can each host handle the aggregate storage workload? It comes to between 48,000 and 90,000 IOPs and between 270 and 1,650 MB/s, per host.

Uh-oh.

Virtualize these machines and you most likely cannot achieve that level of performance at each host. You now have a performance bottleneck based on your physical server’s performance, and you are going to notice.

Now What?

Each of your hosts and shared storage devices (usually a SAN) should be carefully examined before you virtualize your SQL Servers. The SAN itself should be configured in a manner that can exceed your aggregate I/O requirements, both per host and per LUN. RAID types, controller counts and cache values, disk group configurations, whatever – should all be able to exceed what you need. Most of the time, the core SAN can handle your requests without a problem. Other times, more spindles or SSDs, controller cache, or reconfiguring RAID types can possibly fix the situation.

At the host level, though, things get interesting. Each host will have some means to connect to the storage (fiber HBAs, Ethernet adapters, etc.). A default configuration is usually what I see in the field, which is usually not optimal for resource-intensive workloads such as SQL Server. Round-trip path performance may be slow, active multipathing may be misconfigured or not configured at all, individual paths can be congested and not load balanced, or not enough overall paths to the SAN may be available. Any one of those items can cause an individual VM's performance to suffer, and in aggregate this impact can be devastating to the overall performance of all of the VMs on a host.

At the VM level, you have some tricks that can help you improve performance even more. Configure your SQL Server VMs with multiple virtual disks and spread out your workload. A common disk layout for my SQL Server templates is as follows.

Letter Purpose
C: Operating system
D: SQL Server home and system databases
F: User database data files (1 of X)
L: User database log files (1 of X)
T: TempDB database data and log files
Z: SQL Server database backup target

By utilizing multiple virtual disks, you allow the hypervisor to more efficiently multipath the I/O requests rather than funnel all of that I/O traffic down one route. You can then place the virtual disks on the appropriate type and configuration of storage. For example, the OS disk can go on a SAS tier of disks. The backup volume could go on a SATA tier, as well as rarely used archival data. Commonly used data could get placed on SSDs. Maximize your performance where you need it, and minimize costs where you don’t quite need top performance.

Another quick tip is that for VMware-specific environments, you can utilize the VMware Paravirtual SCSI driver. You can get a sizable performance improvement for a virtual disk. Michael Webster has a great post demonstrating the performance differences here, and I have a how-to guide that you can use to retrofit your existing I/O-intensive virtual machines to take advantage of this free performance boost here.

Conclusion

Hopefully your virtualized SQL Servers and other mission-critical systems are performing beautifully in your virtual environment. If not, review these tips to see what improvements you can make. Even if things are good right now, incorporate these tips to help squeeze more performance from your virtualized SQL Servers to help you scale into the future!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tools for Proactive DBAs – Policy Based Management – Notes from the Field #012

[Note from Pinal]: This is the 12th episode of the Notes from the Field series. When taking a vacation, the biggest worry any DBA has is what will happen to their database in their absence. They are often worried that something will go wrong with their server, or that some users will change something without their permission and knowledge. This worries them so much that even when they take vacations, they keep looking at their phone or email continuously. Here is a simple trick which DBAs can implement so they can take their vacation without worrying about their database.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their career – how to be proactive and manage database policies before they are violated. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


In this week's tip from the field we're going to cover Policy Based Management. Policy Based Management (PBM) allows you to import and build policies that can be enforced or checked to validate that your SQL Server farm is compliant with best practices. Today, we're going to show you how to import best practice policies. In a future tip, we will show you how you can leverage Central Management Server and Policy Based Management together to validate best practice settings against your SQL Server farm.

In order to use Policy Based Management you will need to enable it. In this case PBM is already enabled, but this is how you would enable it in case it's not already enabled on your instance.

For those of you who are familiar with SQL Server Best Practices Analyzer, you will notice that several of its checks have been included as policies that can be imported. We are going to take a look at how you can import some of these policies.

Policy Based Management can be accessed via SSMS under the management node for an instance as shown below. Right click on policies and select Import Policy.

Once you click on import you will see the following window. When you click on the ellipsis next to "Files to import" you will see that, by default, it points to the folder that includes the default best practice policies shown below.

You will notice that there are several best practices policies. Today, we are going to import just the Database Auto Close policy.

Once we import the following policy we will see it inside the policies tree in SSMS.

You will notice that there is a red arrow on the Database Auto Close policy we just imported. This is because the policy evaluation isn’t scheduled. You can manually evaluate it or schedule a SQL Agent job to run on a schedule to evaluate the policy.

In this tip, we're going to evaluate the policy manually. This is done by right-clicking on the policy and selecting Evaluate. You will see that the "Database Auto Close" policy was evaluated for every user database on the instance.
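If you also want to review evaluation results outside the GUI, the history lands in msdb. A sketch, assuming the msdb policy tables that ship with PBM in SQL Server 2008 and later:

-- List imported policies and their recorded evaluation results
SELECT p.name AS policy_name,
       h.end_date,
       h.result -- 1 = passed, 0 = failed
FROM msdb.dbo.syspolicy_policies AS p
LEFT JOIN msdb.dbo.syspolicy_policy_execution_history AS h
    ON p.policy_id = h.policy_id
ORDER BY h.end_date DESC;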

If you enjoyed this tip from the field, check out several other posts on Policy Based Management.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Check for Database Integrity – Notes from the Field #011

[Notes from Pinal]: There is always a scenario where we suddenly end up having an error in a database query which is related to database integrity. We see errors where SQL queries are not executed because there is corruption in the database file. When we see this happening, in many cases it is already too late to correct the issue. It would be nice if we had been checking the database integrity frequently, making sure that our database is in the best state to return results anytime we want. Tim talks about database integrity in this episode of Notes from the Field.

Linchpin People are database coaches and wellness experts for a data driven world. In this 11th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face when they are dealing with database integrity. Do not forget the most important aspect of any system – database integrity!


When I get asked to review a database server environment, a very important check that I perform is to review when the last DBCC CHECKDB scan was run on each database. It is very important to check for integrity regularly on all production databases.

Scheduling DBCC CHECKDB to run on a regular basis is a really good practice. DBCC CHECKDB checks the logical and physical integrity of all the objects in the database in which it is run. I highly recommend reading a previous blog post that Pinal Dave wrote that gives a great introduction to and explanation of DBCC CHECKDB.
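The command itself is straightforward; a sketch against a hypothetical database, where NO_INFOMSGS and ALL_ERRORMSGS keep the output focused on actual problems:

DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;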

As a consultant and full-time DBA I perform these checks very often and have a set of detailed scripts that I use. The basic information about when DBCC CHECKDB was last run can be obtained by running the following script.

EXEC ('DBCC DBINFO() WITH TABLERESULTS, NO_INFOMSGS');

The output will resemble a table of ParentObject, Object, Field, and Value rows (screenshot omitted).

You will want to review the output from this script and look for 'dbi_dbccLastKnownGood' in the Field column. This isn't too difficult to do if you are just searching a few databases, but in my experience I have to run this against many instances with numerous databases on each. In that case I recommend building a script that loops through each database, inserts the output of the above result set into a temporary table, and reports back only the rows where Field = 'dbi_dbccLastKnownGood'. There are a number of these scripts already written and published on the web, and a rough sketch follows below.
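A minimal version of such a looping script, assuming the ParentObject, Object, Field, and Value columns that DBCC DBINFO returns WITH TABLERESULTS on recent versions:

-- Collect dbi_dbccLastKnownGood for every online database
CREATE TABLE #DbccInfo
(
    DatabaseName SYSNAME NULL,
    ParentObject VARCHAR(255),
    Object VARCHAR(255),
    Field VARCHAR(255),
    Value VARCHAR(255)
);

DECLARE @DbName SYSNAME;
DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @DbName;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- note: assumes simple database names; add escaping for production use
    INSERT INTO #DbccInfo (ParentObject, Object, Field, Value)
    EXEC ('DBCC DBINFO (''' + @DbName + ''') WITH TABLERESULTS, NO_INFOMSGS');
    UPDATE #DbccInfo SET DatabaseName = @DbName WHERE DatabaseName IS NULL;
    FETCH NEXT FROM db_cursor INTO @DbName;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT DatabaseName, Value AS LastKnownGood
FROM #DbccInfo
WHERE Field = 'dbi_dbccLastKnownGood';

DROP TABLE #DbccInfo;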

What is crucial is that you are performing regular DBCC CHECKDB scans. If you are not proactively checking for integrity problems, you are putting yourself at risk of losing data. If data becomes corrupt and you do not have backups predating the corruption, then you have, in most cases, lost data. Even if you have a job scheduled to run regular checks, I recommend creating a report that scans each server to validate the checks are being run.

It is worth noting that DBCC CHECKDB is a very IO-intensive operation and should be run during maintenance windows or during non-peak times. I know of several organizations that have a process in place to perform backup restore validations in a dedicated environment. They perform their integrity scans as part of that process, enabling them to offload the integrity checks to a non-production environment.

However you architect your integrity checks, just make sure you are performing them and that your backup process protects you in the event corruption is found.  If you are prepared then when corruption does happen, you will be ready.

If you want me to take a look at your server and its settings or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Auditors, Passwords and Exceptions – Notes from the Field #010

[Note from Pinal]: This is the 10th episode of the Notes from the Field series. Every other day, I read online about various security breaches happening worldwide – some have lost passwords, some have lost phone numbers, and some have lost credit card information. Security is the most important aspect of an application. The worst scenario would be when thieves come to your door and guess that you have kept your key under the door-mat. A password is just like the key to get into the database. If you make your password so easy that everyone can guess it, it will be very bad for your system. You need to keep it complex and also follow your organization's policy. In this blog post Brian hits a very important concept about application security.

In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and Developers face on their production server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words.


Auditors focus on logins, especially the password settings. Why? Because they are easy to audit, and those audits find that too many organizations still get them wrong. With respect to SQL Server, you can't control things like how many days before the password expires or what the complexity rules are. Those are controlled by the operating system, and SQL Server follows the operating system's rules. What can be controlled is whether a login follows the login policy rules. There are actually three options here, illustrated in the short sketch after the list:

  • The login doesn’t follow any of the policy rules.
  • The login follows all of the policy rules.
  • The login follows all of the policy rules except for password expiration.
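These options map to the CHECK_POLICY and CHECK_EXPIRATION clauses when a SQL login is created or altered; a hypothetical sketch, where the login names and passwords are placeholders:

CREATE LOGIN AuditDemo1 WITH PASSWORD = 'Pl@ceh0lder!1', CHECK_POLICY = OFF; -- follows none of the rules
CREATE LOGIN AuditDemo2 WITH PASSWORD = 'Pl@ceh0lder!2', CHECK_POLICY = ON, CHECK_EXPIRATION = ON; -- follows all of the rules
CREATE LOGIN AuditDemo3 WITH PASSWORD = 'Pl@ceh0lder!3', CHECK_POLICY = ON, CHECK_EXPIRATION = OFF; -- all except expiration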

Auditors are interested in exceptions. When something doesn’t follow the rules or doesn’t follow all of the rules, an auditor gets interested. If you’re asked to show the auditors evidence of what a login follows, there are two ways to do this: via the GUI or via a script. In many cases auditors will ask for a screenshot that looks something like:

Auditing SQL Login (GUI)

Note the areas I’ve highlighted. This is the information most auditors are looking for with respect to logins that are created within SQL Server (not domain accounts like domain users or groups). If you only have a small number of logins, then using the GUI is not very time consuming. If you have a lot of SQL logins or SQL Servers to audit, however, you want to script this. While an auditor may ask for a screenshot, what an auditor really wants is the information contained in the screenshot. Clicking through every login and taking a screenshot is a waste of time. Thankfully, there is a particular catalog view, sys.sql_logins, that contains all the information the auditor will want.

The catalog view sys.sql_logins contains all the same information as sys.server_principals plus other information that is applicable only to SQL Server-based logins. What’s more, it only contains the SQL Server-based logins. It doesn’t contain any information on Windows users and groups. Therefore, it’s the right catalog view to use to satisfy the auditors asking about logins and passwords. This query will return the information they need:

SELECT name, is_disabled,
is_policy_checked, is_expiration_checked
FROM sys.sql_logins;

You can still give them a screenshot; however, instead of giving them a screenshot of each and every login, you should give them a screenshot of the query and the results of that query. In the following screenshot, I've flagged two logins that would be of interest to the auditors because they do not follow the password policy (is_policy_checked = 0).

SQL Login info by Query

If you have a lot of SQL Servers, you could simply automate this query with a PowerShell script that queries each server and dumps the results to a text file. In that way you save a lot of time providing the results needed, freeing you up for other tasks. Remember, auditors need the information, they don’t necessarily need a screenshot for each and every login.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tools for Proactive DBAs – Central Management Server – Notes from the Field #009

[Note from Pinal]: This is the ninth episode of the Notes from the Field series. If you are a DBA or Developer, there is always a situation when you want to execute the same query on multiple servers. This can work out very well if your servers number in the single digits; however, there are cases when we have hundreds of servers and each requires multiple queries to be executed. In this blog post John explains how we can achieve this goal with the help of CMS.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their career – how to run queries and policies against a whole group of servers at once. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the mistakes I see repeated in the field is not leveraging the features included inside of SQL Server. Today, we're going to talk about one of my favorite tools to help DBAs be proactive. This tool is Central Management Server (CMS). In a nutshell, a central management server allows you to run queries or policies against a group of servers. CMS is included in Standard and Enterprise editions and utilizes the msdb database to configure servers and groups of servers. CMS requires Windows authentication, and the Windows user account connected to CMS is authenticated against all the instances in a group when you execute queries or policies.

In today's example, we will look at general database properties across many servers in a production group.

In order to take advantage of CMS you will need to enable a CMS server. Typically, I would use a non-production server for this task. To start, click on View in the menu bar and select Registered Servers.

Once you have the Registered Servers window open you can right click on the Central Management Server to create your CMS.

In this example we are using a server named “PBMDEMO” as our central management server. Once the CMS is configured you can create folders to group servers. In this example we have a Development group and a Production group.

Now that you have imported your instances into your groups, you can right-click on a group and select New Query. This will create a connection to each instance using your Windows-authenticated account.

Now you can type a query. In this example we are using "SELECT * FROM dbo.sysdatabases." You will see that it fired on each instance and returned the results in a single result set.
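The same information is available from the modern catalog view, which CMS will likewise fan out to every instance in the group; a sketch:

-- sys.databases supersedes the legacy dbo.sysdatabases compatibility view
SELECT name, state_desc, recovery_model_desc
FROM sys.databases;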


In a future tip from the field we will cover using Policy-Based Management with a Central Management Server.  In the meantime, you can watch a video showing you how to monitor missing backups or check out my several blog posts on Central Management Server.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Overall SQL Server Performance – Not Just Query Performance – Notes from the Field #008

[Note from Pinal]: This is a new episode of the Notes from the Field series. When SQL Server is slow, we think we can improve the performance of the server just by increasing the performance of queries. However, sometimes the queries are already doing well, and there is no increase in speed even though we tune them. It is very crucial to know where exactly the problem is in the system, as well as to understand how to improve the performance of the overall system. In this blog post David takes a very interesting approach, explaining how one can systematically work with the system and improve the performance of the overall server.

In this episode of the Notes from the Field series, database expert David Klee explains a very crucial issue DBAs and Developers face in their career – how you can improve the performance of the overall server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


Too many times I see DBAs fighting to squeeze performance out of a poorly performing SQL Server database. Hit your favorite search engine and look for SQL Server performance, and the majority of the results talk about query tuning, execution plans, and index techniques. Occasionally, problems do exist outside of the database! I wholeheartedly agree that these topics are vitally important for any DBA to become proficient at, but they should not be the only layer of focus when tuning. Instead, step back a bit, approach the problem holistically, and at least spot-check the health of the various layers around your databases.

My analogy is simple. You are driving down the road and your car suddenly slows to a crawl. Taking the ‘start at the query’ approach is akin to immediately opening the engine bay and looking at the motor. The holistic approach is to first walk around the car and look for anomalies en route to the hood, where you might discover that you have a problem like a flat tire.

As a DBA, you do not have to be a master of each layer of the infrastructure stack. However, having a conceptual understanding of each of these layers and their impact on your SQL Servers will help you triage performance concerns more quickly and reliably. Just look at the individual components in this hypothetical infrastructure around your application!

Now, I am in no way advocating always throwing more hardware at a performance problem. That action always leads to bigger and more expensive problems down the road. I want you to help the business get the most out of the infrastructure that it already owns and manages.

Storage is the largest source of issues that I routinely encounter. Three main issues constantly pop up, all of which have a tremendous (and silent) impact on SQL Server's performance: throughput, operations per second, and latency. The root causes of these issues vary widely, but all can be at least identified and measured with SQL Server metrics such as sys.dm_io_virtual_file_stats, and wonderful tools such as SQLIO and IOMeter. These tools can be used to establish your storage performance characteristics at all workload levels, and then Windows Perfmon or other third-party tools can be used to track your steady-state storage utilization.
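As one starting point, a sketch of a per-file latency check using sys.dm_io_virtual_file_stats (thresholds and interpretation are up to you and your baseline):

-- Average read/write latency per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;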

Virtualization can cause silent performance hits when it is designed for consolidation and not the performance of resource-hungry business-critical applications such as SQL Server. Key architectural items such as CPU overcommitment and VM specifications can be overlooked and cause wildly variable performance characteristics, none of which are good for your databases.

Even a topic as subtle as networking can be a huge performance drain. Ever notice the SQL Server wait statistic ASYNC_NETWORK_IO? It means that either your application cannot consume the data as fast as SQL Server is sending it, or that a network problem is slowing down the transmission of the data. A utility such as iperf can help you test your end-to-end network performance and identify network issues.

No matter the tools or techniques used in the daily tradeoffs for performance, the key to understanding the impact of these problems is to establish a system performance baseline when everything is known to be working well. By establishing a baseline, and then periodically updating it with the latest repeatable system benchmarks, you know whether your systems are performing at their peak capability or something is unhealthy. Quick metrics that benchmark the entire instance, such as DVDstore, can be used to quickly check the relative performance of an instance compared to its known optimal value.

Establish your baselines of each portion of the system stack that you have some level of dependency on. Understand the additional layers that indirectly impact your database performance. Put it all together and scrutinize each piece and you can help squeeze the most performance out of your existing infrastructure!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Optimal Memory Settings for SQL Server – Notes from the Field #006

[Notes from Pinal]: Quite often we hear that my SQL Server is taking more memory, or that my server needs more memory because all the applications are running slow. The reality is that no matter how much memory you have, it is never going to be enough. We all want more memory in our server and we all want our server to run faster; however, the issue is that we usually neglect to set the optimal memory settings.

Linchpin People are database coaches and wellness experts for a data driven world. In this sixth episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face when they are setting up a new server. Do not forget the most important settings for any system – Memory!


When I get brought in to evaluate a SQL Server installation or get asked to look into performance issues on a SQL Server, one of the first items I look at is memory consumption and how memory is configured on the instance.

Many times I find that the default memory values have not been changed: the minimum memory setting is set to 0 and the maximum memory setting is set to 2 PB. This can be a major source of performance issues if max memory hasn't been configured and SQL Server is starving the operating system. In the same way, by not having the minimum memory setting configured on a server that is shared with an application server or has multiple instances of SQL Server installed, SQL Server could be starved of needed memory.

You can view your minimum server memory and maximum server memory settings in Server Properties or by querying sys.configurations.

SELECT *
FROM sys.configurations
WHERE configuration_id IN (1543, 1544); -- 1543 = min server memory (MB), 1544 = max server memory (MB)

If you are using SSMS, you can see these values in the Server Properties menu.

Setting the minimum memory setting for SQL usually isn't a hard number to come up with. I usually pick a value that is fairly low, just to make sure SQL will not get starved to the point of becoming unstable. I must say that I have rarely encountered an issue where the minimum value not being set has caused a problem; my experience has been with SQL Server starving the OS. I have still made it a best practice to set the minimum value, just as a precaution.

Where things get a little more complicated is with the max memory value. I have a predetermined value I use as a starting point when the server is dedicated to SQL. For example, on a server with 16 GB of RAM I would set my maximum memory value to 12 GB, leaving 4 GB for the OS. I would set my minimum value to 4 GB and then monitor the Available MBytes memory counter to see how much memory is left for the operating system.
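A sketch of those settings in T-SQL for the 16 GB example; both values are in MB, and both are advanced options:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096; -- 4 GB floor
EXEC sp_configure 'max server memory (MB)', 12288; -- 12 GB cap, leaving 4 GB for the OS
RECONFIGURE;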

I am a little more conservative than some of my peers, and I typically leave a little more for the OS. Some articles recommend leaving 150-300 MB available; I usually like to leave around 1 GB, so if I see that 2 GB is still available for the OS, I will increase the maximum memory value to allow SQL to consume more. With SQL Servers today having 16, 32, 64, or 128 GB as standard builds, I think I can afford to give the OS more than the recommended 150-300 MB.

I recently blogged about how to get the Available MBytes memory counter from within SQL Server without having to use Perfmon counters; you can read more about that here: Get OS Counter Memory Available Using T-SQL.

If you want me to take a look at your server and its settings or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Frequently Running Query and Elapsed Time – Notes from the Field #005

[Note from Pinal]: This is the fifth episode of the Notes from the Field series. What happens when your server performs badly? The very first reaction is identifying what is running on the server. Once you identify what is running, you want to identify the two most crucial pieces of information about a running query – 1) how often the query runs and 2) what the duration of the query is. In this blog post Brian hits a very important concept about performance tuning.

In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and Developers face on their production server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words.


While I primarily focus on security, as a DBA it's important for me to be able to performance tune as well. As a consultant, one of the things I've seen that affects performance is queries that run often. If we just look at a given query running one time, it doesn't set off our flags because the query runs quickly and doesn't take up a lot of resources. However, the problem is that the query in question runs often. As a result, the cumulative effect does cause a performance hit to the overall system.

Finding the Queries Running Frequently

Oftentimes, the client doesn't realize that a particular query is running as often as it does. They may be using Profiler or a server-side trace, looking at the execution time for queries and, of course, focusing on long-running queries. As a result, they don't see the queries executing thousands of times and causing the whole system to drag. A simple query such as the following will reveal what's running often, based on a minimum count of executions.

DECLARE @MinCount BIGINT;
SET @MinCount = 5000;
SELECT st.[text], qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > @MinCount
ORDER BY qs.execution_count DESC;

This isn't to say that every query running a lot is causing a significant performance issue. However, it helps to have a list to look at and investigate. Even a marginal improvement on a query that executes often can result in a noticeable improvement of the system.

Looking at Total Elapsed Time

One way to determine which queries to focus on first is to look at the total_elapsed_time column in sys.dm_exec_query_stats. What you may find is the query that runs most often isn’t the one that has run the longest cumulatively. Let’s modify our query to include the total_elapsed_time column and to order by it in descending order.

DECLARE @MinCount BIGINT;
SET @MinCount = 5000;
SELECT st.[text], qs.execution_count, qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.execution_count > @MinCount
ORDER BY qs.total_elapsed_time DESC;

High execution count and high total elapsed time are good indicators of queries to look at in addition to the infamous “long running queries.”

A Caveat

The sys.dm_exec_query_stats and sys.dm_exec_sql_text dynamic management view and function reset when SQL Server restarts, as their data is kept in memory. Therefore, the execution count is cumulative from the last time SQL Server restarted. If SQL Server has restarted recently, such as after patching, you're not going to receive stats that accurately reflect the application's usage.
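One quick way to check how long the counters have been accumulating is the instance start time, available since SQL Server 2008:

-- If this is very recent, treat the cumulative stats with suspicion
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;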

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQL Server Configuration Checking – A Must Do for Every DBA – Notes from the Field #004

[Note from Pinal]: This is the fourth episode of the Notes from the Field series. While installing SQL Server, many DBAs take pride that they were able to install it without any special configuration, or that MS has built such a fantastic installer that all the default values have worked since installation. The defaults may work well for some specific situations; however, every application and business has its own suitable settings and configuration. If you are a DBA whose business runs on the default configuration, you must read this post, as it talks about how to understand your business and how to optimally configure your SQL Server.

In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue DBAs and Developers face in their career – how you perform a configuration check for your database. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Mike in his own words.


As a SQL Server Consultant, I have often performed a WellDBA™ Exam for a client who has installed SQL Server using the "Next-Next-Next-Finish" approach. They've accepted all of the defaults, thrown their production databases on the server, and just assumed that all of their instance configuration options were happy and safe. I often rant about independent software vendors for various reasons. One of my biggest pet peeves is when they seem to allow their customers to install with all defaults and never even consider changing them. Whether you are deploying your own databases, or those created by a vendor, you should look at the SQL Server configuration options for your instance.

How Do I Check My Configuration Options?

sp_configure is the SQL Server system stored procedure used to set your instance configurations. I've blogged about this and the sys.configurations table before; in that post I go through the ways to use sp_configure and what the columns in sys.configurations mean. The options set by sp_configure control various instance-level run-time parameters – options like the maximum memory for SQL Server to use, whether to compress backups by default, and many others. You can also change many of these through the GUI in SSMS. Each configuration option is well documented in Books Online.

You should check your configurations and review to see where they differ from the default settings. You should also review to see where you are using a problematic default setting, as discussed below. Every environment is different, but as you look at these options more frequently and learn about them, you’ll be well on your way to properly managing your environment into the future.

The only problem with just selecting everything out of sys.configurations, as many scripts do, is that you are likely to miss a setting that differs from the default, because very few people can remember the default value of everything this table returns!
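As a trivial illustration of the idea, a sketch that flags a handful of settings against their documented defaults (a real check, like the script discussed below, would cover the full list):

-- Flag a few settings that differ from their documented defaults
DECLARE @defaults TABLE (name NVARCHAR(35), default_value INT);
INSERT INTO @defaults (name, default_value) VALUES
    (N'max server memory (MB)', 2147483647),
    (N'cost threshold for parallelism', 5),
    (N'backup compression default', 0);

SELECT c.name, c.value_in_use, d.default_value
FROM sys.configurations AS c
JOIN @defaults AS d ON c.name = d.name
WHERE CONVERT(INT, c.value_in_use) <> d.default_value;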

A Script To Help

At Linchpin People, we've recently released a SQL Server configuration checking script to help look through your configurations. This free download helps you quickly get to the bottom line of your configuration options. Instead of having to look at all of them and review each one, our script shows you where you have a setting that is different from the default. It also shows you where you have kept a default that we have seen customers suffer from not changing. This script helps you zero in on your problem areas more quickly.

Summary

Rather than use this post to get into all of the configuration options you should change and why (there are plenty of posts here on Pinal’s blog and other blogs about that as well as Microsoft resources) – let me use these remaining few words as a reminder – Check your configurations, get to know them, understand them and consider changing them. There have been many engagements where simply changing common defaults like Max Server Memory or Cost Threshold for Parallelism have made noticeable differences for our clients. Check out that script in the blog post on our site, review your environment and investigate the “Potentially Bad Defaults” section. Review each setting, research the best answer for your environment, and get your environment on the right path.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)