SQL SERVER – SysTools SQL Recovery Software – An Experiment to Recover Database Corruption

NOTE: This was my experiment with database corruption. If your database is corrupted, you should contact professionals who do this as their primary job. Always make sure that you run experiments on a copy of your database on a development server, rather than on the production server.

SQL Server helps keep database management orderly, with step by step procedures for updating and handling data through Structured Query Language. The duties and responsibilities of a database administrator are ever changing and challenging, ranging from keeping the system running properly to being prepared with recovery solutions for emergencies. A DBA should make sure that the database can be recovered and restored whenever error messages or corruption issues are reported on the server. A SQL database may become corrupted, inaccessible, or even be dropped for any number of reasons. The efficiency of a SQL Server admin is judged by the reliability, completeness, and accuracy of the recovery process they select. SQL Recovery software can prove to be a good solution for most SQL MDF and NDF related corruption issues.

SysTools SQL Recovery software is the tool of choice for many SQL DBAs because of the range of functionality it offers for resolving trouble with SQL database files. Its main claim is that it can succeed where even the built-in disaster recovery options for MS SQL Server fail. The procedures DBAs usually follow after SQL database corruption often end in failure because the steps involved are complex and incomplete. The disaster recovery options provided by Microsoft® for SQL Server, such as database mirroring, transactional replication/warm standby servers, failover clustering, or the most commonly used backup and restore feature, can still end in disaster if an attempt to restore the database goes wrong. This is where SQL Recovery software becomes relevant: it combines user friendly features aimed at making SQL MDF/NDF file recovery successful.

What Makes SysTools SQL Recovery Software One Outstanding Solution?

SQL Recovery software is frequently chosen over many similar products available online. The tool provides accurate SQL database recovery owing to its features and updates. It can restore a database from a corrupt to a healthy state even after error messages related to issues like a database not mounting or a damaged RAID stripe. It can also repair a database after a failed RAID rebuild attempt or a broken RAID setup, covering RAID controller and RAID array problems along with software/hardware trouble and file system/OS failures. Some of the software's qualities are listed below to make it easier for SQL Server administrators or users to decide on the best possible solution for their database recovery.

Performance Enhancers

  • Quick and Advanced Scan options make MDF/NDF file scanning specific to the damage: for normal file corruption the Quick Scan is suggested, while for severe corruption issues the Advanced Scan is the one to select.
  • MDF and NDF database file recovery is supported for SQL Server versions 2012, 2008 R2, 2008, 2005, and 2000.
  • Advanced scan and repair are offered for both primary and secondary database file corruption or inaccessibility.
  • Deleted SQL database recovery is also offered (tried and tested), with recovery performed accurately and without data loss.
  • Recovers SQL rules, triggers, stored procedures, functions, views, and primary and unique keys.

Recovery Capabilities

  • Save the database scan as a .str file for situations where the process is interrupted or intentionally paused because of a large file size.
  • Auto detection of the SQL Server version lets the process run without any trouble from a version mismatch.
  • SQL Server compatible SQL scripts can be created in case the database has to be rebuilt in the absence of a SQL Server setup.
  • Recovers the SQL database even when the DBCC CHECKDB command fails to give accurate results.
  • XML data is also recoverable if XML type columns or variables were created in SQL Server.

Software Strengths in Common Corruption Scenarios

Error-Based Corruption

Error-based corruption is the worst kind, especially when the messages do not reveal the cause while leaving the database inaccessible. The program is effective at recovering data in such circumstances, and the vendor extends a guarantee of recovery even when the issue has gone unattended for a while. Normally, a delay in attending to error-based issues in SQL Server increases their severity and diminishes the probability of recovery; the claim is that this is not the case with the SQL database recovery program.

Damaged Allocation Pages

Recovering allocation pages involves a lot of complexity; the most challenging part is that an entire database normally has to be repaired just to recover a single damaged allocation page. The SQL MDF recovery program allows tables, views, triggers, etc. to be recovered selectively, as required. This helps reduce the Recovery Time Objective for database recovery without compromising the Recovery Point Objective.

Database In Suspect Mode

A SQL Server database is marked suspect in extreme cases of corruption or severe internal damage. SQL Server takes this step in order to maintain transactional consistency under the ACID properties of a relational database management system. Fortunately, recovery from even this kind of damage has been tried and tested with the SQL database recovery program, with successful results.

NOTE: Unlike other recovery methods, the SQL Recovery program does not require server downtime: it is an independent application that runs without SQL Server being available. Hence, you can continue working on the server (with other databases) while the recovery is in progress.

The tool is capable of resolving most corruption issues that can occur in a SQL database, such as schema corruption and consistency corruption. In any scenario where the DBCC CHECKDB command fails, the SQL Recovery tool helps recover the data with its integrity intact. The trouble of manual SQL database repair is avoided by the user friendly interface of the SQL Recovery software.

NOTE: The free demo of the software lets you try the tool before purchasing the full version. However, it is limited to saving only the STR file and showing a preview of the scanned content of MDF/NDF files. To export and save recovered data, a full version license is required.

How Does the Software Resolve SQL Database Corruption?

Once the software is downloaded and installed on any Windows OS version, users can run it from the default path:

Start>>All Programs>> SysTools® SQL Recovery Software>>SysTools® SQL Recovery Software


  • Select the option to open the MDF file from the location where it is saved.
  • To load the damaged MDF file, browse to its location, select the file, and then click Open.

For MDF and NDF file recovery, users need to customize the following options:

  • Under the Scan Options tab, select Auto detect SQL Server file (.mdf) version to let the tool identify the file version, or, if you know the current version, select the SQL Server version manually (make sure you select the correct version of SQL Server, because a wrong selection can produce inconsistent results).


  • Under the NDF Options tab, select "Let Me Choose SQL Server Secondary Database Files", use the Add Files, Add Folder, or Remove File options as required, and then click OK.


  • Browse to and select the NDF file, and click Open.
  • The progress report of the automated MDF file scan is then displayed.
  • To save the time and effort of rescanning large MDF files, users can save the scan as an STR file. If you want the .str file, click Yes; otherwise click No and proceed to the next step.
  • To save the STR file, select a desired location for its storage and click Save.
  • Preview tables, stored procedures, triggers, rules, etc. in the preview screen for the MDF/NDF files.
  • Select the Export option to save recovered database items from the MDF and NDF files.

To save the SQL database files in a healthy format, customize the following options:

To export and save data as MDF file:

  • To export recovered data in MDF format, choose "Export as SQL Server Database".
  • Select "Create Destination Database".
  • For SQL Server authentication, fill in the database credentials: server name, database name, user name, and password.
  • Select export feature as “with only schema” or “with schema and data”.
  • Click Export/Save option.

To export and save data as SQL Server compatible SQL scripts:

  • Check and select the desired component of the SQL database file that you want to be exported.
  • Choose "With only Schema" or "With Schema and data" to export the SQL database.
  • Click Export/Save option to export recovered data as SQL Server compatible SQL scripts.
  • Browse location to save the output SQL file and click OK.
  • A progress report of the SQL script saving process is displayed.

Click OK to complete the Export process.

Secondary Measure For SQL Database Recovery

If the time taken to recover a corrupt SQL database does not suit you and a healthy log file for the same MDF is available, try SysTools SQL Log Analyzer. The application analyzes the transactions performed on a database and recovers them using the corresponding LOG file. Combining log analysis with a database recovery program can reduce the Recovery Time Objective below what the SQL Recovery program alone achieves.


Have a look at its processing technique:

  1. The LDF file along with the corresponding MDF file is required for recovering the database through a thorough analysis of its transactions.


  2. The software automatically loads the corresponding SQL database for the LDF file you browse to, if both files are available in the same folder. You can also browse to and add the associated MDF file manually if it is in a different location.


  3. Transaction details of the selected log file are listed as soon as the scan is completed, letting you preview all the operations performed on the associated database.
  4. For exporting the database, you have three options:
    1. SQL Server Database (if SQL Server is available)
    2. SQL Server compatible SQL scripts (if SQL Server is unavailable)
    3. CSV (for transaction log details)


End Observation And Conclusion: The widespread usage of SQL Server worldwide, especially by large scale organizations, makes server downtime due to a damaged database unaffordable. Both SQL Recovery and SQL Log Analyzer are dependable options for resolving cases of MDF/NDF file corruption. A wise selection is all it takes to overcome the trouble caused by an inaccessible SQL Server database.

NOTE: This was my personal experiment with database corruption. Always make sure that you run experiments on a copy of your database on a development server, rather than on the production server. Only once you have verified that your database is 100% recovered should you apply the process to your production database. This blog takes no responsibility whatsoever.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Learning SSIS – Where Do I Start? – Notes from the Field #014

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. As Wikipedia says, it features a fast and flexible data warehousing tool used for data extraction, transformation, and loading (ETL). The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data.


In this episode of the Notes from the Field series I asked SSIS expert Andy Leonard a very crucial question – how do you learn SSIS, and where do you start? Andy was very kind to answer the question and provides plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


Where Do I Start?

I often meet people who want to learn SSIS but don’t know where to get started. There are several great places to learn SSIS.

Articles

I recommend the Stairway to Integration Services at SQL Server Central, but I am clearly biased – I wrote it (and I’m still writing it…). The stairway is a systematic walk-through laden with images which, I believe, helps when learning something new.

Books

There are lots of good books on SSIS. You can learn something from all of them. At Amazon, you can find an assortment of books by the major publishers such as Apress, Microsoft Press, and Wrox – as well as self-published works – by simply searching for SSIS.

Free Training

There are a number of online resources where you can obtain free training. Individuals and companies offer training on YouTube, Vimeo, and other online video websites. The Professional Association for SQL Server (PASS) stores recordings of Virtual Chapter meetings and various presentations.

You can also obtain free training at events such as SQL Saturday or local SQL Server user groups. The Linchpin People blog contains some free SSIS video training.

Free training is usually good training, but not always. One caveat about free training: You often get what you pay for.

Paid Training

You can opt for professional training delivered in-person, remotely, or via video. There are several sites that provide excellent SSIS training via video. Some are subscription services for which you pay a yearly or monthly fee. Others allow you to purchase a course or a collection of courses.

In-person training is usually the most expensive option. Why? Remember: you get what you pay for. Many students who attend the Linchpin People SSIS training courses report learning more during the Day 1 morning session than they knew entering the classroom. In-person training is the best and fastest way to obtain a working knowledge of SSIS.

Why So Much?

So why does in-person training cost so much? First, the curriculum must be developed. This may sound trivial – putting together some slides and talking points and a handful of exercises – but it is not. I spent almost a year developing the labs for Linchpin People’s course: From Zero to SSIS. Why so long? Several reasons. I wanted the content to:

  1. Represent real-world data (No one’s data is as clean as AdventureWorks).
  2. Build from a central starting point (spiral-out).
  3. Emphasize the techniques and SSIS components and tasks you will use as an SSIS Developer, not deliver a survey of all SSIS components and tasks (some of them don’t work very well, after all).

(Shameless plug: I am personally delivering From Zero to SSIS 19-23 May in Reston Virginia, and SQL Server Integration Services 2012 Design Patterns 8-11 Sep in London, England.)

Time and Hard Work

There is no substitute for spending time working hard to learn anything, and SSIS is no different. In fact, learning SSIS will take more time than learning almost any other technology. Why? Integration Services is a software development platform for moving data. Learning it well requires understanding both database technology and software development.

As I tell students in my SSIS training, “If it was easy, anyone could do it.” Many people are not interested in learning something difficult. The result is there are not enough SSIS developers to do all the available SSIS work. So if you learn SSIS, you should enjoy some measure of job security.

Working in Information Technology requires a commitment to lifelong learning. You should not expect to be able to learn one version of SSIS – or one version of any technology – and be able to work in that version of the technology for the next 20 years (you will be fortunate to be able to work in a single version of a technology for 20 months!). If you don’t like “keeping up,” Information Technology is not the right field for you.

You will need to invest time to learn SSIS, and you will need to continue investing time to improve your SSIS skills and keep up with changes in new releases of the technology.

Conclusion

You can learn SSIS. You need to make up your mind that you are going to learn it and let nothing stop you. Treat each obstacle as “something to overcome.” Accept no shortcuts. Do the work. Put in the time.

I have the utmost confidence in you. You can do it.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 9 Things You Should be Doing with Your Backups

SQL Server backups are essential for any business that stores its data in SQL Server. The following is a list of best practices you should be following if you are the person in charge of maintaining your organization’s databases.

1. Backups should not be stored in the same physical location as database files

This very simple rule will save your business from a lot of difficult situations in the event that a physical drive becomes faulty.

If something like this occurs, you should have the option of using other drives, or of connecting to a different location that stores a backup of your database, so that you are able to restore your database from that location.
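
As a simple illustration (the UNC path and database name below are placeholders, not taken from the original post), a backup written straight to a share on another machine keeps the backup off the drive that holds the data files:

BACKUP DATABASE MyDatabase
TO DISK = '\\BackupServer\SQLBackups\MyDatabase.bak';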

2. Backup schedules are set up

Another good safety precaution is for your backups to always be accompanied by proper schedules which have been established to meet the application needs and your particular business requirements.

Creating backup schedules is very important because as time passes and your backups get older, the risk of data loss becomes higher, unless you are protected and you have a way to reestablish all the data up to the point of failure.

Backup schedules will provide you with a consistent and constant history of your data, which will always be useful, not only in disaster situations.
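
As a rough sketch of such a schedule (the job name, path, and 1 AM start time below are hypothetical; adjust them to your own requirements), a nightly FULL backup job can be created with the SQL Server Agent procedures in msdb:

USE msdb;
GO

EXEC dbo.sp_add_job
    @job_name = N'Nightly FULL backup - MyDatabase';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Nightly FULL backup - MyDatabase',
    @step_name = N'Backup database',
    @subsystem = N'TSQL',
    @database_name = N'master',
    @command   = N'BACKUP DATABASE MyDatabase
                   TO DISK = ''Z:\MyBackups\MyDatabase_FULL.bak''
                   WITH CHECKSUM, INIT;';

EXEC dbo.sp_add_schedule
    @schedule_name = N'Daily at 1 AM',
    @freq_type = 4,               -- daily
    @freq_interval = 1,           -- every day
    @active_start_time = 010000;  -- 01:00:00

EXEC dbo.sp_attach_schedule
    @job_name = N'Nightly FULL backup - MyDatabase',
    @schedule_name = N'Daily at 1 AM';

EXEC dbo.sp_add_jobserver
    @job_name = N'Nightly FULL backup - MyDatabase';
GO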

3. Test the restore procedure of your backup on a test server

You should always try restoring your backups on a test server and make sure that you can recover all of the data with all the options you need to use during a restore on the main server.

Just because a backup process has finished successfully, this will not guarantee that the backup can also be restored. You might not be aware that your backup was not created correctly, and when trying to restore it on the main server, the process might fail due to media corruption or other factors.
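
As a minimal sketch of such a test restore (the paths, the test database name, and the logical file names are assumptions; check the real logical names with RESTORE FILELISTONLY first), restoring to a differently named database on a test server could look like this:

RESTORE FILELISTONLY
FROM DISK = 'Z:\MyBackups\MyDatabase.bak';

RESTORE DATABASE MyDatabase_Test
FROM DISK = 'Z:\MyBackups\MyDatabase.bak'
WITH MOVE 'MyDatabase' TO 'D:\TestData\MyDatabase_Test.mdf',
     MOVE 'MyDatabase_log' TO 'L:\TestLogs\MyDatabase_Test.ldf',
     STATS = 10;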

4. Make use of all available verification options when doing backup

Another good practice is to use all options provided in the process of backing up your database in order to make sure that the end result is a transactionally consistent backup.

If you are using T-SQL scripts to back up your database, then make sure that when using the BACKUP command you also add the CHECKSUM parameter in the WITH clause, which ensures that each page is passed through a checksum operation to verify the consistency of your backup on the backup media.

BACKUP DATABASE MyDatabase
TO DISK = 'Z:\MyBackups\MyDatabase.bak'
WITH CHECKSUM;

If you prefer doing your backups manually through a visual interface like the one available in SQL Server Management Studio (SSMS), make sure to check the first two checkboxes in the Reliability section, on the Options page.

These two options, Verify backup when finished and Perform checksum before writing to media will add a level of verification to your backup process which will ensure your backups are consistent.
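
For reference, a hand-rolled T-SQL equivalent of those two checkboxes corresponds roughly to backing up with CHECKSUM and then verifying the file afterwards with RESTORE VERIFYONLY (the path and database name are placeholders):

BACKUP DATABASE MyDatabase
TO DISK = 'Z:\MyBackups\MyDatabase.bak'
WITH CHECKSUM;          -- checksum before writing to media

RESTORE VERIFYONLY
FROM DISK = 'Z:\MyBackups\MyDatabase.bak'
WITH CHECKSUM;          -- verify the backup when finished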


If you choose a different wizard to do your backups, like the one available through SQL Server Maintenance Plans, make sure to tick the box that corresponds to Check Database Integrity option.


A different piece of software which I prefer to use for this operation, mainly because of its simplicity of use, is SQLBackupAndFTP. It has all the most important options in an easy to access and intuitive interface.


5. Do a FULL backup daily

Based on the needs of your organization, you should choose a recovery model that will allow you to protect your organization against data loss. If your organization can afford to lose 15, 30, or 60 minutes’ worth of data, the simple recovery model is the option you should go for.

In this case, having a FULL database backup is the best method of protection against data loss in any data protection plan, and consequently most cases would require it to be performed daily, despite the overhead of the time required for such an operation.

If your organization’s databases are small with easily recoverable data, then a FULL backup should be the way to go.

As I am already using SQLBackupAndFTP for my backup process, I find it very convenient that, by default, its backup schedule is set to do a FULL backup every 24 hours.

This is very helpful, and this way the risk of missing an important detail is eliminated.

The biggest advantage of this best practice is that recovery is easier: the backup is just one file and no database log is needed. The downside of this type of backup is that data can only be recovered up to the time of the backup and, depending on the size of your database, it can take up a lot of disk space.

6. Do differential backups more frequently

If you consider FULL backups too expensive for your organization, from a resource availability and data loss point of view, there is the option of doing differential backups of your databases, which can and should be done more frequently than FULL backups.

In terms of advantages, differential backups will take less time to complete and also less disk space, as they contain just the data that has changed since the last FULL backup operation. No database log is needed in this case either, and the restores are more precise in terms of data.

The disadvantage is that, still, the data that can be recovered is only up to the time of the last differential backup, and the restore process is a bit more complicated as there are two files involved.
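
A differential backup is the same BACKUP DATABASE command with the DIFFERENTIAL option added, for example (hypothetical path and database name):

BACKUP DATABASE MyDatabase
TO DISK = 'Z:\MyBackups\MyDatabase_DIFF.bak'
WITH DIFFERENTIAL, CHECKSUM;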

7. Do transaction log backups even more frequently

The next most important step in doing backups, after FULL and differential backups, is to back up the transaction log. As the transaction log contains all the recent activity in the database, it can be used to restore the database to a specific point in time, which is its biggest advantage.

Doing a transaction log backup also truncates the log, which keeps it from becoming full. Similar to database backups, transaction log backups can be done while the system is active.

If your organization has a high activity with some of its databases, doing a transaction log backup every 10 minutes is recommended, while other databases which are less active might have their transaction logs backed up every 30 minutes or 60 minutes.

The downside of this type of backup is that the database must use a recovery model that retains the transaction log (full or bulk-logged), which increases the size of the log and the effort required during the restore process.
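
For illustration, a transaction log backup uses BACKUP LOG rather than BACKUP DATABASE (hypothetical path; the database must not be using the simple recovery model):

BACKUP LOG MyDatabase
TO DISK = 'Z:\MyBackups\MyDatabase_LOG.trn'
WITH CHECKSUM;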

8. Practice recovery operations

A successful company is flexible and quickly adapts to the changes in the market. In such a case, where business requirements could change unexpectedly, this could mean that your backup strategies can become obsolete.

Testing your backup strategies on a frequent basis, covering the different scenarios that might appear – both full system restores and individual database restores – will ensure that your backup plans have the expected efficiency and will work at the time they are needed.

9. Regularly back up system databases

Even though a backup strategy will save you from losing user data, it is incomplete without a backup plan for your SQL Server system databases: the master, model, and msdb databases.

These databases are essential because they contain system configuration along with SQL Server job information which has to be restored in case of a total system restore.

You should also keep a strict and frequent schedule for backing up your system databases, preferably on a daily basis, if your instances change frequently. Otherwise, for installations which are more stable, you can do this backup operation less frequently.
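
In plain T-SQL this is the same BACKUP DATABASE command run against each system database (paths are placeholders; keep in mind that master supports only FULL backups):

BACKUP DATABASE master TO DISK = 'Z:\MyBackups\master.bak' WITH CHECKSUM, INIT;
BACKUP DATABASE model  TO DISK = 'Z:\MyBackups\model.bak'  WITH CHECKSUM, INIT;
BACKUP DATABASE msdb   TO DISK = 'Z:\MyBackups\msdb.bak'   WITH CHECKSUM, INIT;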

If you have already considered taking a look at SQLBackupAndFTP for your backup process, I have more good news regarding this software.

Along with your user databases, it also offers you the possibility of backing up the system databases with just one extra click. You still have the option to back these databases up on a schedule and to your favorite location, be it on the same disk or in a different location.

Conclusion

If you follow these best practices in your backup process, you will eventually find the optimum and most efficient combination of steps to manage your organization’s databases. Software is here to help us and make our lives easier, while making us more efficient. This is why, for some of the steps above, I recommend using SQLBackupAndFTP, which in my opinion has the most complete and easy to use set of tools necessary to manage your databases.

Also, by incorporating any of the steps above into your backup practices, you are going to improve your organization’s resilience against data loss and speed up the recovery of your data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Virtualized SQL Server Performance and Storage System – Notes from the Field #013

[Note from Pinal]: This is a new episode of the Notes from the Field series. A common comment I often hear from developers is – “I have a virtual environment and I have followed all the tricks and tips which I should apply to SQL Server, but I still do not see a performance improvement on my server. I believe virtualization is not a good thing.” The matter of the fact is that we do not know how to tune a virtualized server. A regular SQL Server on a dedicated server and a virtualized SQL Server each have a few different tricks associated with them.


In this episode of the Notes from the Field series, database expert David Klee explains a very crucial issue DBAs and Developers face in their career – how you can improve the performance of the overall server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


Out in the field, the sheer number of trouble scenarios that you run into is mind boggling. But, over time, you start to see a number of patterns appear, and you start to build this list of cause-and-effect relationships in your mind. The list of common patterns helps you be more efficient at troubleshooting different problems. In having virtualized SQL Server and other mission-critical workloads for quite some time now, storage performance is the number one recurring item on that list. Let’s discuss why this is, and how to get underneath the symptoms and into the root causes of some of these common themes.

Performance Collection

Paramount to any discussion on storage performance, in both the physical and virtual worlds, is performance statistic collection. What amount of I/O performance do your SQL Servers consume throughout the day or week? You cannot begin a conversation about good or bad storage performance without having a usage baseline to compare the current system state against. Use your favorite performance statistic collector (Perfmon is mine) and collect data around the clock. Use this data to start to trend your usage. Common Windows performance counters that you should sample for disk include:

Counter Group: PhysicalDisk

  • % Idle Time
  • Average Disk Bytes / Read
  • Average Disk Bytes / Write
  • Average Disk Read Queue Length
  • Average Disk sec / Read
  • Average Disk sec / Write
  • Average Disk Transfer Bytes / sec
  • Average Disk Write Queue Length
  • Current Disk Queue Length
  • Disk Read Bytes / sec
  • Disk Reads / sec
  • Disk Write Bytes / sec
  • Disk Writes / sec
Remember to use the option to sample all of the drives individually instead of just the total aggregate counter.

Scenario

Take this scenario as an example. You currently have 20 mission-critical SQL Servers that you are looking to virtualize. Each one exhibits steady-state business-day storage utilization of 8,000 IOPs (with bursts to 15,000) and 45 MB/s (with bursts to 275 MB/s). Given the state of your virtualization environment regarding CPU and memory consumption, you feel comfortable with six SQL Server virtual machines per physical host. Do the math: can each host handle the aggregate storage workload? It comes to between 48,000-90,000 IOPs and between 270-1,650 MB/s, per host.

Uh-oh.

Virtualize these machines and you most likely cannot achieve that level of performance at each host. You now have a performance bottleneck based on your physical server’s performance, and you are going to notice.

Now What?

Each of your hosts and shared storage devices (usually a SAN) should be carefully examined before you virtualize your SQL Servers. The SAN itself should be configured in a manner that can exceed your aggregate I/O requirements, both per host and per LUN. RAID types, controller counts and cache values, disk group configurations, whatever – should all be able to exceed what you need. Most of the time, the core SAN can handle your requests without a problem. Other times, more spindles or SSDs, controller cache, or reconfiguring RAID types can possibly fix the situation.

At the host level, though, things get interesting. Each host will have some means to connect to the storage (fiber HBAs, Ethernet adapters, etc.). A default configuration is usually what I see in the field, which is usually not optimal for resource-intensive workloads such as SQL Server. Round-trip path performance may be slow, active multipathing may be misconfigured or not configured at all, individual paths can be congested and not load balanced, or not enough overall paths to the SAN may be available. Any one of those items can cause an individual VM’s performance to suffer, and in aggregate this impact can be very devastating to the overall performance of all of the VMs on a host.

At the VM level, you have some tricks that can help you improve performance even more. Configure your SQL Server VMs with multiple virtual disks and spread out your workload. A common disk layout for my SQL Server templates is as follows.

Letter Purpose
C: Operating system
D: SQL Server home and system databases
F: User database data files (1 of X)
L: User database log files (1 of X)
T: TempDB database data and log files
Z: SQL Server database backup target

By utilizing multiple virtual disks, you allow the hypervisor to more efficiently multipath the I/O requests rather than funnel all of that I/O traffic down one route. You can then place the virtual disks on the appropriate type and configuration of storage. For example, the OS disk can go on a SAS tier of disks. The backup volume could go on a SATA tier, as well as rarely used archival data. Commonly used data could get placed on SSDs. Maximize your performance where you need it, and minimize costs where you don’t quite need top performance.

Another quick tip is that for VMware-specific environments, you can utilize the VMware Paravirtual SCSI driver. You can get a sizable performance improvement for a virtual disk. Michael Webster has a great post demonstrating the performance differences here, and I have a how-to guide that you can use to retrofit your existing I/O-intensive virtual machines to take advantage of this free performance boost here.

Conclusion

Hopefully your virtualized SQL Servers and other mission-critical systems are performing beautifully in your virtual environment. If not, review these tips to see what improvements you can make. Even if things are good right now, incorporate these tips to help squeeze more performance from your virtualized SQL Servers to help you scale into the future!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tools for Proactive DBAs – Policy Based Management – Notes from the Field #012

[Note from Pinal]: This is the 12th episode of the Notes from the Field series. When taking a vacation, the biggest worry any DBA has is what will happen to their database in their absence. They are often worried that something will go wrong with their server or that some users will change something without their permission and knowledge. This keeps them so worried that even though they take vacations, they keep looking at their phone or email continuously. Here is a simple trick which a DBA can implement to take a vacation without worrying about their database.


In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs, and Developer faces in their career – how to be proactive and manage database policies before they are violated. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


In this week’s tip from the field we’re going to cover Policy Based Management. Policy Based Management (PBM) allows you to import and build policies that can be enforced or checked to validate that your SQL Server farm is compliant with best practices. Today, we’re going to show you how to import best practice policies. In a future tip, we will show you how you can leverage Central Management Server and Policy Based Management together to validate best practice settings against your SQL Server farm.

In order to use Policy Based Management you will need to enable it. In this case PBM is already enabled, but this is how you would enable it in case it’s not already enabled on your instance.


For those of you who are familiar with SQL Server Best Practices Analyzer, you will notice that several of these checks have been included as policies that can be imported. We are going to take a look at how you can import some of these policies.

Policy Based Management can be accessed via SSMS under the management node for an instance as shown below. Right click on policies and select Import Policy.

(Screenshot: the Policies node under Management in SSMS, with Import Policy selected)

Once you click on Import you will see the following window. When you click on the ellipsis next to “Files to import”, you will see that by default it points to the folder that includes the default best practice policies, shown below.

(Screenshot: the Import dialog pointing to the default best practice policy folder)

You will notice that there are several best practices policies. Today, we are going to import just the Database Auto Close policy.


Once we import the following policy we will see it inside the policies tree in SSMS.


You will notice that there is a red arrow on the Database Auto Close policy we just imported. This is because the policy evaluation isn’t scheduled. You can manually evaluate it or schedule a SQL Agent job to run on a schedule to evaluate the policy.


In this tip, we’re going to evaluate the policy manually. This is done by right clicking on the policy and selecting Evaluate. You will see that the “Database Auto Close” policy was evaluated for every user database on the instance.


If you enjoyed this tip from the field, check out several other posts on Policy Based Management.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Check for Database Integrity – Notes from the Field #011

[Notes from Pinal]: There is always a scenario where we suddenly end up with an error in a database query that is related to database integrity. We see errors where SQL queries are not executed because there is corruption in the database file. When we see this happening, in many cases it is already pretty late to correct the issue. It would be nice if we had been checking database integrity frequently and making sure that our database is in the best state to return results any time we want. Tim talks about database integrity in this episode of Notes from the Field.

Linchpin People are database coaches and wellness experts for a data driven world. In this 11th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face when they are dealing with database integrity. Do not forget the most important aspect for any system – database integrity!


When I get asked to review a database server environment, a very important check that I perform is to review when the last DBCC CHECKDB scan was run on each database.  It is very important to check for integrity regularly on all production databases.

Scheduling DBCC CHECKDB to run on a regular basis is a really good practice.  DBCC CHECKDB checks the logical and physical integrity of all the objects in the respective database in which it is run.  I highly recommend reading a previous blog post that Pinal Dave wrote that gives a great introduction and explanation of DBCC CHECKDB.

As a consultant and full time DBA I perform these checks very often and have a set of detailed scripts that I use.  The basics of finding out when DBCC CHECKDB was last run can be obtained by running the following script.

EXEC ('DBCC DBINFO() WITH TABLERESULTS, NO_INFOMSGS')

The output will resemble the following:

(Screenshot: DBCC DBINFO output returned with TABLERESULTS)

You will want to review the output from this script and look for ‘dbi_dbccLastKnownGood’ in the Field column.  This isn’t too difficult to do if you are just checking a few databases, but in my experience I have to run this against many instances with numerous databases on each instance.  I would recommend building a script that loops through each database, inserts the output from the above result set into a temporary table, and only reports back rows where Field = ‘dbi_dbccLastKnownGood’. There are a number of these scripts already written and published on the web.
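
As a minimal sketch of such a loop (not one of the published, hardened versions – the table and variable names here are arbitrary), the following walks every online database, captures the DBCC DBINFO output into a temporary table, and reports the last known good CHECKDB date:

CREATE TABLE #dbinfo
(
    ParentObject VARCHAR(255),
    [Object]     VARCHAR(255),
    Field        VARCHAR(255),
    Value        VARCHAR(255)
);

CREATE TABLE #lastKnownGood
(
    DatabaseName       SYSNAME,
    LastKnownGoodCheck VARCHAR(255)
);

DECLARE @dbName SYSNAME, @sql NVARCHAR(MAX);

DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @dbName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Capture the DBCC DBINFO rows for this database
    SET @sql = N'DBCC DBINFO (' + QUOTENAME(@dbName, '''') + N') WITH TABLERESULTS, NO_INFOMSGS';

    INSERT INTO #dbinfo (ParentObject, [Object], Field, Value)
    EXEC (@sql);

    -- Keep only the last known good CHECKDB date
    INSERT INTO #lastKnownGood (DatabaseName, LastKnownGoodCheck)
    SELECT @dbName, Value
    FROM #dbinfo
    WHERE Field = 'dbi_dbccLastKnownGood';

    TRUNCATE TABLE #dbinfo;

    FETCH NEXT FROM db_cursor INTO @dbName;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT DatabaseName, LastKnownGoodCheck
FROM #lastKnownGood
ORDER BY DatabaseName;

DROP TABLE #dbinfo;
DROP TABLE #lastKnownGood;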

What is crucial is that you are performing regular DBCC CHECKDB scans. If you are not proactively checking for integrity problems you are putting yourself at risk of losing data.  If data becomes corrupt and you do not have backups predating the corruption, then you have in most cases lost data.  Even if you have a job scheduled to run regular checks, I recommend creating a report that scans each server to validate that the checks are actually being run.

It is worth noting that DBCC CHECKDB is a very IO intensive operation and should be run during maintenance windows or during nonpeak times.  I know of several organizations that have a process in place to perform backup restore validations in a dedicated environment.  They perform their integrity scans as part of that process, enabling them to offload the integrity checks to a non-production environment.

However you architect your integrity checks, just make sure you are performing them and that your backup process protects you in the event corruption is found.  If you are prepared then when corruption does happen, you will be ready.

If you want me to take a look at your server and its settings or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Auditors, Passwords and Exceptions – Notes from the Field #010

[Note from Pinal]: This is the 10th episode of the Notes from the Field series. Every other day, I read online about various security breaches happening worldwide – some have lost passwords, some have lost phone numbers, and some have lost credit card information. Security is the most important aspect of an application. The worst scenario would be when thieves come to your door and you have kept your key under the door-mat, where they are able to find it. A password is just like a key to get into the database. If you keep your password so easy that everyone can guess it, it will be very bad for your system. You need to keep it complex and also follow your organization’s policy. In this blog post Brian hits on a very important concept about application security.


In this episode of the Notes from the Field series database expert Brian Kelley explains a very crucial issue DBAs and Developer faces on their production server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words.


Auditors focus on logins, especially the password settings. Why? They do so because it’s easy to audit, and those audits find that too many organizations still get these settings wrong. With respect to SQL Server, you can’t control things like how many days before the password expires or what the complexity rules are. Those are controlled by the operating system, and SQL Server follows the operating system’s rules. What can be controlled is whether a login follows the login policy rules. There are actually three options here (a short T-SQL sketch follows the list):

  • The login doesn’t follow any of the policy rules.
  • The login follows all of the policy rules.
  • The login follows all of the policy rules except for password expiration.
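
For reference, these three states map to the CHECK_POLICY and CHECK_EXPIRATION options of CREATE LOGIN and ALTER LOGIN; a small sketch with a hypothetical login name and password:

-- Follows all of the policy rules
CREATE LOGIN AuditDemoLogin
WITH PASSWORD = 'Str0ng!Passw0rd',
     CHECK_POLICY = ON,
     CHECK_EXPIRATION = ON;

-- Follows all of the policy rules except password expiration
ALTER LOGIN AuditDemoLogin
WITH CHECK_EXPIRATION = OFF, CHECK_POLICY = ON;

-- Doesn't follow any of the policy rules
ALTER LOGIN AuditDemoLogin
WITH CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;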

Auditors are interested in exceptions. When something doesn’t follow the rules or doesn’t follow all of the rules, an auditor gets interested. If you’re asked to show the auditors evidence of what a login follows, there are two ways to do this: via the GUI or via a script. In many cases auditors will ask for a screenshot that looks something like:

(Screenshot: a SQL login’s properties in the GUI, with the policy-related settings highlighted)

Note the areas I’ve highlighted. This is the information most auditors are looking for with respect to logins that are created within SQL Server (not domain accounts like domain users or groups). If you only have a small number of logins, then using the GUI is not very time consuming. If you have a lot of SQL logins or SQL Servers to audit, however, you want to script this. While an auditor may ask for a screenshot, what an auditor really wants is the information contained in the screenshot. Clicking through every login and taking a screenshot is a waste of time. Thankfully, there is a particular catalog view, sys.sql_logins, that contains all the information the auditor will want.

The catalog view sys.sql_logins contains all the same information as sys.server_principals plus other information that is applicable only to SQL Server-based logins. What’s more, it only contains the SQL Server-based logins. It doesn’t contain any information on Windows users and groups. Therefore, it’s the right catalog view to use to satisfy the auditors asking about logins and passwords. This query will return the information they need:

SELECT name, is_disabled,
is_policy_checked, is_expiration_checked
FROM sys.sql_logins;

You can still give them a screenshot; however, instead of giving them a screenshot of each and every login, you should give them a screenshot of the query and the results of that query. In the following screenshot, I’ve flagged two logins that would be of interest to the auditors because these do not follow the password policy (is_policy_checked = 0).

(Screenshot: results of the query, with two logins flagged where is_policy_checked = 0)
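
Since auditors care about exceptions, a small variation of the same query returns only the logins that do not fully follow the policy rules:

SELECT name, is_disabled,
is_policy_checked, is_expiration_checked
FROM sys.sql_logins
WHERE is_policy_checked = 0
OR is_expiration_checked = 0;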

If you have a lot of SQL Servers, you could simply automate this query with a PowerShell script that queries each server and dumps the results to a text file. In that way you save a lot of time providing the results needed, freeing you up for other tasks. Remember, auditors need the information, they don’t necessarily need a screenshot for each and every login.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tools for Proactive DBAs – Central Management Server – Notes from the Field #009

[Note from Pinal]: This is the ninth episode of the Notes from the Field series. If you are a DBA or Developer, there is always a situation where you want to execute the same query on multiple servers. This works out very well if your servers number in the single digits; however, there are cases where we have hundreds of servers and each requires multiple queries to be executed. In this blog post John explains how we can achieve this goal with the help of CMS.


In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their career – how to perform a performance related root cause analysis. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the mistakes I see repeated in the field is not leveraging the features included inside of SQL Server. Today, we’re going to talk about one of my favorite tools to help DBAs be proactive. This tool is Central Management Server, “CMS.”  In a nutshell, the central management server allows you to run queries or policies against a group of servers. CMS is included in Standard and Enterprise Edition and utilizes an msdb database to configure servers and groups of servers. CMS requires Windows authentication, and the Windows user account connected to CMS is authenticated against all the instances in a group when you execute queries or policies.

In today’s example, we will look at general database properties across many servers in a production group.

In order to take advantage of CMS you will need to enable a CMS server. Typically, I would use a non-production server for this task. To start, click on View in the menu bar and select Registered Servers.


Once you have the Registered Servers window open you can right click on the Central Management Server to create your CMS.


In this example we are using a server named “PBMDEMO” as our central management server. Once the CMS is configured you can create folders to group servers. In this example we have a Development group and a Production group.


Now that you have imported your instances into your groups, you can right click on a group and select New Query. This will create a connection to each instance using your Windows authenticated account.


Now you can type a query. In this example we are using “Select * from dbo.sysdatabases.”  You will see that it fired on each instance and returned the results into a single result set.

In a future tip from the field we will cover using Policy-Based Management with a Central Management Server.  In the meantime, you can watch a video showing you how to monitor missing backups or check out my several blog posts on Central Management Server.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Overall SQL Server Performance – Not Just Query Performance – Notes from the Field #008

[Note from Pinal]: This is a new episode of the Notes from the Field series. When SQL Server is slow, we think we can improve the performance of the server just by improving the performance of queries. However, sometimes the queries are already doing well and there is no increase in speed even after we tune them. It is very crucial to know where exactly the problem is in the system, as well as to understand how to improve the performance of the overall system. In this blog post David takes a very interesting approach, explaining how one can systematically work with the system and improve the performance of the overall server.


In this episode of the Notes from the Field series, database expert David Klee explains a very crucial issue DBAs and Developers face in their career – how you can improve the performance of the overall server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of David in his own words.


Too many times I see DBAs fighting to squeeze out performance from a poor performing SQL Server database. Hit your favorite search engine and look for SQL Server performance and the majority of the results are talking about query tuning, execution plans, and index techniques. Occasionally, problems do exist outside of the database! I wholeheartedly agree that these topics are vitally important for any DBA to become proficient at, but it should not be the only layer of focus when performing tuning. Instead, step back a bit and approach the problem holistically and at least spot-check the health of the various layers around your databases.

My analogy is simple. You are driving down the road and your car suddenly slows to a crawl. Taking the ‘start at the query’ approach is akin to immediately opening the engine bay and looking at the motor. The holistic approach is to first walk around the car and look for anomalies en route to the hood, where you might discover that you have a problem like a flat tire.

As a DBA, you do not have to be a master of each layer of the infrastructure stack. However, having a conceptual understanding of each of these layers and their impact on your SQL Servers will help you triage performance concerns more quickly and more reliably. Just look at the individual components in this hypothetical infrastructure around your application!

(Diagram: the layers of a hypothetical virtualized infrastructure stack surrounding a SQL Server application)

Now, I am in no way advocating always throwing more hardware at any performance problem. That action always leads to bigger and more expensive problems down the road. I want you to help the business get the most out of the infrastructure that they already own and manage.

Storage is the largest source of issues that I routinely encounter. Three main issues constantly pop up, all of which have a tremendous (and silent) impact on SQL Server’s performance: throughput, operations per second, and latency. The root causes of any of these issues vary widely, but all can be at least identified and measured with SQL Server metrics such as sys.dm_io_virtual_file_stats, and wonderful tools such as SQLIO  and IOMeter. These tools can be used to establish your storage performance characteristics at all workload levels, and then Windows Perfmon or other third-party tools can be used to track your steady-state storage utilization.
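
As a rough starting point for baselining from inside SQL Server (a sketch only, not a full collector), sys.dm_io_virtual_file_stats can be sampled per database file; the numbers are cumulative since the instance started, so collect them periodically and diff the samples to trend throughput and latency:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_written,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id;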

Virtualization can cause silent performance hits when it is designed for consolidation and not the performance of resource-hungry business-critical applications such as SQL Server. Key architectural items such as CPU overcommitment and VM specifications can be overlooked and cause wildly variable performance characteristics, none of which are good for your databases.

Even a topic as subtle as networking can be a huge performance drain. Ever notice the SQL Server wait statistic ASYNC_NETWORK_IO? It means that either your application cannot consume the data as fast as SQL Server is sending it, or that a network problem is slowing down the transmission of the data. A utility such as iperf can help you test your end-to-end network performance and help identify network issues.
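
To check whether this wait is a meaningful share of your instance's waits, sys.dm_os_wait_stats can be queried directly (the counters are cumulative since startup, so compare against your baseline):

SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';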

No matter the tools or techniques used in the daily tradeoffs for performance, the key to understanding the impact of these problems is to establish a system performance baseline when everything is known to be working well. By establishing a baseline, and then periodically updating the baseline with the latest repeatable system benchmarks, you know if your systems are performing at their peak capability or if something is unhealthy. Quick metrics that benchmark the entire instance, such as DVDstore , can be used to quickly check the relative performance of an instance compared to its known optimal value.

Establish your baselines of each portion of the system stack that you have some level of dependency on. Understand the additional layers that indirectly impact your database performance. Put it all together and scrutinize each piece and you can help squeeze the most performance out of your existing infrastructure!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Optimal Memory Settings for SQL Server – Notes from the Field #006

[Notes from Pinal]: Quite often we hear that my SQL Server is taking more memory, or my server needs more memory as all the applications are running slowly. The reality is that no matter how much memory you have, it is never going to be enough. We all want more memory in our server and we all want our server to run faster; however, the issue is that we usually miss setting the optimal memory values.

Linchpin People are database coaches and wellness experts for a data driven world. In this sixth episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face when they are setting up a new server. Do not forget the most important setting for any system – memory!


When I get brought in to evaluate a SQL Server installation or get asked to look into performance issues on a SQL Server, one of the first items I look at is memory consumption and how memory is configured on the instance.

Many times I find that the default memory values have not been changed: the minimum memory setting is set to 0 and the maximum memory setting is set to 2 PB. This can be a major source of performance issues if max memory hasn’t been configured and SQL Server is starving the operating system.  In the same way, by not having the minimum memory setting set on a server that is shared with an application server or has multiple instances of SQL Server installed, SQL Server itself could be starved of needed memory.

You can view your Minimum server memory and Maximum server memory in Server Properties or by querying sys.configurations .

SELECT  *
FROM    sys.configurations
WHERE   configuration_id IN ( 1543, 1544 );

If you are using SSMS, you can see that in server properties menu.


The minimum memory value for SQL Server usually isn’t the hard number to come up with.  I usually pick a value that is fairly low, just to make sure SQL Server would not get starved to the point of becoming unstable. I must say that I have rarely encountered an issue where the minimum value not being set has caused a problem; my experience has been with SQL Server starving the OS.  I have still made it a best practice to set the minimum value, just as a precaution.

Where things get a little more complicated is with the max memory value.  I have a predetermined value that I use as a starting point when the server is dedicated to SQL Server. For example, on a server with 16 GB of RAM I would set my maximum memory value to 12 GB, leaving 4 GB for the OS. I would set my minimum value to 4 GB and then monitor the Available MBytes memory counter to see how much memory is left for the operating system.

I am a little more conservative than some of my peers and I typically leave a little more for the OS.  Some articles recommend leaving 150-300 MB available; I usually like to leave around 1 GB, so if I see that 2 GB is still available for the OS, I will increase the maximum memory value to allow SQL Server to consume more. With SQL Servers today having 16, 32, 64, or 128 GB as standard builds, I think I can afford to give the OS more than the recommended 150-300 MB.
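
Following that example (a dedicated 16 GB server; the values are in MB and purely illustrative), the two settings can be applied with sp_configure:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sys.sp_configure 'min server memory (MB)', 4096;    -- 4 GB minimum
EXEC sys.sp_configure 'max server memory (MB)', 12288;   -- 12 GB maximum, leaving 4 GB for the OS
RECONFIGURE;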

I recently blogged about how to get the mbytes memory counter from within SQL Server without having to use Perfmon Counters, you can read more about that here Get OS Counter Memory Available Using T-SQL.

If you want me to take a look at your server and its settings or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)