SQL SERVER – Tools for Proactive DBAs – Central Management Server – Notes from the Field #024

[Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject, and the fact of the matter is that there is very little information available about it online. This is why, when a very common question related to AlwaysOn comes up, people get confused.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers, related to AlwaysOn Availability Groups. Linchpin People are database coaches and wellness experts for a data-driven world. Read John's experience in his own words.

After completing several AlwaysOn Availability Group implementations, two questions come up frequently: When did my availability group fail over? Where is my read-write replica? The answer to the first one is provided here: http://johnsterrett.com/2014/03/18/where-is-my-availability-group/. Today, we're going to look at the answer to the second question.

Where is my read-write replica?

The following script, when executed on an availability group replica, returns the availability group name, current role state, and database name. This will let you know if the instance is hosting the PRIMARY “read/write” replica.

SELECT AvailabilityGroup = ag.name,
       AvailabilityGroupRole = ars.role_desc,
       DatabaseName = db.name
FROM sys.databases db
INNER JOIN sys.availability_databases_cluster adc ON db.group_database_id = adc.group_database_id
INNER JOIN sys.availability_groups ag ON adc.group_id = ag.group_id
INNER JOIN sys.dm_hadr_availability_replica_states ars ON ag.group_id = ars.group_id
    AND db.replica_id = ars.replica_id
UNION ALL
-- Databases that are not part of any availability group
SELECT NULL AS AvailabilityGroup, NULL AS AvailabilityGroupRole, name
FROM sys.databases
WHERE replica_id IS NULL

How do I check all replicas?

Now you know how to check whether an instance of SQL Server participating in an AlwaysOn Availability Group holds the “read/write” PRIMARY role. Next, we can utilize Central Management Server to run the same script across the other instances participating as replicas in your AlwaysOn Availability Group configuration. If you have never used Central Management Server, this step-by-step guide can get you going in 10 minutes.

For this week, I have a Windows failover cluster with two nodes, “SQL2012DR” and “SQL2012PROD1” (aka localhost,1433). Each node hosts its own default instance of SQL Server, and there are two separate AlwaysOn Availability Groups. Using Central Management Server, I can rerun the query above and quickly see where my read/write replica is.

Are your servers running at optimal speed, or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Jobs Shrinking Database Files – Notes from the Field #023

[Notes from Pinal]: Search the internet for ‘shrinking database’ and you will find plenty of advice on why it is bad and should not be done. If you do not believe me, just try it yourself. Even I have blogged before that it is an absolutely bad thing to do. However, there are very few blogs that talk about how to solve this ancient problem. The reason I call it an ancient problem is that quite a few organizations have been doing this for many years, and they have many different places where they have included database-shrinking code. In quite a few places, I see that people have created database jobs to shrink the database. I was talking to Tim about this, and he came up with a beautifully simple script that demonstrates how to find the jobs which are shrinking database files.

Linchpin People are database coaches and wellness experts for a data-driven world. In this episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very simple script to find jobs that shrink database files.

Often when analyzing a SQL Server instance, I come across jobs that automate shrinking a database file. Checking for shrink operations is important when analyzing a server for a number of reasons. One is that any time a data file or log file has to grow, transactions have to wait until the growth operation is complete, causing a performance impact. Shrinking a log file can contribute to a high virtual log file count, and shrinking a data file fragments the database.

I check to see if any database maintenance plans exist, and if so, whether auto shrink has been chosen; occasionally I find this is the case. I also have a T-SQL script that will search for any reference to the word ‘shrink’ in a job step. I find custom jobs like this more often than auto shrink being used in a maintenance plan. Often, the justification for configuring a job like this is drive space issues.
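One quick way to check for the auto shrink setting itself (a small companion check, not part of the job-step script below) is to ask the catalog directly which databases have the option turned on:

SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;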

In cases where the shrink is done to reclaim drive space, it is most often due to not having a proper backup routine in place for the transaction logs, or to a process that keeps a transaction open for an extended period of time. The transaction log will need to be large enough to handle your index maintenance, ETL processes, and the transactions that occur between log backups. If you find that you are experiencing erratic, large growth, then you will need to examine which transactions or processes are holding an active transaction open for such a long duration, or revisit the frequency of your log backups.
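Before resizing anything, two standard built-in commands give a quick read on the log situation:

DBCC SQLPERF(LOGSPACE); -- log size and percent used for every database
DBCC OPENTRAN; -- oldest active transaction holding the current database's log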

Below is the script I use to search for the word ‘shrink’ in any T-SQL job step.

DECLARE @Search VARCHAR(100)
SET @Search = 'shrink'
SELECT  A.[job_id],
        B.[name],
        A.[step_id],
        A.[database_name],
        A.[step_name],
        A.[command]
FROM    [msdb].[dbo].[sysjobsteps] A
JOIN    [msdb].[dbo].[sysjobs] B ON A.[job_id] = B.[job_id]
WHERE   A.[command] LIKE '%' + @Search + '%'
ORDER BY A.[database_name], B.[name]

If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Say No to DB Data Roles – SQL Security – Notes from the Field #022

[Note from Pinal]: This is the 22nd episode of the Notes from the Field series. Security is very important, and we all realize that. However, when it comes to implementing security, we are not always sure which path is the right one to take. If we do not have enough knowledge, we can end up damaging ourselves. DB data roles are a similar case: when implemented poorly, they can compromise your server's security.

In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and developers face on their production servers. Linchpin People are database coaches and wellness experts for a data-driven world. Read Brian's experience in his own words.

I am prejudiced against two fixed database roles: db_datareader and db_datawriter. When I give presentations or talk to customers, some are surprised by my stance. I have two good reasons to recommend against these two roles (and their counterparts, db_denydatareader and db_denydatawriter).

A Violation of the Principle of Least Privilege

The first reason is they violate the Principle of Least Privilege. If you’re not familiar with this security principle, it’s really simple: give permissions to do the job – no more and no less. The db_datareader and db_datawriter roles give access to all tables and views in a given database. Most of the time, this is more access than what is needed. This is a violation of the Principle of Least Privilege.

There are some cases where a user needs such access, but there is always the possibility that a new table or view will be added which the user should not have access to. This creates a dilemma: do I create new roles and remove the user from db_datareader or db_datawriter, or do I start using DENY permissions? The first involves additional work. The second means the security model is more complex. Neither is a good solution.
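To make the second option concrete, here is a hypothetical sketch (the table and user names are made up): once a new sensitive table appears, the only way to keep a db_datareader member out of it is an explicit DENY, which is exactly the added complexity described above.

-- SomeAppUser is a member of db_datareader, so an explicit DENY is required:
DENY SELECT ON dbo.NewSensitiveTable TO SomeAppUser;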

Failing the 3 AM Test

The second reason is the use of these roles violates what I call the “3 AM test.” The 3 AM test comes from being on call. When I am awakened at 3 AM because of a production problem, is this going to cause me unnecessary problems? If the answer is yes, the solution fails the test. I classify db_datareader and db_datawriter role usage as failing this test. Here’s why: the permissions granted are implicit. As a result, when I’m still trying to wake up I may miss that a particular account has permissions and is able to perform an operation that caused the problem. I’ve been burned by it in production before. That’s why it fails my test.

An Example

To see why this is an issue, create a user without a login in a sample database. Make it a member of the db_datareader role. Then create a role and give it explicit rights to a table in the database. This script does so in the AdventureWorks2012 database:
USE AdventureWorks2012;
GO
CREATE USER TestDBRoleUser WITHOUT LOGIN;
EXEC sp_addrolemember @membername = 'TestDBRoleUser', @rolename = 'db_datareader';
CREATE ROLE ExplicitPermissions;
GRANT SELECT ON HumanResources.Employee TO ExplicitPermissions;

Pick any table or view at random and check the permissions on it. I’m using HumanResources.Employee:
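As a rough equivalent of the SSMS permissions page shown in the original screenshot (my query, not Brian's exact steps), a catalog-view query lists the explicit permissions recorded on the table:

SELECT pr.name AS principal_name,
       pr.type_desc AS principal_type,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions pe
JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
WHERE pe.class = 1 -- object or column permissions
      AND pe.major_id = OBJECT_ID('HumanResources.Employee');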

We see the permissions for the role with explicit permissions. We don't, however, see the user who is a member of db_datareader. When first troubleshooting, it's easy to assume that the user doesn't have permissions. This assumption means time is wasted trying to figure out how the user was able to cause the production problem. Only later, when someone thinks to check db_datareader, will the root cause be spotted. This is why I say these roles fail the 3 AM test.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSIS Execution Control Using Precedence Constraints – Notes from the Field #021

[Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. The understanding is absolutely correct, but implementing it is not as easy as it seems. Lots of people start with great enthusiasm, but when it comes to building a control flow that addresses all the necessary elements of package execution, they face issues with the order of the tasks. This is a very critical subject, and it requires some expert advice.

Linchpin People are database coaches and wellness experts for a data-driven world. In this 21st episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion of how to control program flow during SSIS package execution.

In this post, I’m going to review the essentials of precedence constraints in SQL Server Integration Services, and will demonstrate how these can be used to precisely control the flow of the program during the SSIS package execution.

In a control flow for SQL Server Integration Services, the “lines” we commonly use to connect tasks to each other are actually smart controls.  These precedence constraints allow the ETL developer to customize the program flow from one task to another.  The most common use of precedence constraints is to simply connect two tasks with the default constraint, which is the success constraint.  In this usage, the downstream task will be executed if and only if the execution of the upstream task is successful.  As shown in the example below, the task DFT Write data to output table will be executed if the preceding task named SQL Truncate output table is successfully executed.  Similarly, the final task in the chain, named SQL Log row count to audit tbl, will be executed only if the preceding data flow task was successful.

The green lines connecting these tasks are the precedence constraints.  In the above example, we know that these are success precedence constraints by their color.

Using precedence constraints for success operations is not the only option we have. Because these are smart controls, we can configure them in a variety of ways for maximum flexibility. An example of this would be to use a failure precedence constraint to code for the possibility of failure. In this scenario, we could create one or more tasks that would be executed in the event of a failure within a package. As shown below, we have a snippet of a package showing a failure path from the data flow by way of a failure precedence constraint.

In this example, we connect a precedence constraint from the data flow task to the task named SQL Remove data from partial load, but instead of using the default setting, we reconfigure this as a failure constraint (shown with the “Failure” label and red color) to redirect the execution flow to that task in the event that the data flow execution fails.  Downstream from there, we have yet another precedence constraint configured as a completion constraint (indicated by the label and the black color).  This constraint indicates that the task SQL Log failure event to event log will be executed whenever the upstream task is finished executing, irrespective of whether the result of that execution was success or failure.

To configure a precedence constraint to a setting other than the default value of success, simply double-click on the precedence constraint and set the appropriate setting, the options for which are shown below.

As shown, we can choose Success (the default setting), Failure, or Completion.

Note that there are a few other options as well.  Above the selection for the type of constraint, you’ll see that we can also choose the evaluation operation.  The default setting is Constraint, which means that the precedence constraint uses only the success/failure/completion results as described above.  However, we also have some other options.  We can use an SSIS expression along with, or perhaps instead of, the execution status evaluation to determine how to control the execution flow.

Additionally, we have the ability to control the behavior when multiple constraints are used.  By default, if multiple constraints are connected to a single downstream task, then all of those constraints must evaluate true for said downstream task to execute.  However, we can override that default behavior by setting this value for multiple constraints to Logical OR, indicating that the downstream task will be executed if any of the connected constraints evaluates true.

As shown in the example below, we have modified the original example, changing both the evaluation operation and the multiple-constraint behavior. On the constraint between the data flow task and the data deletion task, we add an SSIS expression that checks the row count to confirm whether any rows were loaded as part of the failed execution (since we wouldn't need to run the delete operation if no rows were loaded). Further, we've added a completion constraint between the data flow task and the task named SQL Log failure event to event log, and set the multiple-constraint behavior to Logical OR to make sure we log the failure whether or not the delete operation occurred.
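In the constraint editor's Expression field, such a check might look like this (a sketch; the variable name is hypothetical and would typically be populated by a Row Count transformation inside the data flow):

@[User::RowsLoaded] > 0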

We get visual reminders of these settings as well.  The function indicator (which looks like an italicized fx) appears on the precedence constraint on which we are using the expression.  For the Logical OR setting, the constraints will appear as dashed lines rather than solid lines.

Precedence constraints are much more than just lines drawn between two tasks.  In fact, these are powerful and flexible decision tools that allow the ETL developer a great deal of control over the flow of the SSIS execution.

If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Index Fragmentation Next Steps – Notes from the Field #020

[Notes from Pinal]: Indexes are considered a silver bullet for performance tuning, and they do an amazing job when they are properly created. But just like any other engine, they need maintenance and tuning. As we get more data, indexes start to get fragmented and performance starts to degrade. I asked my friend Tim direct questions about fragmentation, and he gave me a very interesting answer.

Linchpin People are database coaches and wellness experts for a data-driven world. In this 20th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) shares a very interesting conversation with me. I asked:

Pinal: “Tim, what would you do if your indexes get fragmented?”

When analyzing systems, I often find that little maintenance is being performed. Although proper backups are my number one priority when checking out a system, index maintenance is also very important. I have a script that I run that checks for index fragmentation, and very often I find extremely high fragmentation levels for most of the tables in the database.
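Tim's exact script is not shown here, but a minimal fragmentation check along the same lines can be built on sys.dm_db_index_physical_stats (the thresholds below are common rules of thumb, not his specific settings):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
      AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;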

Finding high fragmentation is very common, and the fix is quite easy: I simply have to apply a maintenance script to clean up the fragmentation. The path I take can vary depending on the size of the database and the edition of SQL Server being run. Rebuilding indexes on Standard Edition will have a vastly different impact than an online index rebuild on Enterprise Edition. I usually take a surgical approach on a large database with very high levels of fragmentation until I get the fragmentation under control. Once the indexes are at a more manageable level of fragmentation, I allow the script to keep things in check.

There are numerous products on the market that can help, and there are free products as well. Microsoft even includes an option within database maintenance plans; however, I personally use Ola Hallengren's scripts (http://ola.hallengren.com/) for index maintenance and for updating statistics. Ola also includes a comprehensive backup solution, but unless the client needs a complete overhaul, I typically don't make changes there.

The great thing about this solution is the ability to rebuild when needed and reorganize when needed; you get to choose which option based on the level of fragmentation, and the solution gives you a great deal of flexibility in how it is run. When running index maintenance for the first time on a system that is highly fragmented, even with the online options, be careful to monitor your transaction log size. You will also need extra space in your data file for index rebuilds.
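The two commands that choice boils down to are shown below (index and table names are hypothetical; ONLINE = ON requires Enterprise Edition):

ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE; -- lighter touch, for moderate fragmentation
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (ONLINE = ON); -- for heavy fragmentation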

Regardless of which solution you choose for your index maintenance, the important thing is that you have a proper solution in place to deal with fragmentation. What causes fragmentation in the first place? For starters: inserts, updates, and deletes.

If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSIS Data Flow Troubleshooting – Part 1 – Notes from the Field #019

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data.

In this episode of the Notes from the Field series, I asked SSIS expert Andy Leonard a very crucial question: how do you troubleshoot an SSIS data flow? It is the kind of problem everyone thinks they know how to solve, yet not everyone can. Andy Leonard, a world-renowned expert, explains in this blog post how to troubleshoot an SSIS data flow.

SQL Server Integration Services (SSIS) is designed to move data. Several SSIS tasks can move data but the Data Flow Task is arguably used most to accomplish moving data. When bad things happen to good developers, it helps to know how to troubleshoot. Writing about all the ways to troubleshoot an SSIS Data Flow Task will take more than one post. So this is Part 1.

Building a Quick Demo Package

If you are interested in working through some demos with me, create a new SSIS project named DataFlowTests. Add a Data Flow Task to the Control Flow as shown in Figure 1:

You can use any source data you desire, but I am going to use a file containing weather data captured from my weather station here in Farmville, Virginia, beginning in December 2008. You can obtain this data here. Click the Data Flow tab to edit the Data Flow Task. Add a Flat File Connection Manager configured to consume the sensor1-all.csv file containing the December 2008 weather data. Add a Flat File Source adapter and link it to the Flat File Connection Manager. Your Data Flow should appear as shown in Figure 2:

To test the Flat File Source adapter, we need to connect an output to another component. Add a Union All transformation and connect the output of the Flat File Source to it, as shown in Figure 3:

You can now test-execute the SSIS package. You should see results similar to those shown in Figure 4:

This data flow isn’t doing very much. The Flat File Source adapter is coupled to the Flat File Connection Manager. The Flat File Connection Manager reads data from the sensor1-all.csv file you downloaded from andyweather.com. Weather data is read into the data flow from the file via the Flat File Connection Manager and the Flat File Source adapter. The 106 rows of weather data are then passed into the Data Flow Path connecting the Flat File Source adapter output to the first Union All input (Union All Input 1).

Insert a Warning

Let’s create an issue that will raise a warning. Open the Flat File Connection Manager Editor, click on the Advanced page, select the Min T column, and change its DataType property to “four-byte signed integer [DT_I4]” as shown in Figure 5:

When you close the Flat File Connection Manager Editor, a warning icon displays on the Flat File Source adapter. If you hover over the Flat File Source adapter with your mouse, a tooltip will display some of the warning text as shown in Figure 6:

You can see more of the warning if you execute the package and then view the Progress tab (if the package is running in the SSIS debugger) or the Execution Results tab (if the package has been executed in the debugger and the debugger has then been stopped), as shown in Figure 7:

The warning we injected by changing the DataType property of the Min T column in the Flat File Connection Manager does not cause an error. The warning stems from the fact that there is a difference between the data type of the Min T column flowing into the Flat File Source adapter from the Flat File Connection Manager and the data type of the Min T column flowing out of the Flat File Source adapter. You can correct this warning by opening the Flat File Source adapter and clicking the Columns page. When opening the Editor (or Advanced Editor) of the Flat File Source adapter, you will be prompted as shown in Figure 8:

Click the “Yes” button to synchronize the Min T column’s data type in the Flat File Source adapter’s output (the metadata of the “output column”) with the Min T column’s data type supplied from the Flat File Connection Manager (the metadata of the “external column”).

Insert an Error

One way to generate an error in our simple data flow task is to configure the Flat File Connection Manager to perform an impossible data type coercion. Every value in the Min HI column contains “—“. Let’s configure the Min HI column in the Flat File Connection Manager and the Flat File Source adapter as data type Integer.

First, open the Flat File Connection Manager Editor and change the Min HI DataType property to DT_I4 as shown in Figure 9:

When you click the “OK” button, we see the same warning displayed in Figure 6 – the external columns are out of synchronization with the data source columns. Right-click the Flat File Source adapter and click Show Advanced Editor as shown in Figure 10:

When you attempt to open the Advanced Editor, you will be prompted to fix the data type mismatch between the output columns and the external columns. Click the “No” button to proceed, as shown in Figure 11:

When the Advanced Editor for the Flat File Source adapter opens, click the Input and Output Properties tab. Expand the Flat File Source Output node, and then expand the Output Columns node. Click on the Min HI column and change the DataType property to “four-byte signed integer [DT_I4]” as shown in Figure 12:

Click the “OK” button to close the Advanced Editor. Note the Union All now displays a validation error as shown in Figure 13:

What has happened? The tooltip reveals an issue with a metadata mismatch, but the error is truncated. We can view the full error message in the Error List by clicking View > Error List, as shown in Figure 14:

Note: The Error List option is missing from the View menu in some versions of SSDT-BI. In those environments, use the keyboard shortcut: hold down Ctrl and press \ followed by E.

The Error List window displays the full text of errors, warnings, and other informational messages, and is shown in Figure 15:

There are a couple of ways to fix this error. The easy way is to delete the Union All transformation and add a new one in its place (the hard way is to edit the package's XML).


We are going to stop here. We have introduced an error in the Flat File Source adapter, though, and that is where we will begin in Part 2.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Statistic Collection – Notes from the Field #018

[Note from Pinal]: This is the 18th episode of the Notes from the Field series. Performance is a very crucial subject, and to come up with the right performance strategy one needs broad knowledge of the overall performance of the server. However, most organizations do not come up with the best strategies related to performance. It is very crucial that the DBA collects performance statistics methodically and scientifically.

In this episode of the Notes from the Field series, database expert David Klee explains a very crucial issue DBAs and developers face in their careers: how to collect performance statistics. Linchpin People are database coaches and wellness experts for a data-driven world. Read David's experience in his own words.

Performance statistics are critically vital for the long-term health and capacity management of an enterprise environment. Ongoing performance metric collection ensures that you, the DBA, have the metrics of a system's operating state whenever a performance problem is reported, either in real time or historically. Too many times when I am troubleshooting a reported performance problem, the client has no metrics about the system state during the time window in which end users reported the problem.

Let’s fix that. We need some performance statistic collection.

Why Collect Around the Clock?

Are you able to definitively state that, at 10:48 PM tonight, you will have unusually high disk latency on your SQL Server archival data volume? If so, become a consultant and the world is yours! For the rest of us, the reality is an email tomorrow morning stating that the application was ‘slow’ last night and the customer service reps received complaints. We would normally have no way to get a clear picture of the state of our system last night, and without the numbers, we are left looking at the current system state to see if the reported issue is still lingering.

Constant, around-the-clock performance statistic sampling is the simplest method of achieving this goal. Wouldn’t you like to be able to generate charts like the following at any time to get a simple picture of what your servers were doing at any given time?

A picture says a thousand words, and this simple graph quickly tells the reader the times when the system was busy. The underlying data was collected with Windows Perfmon at a five-minute interval throughout the week and loaded into a database, from which an SSRS report is emailed to the administrators every morning. The reports are quickly reviewed each morning as the DBA performs the usual morning review, and if an anomaly is present, it is quickly discovered.

Methods for Collection

Quite a number of third-party utilities from your favorite tooling vendors exist to assist you with ongoing performance statistic collection. They work great, and I highly endorse any tool that makes you more efficient. But if you do not have one of these utilities, or the price is prohibitive, you already have a tool that can handle this task for you: Windows Performance Monitor, or Perfmon, is built into the Windows Server operating system and can be configured to constantly record performance statistics for future use.

Use the directions at TechNet (http://technet.microsoft.com/en-us/library/cc749249.aspx) to set up an ongoing performance data collector set that gathers information around the clock. I generally start with a five-minute polling interval. If you know you are experiencing undiagnosed performance issues, increase the polling frequency (that is, use a shorter interval).

What to Collect

The following list of Windows and SQL Server WMI counters should get you the basics needed to help with troubleshooting current performance problems. Feel free to modify this list according to your environment and your specific needs. The following web site contains a list of the performance counters and their descriptions.


Note: if a selected object has multiple instances, make sure you select <All instances> so the counters are separated appropriately. This option gets you individual statistics for objects like CPU cores or disk drive letters.

Windows Server Counters

Memory: Available MBytes, Page Faults/sec, Pages/sec
Network Interface: Bytes Received/sec, Bytes Sent/sec
Paging File: % Usage
PhysicalDisk: % Idle Time, Average Disk Bytes/Read, Average Disk Bytes/Write, Average Disk sec/Read, Average Disk sec/Write, Disk Read Bytes/sec, Disk Reads/sec, Disk Write Bytes/sec, Disk Writes/sec
Processor: % Privileged Time, % Processor Time, % User Time
System: Processor Queue Length

SQL Server Counters

Process (sqlserver.exe): % Processor Time, % Privileged Time
SQL Server:Buffer Manager: Lazy writes/sec, Page life expectancy, Page reads/sec, Page writes/sec
SQL Server:Memory Manager: Total Server Memory (KB), Target Server Memory (KB), Memory Grants Pending
SQL Server:Access Methods: Forwarded Records/sec, Full Scans/sec, Index Searches/sec, Page Splits/sec
SQL Server:Locks: Number of Deadlocks/sec
SQL Server:SQL Statistics: Batch Requests/sec, Compilations/sec, Recompilations/sec

You can also build custom collectors for SQL Server-specific items that WMI will not serve. Use your favorite job scheduler to routinely execute queries against system DMOs and store the results into custom tables in a utility database. To get a good idea of the sort of metrics that you can collect, reference some of the many objects in Glenn Berry’s fantastic diagnostic queries, available at http://sqlserverperformance.wordpress.com. Items such as disk stall rates, instance and database level CPU, memory, and disk consumption metrics, and background CPU activity can all be sampled with these queries.
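As a minimal sketch of such a custom collector (the table and job step are my illustration, not from David's post), a scheduled job could snapshot the wait statistics DMO into a utility table on each run:

-- One-time setup: history table in a utility database
CREATE TABLE dbo.WaitStatsHistory (
    sample_time   DATETIME2    NOT NULL DEFAULT SYSDATETIME(),
    wait_type     NVARCHAR(60) NOT NULL,
    wait_time_ms  BIGINT       NOT NULL,
    waiting_tasks BIGINT       NOT NULL
);

-- Scheduled job step: snapshot the cumulative wait stats
INSERT INTO dbo.WaitStatsHistory (wait_type, wait_time_ms, waiting_tasks)
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;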

Now what?

Now, what do you do with this data that you are collecting? Let’s make some sense of it! Stay tuned until next time when we discuss how to start sifting through the great quantities of data that we are gathering.

If you want to get started with performance statistics for SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)