SQL SERVER – Recovering from Snapshots – Notes from the Field #078

[Note from Pinal]: This is the 78th episode of the Notes from the Field series. What do you do when data is deleted accidentally or you are facing a disaster? Well, there are plenty of things you can do, but in a panic people quite often make mistakes that only intensify the disaster. Database snapshots are a very important but less frequently used feature.


In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very interesting subject: how to recover a database from snapshots. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the most common – and most forgotten – scenarios in disaster recovery plans is data being updated or deleted accidentally. This surprises me because, in the field, I see accidental data changes as a common disaster. If you have Enterprise Edition, you can leverage database snapshots as a step in your disaster recovery plan to help recover faster, or at least to pull back the majority of your data while you concurrently restore backups (to bring the rest of your data back). Restoring most of your data quickly can be critical with very large databases (VLDBs), especially if you have tight Recovery Time Objectives (RTOs).

Database snapshots require sparse files and store the original pages in these files when they are changed. Therefore, you will want to manage the sparse files and make sure you purge database snapshots as needed. You might also be using database snapshots without even knowing it. If you are doing database integrity checks with the default values, a special snapshot is taken in the background and your integrity checks are executed against that snapshot.
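For example, here is a minimal sketch showing the default snapshot-based integrity check next to the TABLOCK option, which skips the internal snapshot and takes locks instead:

-- Default behavior: CHECKDB runs against an internal database snapshot
DBCC CHECKDB (N'AdventureWorks2014') WITH NO_INFOMSGS;
-- WITH TABLOCK skips the internal snapshot (locks are taken instead)
DBCC CHECKDB (N'AdventureWorks2014') WITH NO_INFOMSGS, TABLOCK;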

HOW DO WE CREATE A SNAPSHOT?

The following example uses the AdventureWorks2014 sample database. We need to use the CREATE DATABASE syntax, include the names of your data files, and include the file location for the sparse files. Finally, include the AS SNAPSHOT OF syntax to define the database as a snapshot.

CREATE DATABASE [AW2014_Snapshot_morning] ON
( NAME = AdventureWorks2014_Data, FILENAME =
'C:\Demo\AW_data.ss')
AS SNAPSHOT OF AdventureWorks2014
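When a snapshot is no longer needed, purge it with the regular DROP DATABASE syntax; a quick sketch:

-- Drops only the snapshot; the source database is not affected
DROP DATABASE [AW2014_Snapshot_morning];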

REAL-WORLD ACCIDENTAL DATA CHANGE STATEMENT

Here is a common case of a mistaken DELETE statement. We have a DELETE statement with the primary key in the WHERE clause, intended to delete a single row. By mistake, the filter was not highlighted before executing, so all the rows will be deleted.
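Here is a sketch of what such a statement might look like (the key values are hypothetical):

DELETE FROM Person.EmailAddress
-- By mistake, only the line above was highlighted and executed,
-- so the filter below never ran:
WHERE BusinessEntityID = 285 AND EmailAddressID = 285;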

If you execute this statement, you will see an unexpected result: we deleted all rows in the table as shown below.


(19972 row(s) affected)

HOW DO WE REVERT FROM SNAPSHOT?

You have two options if you created a database snapshot earlier.

First, you could insert the data back from your snapshot database as shown below. This could be done with SSIS, BCP or many other tools. For this quick example we will do this with an INSERT INTO SELECT statement.

SET IDENTITY_INSERT Person.EmailAddress ON
INSERT INTO
Person.EmailAddress (BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
SELECT *
FROM AW2014_Snapshot_morning.Person.EmailAddress
SET IDENTITY_INSERT Person.EmailAddress OFF

Second, you can revert the database from the snapshot. Keep in mind this second option will revert all data changes in the database, not just the data deleted by your accidental statement. Also note that SQL Server can only revert when the database has a single snapshot, so you may need to drop any other snapshots first.

USE master;
RESTORE DATABASE AdventureWorks2014 FROM
DATABASE_SNAPSHOT = 'AW2014_Snapshot_morning';
GO

From these examples, you can see database snapshots are a tool to help you recover data quickly. Please note that you wouldn’t want database snapshots to be your sole plan for disaster recovery and unplanned data change statements. If your snapshot or the original database suffers corruption, you wouldn’t be able to recover. So make sure you add snapshots into your existing disaster recovery plans, which should – at a minimum – include database backups to give you a better recovery point objective.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script: Knowing Data and Log Files Are On the Same Drive

Last week it was wonderful to spend time at the GIDS conference and meet all the enthusiastic developers. I had fun preparing for the session, and the number of people who walked up to me and said they read this blog makes me really humble and motivated to write more. I love attending and presenting at these conferences because I get an opportunity to meet people, and many times I am able to answer real-world problems.

During one of my sessions on performance, I mentioned the need to keep data files and log files on different drives. I showed the impact of placing log and/or data files on a slower drive, and we had loads of fun learning. After the session, one of the attendees walked up and asked a simple question. He said he had hundreds of databases running in his environment, and he wanted to know which of those databases had both the data files and log files on the same drive.

My first instinct was to search the blog to see if I had already written about it, but to my surprise I had not. So, with a little bit of querying against the DMVs and metadata tables, here is what I came up with.

SELECT SERVERPROPERTY('machinename') AS 'Server Name',
       ISNULL(SERVERPROPERTY('instancename'), SERVERPROPERTY('machinename')) AS 'Instance Name',
       name,
       drive_letter AS 'Drive Letter',
       Comments,
       Path
FROM
(
    (
        SELECT DISTINCT
               UPPER(LEFT(LTRIM(physical_name), 2)) AS drive_letter,
               REVERSE(RIGHT(REVERSE(physical_name), (LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name), 1)) + 1)) [Path],
               N'Device holds both tempdb and user database objects' AS 'Comments',
               DB_NAME(database_id) [name],
               1 AS OrderBy
        FROM master.sys.master_files
        WHERE LOWER(DB_NAME(database_id)) = 'tempdb'
              AND UPPER(LEFT(LTRIM(physical_name), 2)) IN
              (
                  SELECT UPPER(LEFT(LTRIM(physical_name), 2))
                  FROM master.sys.master_files
                  WHERE LOWER(DB_NAME(database_id)) NOT IN (N'tempdb', N'master', N'msdb', N'adventureworks', N'adventureworksdw', N'model')
              )
    )
    UNION
    (
        SELECT drive_letter,
               path,
               N'Device holds both data and log objects' AS 'Comments',
               name,
               2 AS OrderBy
        FROM
        (
            SELECT drive_letter, name, Path
            FROM
            (
                SELECT DISTINCT
                       UPPER(LEFT(LTRIM(physical_name), 2)) AS drive_letter,
                       REVERSE(RIGHT(REVERSE(physical_name), (LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name), 1)) + 1)) [Path],
                       type,
                       DB_NAME(database_id) [name]
                FROM master.sys.master_files
                WHERE LOWER(DB_NAME(database_id)) NOT IN (N'master', N'msdb', N'tempdb', N'adventureworks', N'adventureworksdw', N'model')
            ) a
            GROUP BY drive_letter, a.name, path
            HAVING COUNT(1) >= 2
        ) Drives
    )
) Drive
ORDER BY OrderBy, drive_letter

On my dev box, it lists each database whose files share a drive, along with the drive letter, the path, and a comment describing the overlap.
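If you only need the data-and-log-on-the-same-drive half of the answer, here is a more compact sketch of the same idea:

-- Sketch: databases that keep data (type 0) and log (type 1) files on the same drive
SELECT DB_NAME(database_id) AS database_name,
       UPPER(LEFT(LTRIM(physical_name), 2)) AS drive_letter
FROM master.sys.master_files
GROUP BY DB_NAME(database_id), UPPER(LEFT(LTRIM(physical_name), 2))
HAVING COUNT(DISTINCT CASE WHEN type IN (0, 1) THEN type END) = 2
ORDER BY database_name, drive_letter;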

I am sure if you run the script, it will show something similar for your environment. Do let me know if you ever needed such a script and were not able to find one. I would love to know: are there simple day-to-day scripts that would make your life easy but that you don't have handy? Let me know, and we will try to get them onto the blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Tables with Primary or Foreign Keys

If you want to know whether a table has a primary key or foreign key, you can find this out in many ways.

Let us create these tables

CREATE TABLE product_master
(
prod_id INT PRIMARY KEY,
prod_name VARCHAR(100),
price DECIMAL(12,2)
)
GO
CREATE TABLE product_details
(
prod_id INT,
sales_date DATETIME,
sales_qty INT,
sales_amount DECIMAL(16,2)
)
GO
CREATE TABLE company_master
(
company_id INT,
company_name VARCHAR(100),
address VARCHAR(1000)
)
GO

Now let us create a foreign key:

ALTER TABLE product_details ADD CONSTRAINT ck_item FOREIGN KEY(prod_id) REFERENCES product_master(prod_id)

Now if you want to check if a table has a primary key, you can use the following methods:

1) Use sp_pkeys system stored procedure

EXEC sp_pkeys product_master

The result set lists the primary key column of product_master (prod_id) along with the primary key name.

2) Use the OBJECTPROPERTY function

SELECT
*
FROM
INFORMATION_SCHEMA.TABLES
WHERE
OBJECTPROPERTY(OBJECT_ID(TABLE_NAME),'TABLEHASPRIMARYKEY')=1 AND
TABLE_TYPE='BASE TABLE'
ORDER BY
TABLE_NAME

In our example, the result includes product_master, the only one of the three tables with a primary key.

If you want to check if a table has a foreign key, you can use the following method:

SELECT
*
FROM
INFORMATION_SCHEMA.TABLES
WHERE
OBJECTPROPERTY(OBJECT_ID(TABLE_NAME),'TABLEHASFOREIGNKEY')=1 AND
TABLE_TYPE='BASE TABLE'
ORDER BY
TABLE_NAME

In our example, the result includes product_details, which carries the foreign key we created.

If you want to check for tables that have neither a primary key nor a foreign key, you can use the following method:

SELECT
*
FROM
INFORMATION_SCHEMA.TABLES
WHERE
OBJECTPROPERTY(OBJECT_ID(TABLE_NAME),'TABLEHASPRIMARYKEY')=0 AND
OBJECTPROPERTY(OBJECT_ID(TABLE_NAME),'TABLEHASFOREIGNKEY')=0 AND
TABLE_TYPE='BASE TABLE'
ORDER BY
TABLE_NAME

In our example, the result includes company_master, which has neither.
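Another method, a sketch using the sys.tables catalog view, returns both flags in one pass. It also sidesteps a limitation of the queries above: OBJECT_ID(TABLE_NAME) resolves the name without its schema, so tables outside your default schema may be missed.

SELECT s.name AS schema_name,
       t.name AS table_name,
       OBJECTPROPERTY(t.object_id, 'TABLEHASPRIMARYKEY') AS has_primary_key,
       OBJECTPROPERTY(t.object_id, 'TABLEHASFOREIGNKEY') AS has_foreign_key
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name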

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Collect and Analyze SQL Server Data Efficiently

dbForge Event Profiler is one of the most useful free tools for SQL Server. The Profiler records data associated with various SQL Server events for further analysis. This data is stored in a trace file that can later be analyzed or used to replay a specific series of steps when trying to diagnose SQL Server-related problems. The tool allows you to view communications between a client and SQL Server, and gives you an insight into its internal performance. To take full advantage of its potential, download dbForge Event Profiler for SQL Server for free now.

The tool offers a large spectrum of features that can assist you in:

  • analyzing trace results by grouping or aggregating them;
  • auditing user activity;
  • creating your own custom traces and saving them for future use;
  • debugging T-SQL code and stored procedures;
  • executing quality assurance checks;
  • identifying performance-related problems with front-end applications, queries, T-SQL, transactions, and so forth;
  • performing stress testing;
  • performing query analysis of execution plans;
  • viewing SQL Server performance when interacting with a client.

Essentially, the Event Profiler is designed to quickly and efficiently track down and fix many SQL Server related problems, such as poorly-performing queries, locking and blocking, excessive table/index scanning, and a lot more. For example, you can monitor the execution of a stored procedure to see whether it hampers SQL Server performance.

Using the Profiler, you can monitor the events that you are interested in. For example, you may want to capture events from a specific user or a given database. Smart filters allow you to collect only the events that you want, filtering out those of no interest. This reduces the amount of data that is stored in your trace.

dbForge Event Profiler provides a rich graphical user interface that can be used to create, analyze, and replay trace results. As the trace data is being collected, you can stop or pause the trace at a certain point and store the trace results in a physical file on a local hard disk. The saved SQL Server Profiler document has the ".ssp" extension. This file may then be viewed to analyze the captured data, share it with others, or compare the trace results to traces performed later.

Powerful Tool in Action

Below is an example of how you create a new trace using dbForge Event Profiler.

To create a new trace, you follow these steps:

  1. On the Start page, click Profile Server Events. The Profile Server Events wizard appears.
  2. Specify the connection.
  3. Optionally, select a profiling template, modify the trace file settings and the data storage settings. Click Next.


  4. On the Events to Capture page, select the events you want to capture. Click Next.
  5. Optionally, on the Actions page, select the actions you want to capture in the current events session. Click Next.
  6. Optionally, on the Event Filters page, specify filter options to limit the tracing data.
  7. Click Execute. The trace will start and the Server Event Profiler Document opens.


Once you have collected enough data, stop the trace by clicking the Stop Trace button on the toolbar. At this point, you may review the collected data or save it to a file for future use.

Now that you have learned about many advantages of this smart tool, you can start mastering and making practical use of the dbForge Event Profiler for SQL Server by downloading it now for FREE.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using “High Performance” Power Plan for SQL Server

A lot of times, I have seen DBAs and administrators have their own startup tasks that they perform when rebuilding or bringing a new server online. Some of these practices are powerful and are meant to enhance the performance of the box you have bought. Recently, I was at a customer location watching what a typical admin was doing, and one of his steps caught my attention. I couldn't stop myself from blogging about it because it is a simple, powerful, and yet underappreciated setting available on the server.

On Windows Server 2008 and above, set the "High Performance" power plan in Control Panel -> Power Options. By default, Windows Server uses the "Balanced" power plan, which enables energy conservation by scaling processor performance based on current CPU utilization. Starting with the Intel Xeon X5500 series and other recent CPUs, the clock is throttled down to save power (processor P-states) and only ramps up when CPU utilization reaches a certain point. The Minimum and Maximum Processor Performance State parameters are expressed as a percentage of maximum processor frequency, with a value in the range 0 to 100.

If a server requires ultra-low latency, invariant CPU frequency, or the very highest performance levels, as database servers running SQL Server often do, it is not helpful for the processors to keep switching to lower-performance states. For this reason, the High Performance power plan caps the minimum processor performance state at 100 percent.
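If you prefer to check or change the plan from the command line, the built-in powercfg utility can do it. A quick sketch (run from an elevated command prompt; SCHEME_MIN is the built-in alias for the High Performance plan):

REM Show the currently active power plan
powercfg /getactivescheme
REM Switch to the High Performance plan
powercfg /setactive SCHEME_MIN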

In a typical Power Options panel, "Balanced" is selected by default; choosing "High Performance" makes it the active plan.

At this point, I thought it would be helpful to describe what these plans are for my readers. These are the built-in power plans and their common use-case scenarios:

Plan: Balanced
Description: Default setting. Highest energy efficiency with minimum performance impact.
Common applicable scenarios: General computing.
Implementation highlights: Matches capacity to demand. Energy-saving features balance power and performance.

Plan: High Performance
Description: Increases performance at the cost of high energy consumption. Should not be used unless absolutely necessary.
Common applicable scenarios: Low latency; application code sensitive to processor frequency changes.
Implementation highlights: Processors are always locked at the highest performance state.

I hope most of you are aware of these settings and are using them on your servers too. Do let me know some of the settings that you use as part of your daily environment. I am sure there will be a few things I can learn from you too. Do drop a line in the comments for the benefit of all.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #017 – Performance Comparison of Union vs Union All

Here is what I just learned from an email. A very prestigious organization asked one of their candidates the following question –

Question: If you have the option to use either UNION or UNION ALL, which one would you use, keeping performance as the topmost requirement for the query?

Answer: Though many of you may be surprised that this kind of question exists, I am personally not surprised to see it in an interview. Here is my answer – UNION and UNION ALL can't be compared, as they are absolutely different things; they are like apples and oranges. Just as apples and oranges are both fruits, UNION and UNION ALL are both operators, but they exist for very different purposes.

UNION
The UNION operator combines the results of two queries into a single result set. When using UNION, the corresponding columns of the two queries need to be of compatible data types, and only distinct rows are returned.

UNION ALL
The UNION ALL operator is the same as UNION, except that it returns all rows, including duplicates.

You can't compare their performance, as they do absolutely different tasks.
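Though performance is not the point, the functional difference is easy to see with a tiny throwaway example:

-- UNION removes duplicates (and does the extra work of deduplicating)
SELECT 1 AS n UNION SELECT 1 UNION SELECT 2;
-- Returns: 1, 2

-- UNION ALL returns every row, duplicates included
SELECT 1 AS n UNION ALL SELECT 1 UNION ALL SELECT 2;
-- Returns: 1, 1, 2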

Here are articles you can read to understand this issue further.

Performance comparison: SQL SERVER – Difference Between Union vs. Union All – Optimal Performance Comparison

Different between Union and Union All: SQL SERVER – UNION ALL and UNION are Different Operation


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Generate Different Random Number for Each Group Using RAND() Function

You know that the RAND() function in SQL Server generates a random float value between 0 and 1 (excluding 0 and 1).

Let us create the following dataset

CREATE TABLE #random(no INT)
INSERT INTO #random(no)
SELECT 1 UNION ALL
SELECT 1 UNION ALL
SELECT 1 UNION ALL
SELECT 2 UNION ALL
SELECT 3 UNION ALL
SELECT 3

If you want to generate a random value, you can use the following code

SELECT no, RAND() AS random_number FROM #random

which results in

no random_number
 1 0.370366365964781
 1 0.370366365964781
 1 0.370366365964781
 2 0.370366365964781
 3 0.370366365964781
 3 0.370366365964781

Please note that when you execute it, you may get a different random number than what I got for column 2, but it will be the same for all rows: RAND() without a seed is evaluated once per query, not once per row.

What if you want to generate the random number in such a way that it resets for each value of a column (in this case, the column no)?
Did you know that RAND() accepts a seed value as well?

If you execute the following code,

SELECT no, RAND() AS random_number,RAND(no) AS random_number_reset FROM #random

the result is

no random_number random_number_reset
 1 0.58334760467751 0.713591993212924
 1 0.58334760467751 0.713591993212924
 1 0.58334760467751 0.713591993212924
 2 0.58334760467751 0.713610626184182
 3 0.58334760467751 0.71362925915544
 3 0.58334760467751 0.71362925915544

Please note that when you execute it, you may get different random numbers than what I got for columns 2 and 3. If you look at the result, the values of the second column are the same for all rows, whereas the values of the third column are the same within a group (column no) and different across groups. So if you want to generate a random number differently for each group, you can make use of this method.
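As a related aside, if you instead want a different random number for every row (not per group), a common trick is to seed RAND() with a per-row value; here is a sketch using NEWID():

-- CHECKSUM(NEWID()) yields a different integer seed for every row
SELECT no, RAND(CHECKSUM(NEWID())) AS random_number_per_row
FROM #random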

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Different Methods to Know Parameters of Stored Procedure

Suppose you have a stored procedure with some input parameters. There are two methods to know the list of parameters defined in a stored procedure.

Let us create the following stored procedure

CREATE PROCEDURE TEST_PROCEDURE
(
@CUST_ID INT,
@YEAR INT
)
AS
SELECT
@CUST_ID,@YEAR

Now to know the parameters used in the TEST_PROCEDURE stored procedure, you can use the following methods

1) Use the sp_help system stored procedure

EXEC sp_help 'TEST_PROCEDURE'

When you execute the above, the parameters are listed in the second result set.

Parameter_name  Type  Length  Prec  Scale  Param_order  Collation
--------------  ----  ------  ----  -----  -----------  ---------
@cust_id        int   4       10    0      1            NULL
@year           int   4       10    0      2            NULL

2) Use the INFORMATION_SCHEMA.PARAMETERS system view

SELECT
PARAMETER_NAME,DATA_TYPE ,ORDINAL_POSITION
FROM
INFORMATION_SCHEMA.PARAMETERS
WHERE
SPECIFIC_NAME='TEST_PROCEDURE'

The result is

PARAMETER_NAME  DATA_TYPE  ORDINAL_POSITION
--------------  ---------  ----------------
@cust_id        int        1
@year           int        2
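A third option is the sys.parameters catalog view; here is a sketch:

-- Parameters of the procedure, with type names resolved via TYPE_NAME()
SELECT name AS parameter_name,
       TYPE_NAME(user_type_id) AS data_type,
       parameter_id AS ordinal_position
FROM sys.parameters
WHERE object_id = OBJECT_ID('TEST_PROCEDURE')
ORDER BY parameter_id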

If you know any other methods feel free to post as a comment.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Server – Error While Enabling CDC on a Table – The specified ‘@server’ is invalid (valid values are returned by sp_helpserver)

Every environment is different, and there is no doubt that some environment changes can give you results that are not expected. Recently one of my blog readers followed my earlier posts on Change Data Capture, and he reported that when he tried to enable CDC for a table, he got the error below:

Msg 22832, Level 16, State 1, Procedure sp_cdc_enable_table_internal, Line 623
Could not update the metadata that indicates table [HumanResources].[Shift] is enabled for Change Data Capture. The failure occurred when executing the command '[sys].[sp_cdc_add_job] @job_type = N'capture''. The error returned was 22836: 'Could not update the metadata for database AdventureWorks2014 to indicate that a Change Data Capture job has been added. The failure occurred when executing the command 'sp_add_jobstep_internal'. The error returned was 14234: 'The specified '@server' is invalid (valid values are returned by sp_helpserver).'. Use the action and error to determine the cause of the failure and resubmit the request.'. Use the action and error to determine the cause of the failure and resubmit the request.

The best way to learn about these things is to retrace the events and see how it happened. Below is the script he was using (taken from my blog):

-- You can run this stored procedure in the context of each database to enable CDC at database level.
-- (The following script will enable CDC in AdventureWorks2014 database)
USE AdventureWorks2014
GO
EXEC sys.sp_cdc_enable_db
GO
-- Following script will enable CDC on HumanResources.Shift table.
USE AdventureWorks2014
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'HumanResources',
@source_name   = N'Shift',
@role_name     = NULL
GO

The first command was working fine; the error was raised by the second command. I asked him to capture a Profiler trace and share it with me. I found that SQL Server first gets the instance name using the query below:

SELECT @server = CONVERT(SYSNAME,SERVERPROPERTY('ServerName'))

The value is then passed when creating the job for CDC. In the procedure sp_verify_jobstep, below is the condition that was failing:

IF (@server IS NOT NULL) AND (NOT EXISTS (SELECT *
                                           FROM master.dbo.sysservers
                                           WHERE (UPPER(srvname) = UPPER(@server))))
    RAISERROR(14234, -1, -1, '@server', 'sp_helpserver')

Notice that this raises the very error we saw in the error message above. I asked him to check, and he verified that the SQL Server machine name had been changed, but sp_dropserver and sp_addserver had not been executed afterward. Here is the command to fix the issue. Please change the parameter values as per your SQL instance.

sp_dropserver 'HostName\InstanceName_incorrect'
GO
sp_addserver 'HostName\InstanceName', 'local'
GO

In short, the error is caused by a mismatch between SERVERPROPERTY('ServerName') and the local server entry in master.dbo.sysservers.
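If you suspect the same issue on your instance, here is a quick sketch to compare the two values (srvid = 0 is the local server entry):

SELECT CONVERT(SYSNAME, SERVERPROPERTY('ServerName')) AS server_property_name,
       srvname AS sysservers_name
FROM master.dbo.sysservers
WHERE srvid = 0;

If the two names differ, run sp_dropserver and sp_addserver as shown above.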

Have you ever done similar troubleshooting in your environment using Profiler? Do let me know what you did and what you learned. I am sure a lot of readers are going to benefit from your stories and scenarios.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Watch Your Vendors! Hey Vendors – Try a Little Harder! – Notes from the Field #077

[Note from Pinal]: This is the 77th episode of the Notes from the Field series. Every week, I personally wait for the notes from the field from Mike because they contain lessons of life which directly impact DBAs' and developers' lives positively. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human. He is always looking for an opportunity to improve the life of DBAs and developers.

Everybody uses third-party software and third-party vendors in their organization. Our ecosystem is built around that concept, and it is practically impossible to avoid. When two different parties depend on each other to succeed, there should be some basic ground rules to follow for mutual success. I asked the very same question, "What should the relationship between vendors and DBAs be?", and database expert Mike Walsh decided to guide me with the answer to this question.

Read the entire story in his own words.


It seems I rant about independent software vendors about once every 6-8 months. I've not done it in a post here on Pinal's fine blog, though. Why do I do this? Because it really matters. Because it really should be different. Because I've seen the difference when DBAs and vendors come together, plan properly and have a working relationship. Finally – because I've seen environments brought down hard by a failure in this important relationship and in this important "unwritten contract" between the parties. So, I'm continuing with the theme of the "Hey DBA!" type posts with advice we can all use, but I'm talking to vendors, too.

If your company buys software that your business runs on – don't get stuck in this trap; learn some questions to ask the vendors.

If your company sells software to other companies, make sure these things are clear for all of your customers, so people know what they own and don't own.


What Does Failure Look Like Here?

It takes different shapes. In most cases the following symptoms are there (not all are there, but many may be):

  • No maintenance is being done (no index rebuilds, no statistics updates, no DBCC CHECKDB jobs)
  • Sometimes an incorrect backup strategy exists
  • The database for an application is on an undersized, underperforming SQL Server
  • SQL Server is missing a lot of best practices and configurations; instead, it feels like it was installed by clicking Next a bunch of times and Finish once, and never touched again.

But these symptoms can lead to downtime. They can lead to unresolved and undetected corruption. They can lead to a restore that doesn't go exactly as the business planned or expected, or as the application vendor wants.

In many environments, a company will buy software that runs on SQL Server and not even fully realize or understand that it is running on an enterprise database platform that needs attention, care and regular maintenance. Often the client realizes this only after a disaster, while trying to pick up the pieces and get their critical application back online.

DBAs – What can you do?

Be on the lookout for applications coming into the company. This is sometimes easier said than done – but trying to get yourself into meetings before an application is purchased to understand the database needs can make a huge difference. Let it be known that you are there to help and you want to make a difference in the stability, performance and uptime of any new applications that come in and have a database backend. Let your managers speak on your behalf, let the project managers know, be vigilant and get involved when applications are brought in. And earlier is better. If a vendor is doing something dangerous, against best practices, or in violation of a security audit you have to go through – it is easier to get them to make those changes in the sales process than after you’ve purchased the application.

Be proactive – in so many of my posts here and on my own blog at http://www.straightpathsql.com and over at http://www.linchpinpeople.com, I focus on being proactive. Get to know your environments. Run checks on them, understand the uses, and understand the maintenance. Over at Linchpin People we have a free WellDBA™ Checklist that you can use to go through and check the most common mistakes we find in production environments. Grab that checklist and check your own environment. Dig in where there are issues. You can get that free checklist here (http://www.linchpinpeople.com/landing/well-dba-sql-server-checklist/)

Be Inquisitive – On my blog, I have a list of some sample questions you can ask a vendor. This checklist is a bit older now and I should probably update it – but the point of the questions should become clear as you look at them. You want to ask the vendor these questions to know what challenges you'll face as a DBA. You want to gauge who is responsible for which pieces, you want to get to know and trust your vendor, and you want to understand their expectations of you, what they are great at, and where you may need to spend more time looking. Create a vendor interview questionnaire yourself and make sure any new software vendors bringing applications into your environment give you the information you need. You can see an example checklist on my personal blog – http://www.straightpathsql.com/archives/2009/01/dba-questions-to-ask-a-vendor/


Vendors – What can you do?

It’s pretty simple – be clear, be informative and be prescriptive…

Be Clear – Your customers may or may not understand what they are getting into with a database. They may have a great DBA, no DBA or a mediocre DBA. Be clear on what you expect from them and what they have to worry about. You can’t just say, “they’ll figure it out… backups are really important so they’ll understand this and they’ll start doing them…” You’ve made some big assumptions there. That’s true about maintenance, knowing what to do when there are issues, etc. Be clear on what you are responsible for, what you are not responsible for and what they should be worried about and what they should take care of on their own.

Be Informative – Don't just be clear – get to know your customers. If you are deploying a SQL Server application, you should explain that you like to work with a DBA during the pre-sales and installation process. Get to know the DBA team. Talk about challenges that folks face – check out that set of questions above and think of other questions you'd want to ask if you were a DBA. Have answers to them, and supply them even if the client doesn't ask.

Be Prescriptive – I love it when I see a vendor with best practice guides, deploy guides, maintenance guides, SQL Server configuration guides, etc. These things are gold to your customers. But they are also gold to your image, your respect and your referrals. If you prescribe what the environment should generally look like, what kind of maintenance should be done, what kind of recovery typically works best, and give advice on how to configure SQL Server for success, you are equipping your customers for greater success. You don't need to get specific about everything; general prescriptions, with latitude for companies whose DBA teams know how to turn your guidelines into reality within their existing policies, work great. But consider being specific enough to show the smaller shops, where a system administrator plays the role of DBA, what they should be learning about and either figuring out or bringing in consulting or mentoring help to get right.

Doing these things puts you ahead of the curve of software vendors and in a great spot – and it probably saves your support budget down the road, by avoiding calls caused by customers not paying attention to their SQL environment.

Everyone – What can you do?

Talk… Seriously – many of these issues come down to communication. A lot of DBAs complain about software vendors. A lot of software vendors probably secretly complain about their customers. If we all talked just a bit more about these things, we'd be in a better spot and our environments would be better for it.

If you want to get started with performance analytics and Database Healthcheck of SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)