SQL SERVER – Are Power Options Slowing You Down? – Notes from the Field #095

[Note from Pinal]: This is the 95th episode of the Notes from the Field series. When it comes to tuning SQL Server, we always look at the configuration of SQL Server itself. However, a few operating system settings can also impact the performance of SQL Server. I recently asked John Sterrett a very simple question – “Which is the one OS option you would check first when tuning SQL Server?”


In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) shares a very interesting story about how a simple OS setting impacts SQL Server performance. Read about John's experience in his own words.


While performing several SQL Server health checks, I have noticed that operating system settings can slow you down. Today, we are going to focus on power options. By default, many companies leave the Balanced power plan configured. You will see that it is also the recommended setting. It is recommended to save power, which reduces the amount of money required to power your servers. Please keep in mind that your virtualization software or BIOS might also have power option settings that should be verified and adjusted. Today, we are focusing on the Windows operating system.


The Balanced power option is fine for most servers where CPU power is not mission critical to the applications living on the server. SQL Server is an exception, and you will want to get as much as you can out of your CPU power. In fact, Enterprise Edition is now licensed by core, so you want to make sure you are getting the most out of your SQL Server licensing.
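Since you are paying per core, it is also worth confirming how many logical CPUs the instance actually sees. A quick sketch using sys.dm_os_sys_info (column selection is illustrative):

SELECT cpu_count AS logical_cpus,
       hyperthread_ratio,
       scheduler_count
FROM sys.dm_os_sys_info;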

How Do We Script Out The Change?

While you can load the power options GUI and manually change the setting, I like to automate as much as possible. The following command can be executed via PowerShell or Command Prompt to make sure the High Performance power plan is enabled.

Powercfg -SETACTIVE SCHEME_MIN

Conclusion

SQL Server places a different set of demands on the operating system. OS default settings are not always optimal and should be reviewed.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Monitoring for Analysis Services – Notes from the Field #093

[Note from Pinal]: This is a new episode of the Notes from the Field series. When we build any application, we build it with zeal and enthusiasm. We believe it will do well for many years, but that is not always the case. After a while, performance starts to go down and everything is no longer in the best shape. This is the time when we need to monitor performance and take action based on our analysis.


In this episode of the Notes from the Field series I asked SQL Expert Bill Anton a very crucial question – How do you monitor the performance of Analysis Services? Bill was very kind to answer the question and provide plenty of information about how a novice developer can work with SSAS.


When it comes to ad-hoc query performance in business intelligence solutions, very few technologies rival a well-designed Analysis Services Multidimensional cube or Tabular model. And when that cube/tabular model is performing well, as it usually does in the beginning, life is good!


Photo Credit: SuperCar-RoadTrip.fr

Over time, however, things can change. The underlying business evolves (hopefully) and information workers start asking different questions, resulting in new query patterns. The business grows and now more users are asking more questions of larger data volumes. Performance starts to deteriorate. Queries are taking longer to complete and nightly processing starts to run past the end of the maintenance window into business hours. Users are complaining, management is unhappy, and life is no longer good.


Photo Credit: Charlie

How did we not see this coming?

If this describes your personal situation, don’t worry, you’re not alone. In fact the majority of clients who bring me in to troubleshoot and resolve their Analysis Services performance problems ask the same question (or some variation of it).

In my experience, 80% of the time the reason no one sees this type of issue coming is because there wasn’t a performance monitoring solution in place. The other 20% who have a performance monitoring solution simply aren’t using it or reviewing information being collected.

I don’t know why so many Analysis Services environments are neglected (I’m not a Business or IT therapist) but I’m going to tell you what steps need to be taken if you want to avoid it.

The Secret is Simple

The first step to maintaining a well running Analysis Services instance is simple. Take measurements!


Photo Credit: Official U.S. Navy Page

Taking regular measurements is the only way to know how things are performing over time. This seems ridiculously obvious, but so few companies actually do it. My hunch is that these folks just don’t know what needs to be measured.

Analysis Services can be broken down into 2 fundamental types of workloads:

Processing Workloads

Processing is the term used to describe how data gets loaded into the Analysis Services database. This workload is usually done at night, outside of business hours, so as not to coincide with user activity.

From a performance perspective, the primary things to keep an eye on are:

Processing Duration
This is the amount of time it takes to load data into the Analysis Services database. As long as the processing duration fits within the scheduled maintenance window, there’s not much to worry about. However, if this duration is increasing over time and you think it will eventually run past that window, then you’ll need to review the rest of the information to figure out “why” and “what to do”.

How long does it take to process the Analysis Services database?
Is the processing duration increasing over time?

Resource Consumption
Keeping an eye on resource consumption (e.g. CPU, memory, disk, network) during processing is also a good idea. This kind of information can help shed some light on bottlenecks and provide guidance when coming up with a solution to processing related problems.

Is the processing workload requiring more and more memory? Are we maxing out CPU? How’s disk space?

There are many solutions to problems faced during processing, but without some insight into what’s happening on the system, it’s hard to know which solution is the optimal one.

For example, say we have a Multidimensional cube and notice that the processing duration for one of the measure groups is steadily increasing over time. We review the resource consumption and see that there’s plenty of CPU/Memory/IO to spare. In this case, we may consider partitioning this particular fact table and only processing the most recent partition or processing the partitions in parallel.

Pro Tip: Breaking up processing into stages will provide context to the information above and make it much easier to see which part(s) of the Analysis Services database are contributing to the increase in processing duration or memory consumption.
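If your processing is orchestrated from SQL Server Agent jobs or SSIS, one lightweight way to capture that per-stage context is to write stage timings to a small logging table as each stage starts and finishes. This is only a hypothetical sketch; the table and column names are illustrative:

CREATE TABLE dbo.SsasProcessingLog
(
    LogId       INT IDENTITY(1,1) PRIMARY KEY,
    StageName   VARCHAR(100) NOT NULL,            -- e.g. 'Dimensions' or 'FactSales - current partition'
    StartTime   DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    EndTime     DATETIME2(0) NULL,
    DurationSec AS DATEDIFF(SECOND, StartTime, EndTime)  -- populated once EndTime is recorded
);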

Query Workloads

This refers to any activity that generates queries against the SSAS database. It could be users running reports, loading dashboards, analyzing data via pivot tables, etc. Because users typically don’t run the same queries at the same time every day, this workload can be much more complicated to monitor.

The key to success is to start with the high level stuff and only go into the details if/when necessary.

Queries
The single most important thing to track for this type of workload is a log of the actual queries being executed against the Analysis Services database.

Not only will you be able to see which queries are slow, but you’ll also have the actual MDX/DAX executed. You won’t have to wait for the user to complain (telling you their report is taking too long to load) because you’ll already have the query and can start reviewing it immediately.

Some Analysis Services implementations actually have service level agreements (SLA) with criteria such as “no query should take more than 30 seconds to complete” and “the average query response time should be less than 5 seconds”. If you’re tracking every query against the Analysis Services database, not only will you know if the SLA has been violated, but you’ll know which query or queries it was that led up to the violation and can start troubleshooting immediately.
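If you do not yet have a persistent query log in place (an Extended Events or Profiler trace writing to a file is the long-term answer), a quick, point-in-time way to see who is connected and what was last executed is the Analysis Services DMVs. This is a sketch run from an MDX query window connected to the SSAS instance; the column selection is illustrative:

SELECT SESSION_SPID,
       SESSION_USER_NAME,
       SESSION_CURRENT_DATABASE,
       SESSION_CPU_TIME_MS,
       SESSION_LAST_COMMAND
FROM $SYSTEM.DISCOVER_SESSIONS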

Users
Tracking the number of folks using your system (and when they are using it) will prove very helpful for knowing if/when to start considering options for scaling the system up and/or out.

This information can usually be extracted from whatever mechanism you use to track queries being executed against the Analysis Services database, but it is important enough to deserve its own section.

Resource Consumption
In the same vein as the above discussion around tracking resource consumption of the processing workload, you’ll also want to track the same measures (e.g. CPU, memory, disk, network) throughout the day. This information may provide some clues as to why a query is slow.

For example, say you’re reviewing the top 10 slowest queries from the previous week and find several of the queries are now running very fast. At this point you can switch over and start looking at the state of the system last week at the time the query was slow (from a resource consumption perspective) and perhaps find some clues, such as memory pressure or CPU bottleneck caused by a spike in activity.

Here are some examples of the types of questions you should be able to answer with the above information:

What are the top 10 slowest queries each week?

Who are your top users and what is the average number of queries they execute per day/week/month?

What is the average number of users per day/week/month?

What are the maximum and average numbers of concurrent users per day/week/month?

Pro Tip: Some folks incorrectly assume the OLAPQueryLog tracks queries. However, this table only tracks parts of queries (storage engine requests). A single query executed against an Analysis Services database could potentially generate tens of records in this table. The implication is that it doesn’t give you the whole story, and you won’t always be able to determine which queries are slow.

Next Steps

Now that you know what types of information you should be collecting and reviewing regularly, the next step is to figure out how you’re going to collect that information. The good news is that there are quite a few options available. The bad news is that you’ll have to wait until next time to find out. Here is the SQLPASS session which discusses the same concept: Analysis Services: Show Me Where It Hurts.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Stored Procedure for Executing SSIS Packages in the SSIS Catalog – Notes from the Field #092

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is an essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.


In this episode of the Notes from the Field series I asked SSIS Expert Andy Leonard a very crucial question – How do you create a stored procedure for executing SSIS packages in the SSIS Catalog? Andy was very kind to answer the question and provide plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


The following is a snippet from Chapter 2 of the book SSIS Design Patterns co-written by Matt Masson (Blog | @mattmasson), Tim Mitchell (Blog | @Tim_Mitchell), Jessica Moss (Blog | @jessicammoss), and Michelle Ufford (Blog | @sqlfool). Earlier in the chapter there are demos that describe a simple SSIS package named Chapter2.dtsx which is part of an SSIS project named Chapter2, and which is deployed to an instance of the SSIS Catalog in a Folder named “Chapter 2”. But you can use this stored procedure to execute any SSIS package in the SSIS Catalog. That’s the whole point!

SQL Server 2014 provides a new way to manage and execute Integration Services packages: Integration Services Catalogs. We explore this method next.

Integration Services Catalogs

You can only manage SSIS projects that use the Project Deployment Model in Integration Services Catalogs. To execute a package in the catalog, use SSMS to connect to the instance of SQL Server hosting the SSISDB database. Expand the Integration Services Catalogs node, and then expand the SSISDB node. Drill into the folder containing the SSIS project and package(s). Right-click the package you wish to execute and click Execute, as shown in Figure 2-1.


Figure 2-1. Executing an SSIS Package deployed to the SSIS Catalog

The Execute Package Window displays, as shown in Figure 2-2. It allows you to override Parameter values, ConnectionString properties of Connection Managers built at design-time, or any other externalize-able property accessible from a Package Path (via the Advanced tab) for this execution instance of the SSIS package stored in the SSIS Catalog.


Figure 2-2. Execute Package Window

Integration Services Catalog Stored Procedures

Please note the Script button above the Parameters tab in Figure 2-2. This button allows you to generate Transact-SQL statements that will execute the SSIS package. For the Chapter2.dtsx package stored in the SSIS Catalog, the script will appear similar to that in Listing 2-1.

Listing 2-1. Transact-SQL Script Generated From the Execute Package Window

Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution]
   @package_name=N'Chapter2.dtsx'
  ,@execution_id=@execution_id OUTPUT
  ,@folder_name=N'Chapter2'
  ,@project_name=N'Chapter2'
  ,@use32bitruntime=False
  ,@reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
   @execution_id
  ,@object_type=50
  ,@parameter_name=N'LOGGING_LEVEL'
  ,@parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO

You can use these same stored procedures to execute SSIS Packages in the SSIS Catalog! In fact, I designed a script to create a wrapper stored procedure that will call the Transact-SQL statements executed when an SSIS Package is executed in the SSIS Catalog. You can see that script in Listing 2-2.

Listing 2-2. Script to Build a Wrapper Stored Procedure for Executing SSIS Packages in the SSIS Catalog

 /* Select the SSISDB database */
Use SSISDB
Go

 /* Create a parameter (variable) named @Sql */
Declare @Sql varchar(2000)

 /* Create the Custom schema if it does not already exist */
print 'Custom Schema'
If Not Exists(Select name 
              From sys.schemas 
                Where name = 'custom')
 begin
   /* Create Schema statements must occur first in a batch */
  print ' - Creating custom schema'
  Set @Sql = 'Create Schema custom'
  Exec(@Sql)
  print ' - Custom schema created'
 end
Else
 print ' - Custom Schema already exists.'
print ''

 /* Drop the Custom.execute_catalog_package Stored Procedure if it already exists */
print 'Custom.execute_catalog_package Stored Procedure'
  If Exists(Select s.name + '.' +  p.name
            From sys.procedures p
            Join sys.schemas s
                On s.schema_id = p.schema_id
         Where s.name = 'custom'
           And p.name = 'execute_catalog_package')
   begin
    print ' - Dropping custom.execute_catalog_package'
    Drop Procedure custom.execute_catalog_package
    print ' - Custom.execute_catalog_package dropped'
   end

   /* Create the Custom.execute_catalog_package Stored Procedure */
  print ' - Creating custom.execute_catalog_package'
go

/*

     Stored Procedure: custom.execute_catalog_package
     Author: Andy Leonard
     Date: 4 Mar 2012
     Description: Creates a wrapper around the SSISDB Catalog procedures
                  used to start executing an SSIS Package. Packages in the
                SSIS Catalog are referenced by a multi-part identifier
                 - or path - that consists of the following hierarchy:
        Catalog Name: Implied by the database name in Integration Server 2014
        |-Folder Name: A folder created before or at Deployment to contain the SSIS project
        |-Project Name: The name of the SSIS Project deployed
        |-Package Name: The name(s) of the SSIS Package(s) deployed

        Parameters:
        @FolderName [nvarchar(128)] {No default} –
         contains the name of the Folder that holds the SSIS Project
        @ProjectName [nvarchar(128)] {No default} –
         contains the name of the SSIS Project that holds the SSIS Package
        @PackageName [nvarchar(260)] {No default} –
         contains the name of the SSIS Package to be executed
        @ExecutionID [bigint] {Output} –
         Output parameter (variable) passed back to the caller
        @LoggingLevel [varchar(16)] {Default} –
         contains the (case-insensitive) name of the logging level
         to apply to this execution instance
        @Use32BitRunTime [bit] {Default} –
         1 == Use the 32-bit run-time
         0 == Use the 64-bit run-time
        @ReferenceID [bigint] {Default} –
         contains a reference to an Execution Environment
        @ObjectType [smallint] {Default} –
         contains an identifier that appears to be related to the
         SSIS PackageType property
         Guessing: @ObjectType == PackageType.ordinal (1-based-array) * 10
         Must be 20, 30, or 50 for the catalog.set_execution_parameter_value
         stored procedure

        Test: 
        1. Create and deploy an SSIS Package to the SSIS Catalog.
        2. Exec custom.execute_catalog_package and pass it the 
          following parameters: @FolderName, @ProjectName, @PackageName, @ExecutionID Output 
        @LoggingLevel, @Use32BitRunTime, @ReferenceID, and @ObjectType are optional and 
        defaulted parameters.

         Example:
           Declare @ExecId bigint
           Exec custom.execute_catalog_package
         'Chapter2'
        ,'Chapter2'
        ,'Chapter2.dtsx'
        ,@ExecId Output
        3. When execution completes, an Execution_Id value should be returned.
        View the SSIS Catalog Reports to determine the status of the execution 
        instance and the test.

*/
Create Procedure custom.execute_catalog_package
  @FolderName nvarchar(128)
 ,@ProjectName nvarchar(128)
 ,@PackageName nvarchar(260)
 ,@ExecutionID bigint Output
 ,@LoggingLevel varchar(16) = 'Basic'
 ,@Use32BitRunTime bit = 0
 ,@ReferenceID bigint = NULL
 ,@ObjectType smallint = 50
As

 begin
  
  Set NoCount ON
  
   /* Call the catalog.create_execution stored procedure
      to initialize execution location and parameters */
  Exec catalog.create_execution
   @package_name = @PackageName
  ,@execution_id = @ExecutionID Output
  ,@folder_name = @FolderName
  ,@project_name = @ProjectName
  ,@use32bitruntime = @Use32BitRunTime
  ,@reference_id = @ReferenceID

   /* Populate the @ExecutionID parameter for OUTPUT */
  Select @ExecutionID As Execution_Id

   /* Create a variable to hold the decoded logging level */
  Declare @logging_level smallint
   /* Decode the Logging Level */
  Select @logging_level = Case 
                           When Upper(@LoggingLevel) = 'BASIC'
                           Then 1
                           When Upper(@LoggingLevel) = 'PERFORMANCE'
                           Then 2
                            When Upper(@LoggingLevel) = 'VERBOSE'
                           Then 3
                           Else 0 /* 'None' */
                          End 
   /* Call the catalog.set_execution_parameter_value stored
      procedure to update the LOGGING_LEVEL parameter */
  Exec catalog.set_execution_parameter_value
    @ExecutionID
   ,@object_type = @ObjectType
   ,@parameter_name = N'LOGGING_LEVEL'
   ,@parameter_value = @logging_level

   /* Call the catalog.start_execution (self-explanatory) */
  Exec catalog.start_execution
    @ExecutionID

 end

GO

If you execute this script to create the custom schema and stored procedure in your instance of the SSISDB database, you can test it using the statement in Listing 2-3.

Listing 2-3. Testing the SSISDB.custom.execute_catalog_package Stored Procedure

Declare @ExecId bigint
Exec SSISDB.custom.execute_catalog_package 'Chapter2','Chapter2','Chapter2.dtsx',
@ExecId Output
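When the execution has been started (step 3 in the test above), you can also check its outcome directly from the catalog views instead of the SSIS Catalog reports. A sketch listing the most recent executions and their status:

SELECT TOP (10)
       execution_id,
       folder_name,
       project_name,
       package_name,
       status,          -- 2 = running, 4 = failed, 7 = succeeded
       start_time,
       end_time
FROM SSISDB.catalog.executions
ORDER BY execution_id DESC;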

Conclusion

This custom.execute_catalog_package stored procedure can be used to execute an SSIS package from any folder and project in the SSIS Catalog.

This brief introduction and basic example have demonstrated how to get started executing SSIS packages in the SSIS Catalog from Transact-SQL. As you can see, the SSISDB catalog stored procedures give you automation capabilities you may not have been aware of.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How Do We Find Deadlocks? – Notes from the Field #086

[Note from Pinal]: This is the 86th episode of the Notes from the Field series. Deadlocks are very annoying when they happen in our database. Early in my career, my biggest frustration was that I had no idea why the deadlocks were happening, and because I did not know the root cause, I was never able to solve them. As time passed, I gained a better understanding of this situation.


In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very interesting subject of how to find deadlocks in SQL Server. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


For one reason or another, I tend to find that DBAs in the field have a hard time detecting and understanding how deadlocks occur. A deadlock occurs when two or more sessions are blocked on each other and one statement has to be terminated to prevent the sessions from being hung on each other.

Let’s take a quick look at locking, blocking and deadlocks with a real-world example that is included in the video below. In the simple example provided in the video there are two sessions, each with statements that update the same tables concurrently inside explicit transactions that are held open until the statements are committed or rolled back. For example, session one updates all the records in table one while session two updates all the records in table two. At this point we have locking: session one has a lock on table one and session two has a lock on table two, but neither session is blocking the other from doing work. Next, while the statements in both sessions are still uncommitted, session one updates the same records being updated by session two. This is known as blocking, because session two is blocking session one’s update on table two. Next, session two attempts to update table one, which is currently locked by session one. Now we have a deadlock, because session one is blocked by session two and session two is blocked by session one. To prevent both sessions from being hung, one of them becomes the deadlock victim while the other gets to continue its work.
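A minimal two-session sketch of that sequence (hypothetical table names; run each step in order, each session in its own query window):

/* Step 1 – Session 1 */
BEGIN TRAN;
UPDATE dbo.TableOne SET Col1 = 1;   -- session 1 now holds locks on TableOne

/* Step 2 – Session 2 */
BEGIN TRAN;
UPDATE dbo.TableTwo SET Col1 = 1;   -- session 2 now holds locks on TableTwo

/* Step 3 – Session 1 (blocked, waiting on session 2's locks) */
UPDATE dbo.TableTwo SET Col1 = 2;

/* Step 4 – Session 2 (deadlock: each session now waits on the other,
   and SQL Server terminates one of them as the deadlock victim) */
UPDATE dbo.TableOne SET Col1 = 2;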

How Do We Find Deadlocks?

Before SQL Server 2008 we had to rely on configuring trace flags, Service Broker event notifications, or server-side traces to capture deadlock information so we could be proactive and take action to prevent the deadlocks from reoccurring. In SQL Server 2008 we were given Extended Events and could leverage the default system_health Extended Events session, which automatically runs in the background, to identify deadlocks. This is a great addition because we don’t have to manually configure anything to catch the deadlocks. In SQL Server 2012 this got even easier, because we can rely on the graphical user interface inside SQL Server Management Studio to filter the system_health Extended Events session straight down to our deadlocks.
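For example, the deadlock graphs already captured by the system_health session can be pulled from its ring_buffer target with a query along these lines. This is only a sketch; on a busy server older events may have aged out of the ring buffer, in which case the session's event_file target is the better source:

SELECT XEventData.XEvent.query('(data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(st.target_data AS XML) AS TargetData
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS RingBuffer
CROSS APPLY RingBuffer.TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);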

In the video below you will be able to create deadlocks and be able to proactively monitor them with the native tools provided with SQL Server Management Studio in SQL Server 2012 or 2014.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database Snapshots and Replica Causing Write Delays

For folks who have not worked with the SQL Server database snapshot feature, please refer to the blog I wrote some time back (SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot). I am giving this reference because some might not be aware of this powerful capability of SQL Server.

In the above article, one of the readers asked if there is any performance overhead to having multiple database snapshots in SQL Server. Well, database snapshots are built on the principle of COW (Copy On Write), where the original database page is copied to each snapshot before it is first modified in the primary database. That is the whole idea of database snapshots. Hence, if you have multiple database snapshots configured, for each write on the primary DB we will indirectly be getting “n” more writes, based on the number of snapshots created on the DB. It is as simple as that.

At one of my customer locations, they were getting a heavy performance issue on their writes, and when I got a chance to inspect, I found there were close to 6 database snapshots created on that database. When I asked, they said it was some sort of backup copy so that they could come back to that version. Being a heavy transactional system, this showed up easily when the load was high.

The immediate question was: how can we identify that we have far too many snapshots created on our databases? I suggested a simple script so that they could start looking into it from time to time:

IF EXISTS (SELECT source_database_id
           FROM sys.databases
           WHERE source_database_id IS NOT NULL
           GROUP BY source_database_id
           HAVING COUNT(*) > 1)
   AND EXISTS (SELECT waiting_tasks_count
               FROM sys.dm_os_wait_stats
               WHERE wait_type = 'replica_writes'
                 AND waiting_tasks_count > 0)
BEGIN
    SELECT DB_NAME(source_database_id) AS 'SourceDB',
           COUNT(*) AS 'NumSnapshots'
    FROM sys.databases
    WHERE source_database_id IS NOT NULL
    GROUP BY DB_NAME(source_database_id)
    HAVING COUNT(*) > 1

    SELECT *
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'replica_writes'
      AND waiting_tasks_count > 0
END

In their environment this showed up immediately: there were 6 snapshots on their main transactional database, and they were able to delete some of them based on age.
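For reference, removing a snapshot once it has outlived its usefulness is just a DROP DATABASE against the snapshot’s name (hypothetical name below); the source database is unaffected:

-- Dropping a database snapshot releases its sparse file and removes the extra copy-on-write cost
DROP DATABASE [SalesDB_Snapshot_20150601];
GO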

Our database systems are sensitive to such mistakes, which can cause us huge performance overhead. The development team thought the system was simply deteriorating over time and accepted the performance overhead without analyzing what the problem could be. I am a firm believer that one should always know the root cause of a problem rather than just solving the problem superficially.

Do let me know if you use Snapshots actively in your environments and have you faced such issues in your SQL Server box? One of the key places where I have seen people use Database Snapshots is when Mirroring is configured and we want to query from our Mirror Server. Are there other places where you have relied on Database Snapshots? Let me know via your comments.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Stress free life with Spotlight for SQL Server DBA

Being on the cutting edge of technology is something we need to be prepared for as IT professionals. Newer capabilities get added to software almost every single day, and it is difficult to keep track of what is being added to the software we work with. I am a firm believer that we need to innovate and keep learning every single day. That is the best way we can stay competitive and make our lives easier.

I get the opportunity to speak at multiple conferences. The presentations range from short tips to full-fledged sessions lasting hours. More than the presentation time, it is the process of preparing and adding a new concept to every session that I cherish the most. Most of my writings are based on what people ask me from time to time.

A recent conversation I had with a junior DBA taught me something interesting that I thought was worth sharing. The DBA had emailed me the screenshot below:

[Screenshot: the error encountered while attempting a transaction log backup]

He was complaining that he was not able to take a transaction log backup because it kept encountering an error. I said that most of the time it is important to read the error message, because in the recent past I have seen error messages give us the solution too. These errors are simple, and a little search on MSDN can give the answer. Having said that, I did talk about other things surrounding the error. To get started, I would like to list all the backup types we can perform with SQL Server:

  • Full backups
  • Differential backups
  • File backups
  • Filegroup backups
  • Partial backups
  • Copy-Only backups
  • Mirror backups
  • Transaction log backups

We are not going to look at each of the types in this blog, but this was important to mention all of them for a future reference.

Find when the last backup was done

I wrote back to the DBA stating that there was a bigger problem at hand. I always recommend that, as a seasoned DBA, you need to know when the last backup was taken on the server. This becomes useful for restoring it in case of any disaster:

SELECT sdb.Name AS DatabaseName,
COALESCE(CONVERT(VARCHAR(12), MAX(bus.backup_finish_date), 101),'-') AS LastBackUpTime
FROM sys.sysdatabases sdb
LEFT OUTER JOIN msdb.dbo.backupset bus
ON bus.database_name = sdb.name
GROUP BY sdb.Name

As a contingency plan, we need to always be aware of when a backup was taken. It is critical that, as a DBA, you monitor all critical production databases from time to time.

In the above error, even though we are in the FULL recovery model, if we try to take a TLog backup without a FULL backup having been taken first, the error is encountered. So to mitigate this, we need to take a FULL backup before the TLog backups can be initiated. The concept is simple, right?
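A minimal sketch of that sequence, assuming a hypothetical database name and backup path:

-- The first FULL backup starts the log chain...
BACKUP DATABASE [ProductionDB] TO DISK = N'D:\Backup\ProductionDB_Full.bak';
GO
-- ...after which transaction log backups succeed
BACKUP LOG [ProductionDB] TO DISK = N'D:\Backup\ProductionDB_Log.trn';
GO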

SQL Server Dashboard with Spotlight

Ask a naval officer how a command center looks and how critical it is for them. A better view of the battlefield, and the strategies for it, are always decided in the command center. In a similar way, a DBA can use the Spotlight dashboard to get a high-level understanding of what is happening on his servers.

[Screenshot: Spotlight dashboard highlighting servers with no recent backup]

In our example, you can see that our servers have not had a backup taken to date. When a FULL backup has not been taken of our databases, this alert is raised. The same thing is shown in a visual form in the tiles view.

If you ever get a chance to play around with the Spotlight dashboard, I would highly recommend watching some of the out-of-the-box warnings it raises and taking corrective action immediately.

Note: A database, even in the FULL recovery model, will behave like it is in the SIMPLE recovery model until the first FULL backup is taken.

So this error / behavior can get even a seasoned DBA into trouble. The dashboard is a big time saver and, when something like this would otherwise go unnoticed, it can save your job. If you want to lead a stress-free DBA life, then such early warnings can save you big time.

Final Words

As we wrap up: sometimes we are likely to miss some of the basic principles that we take for granted in the system. Having something that can remind us of these basics can get us out of trouble in many ways. Most of the tools we work with give us early warnings and recommendations. It is up to the DBA to look at them and take the necessary actions. If you would like to try out these tools, you can use the SQL Server Evaluation Version to do so.

Click here for a free 30 day trial of Spotlight on SQL Server

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #024 – What is the Best Recovery Model?

Here is one of the most popular questions I often see people asking on the internet.

Question: What is the best recovery model for my database?

Answer: Every situation is different and each situation has different needs for the recovery model.

SQL Server offers three recovery models: full recovery, simple recovery, and bulk-logged recovery. The recovery model determines how much data loss is acceptable and whether and how transaction logs can be backed up.

Select Simple Recovery Model if:
* Your data is not critical.
* Losing all transactions since the last full or differential backup is not an issue.
* Data is derived from other data sources and is easily recreated.
* Data is static and does not change often.

-- Set the recovery model to Simple
ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE
GO

Select Bulk-Logged Recovery Model if:
* Data is critical, but logging large data loads bogs down the system.
* Most bulk operations are done off hours and do not interfere with normal transaction processing.
* You need to be able to recover to a point in time.

-- Set the recovery model to Bulk Logged
ALTER DATABASE AdventureWorks SET RECOVERY BULK_LOGGED
GO

Select Full Recovery Model if:
* Data is critical and no data can be lost.
* You always need the ability to do a point-in-time recovery.
* Bulk-logged activities are intermixed with normal transaction processing.
* You are using replication and need the ability to resynchronize all databases involved in replication to a specific point in time.

-- Set the recovery model to Full
ALTER DATABASE AdventureWorks SET RECOVERY FULL
GO

You can switch from any recovery model to another recovery model, but prior to or after the switch, you may need to issue additional transaction log or full backups to ensure you have a complete backup set.
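Before or after switching, you can confirm the current recovery model of every database on the instance from sys.databases:

-- Check which recovery model each database is currently using
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name
GO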

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Know Your Backup Before Deleting Database

Earlier last week I wrote a blog around SQL SERVER – FIX – Msg 3702, Level 16, State 3 – Cannot Drop Database “DB_Name” Because it is Currently in Use. The premise of that blog was driven by some of the demos I show at conferences. During one of the UG meets, I met my good friend Balmukund doing something different, and that inspired me to write that blog.

When I wrote that, many people wrote back stating it can be dangerous, etc. I sort of agree, if you are on a production box. As my use case was based on a demo environment for sessions, I thought less about it. Having said that, I immediately thought it would be good to back it up with a blog that will prevent us from getting into trouble.

So I wrote back to my friend who emailed me about the dangers. Yes, we need to be careful while dropping databases, but also be prepared for contingencies. If you have taken a backup of your database, make sure to check that it is available by querying msdb. The typical script I gave would look like this:

USE msdb
GO
SELECT msdb.dbo.backupset.database_name,
       msdb.dbo.backupset.backup_start_date,
       msdb.dbo.backupset.backup_finish_date,
       CASE msdb.dbo.backupset.type
            WHEN 'D' THEN 'Database'
            WHEN 'L' THEN 'Log'
            WHEN 'I' THEN 'Differential'
            WHEN 'F' THEN 'Filegroup'
       END AS backup_type,
       msdb.dbo.backupmediafamily.physical_device_name
FROM msdb.dbo.backupmediafamily
INNER JOIN msdb.dbo.backupset
        ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
-- Add the WHERE condition if you want it for a specific database
-- WHERE msdb.dbo.backupset.database_name = 'AdventureWorks2014'
ORDER BY msdb.dbo.backupset.backup_finish_date

 

The above script is a classic way to find all the backups done for a given database or the databases on a given instance.

As a best practice, I like to figure out from these msdb tables whether any backups were taken on every single database before working on them. I am sure that, as seasoned DBAs, you are all aware of these important steps when working on production databases.

I know each organization has a restore strategy, so can you let me know about your restore strategies for critical databases? When do you take FULL and differential backups, and how often do you take TLog backups, in your production environments? Sharing this is a great way to tell the blog readers about typical usage patterns.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backup Failure – Error: 3203, Severity: 16, State: 1

In the recent past, I have been writing about different error messages in SQL Server that one encounters. Thanks to my blog readers who have been sharing a lot of errors with me, either by email or in blog comments, and asking for solutions. If I find something interesting and helpful to others, I write a blog about it. This is one such blog.

2015-05-05 10:00:00.440 Backup       Error: 3203, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       Read on “M:\MSSQL\TLog\Production_1.ldf” failed: 1(Incorrect function.)
2015-05-05 10:00:00.440 Backup       Error: 3041, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       BACKUP failed to complete the command BACKUP LOG ProductionDB. Check the backup application log for detailed messages.

When I was searching through other emails with this error message, I found another one, shown below.

2014-01-05 16:07:38.19 Backup   Error: 3203, Severity: 16, State: 1.
2014-01-05 16:07:38.19 Backup   Read on “M:\DatabaseName_DefaultData.ndf” failed: 1117(failed to retrieve text for this error. Reason: 15100)

This error varies based on the actual cause. In the first error message it is operating system error 1, and in the second case it is operating system error 1117. To convert any operating system error number to text, we can use the Windows net command (for example, net helpmsg 1117) from a command prompt.


If you notice the second message, we are not seeing the text of message 1117; instead we are seeing “failed to retrieve text for this error. Reason: 15100”.

What should you do in case of such an error?

You should start looking at the system and application event logs, based on the operating system error. The example errors which I have shown above are due to hardware-level issues. Here is what we found in the event log:

Log Name: System
Source: Disk
Date: 5/5/2015 09:32:11 AM
Event ID: 51
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: SQLServerMachine
Description: An error was detected on device \Device\Harddisk4\DR4 during a paging operation.

Here is an article which explains event ID 51: https://support.microsoft.com/en-us/kb/244780/

Have you ever seen any similar errors? Put them in the comments section to help others and share your knowledge. The best thing about these errors is that we get to learn interesting concepts from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Recovering from Snapshots – Notes from the Field #078

[Note from Pinal]: This is the 78th episode of the Notes from the Field series. What do you do when data is deleted accidentally or you are facing a disaster? Well, there are plenty of things you can do, but when in a panic, people quite often make mistakes which just intensify the disaster. Database snapshots are a very important but less frequently used feature.


In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very interesting subject of how to recover the database from snapshots. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the most common – and most forgotten – scenarios in disaster recovery plans is data being updated or deleted accidentally. This surprises me because, in the field, I see accidental data changes as a common disaster. If you have enterprise edition you can leverage database snapshots as a step in your disaster recovery plan to help recover faster, or at least allow you to pull back the majority of your data while you concurrently restore backups (to bring the rest of your data back). Restoring most of your data quickly can be critical with very large databases (VLDBs), especially if you have tight Recovery Time Objectives (RTOs).

Database snapshots require sparse files and store the original pages in these files when they are changed. Therefore, you will want to manage the sparse files and make sure you purge database snapshots as needed. You might also be using database snapshots without even knowing it. If you are doing database integrity checks with the default values, a special snapshot is taken in the background and your integrity checks are executed against that snapshot.
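As a quick sketch, the snapshots that exist on an instance (and the source database each one belongs to) can be listed from sys.databases, since snapshots are the only rows with a non-NULL source_database_id:

SELECT name AS snapshot_name,
       DB_NAME(source_database_id) AS source_database,
       create_date
FROM sys.databases
WHERE source_database_id IS NOT NULL;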

HOW DO WE CREATE A SNAPSHOT?

The following example uses the AdventureWorks2014 sample database. We need to use the CREATE DATABASE syntax, include the names of your data files, and include the file location of the sparse files. Finally, include AS SNAPSHOT OF database syntax to define the database as a snapshot.

CREATE DATABASE [AW2014_Snapshot_morning] ON
( NAME = AdventureWorks2014_Data, FILENAME =
'C:\Demo\AW_data.ss')
AS SNAPSHOT OF AdventureWorks2014

REAL-WORLD ACCIDENTAL DATA CHANGE STATEMENT

Here is a common case of a mistaken DELETE statement. We have a DELETE statement with the primary key included to delete a single row. By mistake we didn’t highlight the filter so all the rows will be deleted.
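A sketch of what that looks like, using the same Person.EmailAddress table as the recovery examples below (the key value is hypothetical):

-- The statement as written, intended to remove a single row by primary key:
DELETE FROM Person.EmailAddress
WHERE BusinessEntityID = 100;

-- The mistake: only the first line was highlighted before executing,
-- so what actually ran was the unfiltered statement below, which
-- deletes every row in the table:
DELETE FROM Person.EmailAddress;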

If you execute this statement, you will see an unexpected result: we deleted all rows in the table as shown below.


(19972 row(s) affected)

HOW DO WE REVERT FROM SNAPSHOT?

You have two options if you created a database snapshot earlier.

First, you could insert the data back from your snapshot database as shown below. This could be done with SSIS, BCP or many other tools. For this quick example we will do this with an INSERT INTO SELECT statement.

SET IDENTITY_INSERT Person.EmailAddress ON
INSERT INTO
Person.EmailAddress (BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
SELECT *
FROM AW2014_Snapshot_morning.Person.EmailAddress
SET IDENTITY_INSERT Person.EmailAddress OFF

Second, you can revert the database from the snapshot. Keep in mind this second option will revert all data changes in the database not just the data deleted in your accidental data change statement.

USE MASTER;
RESTORE DATABASE AdventureWorks2014 FROM
DATABASE_SNAPSHOT = 'AW2014_Snapshot_morning';
GO

From these examples, you can see database snapshots are a tool to help you recover data quickly. Please note that you wouldn’t want database snapshots to be your sole plan for disaster recovery and unplanned data change statements. If your snapshot or the original database suffers corruption, you wouldn’t be able to recover. So make sure you add snapshots into your existing disaster recovery plans, which should – at a minimum – include database backups to give you a better recovery point objective.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)