SQL SERVER – Backup Failure – Error: 3203, Severity: 16, State: 1

In the recent past, I have been writing about various error messages one encounters in SQL Server. Thanks to my blog readers who have been sharing a lot of errors with me, either by email or in the blog comments, and asking for solutions. If I find something interesting and helpful to others, I write a blog post about it. This is one such blog.

2015-05-05 10:00:00.440 Backup       Error: 3203, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       Read on “M:\MSSQL\TLog\Production_1.ldf” failed: 1(Incorrect function.)
2015-05-05 10:00:00.440 Backup       Error: 3041, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       BACKUP failed to complete the command BACKUP LOG ProductionDB. Check the backup application log for detailed messages.

While searching through other emails with this error message, I found another variation of it:

2014-01-05 16:07:38.19 Backup   Error: 3203, Severity: 16, State: 1.
2014-01-05 16:07:38.19 Backup   Read on “M:\DatabaseName_DefaultData.ndf” failed: 1117(failed to retrieve text for this error. Reason: 15100)

This error varies based on the actual cause. In the first message it is operating system error 1, and in the second it is operating system error 1117. To convert any operating system error number to text, we can use the Windows net command (net helpmsg) from a command prompt, as shown below.

net helpmsg 1
Incorrect function.

If you look at the second message, we do not see the text of message 1117; instead we see “failed to retrieve text for this error. Reason: 15100”.
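Running the same command for the second error number translates it as well (the exact wording may vary slightly across Windows versions):

net helpmsg 1117
The request could not be performed because of an I/O device error.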

What should you do in case of such errors?

You should start by looking at the System and Application event logs, based on the operating system error. The example errors shown above were due to hardware-level issues. Here is what we found in the event log:

Log Name: System
Source: Disk
Date: 5/5/2015 09:32:11 AM
Event ID: 51
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: SQLServerMachine
Description: An error was detected on device \Device\Harddisk4\DR4 during a paging operation.

Here is an article which explains event ID 51: https://support.microsoft.com/en-us/kb/244780/

Have you ever seen any similar errors? Put them in the comments section to help others and share your knowledge. The best thing about these errors is that we get to learn interesting concepts from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Recovering from Snapshots – Notes from the Field #078

[Note from Pinal]: This is the 78th episode of the Notes from the Field series. What do you do when data is deleted accidentally or you are facing a disaster? Well, there are plenty of things you can do, but in a panic people quite often make mistakes which just intensify the disaster. Database snapshots are a very important but less frequently used feature.


In this episode of the Notes from the Field series database expert John Sterrett (Group Principal at Linchpin People) explains a very interesting subject of how to recover the database from snapshots. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the most common – and most forgotten – scenarios in disaster recovery plans is data being updated or deleted accidentally. This surprises me because, in the field, I see accidental data changes as a common disaster. If you have enterprise edition you can leverage database snapshots as a step in your disaster recovery plan to help recover faster, or at least allow you to pull back the majority of your data while you concurrently restore backups (to bring the rest of your data back). Restoring most of your data quickly can be critical with very large databases (VLDBs), especially if you have tight Recovery Time Objectives (RTOs).

Database snapshots require sparse files and store the original pages in these files when they are changed. Therefore, you will want to manage the sparse files and make sure you purge database snapshots as needed. You might also be using database snapshots without even knowing it. If you are doing database integrity checks with the default values, a special snapshot is taken in the background and your integrity checks are executed against that snapshot.
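Since snapshots hold the pre-change pages in their sparse files, it is worth knowing which snapshots exist on an instance so you can purge them. One way (the snapshot name below is just the one used in the example that follows) is to query sys.databases, where snapshots carry a non-NULL source_database_id:

-- List database snapshots and the databases they were taken from
SELECT name AS snapshot_name,
       DB_NAME(source_database_id) AS source_database,
       create_date
FROM sys.databases
WHERE source_database_id IS NOT NULL;

-- Purge a snapshot once it is no longer needed
DROP DATABASE [AW2014_Snapshot_morning];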

HOW DO WE CREATE A SNAPSHOT?

The following example uses the AdventureWorks2014 sample database. We need to use the CREATE DATABASE syntax, include the names of your data files, and include the file location of the sparse files. Finally, include AS SNAPSHOT OF database syntax to define the database as a snapshot.

CREATE DATABASE [AW2014_Snapshot_morning] ON
( NAME = AdventureWorks2014_Data, FILENAME =
'C:\Demo\AW_data.ss')
AS SNAPSHOT OF AdventureWorks2014

REAL-WORLD ACCIDENTAL DATA CHANGE STATEMENT

Here is a common case of a mistaken DELETE statement. We have a DELETE statement with the primary key included in the WHERE clause, intended to delete a single row. By mistake we didn’t highlight the filter, so all the rows will be deleted.
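In T-SQL terms, the mistake looks something like this (the table and key value are illustrative picks from AdventureWorks2014, not necessarily the ones from the original demo):

-- Intended: delete a single row by primary key
DELETE FROM Person.EmailAddress
WHERE BusinessEntityID = 285;

-- What actually ran: only the first line was highlighted in SSMS,
-- so the WHERE clause was left out and every row was deleted
DELETE FROM Person.EmailAddress;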

If you execute this statement, you will see an unexpected result: we deleted all rows in the table as shown below.


(19972 row(s) affected)

HOW DO WE REVERT FROM SNAPSHOT?

You have two options if you created a database snapshot earlier.

First, you could insert the data back from your snapshot database as shown below. This could be done with SSIS, BCP or many other tools. For this quick example we will do this with an INSERT INTO SELECT statement.

SET IDENTITY_INSERT Person.EmailAddress ON
INSERT INTO
Person.EmailAddress (BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
SELECT *
FROM AW2014_Snapshot_morning.Person.EmailAddress
SET IDENTITY_INSERT Person.EmailAddress OFF

Second, you can revert the database from the snapshot. Keep in mind this second option will revert all data changes in the database not just the data deleted in your accidental data change statement.

USE MASTER;
RESTORE DATABASE AdventureWorks2014 FROM
DATABASE_SNAPSHOT = 'AW2014_Snapshot_morning';
GO

From these examples, you can see database snapshots are a tool to help you recover data quickly. Please note that you wouldn’t want database snapshots to be your sole plan for disaster recovery and unplanned data change statements. If your snapshot or the original database suffers corruption, you wouldn’t be able to recover. So make sure you add snapshots into your existing disaster recovery plans, which should – at a minimum – include database backups to give you a better recovery point objective.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to View the Dirty Pages In Memory of a Database?

I always get an opportunity to talk on basic topics from time to time. As I was preparing for GIDS 2015 (Great Indian Developer Summit) this year, I vividly remembered one of the conversations I had with an attendee last year at our Pluralsight booth. Most of these conversations are forgotten, but this one was unique. The attendee was trying to learn the basics and was curious to listen to me. I always get carried away and spend quite some time with folks who are eager to learn. In this context, I was talking about how RDBMSs use the concept of the Write-Ahead Transaction Log for maintaining ACID properties. The response I got was interesting – “Hey Pinal, is there a way to know which of the pages in memory are currently dirty? Do you know how to get this?”

As I was preparing for this year’s session, this question was on top of my mind, and I thought let me take a moment in this blog to show you how this can be achieved. Again, everything we do inside SQL Server can be accessed through the Dynamic Management Views, and it is important for us to know which ones to use.

T-SQL Scripts

Create a dummy database for the test. We will also enable a trace flag for this demo.

CREATE DATABASE DirtyPagesDB
GO
USE DirtyPagesDB
GO
-- Trace flag 3505: disable automatic checkpoints so that data isn't flushed
DBCC TRACEON(3505, -1);
DBCC TRACESTATUS();
GO

Let us create a table and enter some values. We will also flush the page using Checkpoint.

CREATE TABLE t1 (Speaker_Bio CHAR(8000))
GO
INSERT INTO t1 VALUES ('SQL'),('Authority')
GO
CHECKPOINT

Let us check if there are any dirty pages currently in our buffer pool. Since we have just issued a checkpoint, the query below should return no rows.

-- Get the rows of dirtied pages
SELECT
    database_name = d.name,
    object_name =
        CASE au.type
            WHEN 1 THEN o1.name
            WHEN 2 THEN o2.name
            WHEN 3 THEN o1.name
        END,
    object_id =
        CASE au.type
            WHEN 1 THEN p1.object_id
            WHEN 2 THEN p2.object_id
            WHEN 3 THEN p1.object_id
        END,
    index_id =
        CASE au.type
            WHEN 1 THEN p1.index_id
            WHEN 2 THEN p2.index_id
            WHEN 3 THEN p1.index_id
        END,
    bd.file_id,
    bd.page_id,
    bd.page_type,
    bd.page_level
FROM sys.dm_os_buffer_descriptors bd
INNER JOIN sys.databases d
    ON bd.database_id = d.database_id
INNER JOIN sys.allocation_units au
    ON bd.allocation_unit_id = au.allocation_unit_id
LEFT JOIN sys.partitions p1
    ON au.container_id = p1.hobt_id
LEFT JOIN sys.partitions p2
    ON au.container_id = p2.partition_id
LEFT JOIN sys.objects o1
    ON p1.object_id = o1.object_id
LEFT JOIN sys.objects o2
    ON p2.object_id = o2.object_id
WHERE bd.is_modified = 1
    AND d.name = 'DirtyPagesDB'
    AND (o1.name = 't1' OR o2.name = 't1');
GO

Since we got no rows, let us insert a row to check.

INSERT INTO t1 VALUES ('Pinal')
GO

Go ahead and run the big query above again. You are likely to see two rows: one for the IAM page and one for the data page. If you run CHECKPOINT again, these will also get flushed to the data file.

-- Clean up
USE MASTER
GO
DROP DATABASE DirtyPagesDB
GO

Researching and writing about this had been on my mind for a long time. I am sure this will be good learning for my blog readers too. Do let me know if this was an interesting learning for you today.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script – Removing Multiple Databases from Log Shipping

Have you ever come across a situation where you have a large number of databases in log shipping and you have to remove all of them? If you use SQL Server Management Studio, it would take a long time, because for each database you have to right-click it, go to Properties, choose the Transaction Log Shipping tab, choose Remove for the secondary, uncheck the box, and then hit OK. Besides being monotonous, these steps are painful when the number of databases is really huge.

In the background, SSMS executes stored procedures to remove the metadata from the log shipping related tables in the msdb database.

Below is a sample which runs on the primary. I have changed the input parameters.

-- primary
EXEC MASTER.dbo.sp_delete_log_shipping_primary_secondary
@primary_database = N'Primary_Database_Name'
,@secondary_server = N'Secondary_Server_Name'
,@secondary_database = N'Secondary_Database_Name'
GO
EXEC MASTER.dbo.sp_delete_log_shipping_primary_database @database = N'Primary_Database_Name'
GO

And the below runs on the secondary (here also I have changed the input parameter):

-- secondary
EXEC MASTER.dbo.sp_delete_log_shipping_secondary_database
@secondary_database = N'Secondary_Database_Name'

Essentially, if we want to remove log shipping, we need the primary database name, the secondary database name, and the secondary server name. I have used the metadata tables to find those details.

-- Script for removal of Log Shipping from primary
SET NOCOUNT ON
GO
DECLARE @ExecString VARCHAR(MAX) = ''
-- Accumulate one EXEC statement per log-shipped database
-- (initializing the variable and concatenating per row keeps
-- all databases in the output, not just the last one)
SELECT @ExecString = @ExecString + 'EXEC master.dbo.sp_delete_log_shipping_primary_secondary
@primary_database = N''' + pd.primary_database + '''
,@secondary_server = N''' + ps.secondary_server + '''
,@secondary_database = N''' + ps.secondary_database + '''
GO
'
FROM msdb.dbo.log_shipping_primary_secondaries ps
INNER JOIN msdb.dbo.log_shipping_primary_databases pd
    ON ps.primary_id = pd.primary_id
SELECT @ExecString
GO
DECLARE @ExecString VARCHAR(MAX) = ''
SELECT @ExecString = @ExecString + 'EXEC master.dbo.sp_delete_log_shipping_primary_database @database = N''' + primary_database + '''
GO
'
FROM msdb.dbo.log_shipping_primary_databases
SELECT @ExecString
GO

Once you run the script, you will get output containing the EXECUTE statements; just copy, paste, and run them in a new query window.

Here is a similar script for the secondary server.

-- Script for removal of LS from secondary
DECLARE @ExecString VARCHAR(MAX) = ''
SELECT @ExecString = @ExecString + 'EXEC master.dbo.sp_delete_log_shipping_secondary_database @secondary_database = N''' + secondary_database + '''
GO
'
FROM msdb.dbo.log_shipping_secondary_databases
SELECT @ExecString
GO

Note: The above scripts generate commands which you need to copy and paste into a new query window on the respective servers. Please verify the output before running it on a production server.
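If, after verifying the generated commands, you would rather execute them directly than copy-paste, a variation like the one below can be used. Treat it as a sketch and test on a non-production server first; note there are no GO separators here, since GO is a client batch separator and cannot be run through EXEC:

-- Build and execute the removal commands in one go (secondary server)
DECLARE @ExecString NVARCHAR(MAX) = N''
SELECT @ExecString = @ExecString
    + N'EXEC master.dbo.sp_delete_log_shipping_secondary_database @secondary_database = N'''
    + secondary_database + N'''; '
FROM msdb.dbo.log_shipping_secondary_databases
EXEC (@ExecString)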

Automating things and using the power of T-SQL is one of the best feelings, I have always felt. Do let me know if you have done this sort of thing in your environment. Can you share some of your experiences?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Basics of the File System Task – Part 2 – Notes from the Field #075

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the key essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.


In this episode of the Notes from the Field series I asked SSIS expert Andy Leonard a very crucial question – what are the basics of the File System Task, and where do we start with it? Andy was very kind to answer the questions and provide plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


Many data integration scenarios involve reading data stored in flat files or performing extracts from a relational (or legacy) system into flat files. Learning how to configure and use the SQL Server Integration Services (SSIS) File System Task will support your efforts when loading data to and from flat files. In a previous article, I described configuring the File System Task to archive a file. In this article, I will repeat the exercise, but I will add flexibility (and complexity – the two always go together) by using SSIS Variables to manage the Source File and Destination Directory locations. This article is an edited version of The Basics of the File System Task, Part 1. I chose to write it this way for those who find this article but haven’t read Part 1.

Remember: SSIS is a software development platform. With “SQL Server” included in the name, it is easy for people to confuse SSIS as a database tool or accessory, but Control Flow Tasks put that confusion to rest.

SSIS provides several Control Flow tasks. Here is a list that provides a good approximation of which tasks I use most, from most-used to least-used:

In this article I provide an advanced example of configuring the SSIS File System Task, shown in Figure 1:

Figure 1: SSIS File System Task

The File System Task provides one way to implement an SSIS Design Pattern for source file archival. When you first open the File System Task Editor, you will note several properties in the property grid. Whenever you see an Operation property in an SSIS task editor, know that that property drives the other property selections. Options for the Operation property of the SSIS File System Task are shown in Figure 2:
Figure 2: SSIS File System Task Operation Property Options

The Operation options are:

  • Copy directory
  • Copy file (default)
  • Create directory
  • Delete directory
  • Delete directory content
  • Delete file
  • Move directory
  • Move file
  • Rename file
  • Set Attributes

I stated the Operation property drives the other property selections. Take a look at the File System Task Editor when I change the Operation option from “Copy file” (Figure 2) to “Delete file” as shown in Figure 3:
Figure 3: The File System Task Editor with the “Delete file” Operation Selected

See? Fewer properties are required for the “Delete file” operation. The available properties differ even more for the “Set Attributes” operation, shown in Figure 4:
Figure 4: The File System Task Editor with the “Set Attributes” Operation Selected

The Operation property changes the editable properties, exposing some and hiding others. With flexibility comes complexity. Even though the File System Task is complex, I’ve found the task is stable and extremely useful. Let’s look at a practical example: using the File System Task to archive a flat file.

To begin configuring the SSIS File System Task for file archival, select the “Move file” operation as shown in Figure 5:
Figure 5: SSIS File System Task with the “Move file” Operation Selected

Using the IsSourcePathVariable and IsDestinationPathVariable properties extends the flexibility of the File System Task and further changes the list of available properties in the property grid, as shown in Figure 6:
Figure 6: Opting to Use Variables for Source and Destination Paths

Note the SourceConnection and DestinationConnection properties are hidden and the SourceVariable and DestinationVariable properties are available in their place. Click the SourceVariable property dropdown, and click “<New variable…>” as shown in Figure 7:
Figure 7: Selecting “<New variable…>” from the SourceVariable Property

When the Add Variable window displays, enter “SourceFilePath” for the variable name property and a full path to your source file in the Value textbox, as shown in Figure 8:
Figure 8: Configuring the SourceFilePath SSIS Variable

Click the OK button to close the Add Variable window and return to the File System Task Editor. Click the DestinationVariable property dropdown, and then click “<New variable…>” to open a new Add Variable window. Configure the new variable by setting the Name property to “DestinationFolder” and the Value property to a location you wish to move the file, as shown in Figure 9:
Figure 9: Configuring the DestinationFolder SSIS Variable

Click the OK button to close the Add Variable window and return to the File System Task Editor. You have configured an SSIS File System Task to move a file using SSIS Variables to manage the source and destination of the file, as shown in Figure 10:
Figure 10: An SSIS File System Task Configured to Move a File Using SSIS Variables

The SSIS File System Task is now configured to archive a file. Let’s test it! Click the OK button to close the File System Task Editor. Press the F5 key or select SSIS->Start Debugging to test your work. My result is shown in Figure 11:
Figure 11: Successful Test Execution of the SSIS File System Task

Viewing the source and destination directories, we see the file was successfully moved – shown in Figure 12:
Figure 12: The File, Moved!

One tricky part when configuring the SSIS File System Task to move a file is realizing that you need to select the actual file for the source and the directory for the destination.

As I stated earlier, the SSIS File System Task is powerful, flexible, and robust. This article has demonstrated another way you can use the File System Task to archive files. Archiving files after loading the data they contain is a common practice in data integration.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using MaxTransferSize parameter with SQL Server Backups

Of late I have been writing a lot about SQL Server backup and administration topics. I have seen that a lot of my readers have been enjoying some of the deep conversations I have with people in the industry. As I always say, most of these blog posts are inspired by someone asking me a question, me doing the research, and then reaching out to them with a solution. This blog post is no different in that aspect.

Recently I was fortunate to be in Delhi to deliver a session to one of our customers. One of the DBAs came and asked me an interesting question. The organization had bought new EMC storage, and one of the vendor recommendations was to use a transfer size of 256 KB. My first instinct was – “Is it? I didn’t know that.” I said I would get back to them after I reached the hotel and did a bit of research.

After reading a number of documents and recommendations on various sites, I got to know that the optimal value of the MaxTransferSize parameter for SQL Server backups on EMC storage systems is 256 KB, while by default SQL Server uses a value of 1,024 KB (1 MB). To promote storage array stability and performance, full database, differential, and transaction log backups should be performed with MAXTRANSFERSIZE specified as 262,144 bytes (256 KB).

Well, that was easier said than done. I now wanted to know how this can be done because the DBA wanted to automate this using their scripting inside a SQL Server Agent Job. I turned to MSDN for help and as always it didn’t disappoint me.

Below are some sample T-SQL Backup commands that are optimized for this requirement:

/* Full database backup */
BACKUP DATABASE AdventureWorks TO DISK = 'C:\MS SQL\Backup Files\AdventureWorks.bak'
WITH INIT, MAXTRANSFERSIZE = 262144, STATS = 1;
/* Differential database backup */
BACKUP DATABASE AdventureWorks TO DISK = 'C:\MS SQL\Backup Files\AdventureWorks.diff'
WITH DIFFERENTIAL, INIT, MAXTRANSFERSIZE = 262144, STATS = 1;
/* Transaction log backup */
BACKUP LOG AdventureWorks TO DISK = 'C:\MS SQL\Backup Files\AdventureWorks.trn'
WITH INIT, MAXTRANSFERSIZE = 262144, STATS = 1;
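If you want to confirm the transfer size a backup actually used, there is an undocumented trace flag pair commonly used for this; since it is undocumented, treat the following as a diagnostic sketch and try it on a test server first. Trace flag 3213 prints the backup/restore internal configuration, and 3605 routes that output to the errorlog:

DBCC TRACEON (3605, -1);  -- route trace output to the errorlog
DBCC TRACEON (3213, -1);  -- print backup/restore internal configuration
BACKUP DATABASE AdventureWorks TO DISK = 'C:\MS SQL\Backup Files\AdventureWorks.bak'
WITH INIT, MAXTRANSFERSIZE = 262144, STATS = 1;
-- Review the errorlog for the printed backup configuration, then turn the flags off
DBCC TRACEOFF (3605, -1);
DBCC TRACEOFF (3213, -1);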

To my blog readers, I would like to know if you have ever used these settings in your environments. What type of storage are you using and what values are you using for MAXTRANSFERSIZE parameter? Have you seen any performance improvements while using the same? Have you seen any difference in behavior while using these parameters? Do let me know via the comments section.

Finally, as I wrap up – I always talk about the power of using T-SQL over the UI, and when I write such blog posts, this just gets better. I need to learn the fine print of using these T-SQL commands and all their parameters in the future. Rest assured, you are going to see more of these on this blog for sure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backup and Restore Behavior of ReadOnly Filegroup Databases

Last week I wrote about SQL SERVER – Marking Filegroup as ReadOnly with SQL Server, and it got me interested in a few key questions. One of the questions someone asked was: how will backups behave? Is there anything I need to know about read-only filegroups? Will these databases carry this setting along when restored? So in this blog post, let me walk through the few simple steps of the learning journey I had when working with read-only filegroups.

T-SQL Scripts

Let us start out by creating the database first.

CREATE DATABASE [ReadOnlyDB]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'ReadOnlyDB', FILENAME = N'C:\Temp\ReadOnlyDB.mdf' , SIZE = 4024KB , FILEGROWTH = 1024KB ),
FILEGROUP [ReadOnlyDB_FG]
( NAME = N'ReadOnlyDB_FG', FILENAME = N'C:\Temp\ReadOnlyDB_FG.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'ReadOnlyDB_log', FILENAME = N'C:\Temp\ReadOnlyDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 10%)
GO
-- Mark the filegroup read-only
ALTER DATABASE ReadOnlyDB MODIFY FILEGROUP ReadOnlyDB_FG READ_ONLY;
GO

I have marked the filegroup as READ_ONLY. Next, let us take a FULL backup.

BACKUP DATABASE [ReadOnlyDB]
TO  DISK = N'C:\Temp\ReadOnlyDB.bak'
WITH NOFORMAT, INIT,
NAME = N'ReadOnlyDB-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD,  STATS = 10, CHECKSUM
GO

Next we will drop the database and restore the same.

-- Clean up time
USE MASTER
GO
DROP DATABASE ReadOnlyDB
GO
USE [master]
RESTORE DATABASE [ReadOnlyDB] FROM  DISK = N'C:\Temp\ReadOnlyDB.bak' WITH  FILE = 1,  NOUNLOAD,  STATS = 5
GO

In the above command, we restored the database back to the same location. Now let us go ahead and check the read_only setting on the database files.

USE ReadOnlyDB
-- Check the status
SELECT type_desc, physical_name, is_read_only
FROM sys.database_files
GO

You can see that the file in our secondary filegroup is still marked as read_only.


There is one important learning that I got from this experiment: database filegroups that are marked as read_only retain that setting as part of a full backup, and when we restore such a backup, the setting is carried forward into the restored database.
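You can also confirm the same setting at the filegroup level (rather than per file) with sys.filegroups:

USE ReadOnlyDB
GO
-- Check the read-only flag for each filegroup
SELECT name, is_read_only
FROM sys.filegroups;
GO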

I hope you got some learning out of this experiment. Are you using these concepts in your environments? Do let us know via the comments below.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ReadOnly Databases with ReadOnly File Attribute

As part of the continuing series on ReadOnly databases, an earlier blog post received an interesting comment saying that some people were getting error 5120 and others. This got me curious, and it sat in my pending list of solutions for a while.

One of the readers had mailed me saying that they attached a database and it came up as ReadOnly. They didn’t know why this happened, and when they tried to mark the database as ReadWrite, it gave them an error.

Here are the steps to reproduce the situation. I have the database files in a fileshare. We will try to attach the same using the below command:

USE [master]
GO
CREATE DATABASE [ReadOnlyDB] ON
( FILENAME = N'C:\ReadOnlyDB\ReadOnlyDB.mdf'),
(
FILENAME = N'C:\ReadOnlyDB\ReadOnlyDB_log.ldf')
FOR ATTACH
GO

Note: In your case, you can have database files on a different folder.

As soon as the command is executed, we can see in the object explorer the following:


We can see that the database is marked as Read-Only on attach. This was strange. So, based on the earlier blog post, let us try to make the database ReadWrite using the following command:

USE [master]
GO
ALTER DATABASE [ReadOnlyDB] SET  READ_WRITE WITH NO_WAIT
GO

Now this raised the following error:

Msg 5120, Level 16, State 101, Line 36
Unable to open the physical file “C:\ReadOnlyDB\ReadOnlyDB.mdf”. Operating system error 5: “5(Access is denied.)”.
Msg 5181, Level 16, State 5, Line 36
Could not restart database “ReadOnlyDB”. Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 36
ALTER DATABASE statement failed.

The catch is hidden in the error message itself: it says “Operating system error”. So let us go ahead and detach the database so that we can change the OS file attributes.

USE [master]
GO
EXEC MASTER.dbo.sp_detach_db @dbname = N'ReadOnlyDB'
GO

Next, go to the file system attributes for the database and log files and check for the Read-only attribute. In our example, the mdf file had this attribute enabled.


Please go ahead and uncheck the “Read-only” attribute on the file. That is it. Now attach the database again after the change.
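If you prefer doing this from the command line instead of the file properties dialog, the built-in attrib command clears the same flag (the path here is from our example):

attrib -r C:\ReadOnlyDB\ReadOnlyDB.mdf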

USE [master]
GO
CREATE DATABASE [ReadOnlyDB] ON
( FILENAME = N'C:\ReadOnlyDB\ReadOnlyDB.mdf'),
(
FILENAME = N'C:\ReadOnlyDB\ReadOnlyDB_log.ldf')
FOR ATTACH
GO

This got us back to normal as far as our ReadOnly database is concerned. I hope you learnt something new, as I am learning something new almost every single day. Do let me know if you have had a chance to learn this much working with ReadOnly databases in the past.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database Stuck in “In Recovery” Mode After Restart

I am sure reading this blog will bring to mind some instances from your own environments. This is one of the common issues I have observed while working with SQL Server for a long time now. In this blog we will discuss the issue in a little detail, along with the possible actions you might take.

Whenever there is a restart of SQL Server, all databases undergo the “recovery” process. This is the stage where the database has to come back online in a consistent state. There are three sub-phases within the process: analysis, roll forward, and rollback. The names are pretty self-explanatory. Let me explain for those who are interested in the details:

  • Analysis: This is the phase where SQL Server would go through the LDF file and build the in-memory structures to know how much work is needed in the next two phases.
  • Roll forward (redo): At the time the database was shut down, there might have been transactions which were committed but not yet written to the MDF file via a checkpoint. These committed changes are replayed from the log file into the data file.
  • Rollback (undo): If there were any transactions which were uncommitted then they have to be rolled back to bring the database to a consistent state.

When would we see a database in the “In Recovery” state?

  • Restart of SQL Server.
  • Database offline and online.
  • Restore of database from backup.

All of the above invoke the “recovery” process of the database, and all databases must go through the three phases explained earlier.
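To see which databases are currently going through this process, you can check their state in sys.databases; the RECOVERING value below is what Management Studio shows as “In Recovery”:

-- Find databases currently in recovery
SELECT name, state_desc
FROM sys.databases
WHERE state_desc = 'RECOVERING';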

What should we do?

The very first thing I always check is the ERRORLOG. In the Errorlog, we should see the very first message about the database (TestMe is the name of the database):

Starting up database ‘TestMe’.

This means the files have been opened and recovery has started. After some time, you should see messages for phase 1.

Recovery of database ‘TestMe’ (28) is 0% complete (approximately 37 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 3% complete (approximately 36 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.

Once phase 1 is complete, recovery moves on to phases 2 and 3, as shown below.

Recovery of database ‘TestMe’ (28) is 3% complete (approximately 36 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 0% complete (approximately 142 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 7% complete (approximately 19 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 15% complete (approximately 26 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 21% complete (approximately 25 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 27% complete (approximately 20 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 34% complete (approximately 19 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 41% complete (approximately 16 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 48% complete (approximately 14 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 55% complete (approximately 12 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 62% complete (approximately 10 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 69% complete (approximately 9 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 75% complete (approximately 7 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 82% complete (approximately 5 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 88% complete (approximately 3 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
Recovery of database ‘TestMe’ (28) is 95% complete (approximately 1 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.

And once it completes, you should see something similar to this.

3807 transactions rolled forward in database ‘TestMe’ (28). This is an informational message only. No user action is required.
0 transactions rolled back in database ‘TestMe’ (28). This is an informational message only. No user action is required.
Recovery is writing a checkpoint in database ‘TestMe’ (28). This is an informational message only. No user action is required.
Recovery completed for database TestMe (database ID 28) in 30 second(s) (analysis 1289 ms, redo 29343 ms, undo 72 ms.) This is an informational message only. No user action is required.

The messages above, including the analysis, redo and undo timings in the last line, correspond to the three phases which I explained earlier.
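Rather than opening the ERRORLOG file manually, the recovery messages above can be pulled with the (undocumented but widely used) xp_readerrorlog procedure; TestMe is the example database name from this blog:

```sql
-- Parameters: log file number (0 = current), log type
-- (1 = SQL Server ERRORLOG), and up to two search strings
-- that must both match in a returned row
EXEC master.dbo.xp_readerrorlog 0, 1, N'Recovery of database', N'TestMe';
```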

What are the possible causes?

  • Huge size of transaction log file.
  • SQL restarted during a long running transaction.
  • Huge number of VLFs.
  • You might be hitting a known issue which is fixed in SQL Server. I have referenced the KB articles below.
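To check whether a huge number of VLFs could be your problem, DBCC LOGINFO returns one row per VLF in the current database's transaction log (on SQL Server 2016 SP2 and later, sys.dm_db_log_info offers a documented alternative). TestMe is the example database name used above:

```sql
USE TestMe;
GO
-- One row per virtual log file (VLF); thousands of rows here
-- is a sign that recovery (and log backups) may be slow
DBCC LOGINFO;
GO
```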

List of known issues

If you are running SQL Server 2005, 2008, 2008 R2 or SQL Server 2012, please make sure you have applied the fixes given below.

http://support.microsoft.com/kb/2455009 (FIX: Slow performance when you recover a database if there are many VLFs inside the transaction log in SQL Server 2005, in SQL Server 2008 or in SQL Server 2008 R2)

http://support.microsoft.com/kb/2524743 (FIX: Recovery takes longer than expected for a database in a SQL Server 2008 or in a SQL Server 2008 R2 environment)

These fixes help speed up the phases of recovery. I hope this blog points you in the right direction when looking into this SQL Server behavior.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ReadOnly Databases and Notes Using Snapshots

This seems to be the third week that I am writing about ReadOnly databases; this blog is in continuation of last week's notes. When I wrote those notes, one of my blog readers asked if it is possible to get the ReadOnly error without making the database into a ReadOnly database. This got me thinking and I couldn't sleep that day. A quick call to my friend gave me an idea – the answer was simple: using Snapshots.

I couldn’t control my excitement and this triggered me to write this special case scenario. As we execute the script, we will learn some of the behavior.

Let us start by creating our database which will be used for testing.

CREATE DATABASE [ReadOnlyDB]
CONTAINMENT
= NONE
ON  PRIMARY
( NAME = N'ReadOnlyDB', FILENAME = N'C:\Temp\ReadOnlyDB.mdf' , SIZE = 4024KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'ReadOnlyDB_log', FILENAME = N'C:\Temp\ReadOnlyDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 10%)
GO
USE ReadOnlyDB
GO
-- Creating our table
CREATE TABLE tbl_SQLAuth (id INT, Longname CHAR(8000))
GO

Next let us go ahead and create a snapshot database out of our ReadOnlyDB.

-- Create a snapshot on our DB
CREATE DATABASE ReadOnlyDB_SS ON
( NAME = ReadOnlyDB, FILENAME =
'C:\Temp\ReadOnlyDB_data_1800.ss' )
AS SNAPSHOT OF ReadOnlyDB;
GO

Now that our snapshot database (ReadOnlyDB_SS) is created, it is important to know that snapshot databases are always created as ReadOnly databases. The behavior is similar to that of ReadOnly databases. To test this, let us try to insert some data into our snapshot DB.

USE ReadOnlyDB_SS
GO
INSERT INTO tbl_SQLAuth VALUES (1, 'SQLAuth');
GO

Now we will be presented with the same familiar error (3906):

Msg 3906, Level 16, State 1, Line 25

Failed to update database “ReadOnlyDB_SS” because the database is read-only.

Even though we have not explicitly marked the snapshot database as ReadOnly, we are getting this error. I was curious to know what would happen if we tried to mark the snapshot database as ReadOnly explicitly. The error is self-explanatory.

-- Let us set the Snapshot DB as READ_ONLY
USE MASTER
GO
ALTER DATABASE [ReadOnlyDB_SS] SET READ_ONLY
GO

Msg 5093, Level 16, State 1, Line 31

The operation cannot be performed on a database snapshot.

Msg 5069, Level 16, State 1, Line 31

ALTER DATABASE statement failed.

If that was easy, what will happen if we try to mark the snapshot database as READ_WRITE? The error is self-explanatory again:

USE [master]
GO
ALTER DATABASE [ReadOnlyDB_SS] SET  READ_WRITE WITH NO_WAIT
GO

Msg 5093, Level 16, State 1, Line 37

The operation cannot be performed on a database snapshot.

Msg 5069, Level 16, State 1, Line 37

ALTER DATABASE statement failed.

That gives us two learnings: we cannot mark a snapshot database as ReadOnly, nor can we mark a snapshot database as Read_Write.
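We can also see why ALTER DATABASE refuses: sys.databases exposes source_database_id, which is non-NULL only for snapshots, so SQL Server knows the database is a snapshot and blocks the option change. A small verification query:

```sql
-- For a snapshot, source_database_id points to the source database;
-- for a regular database it is NULL
SELECT name,
       source_database_id,
       DB_NAME(source_database_id) AS source_database
FROM sys.databases
WHERE name = 'ReadOnlyDB_SS';
```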

With these learnings in place, let us do the cleanup. Make sure to drop the snapshot database before dropping the main database. So our cleanup script for this blog looks like this:

-- Clean up time
USE MASTER
GO
DROP DATABASE ReadOnlyDB_SS
GO
DROP DATABASE ReadOnlyDB
GO

Reference: Pinal Dave (http://blog.sqlauthority.com)