SQL SERVER – Why my Database is in Restoring Mode After Taking Backup?

Once upon a time there was a SharePoint consultant named Pepper. She wanted to learn about SQL Server administration/DBA work. While she was a master SharePoint consultant at her office in Bethel, she grew curious to learn SQL Server. She wanted to be independent when dealing with SQL Server chores like creating and maintaining databases, taking backups, creating jobs, and the like. Busy as she is with her current SharePoint work, she would sometimes squeeze in some time to poke around SQL Server and learn about it. Our coffee discussions would turn into SQL discussions and how SharePoint uses SQL Server as its backend.

Last week, I got a call from Pepper around noon. Now that does not happen often, and I was sure there was something fishy…

Pepper: Tony, is it normal for a database to be in the RESTORING state after a backup?
I replied in my usual tone of sarcasm.

Me: Depends on what state you wanted it to be in.

Pepper: I am in no mood to play ‘take-my-hint’. I need to get back to testing something. Please tell me what I did wrong.

Me: OK, OK… hmmm, tell me everything you did so I can help.

Pepper: Let me ping you.

Here is what I learnt from her chat messages. Pepper wanted to test a few things which involved taking a database backup and a transaction log backup. So she scrambled around on the net and grabbed the syntax quickly without bothering to read much. She did her test runs and ran the backup.

BACKUP DATABASE [BKP] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\full_backup.bak'

Then, after some more testing from SharePoint, she ran the following to take a T-log backup:

BACKUP LOG [BKP] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\tlog_backup.trn' WITH NORECOVERY

Now this is where I spotted the mistake: the NORECOVERY clause. As per the MSDN documentation:


Backs up the tail of the log and leaves the database in the RESTORING state. NORECOVERY is useful when failing over to a secondary database or when saving the tail of the log before a RESTORE operation.

To perform a best-effort log backup that skips log truncation and then take the database into the RESTORING state atomically, use the NO_TRUNCATE and NORECOVERY options together.

After I explained what that clause does, the obvious question popped up.

Pepper: Tony, all that is nice to know. How do I get out of the RESTORING state?

I gave her the following command, which she happily executed, and voila, the DB was back online.
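The original post embedded the command as an image; the standard way to bring a database out of the RESTORING state (assuming her database name [BKP] from the backup commands above) is:

```sql
-- No backup file is read here; WITH RECOVERY simply runs recovery
-- and opens the database for normal use.
RESTORE DATABASE [BKP] WITH RECOVERY;
```

Note that once recovery runs, no further log backups can be applied to that database.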


Pepper: I owe you a coffee. Starbucks @ 5?

I was thinking in my head: one more midday coffee is going to be spent on the tail-log backup discussion. As expected, she asked questions, and I decided to share them here.

Question #1: Can this happen from the user interface also?

Answer #1: Yes. Here is the option in SQL Server Management Studio (SSMS)

Question #2: But why would someone do that? Why would someone take a production database into the restoring state?

Answer #2: These kinds of log backups are called tail-log backups. Imagine a situation where a DBA has configured log shipping. As a part of a DR drill, the application team wants to move the production workload to the secondary server. Once the DR drill is complete, the old primary should take the primary role again. If we don’t want to reinitialize log shipping via a full backup, here are the steps.

  1. Disable all log shipping jobs (on primary and secondary).
  2. Restore all pending transaction logs which are not yet applied on the secondary, with NORECOVERY.
  3. Take a tail-log backup with NORECOVERY. This would leave the primary database in the restoring state.
  4. Restore this tail-log backup on the secondary database using the WITH RECOVERY clause.
  5. This brings the secondary open for read/write activities and testing.
  6. Once testing completes, take a tail-log backup from the current primary (the initial secondary).
  7. Restore that backup with recovery on the current secondary (the initial primary).
  8. Enable all log shipping jobs which were disabled in the first step.
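Steps 3 and 4 above can be sketched in T-SQL as follows (the database name and file path are placeholders for illustration):

```sql
-- Step 3 (run on the old primary): tail-log backup.
-- NORECOVERY leaves the database in the RESTORING state.
BACKUP LOG [BKP] TO DISK = N'C:\Backups\BKP_tail.trn' WITH NORECOVERY;

-- Step 4 (run on the secondary): apply the tail log and
-- open the database for read/write activity.
RESTORE LOG [BKP] FROM DISK = N'C:\Backups\BKP_tail.trn' WITH RECOVERY;
```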

That was an eye-opener for Pepper, and at least she paid for my coffee.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database Stuck in Restoring State

Here is one of the very common questions I keep getting via email.

“I just restored all the backup files of my database, however, my database is still stuck in restoring state. How do I fix it?”

Well, if you have already restored all the backups, you can execute the following code and it will bring your database from the restoring state to an operational state.
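The code block did not survive in this copy; the standard command for this situation would be (with [YourDatabaseName] as a placeholder):

```sql
-- Run recovery and bring the database online.
-- After this, no further backups can be applied to this database.
RESTORE DATABASE [YourDatabaseName] WITH RECOVERY;
```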


If, for any reason, the above query returns an error about restoring log files or any other file and you do not have that file at hand, you can run the following command.

Remember, the above command will roll forward your database and you will not be able to restore any other backups after that.


If you are not sure what to do, leave a comment and I will help you out with your situation.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Practical Use of Backup Encryption

Backup is extremely important for any DBA. Think of any disaster, and backup will come to the rescue of users in adverse situations. Similarly, it is very critical that we keep our backups safe as well. If your backup falls into the hands of bad people, it is quite possible that it will be misused and become a serious data integrity issue. Well, in this blog post we will see a practical scenario showing how we can use backup encryption to improve the security of the backup.

Feature description

Database Backup Encryption is a brand-new and long-awaited feature that is now available in SQL Server 2014. You can create an encrypted backup file by specifying the encryption algorithm and the encryptor (either a certificate or an asymmetric key).

The ability to protect a backup file with a password has existed for many years. If you have used SQL Server for a long time, you might remember the WITH PASSWORD option of the BACKUP command. The option prevented unauthorized access to the backup file.

However, this approach did not provide reliable protection. In that regard, Greg Robidoux noted on MSSQLTIPS: “Although this does add a level of security, if someone really wants to crack the passwords they will find a way, so look for additional ways to secure your data.”

To protect a backup file, SQL Server 2008 introduced the transparent data encryption (TDE) feature. Thus, a database had to be transparently encrypted before backup. Therefore, starting with SQL Server 2012, the PASSWORD and MEDIAPASSWORD parameters are no longer used when creating backups. Even so, data encryption and backup file encryption are two different scenarios.

In case a database is stored locally, there is no need to encrypt it before backup. Fortunately, in SQL Server 2014 these are two independent processes. Along with data encryption, it is possible to encrypt a backup file based on a certificate or an asymmetric key. Supported encryption algorithms are:

  • AES 128
  • AES 192
  • AES 256
  • Triple DES
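As a sketch, an encrypted backup using one of these algorithms looks like this in SQL Server 2014 T-SQL (the certificate name [BackupCert] and the file path are illustrative assumptions; the certificate must already exist in master):

```sql
-- Encrypt the backup with AES-256 using a previously created server certificate.
BACKUP DATABASE [AdventureWorks2014]
TO DISK = N'C:\Backups\AdventureWorks2014_encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = [BackupCert]);
```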

Practical use

To illustrate the above, I will create an encrypted backup of the AdventureWorks database. Also, you can back up directly to Azure. If needed, you may restore the encrypted backup file on Azure.

I will use dbForge Studio for SQL Server to create the encrypted backup file.

To protect the backup file we need to create an encryptor: either a Certificate or Asymmetric Key. Then, we need to pass this encryptor to the target SQL Server to restore the backup. For this, the encryptor must be exported from the source SQL Server and imported to the target SQL Server. There are no problems with the certificate in this regard. It is more complicated with asymmetric keys.

Taking into account that the BACKUP ASYMMETRIC KEY command is not available, and that we cannot simply create a duplicate of an asymmetric key (unlike a symmetric key), the only approach is to create the asymmetric key outside of SQL Server. Then we can use the sn.exe utility to transfer it into SQL Server (CREATE ASYMMETRIC KEY keyname FROM FILE = ‘filename.snk‘). After that we can use this asymmetric key to encrypt the backup file on the source instance. Further, we need to use the same *.snk file to create the asymmetric key on the target instance (and restore the backup file).

In our example we will not use asymmetric keys; we will use a certificate. Moreover, the certificate (behind the scenes) is a pair of public/private keys.

Let’s create the server certificate and use it to encrypt the backup file.
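The original script was embedded as an image; a minimal sketch of it (names and the password are placeholders) would be:

```sql
USE master;
GO
-- A database master key must exist; it will protect the certificate's private key.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';
GO
-- Create the certificate that will encrypt the backups.
-- Note: no ENCRYPTION BY PASSWORD clause, so the private key
-- is protected by the database master key.
CREATE CERTIFICATE BackupCert
    WITH SUBJECT = 'Backup encryption certificate';
GO
```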

The certificate will be protected by the database master key, because we didn’t specify the ENCRYPTION BY PASSWORD clause.

This is exactly what we need: only certificates protected with the database master key can be used for encryption purposes. Otherwise, if we, for instance, protect the certificate with a password (ENCRYPTION BY PASSWORD = ‘strongpassword‘), the following error appears while attempting to encrypt the backup file:

“Cannot use certificate ‘CertName’, because its private key is not present or it is not protected by the database master key.”

Encrypted backups (along with regular backups) can be created either locally on the hard drive or in Azure Storage.

Instead of writing tons of SQL code, I will use the convenient dbForge Studio for SQL Server Back Up wizard. The wizard allows you to create the database backup in several clicks.

Step 1: Set up the DB connection and the backup file location.

Step 2: Set up the media set.

Step 3: Select the encryption algorithm and certificate.

In case you don’t want to pay extra attention to transferring the backup file to Windows Azure, you can back up directly to Azure.

After the script execution, the blob (with the backup) appears in the required container.

In case you have already created a backup with the same name in the same container, you can get the following error: “There is currently a lease on the blob and no lease ID was specified in the request.”

Further, you can restore the backup file on Windows Azure.


Obviously, it is good practice to encrypt a backup file while transferring it. This, for instance, helps avoid data leaks while transferring backups from one data center to another.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Backup and Restore Events Report

A DBA wears multiple hats and, in fact, does more than meets the eye. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world: they keep the business working almost 365 days, 24×7, with scripting, automation, and round-the-clock maintenance, and their worth shows on the day a system or HDD crashes while you have an important delivery to make. On that day, the backup tasks and maintenance jobs come in handy and no longer seem as trivial as many consider them to be. So important questions like “When was the last backup taken?”, “How much time did the last backup take?”, and “What type of backup was taken last?” are tricky questions, and this report lands answers to them in a jiffy.

The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report utilizes that information and provides details about the size, the time taken, and the file locations for those operations.

Here is how this report can be launched.


Once we launch this report, we can see 4 major sections shown as listed below.

  • Average Time Taken For Backup Operations
  • Successful Backup Operations
  • Backup Operation Errors
  • Successful Restore Operations

Let us look at each section next.

Average Time Taken For Backup Operations

Information shown in the “Average Time Taken For Backup Operations” section is taken from the backupset table in the msdb database. Here is the query and the expanded version of that particular section.

USE msdb;
SELECT  1 AS l1
,       1 AS l2
,       1 AS l3
,       t1.TYPE AS [type]
,       (AVG(DATEDIFF(ss, backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration
FROM backupset t1
INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name)
WHERE t3.name = N'AdventureWorks2014'
GROUP BY t1.TYPE;

On my small database the time taken for differential backup was less than a minute, hence the value of zero is displayed. This is an important piece of backup operation which might help you in planning maintenance windows.

Successful Backup Operations

Here is the expanded version of this section.


This information is derived from various backup tracking tables in the msdb database. Here is a simplified version of the query, which can be used separately as well.

SELECT  t1.name AS database_name
,       t3.backup_start_date
,       t3.backup_finish_date
,       t3.backup_size
,       t3.name AS backup_name
,       t6.physical_device_name
,       t6.device_type
FROM sys.databases t1
INNER JOIN backupset t3 ON (t3.database_name = t1.name)
LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id)
LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id)
WHERE (t1.name = N'AdventureWorks2014')
ORDER BY backup_start_date DESC,t3.backup_set_id,t6.physical_device_name;

The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB, or GB. I expanded the first row by clicking the (+) in the “Device type” column. That showed me the path of the physical backup file.

Personally looking at this section, the Backup Size, Device Type and Backup Name are critical and are worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it.

Backup Operation Errors

This section of the report gets data from the default trace. You might wonder how. One of the events tracked by the default trace is “ErrorLog”. This means that whatever message is written to the error log gets written to the default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that and shows the information. We can read the message below under this section, which confirms the above logic.

No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled.

Successful Restore Operations

This section may not be very useful on a production server (how often do you restore a database there?) but might be useful in development and log shipping secondary environments, where we might be interested in seeing the restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I restored the same backups which were taken to populate the earlier sections.

Here is the simplified version of the query used to populate this output.

USE msdb;
SELECT  t1.restore_date
,       t1.destination_database_name
,       t1.restore_type
,       t2.destination_phys_name
,       t3.backup_start_date
,       t3.name AS backup_name
FROM restorehistory t1
LEFT OUTER JOIN restorefile t2 ON ( t1.restore_history_id = t2.restore_history_id)
LEFT OUTER JOIN backupset t3 ON ( t1.backup_set_id = t3.backup_set_id)
WHERE t1.destination_database_name = N'AdventureWorks2014'
ORDER BY restore_date DESC,  t1.restore_history_id,t2.destination_phys_name;

Have you ever looked at the backup strategy of your key databases? Is it in sync, and is there scope for improvement? Then this is the report to analyze after a week or month of maintenance plans running on your database. Do chime in with the strategies you are using in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backup to Azure Blob using SQL Server 2014 Management Studio

Recently, while working on a demo about backups, I realized that in Management Studio there is now a new option in the backup screen called “URL”. This feature is called “backup to URL”.

I checked an earlier version of SSMS and found that this was not there in SQL 2012 Management Studio; there, the options were “Disk” and “Tape”.

As soon as URL is selected, it asks for various options about the backup destination in Azure.

Let’s go step by step and take a database backup to Microsoft Azure Blob storage.

To use this feature, all you need is a Microsoft Azure account. If you do not have one, visit the Windows Azure 3-month free trial. After logging in to the portal, I created a storage account called sqlauthority.

Under sqlauthority, I created a container called “backup”. This container was created as a “Public Blob”.

If we look closely at the URL, it is http://&lt;StorageAccount&gt;.blob.core.windows.net/&lt;ContainerName&gt;. Going back to SSMS, this is what we have to provide. But wait: how would SQL Server connect to Azure Blob storage? Well, that’s where a credential comes into the picture. Note that authentication to the storage account is required for SQL Server backup and restore even if we choose to create a public container. There are multiple ways to connect to the storage: publishing profile, certificate, or SQL credential. To use a SQL credential, we need to get the secret key from the portal using the screen below. The screen appears once we click on “Manage Access Key” in the third image of this blog.

Once we have the secret key, we can go to SSMS, right-click on “Credentials” under “Security”, and choose “New Credential”.
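The same credential can also be created with T-SQL; the identity is the storage account name, and the secret below is a placeholder for the access key copied from the portal:

```sql
-- IDENTITY = Azure storage account name; SECRET = its access key.
CREATE CREDENTIAL [backup-cred]
WITH IDENTITY = 'sqlauthority',
     SECRET = '<storage account access key>';
```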

Once this is done, we are all set. Go back to the UI and choose “backup-cred” in the drop-down. Here is the final screen.

Once the backup is complete, we should see the backup on the portal. Notice URL http://sqlauthority.blob.core.windows.net/backup/Sales_backup_2014_06_04_101547.bak

Once the backup is available, we can also restore it. We can use Management Studio as well as T-SQL to restore a backup taken to a URL. While choosing the device, we need to use the “URL” option, browse to the storage container, and choose the correct file.
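In T-SQL, the backup and restore against the URL from this example would look roughly like this (the database name Sales and the blob name come from the screenshots; treat them as illustrative):

```sql
-- Back up to the Azure blob container using the credential created earlier.
BACKUP DATABASE [Sales]
TO URL = N'http://sqlauthority.blob.core.windows.net/backup/Sales_backup_2014_06_04_101547.bak'
WITH CREDENTIAL = 'backup-cred';

-- Restore from the same URL.
RESTORE DATABASE [Sales]
FROM URL = N'http://sqlauthority.blob.core.windows.net/backup/Sales_backup_2014_06_04_101547.bak'
WITH CREDENTIAL = 'backup-cred';
```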

Go ahead and play with this feature. Let me know what you think of it.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Database – Taking Backup of MySQL and MongoDB on Cloud

Here is the story of my latest weekend experiment. Those who know me are familiar with my habit of trying something new every weekend and posting my experience here. Earlier this week, I published my course on MySQL Backup and Recovery. After doing this course, I realized that everything I had covered was related to native MySQL backup, but there should be some solution out there which caters to the larger needs of MySQL backup. Additionally, I found that taking backups with MySQL is hit-and-miss, and there is no systematic reporting there. I decided to do some research of my own on the internet. I tried out various tools, software packages, and quite a few third-party plug-ins.

Out of all my experiments, one product indeed got my attention: BitCan. It is an excellent product. Here are a few things I found out about it:

  • User friendly interface and no need to write scripts or cron jobs
  • There is no need to install any plugin
  • Cloud based solution
  • Storage is very elastic and grows/shrinks with the needs
  • SSH secure communication layer integrated
  • Backup Monitoring Dashboard

I decided to try out the solution on my own, and I found it very effective and efficient. I quickly signed up for a FREE account and did the following:

  • Connect my public database with the BitCan
  • Schedule backup at interval
  • Create an immediate backup
  • Download the backup to install on the local machine
  • Monitor backup health in a single dashboard

Well, I already have a MySQL database which is available on the internet and decided to take backup of the same database. Here are a few images from my experiment.

Let us start how we can take backup with the help of BitCan.

First register yourself for FREE at GoBitCan.com

Step 1: Create Account

Once you log in you will see the following screen. It allows a 30-day trial. Now click on the CREATE BACKUP link.

On the same screen there is a video as well. Please refer to the video for additional information. I watched this five-minute video and learned how to get started quickly.

Step 2: New Backup

Here you select your backup name and opt for the kind of backup you want. You can take a backup of a file, a directory, MongoDB, or MySQL. I wanted to back up MySQL, so I selected MySQL and clicked on Save Backup.

Step 3: Configuration

Here, provide all the necessary details for your backup. Remember, I am selecting the public database as my MySQL is accessible via the internet.

You can also see that I have selected a backup time of 12:00 AM for Sunday, Tuesday, Thursday and Saturday. Click on Save and continue to the next step.

Step 4: All Done!

That’s it. We are done. The backup will automatically happen at our scheduled time. If we want, we can execute the backup manually and see various information related to it.


The dashboard is a very interesting element of BitCan. In my life I have never seen a backup dashboard which is so comprehensive and detailed. I loved it. Maybe in a future blog post we will discuss it in detail.

Here is another view of the dashboard.

Well, that’s it. In future blog posts we will discuss various other aspects of the BitCan.

If you care about your database backups, I suggest you register for a FREE account with BitCan today.

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – Video Course – MySQL Backup and Recovery Fundamentals

Data is one of the most crucial assets of any organization, and keeping data safe is the biggest challenge for any DBA. This is true for any organization. Think about the scenario where you have an extremely important database and you suddenly, accidentally delete the most important table from it. I am sure this is a very difficult time. In times like this, people often get stressed or make a second mistake. In my career of 10 years I have made this mistake often, and I have often got stressed out due to the unavailability of a database backup.

In the SQL Server field, we have plenty of help on this subject, but in the MySQL domain there is not enough. For that reason I have built this MySQL course on backup and recovery.

Course Outline

Data is very important to any application and business. It is very important that every business plan for data safety. Database backup strategies are often discussed only after a disaster has already happened. In this introductory course we will explore a few of the basic backup strategies every business should implement for data safety. We will explore how we can recover our server quickly after any unfriendly incident to our MySQL database. Click to View Course

Here are various important aspects which we discuss in this course.

  • How to take a backup of a single database?
  • How to take a backup of multiple databases?
  • How to back up various database objects?
  • How to restore a single database?
  • How to restore multiple databases?
  • How to use MySQL Workbench for backup and restore?
  • How to restore any database to a point in time?
  • What is the best time to take backups?
  • How to copy a database from one server to another server?

All of the above concepts and many more subjects are covered in the MySQL Backup and Recovery Fundamentals course. It is available on Pluralsight.


As learning about backup and recovery can be rather dry, I decided to create two fictitious characters and present the entire course through their conversation. The story is about Mike and Rahul. Mike is a senior database administrator in the USA and Rahul is an intern in India. Rahul aspires to become a senior database administrator, and this is a story about his challenges and how he overcomes them. I had a great time building this course, and I have received very good feedback on it. I encourage all of you to try the MySQL Backup and Recovery Fundamentals course with this innovative approach. It would be very valuable to know your feedback.

You will need a valid Pluralsight subscription to watch this course.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Recover SQL Database Data Deleted by Accident

In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along.

With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios, where you can still successfully recover data.

If you have only a full database backup

This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss.

For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday.

If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.

If you have a full database backup followed by differential database backups

Let’s say the situation is the same as in the example above, only you create a differential database backup every night.

Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.

If you have a full database backup and a chain of transaction log backups

This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time.

To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records.

This method requires restoring the full database backup first. If you have any differential backups created after the last full database backup, restore the most recent one. Then, restore the transaction log backups one by one, in the order they were created, starting with the first one created after the restored differential database backup.

Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them and run the script against the original database.

Although this method is reliable, it is time-consuming and requires a lot of space on disk.

How to easily recover deleted records?

The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.

To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted.

Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create undo script for them.

How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more likely that the records will be overwritten. The more transactions occur after the deletion, the greater the chance that the records will be overwritten and permanently lost.

Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database.


I can delete these records in SQL Server Management Studio, or execute a script such as

DELETE FROM Person.EmailAddress
WHERE BusinessEntityID BETWEEN 70 AND 80

Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab.


In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually.

Specify the authentication type and select the database in the Database drop-down list.


In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option.


The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add.

If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available.

If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically.


Click Add to add transaction log backups. It is best if you have a full transaction log chain, as explained above.

The next step for this option is to specify the time range.


Selecting a small time range for the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don’t want that. If needed, you can check the script generated and manually remove such records.

After that, for all data sources options, the next step is to select the tables. Be careful here, if you deleted some data from other tables on purpose, and don’t want to recover them, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too.


The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one.


The recovery process is completed and 11 records are found and scripted, as expected.


To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script.

The insert into statements look like:

INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate )
VALUES( 70, 70,
N'david2@adventure-works.com' COLLATE SQL_Latin1_General_CP1_CI_AS,
NEWID(),
'20030109 00:00:00.000' );

To execute the script, click Execute in the menu.


If you want to check whether the records are really back, execute

SELECT * FROM Person.EmailAddress
WHERE BusinessEntityID BETWEEN 70 AND 80

As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model.

Besides recovering SQL database records lost to a DELETE statement, ApexSQL Recover can help when the records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance.

You can find more information about how to recover lost SQL database data and repair a SQL Server database at the ApexSQL Solution center. There are solutions for various situations in which data needs to be recovered.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is an essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data.

In this episode of the Notes from the Field series I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts of SSIS: Catalog Messages. There is plenty of interesting and useful information captured in the SSIS catalog, and we will learn together how to explore it.

The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored.

This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to:

Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory.

The query:

DECLARE @MessageSourceType INT = 60
DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed '
DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was '
DECLARE @CacheUsedString VARCHAR(100) = 'The cache used '
DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%'
DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%'
DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%'

SELECT operation_id,
       LookupRowsCount,
       LookupProcessingTime,
       CASE
         WHEN LookupProcessingTime = 0 THEN 0
         ELSE LookupRowsCount / LookupProcessingTime
       END AS LookupRowsPerSecond,
       LookupBytesUsed,
       CASE
         WHEN LookupRowsCount = 0 THEN 0
         ELSE CONVERT(FLOAT, LookupBytesUsed) / LookupRowsCount
       END AS LookupBytesPerRow
FROM
(
  -- Parse each numeric value out of the message text. The number starts
  -- right after the search token (LEN ignores the token's trailing space,
  -- hence the + 1) and ends at the next space.
  SELECT operation_id,
         CONVERT(BIGINT, SUBSTRING([message],
           PATINDEX(@StartOfIDSearchString, [message]) + LEN(@StartOfIDString) + 1,
           CHARINDEX(' ', [message], PATINDEX(@StartOfIDSearchString, [message]) + LEN(@StartOfIDString) + 1)
             - (PATINDEX(@StartOfIDSearchString, [message]) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount,
         CONVERT(FLOAT, SUBSTRING([message],
           PATINDEX(@ProcessingTimeSearchString, [message]) + LEN(@ProcessingTimeString) + 1,
           CHARINDEX(' ', [message], PATINDEX(@ProcessingTimeSearchString, [message]) + LEN(@ProcessingTimeString) + 1)
             - (PATINDEX(@ProcessingTimeSearchString, [message]) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime,
         CONVERT(BIGINT, SUBSTRING([message],
           PATINDEX(@CacheUsedSearchString, [message]) + LEN(@CacheUsedString) + 1,
           CHARINDEX(' ', [message], PATINDEX(@CacheUsedSearchString, [message]) + LEN(@CacheUsedString) + 1)
             - (PATINDEX(@CacheUsedSearchString, [message]) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed
  FROM [catalog].[operation_messages]
  WHERE message_source_type = @MessageSourceType
    AND [message] LIKE @StartOfIDSearchString
    AND [message] LIKE @ProcessingTimeSearchString
    AND [message] LIKE @CacheUsedSearchString
) AS parsed
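
To make the PATINDEX arithmetic concrete, here is the same expression applied to the sample message quoted earlier, as a standalone sketch. The + 1 compensates for LEN trimming the search token's trailing space:

```sql
DECLARE @msg NVARCHAR(4000) = N'Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory.';
DECLARE @token VARCHAR(100) = 'The Lookup processed ';

-- Position of the first digit: start of the token, plus its length.
-- LEN ignores the trailing space in @token, hence the + 1.
DECLARE @start INT = PATINDEX('%' + @token + '%', @msg) + LEN(@token) + 1;

-- Extract up to the next space, which ends the number
SELECT SUBSTRING(@msg, @start, CHARINDEX(' ', @msg, @start) - @start) AS RowsProcessed; -- 4485
```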

Note that you have to set some parameter values:

  • @MessageSourceType [int] – represents the message source type value from the following results:
    Value     Description
    10           Entry APIs, such as T-SQL and CLR Stored procedures
    20           External process used to run package (ISServerExec.exe)
    30           Package-level objects
    40           Control Flow tasks
    50           Control Flow containers
    60           Data Flow task
    70           Custom execution message
    Note: Taken from Reza Rad’s (excellent!) helper.MessageSourceType table found here.
  • @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse.
    In this case, the string ‘The Lookup processed ‘ identifies all the Lookup Transformation messages I desire to parse.
  • @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value.
    For this execution, I use the string ‘The processing time was ‘.
  • @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache Used value. It returns the memory used, in bytes.
    For this execution, I use the string ‘The cache used ‘.
  • The other parameters are built from variations of the parameters listed above.
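
Before picking a value for @MessageSourceType, it can help to see which source types your catalog actually contains. A quick sketch, assuming the default SSISDB database name:

```sql
-- Count logged messages per source type (60 = Data Flow task)
SELECT message_source_type, COUNT(*) AS message_count
FROM SSISDB.[catalog].[operation_messages]
GROUP BY message_source_type
ORDER BY message_source_type;
```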

The query parses the values out as text. The string values are then converted to numeric values for the ratio calculations: LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE expressions check for denominators that equal 0.
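
As a side note, the division guards can be written more compactly with NULLIF, if returning NULL (rather than 0) for a zero denominator is acceptable. A minimal sketch with hypothetical values:

```sql
DECLARE @LookupRowsCount BIGINT = 4485;
DECLARE @LookupProcessingTime FLOAT = 0;  -- a zero denominator

-- NULLIF turns the 0 denominator into NULL, so the division
-- yields NULL instead of a divide-by-zero error
SELECT @LookupRowsCount / NULLIF(@LookupProcessingTime, 0) AS LookupRowsPerSecond;
```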

Here are the results in an SSMS grid:

This is not the only way to retrieve this information. And much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database File Names and Extensions – Notes from the Field #025

[Notes from Pinal]: People believe Shakespeare’s birth date is April 26. “What’s in a name?” – I often hear this statement, which famously quotes Shakespeare. I think we developers believe in this statement, as most of our variables are named either BAR or FOO. However, this is extremely inefficient and inconvenient. There are many reasons we should give meaningful names to our objects, variables, and other elements. There are times when an incorrect naming convention can get users in trouble or lead organizations to disaster. My friend Tim shares this humorous and interesting story. Read on.

Linchpin People are database coaches and wellness experts for a data driven world. In this 25th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, the importance of database file names and extensions.

Database file naming standards are important. With Microsoft SQL Server, the default naming standard is not a requirement, so administrators can change from the defaults of .mdf, .ldf and .ndf. Recently I encountered a SQL Server instance that was having performance issues. The customer was reporting very slow response times and couldn’t figure out what the issue was.

Reluctantly, the administrator reached out to the database team for a professional opinion. For the database team this was the first they had learned of this new server, so we quickly performed our standard checks. Two things stood out: high CPU utilization (the executable consuming the most CPU was the antivirus software) and high disk IO. We stopped the “on access” scan, and performance of the SQL Server improved while disk IO dropped very low.

We reached out to our antivirus team and were assured that our standard policies were applied to this server, which include exclusions for our standard database file names. We continued to research deviations from our standard server build on this instance, since we did not build it, and that is when we found that the admin/vendor/customer (non-DBA) had chosen a different naming standard for the files. The extensions they chose were .001 for data, .002 for logs, and .003 for secondary data (ndf) files.

Because the antivirus software had no proper exclusions for those files, the customer had effectively created a denial of service attack against themselves. The on-access scan created such a load that it prevented anything else from running.

This is not the first time I have encountered this type of issue and blogged about this in 2012. http://timradney.com/2012/06/18/file-extensions-for-sql-server-database-files/

If you decide that, for whatever reason, you would like to deviate from the default naming standard or your company’s naming standard for database files, please work with the people who manage your antivirus software to make sure they create exclusions for your new naming standard.
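
A quick way to hand the antivirus team the exact file names and extensions in use is to list the physical files the instance knows about. A sketch using the sys.master_files catalog view, which covers every database on the instance:

```sql
-- List every database file on the instance with its physical path,
-- so antivirus exclusions can be matched to the actual extensions in use
SELECT DB_NAME(database_id) AS database_name,
       type_desc,        -- ROWS or LOG
       physical_name     -- full path, including the extension
FROM sys.master_files
ORDER BY database_name, type_desc;
```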

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)