SQL SERVER – How to Recover SQL Database Data Deleted by Accident

In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install it, and play along.

With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios, where you can still successfully recover data.

If you have only a full database backup

This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss.

For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover the SQL database records that existed in the table on Sunday. Any records inserted into the table on Monday or Tuesday will be lost forever. The same goes for records modified in this period: this method will not bring back the modifications, only the old versions of the records that existed on Sunday.

If you restore this full database backup over the live database, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. Instead, restore the backup as a new database, compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.
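As a rough illustration, here is a minimal sketch of restoring the Sunday full backup under a different name so it can be compared with the live database; the backup path and logical file names are assumptions and will differ in your environment:

RESTORE DATABASE AdventureWorks_Sunday
FROM DISK = N'D:\Backup\AdventureWorks_Full_Sunday.bak' -- assumed path
WITH MOVE N'AdventureWorks_Data' TO N'D:\Data\AdventureWorks_Sunday.mdf', -- assumed logical names
MOVE N'AdventureWorks_Log' TO N'D:\Data\AdventureWorks_Sunday.ldf',
RECOVERY

Once both databases are online side by side, a data-comparison tool or a simple EXCEPT query can produce the synchronization script.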

If you have a full database backup followed by differential database backups

Let’s say the situation is the same as in the example above, only you create a differential database backup every night.

Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.
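A minimal sketch of that sequence, with backup paths and logical file names assumed: the full backup is restored WITH NORECOVERY so the Tuesday differential can be applied on top of it.

RESTORE DATABASE AdventureWorks_Restored
FROM DISK = N'D:\Backup\AdventureWorks_Full_Sunday.bak' -- assumed path
WITH MOVE N'AdventureWorks_Data' TO N'D:\Data\AdventureWorks_Restored.mdf',
MOVE N'AdventureWorks_Log' TO N'D:\Data\AdventureWorks_Restored.ldf',
NORECOVERY -- leave the database restoring so the differential can be applied
GO
RESTORE DATABASE AdventureWorks_Restored
FROM DISK = N'D:\Backup\AdventureWorks_Diff_Tuesday.bak' -- assumed path
WITH RECOVERY -- bring the database online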

If you have a full database backup and a chain of transaction log backups

This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time.

To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records.

This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then, restore the transaction log backups one by one, in the order they were created, starting with the first one created after the restored differential database backup.
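A hedged sketch of the whole chain, again restoring under a new name; the paths and the deletion time are assumptions:

RESTORE DATABASE AdventureWorks_Restored
FROM DISK = N'D:\Backup\AdventureWorks_Full_Sunday.bak' -- assumed path
WITH MOVE N'AdventureWorks_Data' TO N'D:\Data\AdventureWorks_Restored.mdf',
MOVE N'AdventureWorks_Log' TO N'D:\Data\AdventureWorks_Restored.ldf',
NORECOVERY
GO
RESTORE DATABASE AdventureWorks_Restored
FROM DISK = N'D:\Backup\AdventureWorks_Diff_Tuesday.bak' -- assumed path
WITH NORECOVERY
GO
-- repeat for every log backup in the chain, all WITH NORECOVERY except the last
RESTORE LOG AdventureWorks_Restored
FROM DISK = N'D:\Backup\AdventureWorks_Log_Wednesday_0900.trn' -- assumed path
WITH STOPAT = N'2014-06-18T08:59:00', RECOVERY -- a moment before the delete (assumed time)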

Now, the table will be in the state it was in before the records were deleted. You have to identify the deleted records, script them, and run the script against the original database.

Although this method is reliable, it is time-consuming and requires a lot of disk space.

How to easily recover deleted records?

The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.

To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted.

Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are no longer shown as existing, but ApexSQL Recover can read them and create an undo script for them.
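If you want to see this for yourself, the undocumented DBCC IND and DBCC PAGE commands can dump a data page, where recently deleted rows may still be visible. This is only a sketch; the file and page IDs are assumptions you would replace with values actually returned by DBCC IND:

DBCC TRACEON (3604) -- send DBCC PAGE output to the client
DBCC IND ('AdventureWorks', 'Person.EmailAddress', 1) -- list the table's pages; note a PagePID with PageType = 1
DBCC PAGE ('AdventureWorks', 1, 884, 3) -- file 1, page 884 (assumed), dump style 3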

How long will deleted records stay in the MDF file? It depends on many factors; as time passes, it becomes more and more likely that the records will be overwritten. The more transactions that occur after the deletion, the greater the chance the records will be overwritten and permanently lost.

Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database.


I can delete these records in SQL Server Management Studio, or execute a script such as

DELETE FROM Person.EmailAddress
WHERE BusinessEntityID BETWEEN 70 AND 80

Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab.


In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually.

Specify the authentication type and select the database in the Database drop-down list.


In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option.


The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add.

If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available.

If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically.


Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above.

The next step for this option is to specify the time range.


Selecting a narrow time range around the time of deletion will create the recovery script for just the accidentally deleted records. A wide time range might also script records that were deleted on purpose, and you don’t want that. If needed, you can review the generated script and manually remove such records.

After that, for all data-source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don’t want to recover it, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too.


The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one.


The recovery process completes, and 11 records are found and scripted, as expected.


To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script.

The INSERT INTO statements look like this:

INSERT INTO Person.EmailAddress( BusinessEntityID,
EmailAddressID,
EmailAddress,
rowguid,
ModifiedDate)
VALUES( 70,
70,
N'david2@adventure-works.com' COLLATE SQL_Latin1_General_CP1_CI_AS,
'd62c5b4e-c91f-403f-b630-7b7e0fda70ce',
'20030109 00:00:00.000' );

To execute the script, click Execute in the menu.


If you want to check whether the records are really back, execute

SELECT * FROM Person.EmailAddress
WHERE BusinessEntityID BETWEEN 70 AND 80

As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model.

Besides recovering SQL database records lost to a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE TABLE statement, and it can also repair a corrupted MDF file that cannot be attached to a SQL Server instance.

You can find more information about how to recover lost SQL database data and repair a SQL Server database in the ApexSQL Solution center. There are solutions for various situations in which data needs to be recovered.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the key components of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool can also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data.

In this episode of the Notes from the Field series, I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts of SSIS: Catalog Messages. There is plenty of interesting and useful information captured in the SSIS catalog, and we will learn together how to explore it.


The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored.

This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to:

Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory.

The query:

USE SSISDB
GO
DECLARE @MessageSourceType INT = 60
DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed '
DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was '
DECLARE @CacheUsedString VARCHAR(100) = 'The cache used '
DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%'
DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%'
DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%'
SELECT operation_id
, SUBSTRING(MESSAGE,
(
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1))
-
(
PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount
, SUBSTRING(MESSAGE,
(
PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1))
-
(
PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime
, CASE WHEN (CONVERT(numeric(18,3),SUBSTRING(MESSAGE,
(
PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1))
-
(
PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0
THEN 0
ELSE CONVERT(bigint,SUBSTRING(MESSAGE,
(
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1))
-
(
PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
/
CONVERT(numeric(18,3),SUBSTRING(MESSAGE,
(
PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1))
-
(
PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))
END AS LookupRowsPerSecond
, SUBSTRING(MESSAGE,
(
PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1))
-
(
PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed
,CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE,
(
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1))
-
(
PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0
THEN 0
ELSE CONVERT(bigint,SUBSTRING(MESSAGE,
(
PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1))
-
(
PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))))
/
CONVERT(bigint,SUBSTRING(MESSAGE,
(
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1),
((
CHARINDEX(' ',
MESSAGE,
PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1))
-
(
PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
END AS LookupBytesPerRow
FROM [catalog].[operation_messages]
WHERE message_source_type = @MessageSourceType
AND MESSAGE LIKE @StartOfIDSearchString
GO

Note that you have to set some parameter values:

  • @MessageSourceType [int] – represents the message source type value from the following list:
    Value  Description
    10     Entry APIs, such as T-SQL and CLR Stored procedures
    20     External process used to run package (ISServerExec.exe)
    30     Package-level objects
    40     Control Flow tasks
    50     Control Flow containers
    60     Data Flow task
    70     Custom execution message
    Note: Taken from Reza Rad’s (excellent!) helper.MessageSourceType table found here.
  • @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse.
    In this case, the string ‘The Lookup processed ‘ identifies all the Lookup Transformation messages I desire to parse.
  • @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value.
    For this execution, I use the string ‘The processing time was ‘.
  • @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache Used value. It returns the memory used, in bytes.
    For this execution, I use the string ‘The cache used ‘.
  • The other parameters are built from variations of the parameters listed above.

The query parses the values into text. The string values are then converted to numeric values for the two ratio calculations, LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE expressions check for denominators that equal 0.
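To make the pattern easier to follow, here is the same extraction logic reduced to a single value. This is just a sketch; note that LEN() ignores the trailing space in the search string, which is why the original query adds 1 to reach the start of the number:

USE SSISDB
GO
DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed '
SELECT operation_id,
SUBSTRING(MESSAGE,
PATINDEX('%' + @StartOfIDString + '%', MESSAGE) + LEN(@StartOfIDString) + 1,
CHARINDEX(' ', MESSAGE, PATINDEX('%' + @StartOfIDString + '%', MESSAGE) + LEN(@StartOfIDString) + 1)
- (PATINDEX('%' + @StartOfIDString + '%', MESSAGE) + LEN(@StartOfIDString) + 1)) AS LookupRowsCount
FROM [catalog].[operation_messages]
WHERE message_source_type = 60 -- Data Flow task
AND MESSAGE LIKE '%' + @StartOfIDString + '%'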

Here are the results in an SSMS grid:

This is not the only way to retrieve this information, and much of the code lends itself to conversion into functions. If there is interest, I will share the functions in an upcoming post.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database File Names and Extensions – Notes from the Field #025

[Notes from Pinal]: People believe Shakespeare’s birth date is April 26. “What’s in a name?” – I often hear this statement, a famous quote from Shakespeare. I think we developers believe in it, as most of our variables are named either BAR or FOO. However, this is extremely inefficient and inconvenient. There are many reasons we should give meaningful names to our objects, variables, and other elements. There are times when an incorrect naming convention can get a user in trouble or lead an organization to disaster. My friend Tim shares this humorous and interesting story. Read on.

Linchpin People are database coaches and wellness experts for a data driven world. In this 25th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, database file names and extensions.


Database file naming standards are important. With Microsoft SQL Server, the default naming standard is not a requirement, so administrators can change from the default .mdf, .ldf, and .ndf. Recently I encountered a SQL Server instance that was having performance issues. The customer was reporting very slow response times and couldn’t figure out what the issue was.

Reluctantly, the administrator reached out to the database team to get a professional opinion. For the database team, this was the first we had learned of this new server, so we quickly performed our standard checks. Two things stood out: high CPU utilization, with the antivirus executable consuming the most CPU, and high disk IO. We stopped the “on access” scan, and performance of the SQL Server improved while disk IO dropped very low.

We reached out to our antivirus team and were assured that our standard policies, which include exclusions for our standard database file extensions, were applied to this server. We continued to research deviations from our standard server build against this instance, since we did not build it, and that is when we found that the admin/vendor/customer (non-DBA) had chosen a different naming standard for the files. The extensions they chose were .001 for data, .002 for logs, and .003 for secondary data files.

Because the antivirus software had no exclusions for those files, the customer effectively created a denial-of-service attack against themselves. The on-access scan created such a load that it prevented anything else from running.

This is not the first time I have encountered this type of issue and blogged about this in 2012. http://timradney.com/2012/06/18/file-extensions-for-sql-server-database-files/

If you decide that, for whatever reason, you would like to deviate from the default naming standard or your company’s naming standard for database files, please work with the people who manage your antivirus software to make sure they create exclusions for your new naming standard.
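A quick way to audit an instance for nonstandard file extensions is to query sys.master_files. This is only a sketch, and the extension list is an assumption to adjust to your own standard:

SELECT DB_NAME(database_id) AS DatabaseName,
physical_name
FROM sys.master_files
WHERE RIGHT(physical_name, 4) NOT IN ('.mdf', '.ndf', '.ldf')

Anything this returns is a candidate for an antivirus exclusion review.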

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tools for Proactive DBAs – Central Management Server – Notes from the Field #024

[Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject, and very little information about it is available online, so not everyone knows it well. This is why, when a very common question related to AlwaysOn comes up, people get confused.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their careers, related to AlwaysOn Availability Groups. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


After completing several AlwaysOn Availability Group implementations, two questions come up frequently: When did my availability group fail over? Where is my read-write replica? The answer to the first one is provided here [http://johnsterrett.com/2014/03/18/where-is-my-availability-group/]. Today, we’re going to look at the answer to the second question.

Where is my read-write replica?

The following script, when executed on an availability group replica, returns the availability group name, the current role state, and the database name. This will let you know whether the instance is hosting the PRIMARY “read/write” replica.

IF SERVERPROPERTY('IsHadrEnabled') = 1
BEGIN
    SELECT AvailabilityGroup = ag.name,
           AvailabilityGroupRole = ars.role_desc,
           db.name
    FROM sys.databases db
    INNER JOIN sys.availability_databases_cluster adc
        ON db.group_database_id = adc.group_database_id
    INNER JOIN sys.availability_groups ag
        ON adc.group_id = ag.group_id
    INNER JOIN sys.dm_hadr_availability_replica_states ars
        ON ag.group_id = ars.group_id
        AND db.replica_id = ars.replica_id
END
ELSE
BEGIN
    SELECT NULL AS AvailabilityGroup,
           NULL AS AvailabilityGroupRole,
           name
    FROM sys.databases
END

How do I check all replicas?

Now you know how to check whether an instance of SQL Server participating in an AlwaysOn Availability Group is in the “Read/Write” PRIMARY role. Next, we can utilize Central Management Server to run the same script across the other instances participating as replicas in your AlwaysOn Availability Group configuration. If you have never used Central Management Server, this step-by-step guide can get you going in 10 minutes.

For this week, I have a Windows failover cluster with two nodes, “SQL2012DR” and “SQL2012PROD1” (aka localhost,1433). Each node hosts its own default instance of SQL Server. There are two separate AlwaysOn Availability Groups. Using Central Management Server, I can rerun the query above and quickly see where my read/write replica is.

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Last Backup Time for All Database – Last Full, Differential and Log Backup – Optimized

Four years ago, I wrote the blog post SQL SERVER – Finding Last Backup Time for All Database. It has been a very popular script. SQL Expert Sravani then posted a fantastic script which also displays the last full, differential, and log backup for each database – I have blogged about it here: SQL SERVER – Finding Last Backup Time for All Database – Last Full, Differential and Log Backup.

Well, in technology there is always room for improvement, and we should always be ready to learn and progress. SQL Expert TheSQLGuru came up with a further optimized script which lists all the details the earlier blog post listed, and it is very quick to execute. Here is the script.

SELECT d.name AS 'DATABASE_Name',
       MAX(CASE WHEN bu.TYPE = 'D' THEN bu.LastBackupDate END) AS 'Full DB Backup Status',
       MAX(CASE WHEN bu.TYPE = 'I' THEN bu.LastBackupDate END) AS 'Differential DB Backup Status',
       MAX(CASE WHEN bu.TYPE = 'L' THEN bu.LastBackupDate END) AS 'Transaction DB Backup Status',
       CASE d.recovery_model
            WHEN 1 THEN 'Full'
            WHEN 2 THEN 'Bulk Logged'
            WHEN 3 THEN 'Simple'
       END AS RecoveryModel
FROM master.sys.databases d
LEFT OUTER JOIN (SELECT database_name, TYPE, MAX(backup_start_date) AS LastBackupDate
                 FROM msdb.dbo.backupset
                 GROUP BY database_name, TYPE) AS bu
    ON d.name = bu.database_name
GROUP BY d.name, d.recovery_model
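As a small variation on the same pattern, the following sketch flags databases whose last full backup is missing or older than a threshold; the 7-day window is an assumption to adjust to your own policy:

SELECT d.name AS DatabaseName,
       bu.LastBackupDate
FROM master.sys.databases d
LEFT OUTER JOIN (SELECT database_name, MAX(backup_start_date) AS LastBackupDate
                 FROM msdb.dbo.backupset
                 WHERE TYPE = 'D' -- full backups only
                 GROUP BY database_name) AS bu
    ON d.name = bu.database_name
WHERE d.name <> 'tempdb'
AND (bu.LastBackupDate IS NULL OR bu.LastBackupDate < DATEADD(DAY, -7, GETDATE()))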

Thanks, TheSQLGuru, for the excellent script; you win a USD 20 Amazon Gift Card or an INR 1000 Flipkart Gift Voucher. I have already sent you an email with the details.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Finding Jobs Shrinking Database Files – Notes from the Field #023

[Notes from Pinal]: Search the internet for ‘Shrinking Database’ and you will find plenty of advice on why it is bad and should not be done. If you do not believe me, just try the search yourself. Even I have blogged before that it is an absolutely bad thing to do. However, there are very few blogs which talk about how to solve this ancient problem. The reason I call it an ancient problem is that quite a few organizations have been doing this for many years, with shrinking code included in many different places. Quite often, I see that people have created database jobs to shrink the database. I was talking to Tim about this, and he came up with a beautifully simple script that demonstrates how to find jobs which shrink database files.

Linchpin People are database coaches and wellness experts for a data driven world. In this 16th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains a very simple script to find jobs that shrink database files.


Often when analyzing a SQL Server instance, I come across jobs that automate shrinking a database file. Checking for a shrinking operation is important when analyzing a server for a number of reasons. One is that anytime a data file or log file has to grow, transactions have to wait until the growth operation is complete, causing a performance impact. Beyond that, shrinking a log file can contribute to a high virtual log file count, and shrinking a data file will fragment the database.

I check to see if any database maintenance plans exist and, if so, whether auto shrink has been chosen; occasionally I find this is the case. I also have a T-SQL script that will search for any reference to the word ‘shrink’ in a job step, and I find custom jobs like this more often than auto shrink being used in a maintenance plan. Oftentimes the justification for configuring a job like this is drive space issues.
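The database-level Auto Shrink option itself is easy to check directly; a minimal sketch:

SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1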

In cases where the shrink is used to reclaim drive space, the cause is most often not having a proper backup routine in place for the transaction logs, or a process that keeps a transaction open for an extended period of time. The transaction log needs to be large enough to handle your index maintenance, your ETL processes, and the transactions that occur between log backups. If you find that you are having erratic, large growths, then you will need to examine which transactions or processes are holding an active transaction open for such a long duration, or revisit the frequency of your log backups.
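To see how large the transaction logs actually are, and how full they run between backups, DBCC SQLPERF is a quick check; it returns the log size in MB and the percentage in use for every database:

DBCC SQLPERF (LOGSPACE)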

Below is the script I use to search for the word ‘shrink’ in any T-SQL job step.

DECLARE @Search VARCHAR(100)
SET @Search = 'shrink'

SELECT A.[job_id],
       B.[name],
       [step_id],
       [step_name],
       [command],
       [database_name]
FROM [msdb].[dbo].[sysjobsteps] A
JOIN [msdb].[dbo].[sysjobs] B
    ON A.[job_id] = B.[job_id]
WHERE [command] LIKE '%' + @Search + '%'
ORDER BY [database_name],
         B.[name],
         [step_id]

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.



Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Last Backup Time for All Database – Last Full, Differential and Log Backup

About four years ago, I wrote a blog post where I shared a script for finding the last backup time for all databases. You can see the blog post here: SQL SERVER – Finding Last Backup Time for All Database. It has been a very popular script. However, that script only gave details about the last full backup time. SQL Expert Sravani has posted a fantastic script which also displays the last full, differential, and log backup for each database.

Here is the script.

SET NOCOUNT ON
GO
SET QUOTED_IDENTIFIER OFF
DECLARE @dbname AS VARCHAR(80)
DECLARE @msgdb AS VARCHAR(100)
DECLARE @dbbkpname AS VARCHAR(80)
DECLARE @dypart1 AS VARCHAR(2)
DECLARE @dypart2 AS VARCHAR(3)
DECLARE @dypart3 AS VARCHAR(4)
DECLARE @currentdate AS VARCHAR(10)
DECLARE @server_name AS VARCHAR(30)
SELECT @server_name = @@SERVERNAME
SELECT @dypart1 = DATEPART(dd, GETDATE())
SELECT @dypart2 = DATENAME(mm, GETDATE())
SELECT @dypart3 = DATEPART(yy, GETDATE())
SELECT @currentdate = @dypart1 + @dypart2 + @dypart3
-- header output (double quotes act as string literals because QUOTED_IDENTIFIER is OFF)
PRINT "#####################################################################"
PRINT "# SERVERNAME : " + @server_name + " DATE : " + @currentdate + "#"
PRINT "#####################################################################"
PRINT "DatabaseName Full Diff TranLog"
PRINT "##########################################################################################################################################"
SELECT SUBSTRING(s.name, 1, 50) AS 'DATABASE Name',
       b.backup_start_date AS 'Full DB Backup Status',
       c.backup_start_date AS 'Differential DB Backup Status',
       d.backup_start_date AS 'Transaction Log Backup Status'
FROM master..sysdatabases s
LEFT OUTER JOIN msdb..backupset b
    ON s.name = b.database_name
    AND b.backup_start_date =
        (SELECT MAX(backup_start_date)
         FROM msdb..backupset
         WHERE database_name = b.database_name
         AND TYPE = 'D') -- full database backups only, not log backups
LEFT OUTER JOIN msdb..backupset c
    ON s.name = c.database_name
    AND c.backup_start_date =
        (SELECT MAX(backup_start_date)
         FROM msdb..backupset
         WHERE database_name = c.database_name
         AND TYPE = 'I') -- differential backups
LEFT OUTER JOIN msdb..backupset d
    ON s.name = d.database_name
    AND d.backup_start_date =
        (SELECT MAX(backup_start_date)
         FROM msdb..backupset
         WHERE database_name = d.database_name
         AND TYPE = 'L') -- transaction log backups
WHERE s.name <> 'tempdb'
ORDER BY s.name

Sravani, please send me an email at my mail id and I will send a USD 20 Amazon Gift Card or an INR 1000 Flipkart Gift Voucher for your wonderful contribution. If you use any such script in your daily routine, please send me an email and I will be glad to post it on the blog with due credit.

Reference: Pinal Dave (http://blog.SQLAuthority.com)