Undo Human Errors in SQL Server – SQL in Sixty Seconds #109 – Point in Time Restore

In this blog post, we are going to talk about how to undo human errors in SQL Server with the help of Point in Time Recovery. Let me start by asking a few questions:

  • Have you ever dropped a table by mistake?
  • Have you ever written an UPDATE statement without a WHERE clause?
  • Have you ever deleted data that is critical to the business?

If the answer to any of these questions is yes, I am sure you understand the importance of rolling back such transactions. In SQL Server, you can undo human errors with the help of Point in Time Recovery. Let us learn how you can do that in this video.

Here are a few things to remember. You can roll back to a point in time only if your database uses the Full Recovery Model and you have taken at least one full backup before you made the error. If either of these conditions is not satisfied, you may not be able to recover your data.
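
Before relying on this technique, it is worth verifying both conditions. Here is a minimal sanity check, assuming the database is named SQLAuthority as in the script below: the first query shows the current recovery model, and the second shows the most recent full backup recorded in msdb (if any).

-- Check the recovery model of the database
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SQLAuthority';
GO
-- Check the most recent full backup (type 'D' = full database backup)
SELECT TOP (1) backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'SQLAuthority'
AND type = 'D'
ORDER BY backup_finish_date DESC;
GO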

One of the most common questions I have received about this video is what happens if the database has many other important transactions going on in other tables and you want to keep them rolling forward. In that case, you should restore the database on another system, or with a different name on the same system, to the point right before you made the error.
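
For example, here is a hedged sketch of restoring the same backups as a separate copy named SQLAuthority_Copy on the same instance. The logical file names (SQLAuthority and SQLAuthority_log) and the target paths are assumptions; confirm yours with RESTORE FILELISTONLY before running it.

-- Inspect the logical file names inside the backup first
RESTORE FILELISTONLY FROM DISK = N'D:\SQLAuthority.bak';
GO
-- Restore the full backup as a new database, moving the files to new physical names
RESTORE DATABASE [SQLAuthority_Copy]
FROM DISK = N'D:\SQLAuthority.bak'
WITH NORECOVERY,
MOVE N'SQLAuthority' TO N'D:\SQLAuthority_Copy.mdf',
MOVE N'SQLAuthority_log' TO N'D:\SQLAuthority_Copy_log.ldf'
GO
-- Roll the log forward to the moment just before the mistake
RESTORE LOG [SQLAuthority_Copy]
FROM DISK = N'D:\SQLAuthorityLog.trn'
WITH STOPAT = '2020-08-18 19:45:38.717', RECOVERY
GO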

Once you do that, you can retrieve the data from the affected table in the restored copy and insert it back into your original live database.
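
Here is a minimal sketch of that last step, assuming the copy was restored as SQLAuthority_Copy on the same instance as the live database:

-- Copy the recovered rows from the restored copy back into the live table
INSERT INTO SQLAuthority.dbo.TestTable (ID)
SELECT ID
FROM SQLAuthority_Copy.dbo.TestTable;
GO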

In any case, it is always a good idea to perform all of your experiments in a development environment first. Once you are satisfied with the results and output, you can apply them to the production environment. Always test your work before deploying it.

Here is the sample script that I have used in this blog post.

CREATE DATABASE SQLAuthority
GO
ALTER DATABASE SQLAuthority
SET RECOVERY FULL
GO
USE SQLAuthority
GO
CREATE TABLE TestTable (ID INT)
GO
BACKUP DATABASE [SQLAuthority] 
TO DISK = N'D:\SQLAuthority.bak'
GO
INSERT INTO TestTable (ID)
VALUES (1), (2), (3)
GO
SELECT GETDATE() BeforeTruncateTime;
-- 2020-08-18 19:45:38.717
SELECT *
FROM TestTable
GO
-- Disaster
TRUNCATE TABLE TestTable
GO
SELECT *
FROM TestTable
GO
BACKUP LOG [SQLAuthority] TO
DISK = N'D:\SQLAuthorityLog.trn'
GO
-- ------------------
-- Restore operation
USE master
GO
RESTORE DATABASE [SQLAuthority] 
FROM DISK = N'D:\SQLAuthority.bak'
WITH NORECOVERY, REPLACE
GO
RESTORE LOG [SQLAuthority] 
FROM DISK = N'D:\SQLAuthorityLog.trn'
WITH STOPAT = '2020-08-18 19:45:38.717', RECOVERY
GO
USE SQLAuthority
GO
SELECT *
FROM TestTable
GO
USE master
GO
DROP DATABASE SQLAuthority
GO

Please run the script above along with the video included in this blog post. If you run the script without understanding what each command does, you may damage your system.

One final thing to remember: when something goes wrong, do not worry or panic. Just note down the time when it happened. You can always roll back using the technique explained in this video, as long as you have a full backup and the full recovery model.

If you still need help, you can reach out to me here: Comprehensive Database Performance Health Check.

Reference: Pinal Dave (https://blog.sqlauthority.com)

18 Comments

  • Carsten Saastamoinen
    August 25, 2020 12:20 pm

    And now you have introduced an even bigger problem. From 2020-08-18 19:45:38.717 until the system was shut down at 2020-08-18 19:47:36.743, there were 5 new customers who were created online, and all these customers ordered products and expect to get these products in 2 days (Grandma celebrates her 90th birthday, David bought a ticket to xxx, where he will start at the university, …), 8 old customers paid their invoices with their last money, and the prices of 3 products were changed, …

    So the above solution is not a solution. It doesn't tell how to capture the changes from that 00:01:58.027 period. Systems today are online systems, not old batch systems.

  • Carsten, you do bring up a valid point for fast-moving databases. However, this is actually a great solution depending on your requirements. Mr. Dave explains above that you can restore your backup on another test system and retrieve the data you need while the production system keeps working. Also, some systems are not updated fast enough to worry about new entries within a few minutes. And some, like ours, have a paper trail that can be used to re-enter the data lost during those few minutes.
    I also think this is a great exercise in actually performing point-in-time restores, which I encourage all DBAs to do. And document what you do so you can use it in times of disaster.

  • Carsten Saastamoinen
    August 28, 2020 9:49 am

    My comment is more that you have problems with one table, and now you are creating problems with many tables – all the other tables in the database. In principle, you must look at all other tables to see if any changes have been made during the period. I do not believe that it is normal in the year 2020 to have hard copies of all changes to data for all tables in the database, perhaps only for some of the changes. And even in a short period of time, there can be many changes, few changes, or none if you are lucky. But ALL tables must be examined for insertions, updates, and deletions. This work is a huge job and perhaps impossible. Databases like SAP, DYN AX, DYN NAV, … and other standard systems have several thousand tables!

    The correct method must always be to restore a copy of the database – on the same server, just with a different name, or on a different server, depending on where the capacity is available. If you are lucky, your database is divided into several filegroups, so you can restore just one filegroup with a partial restore and get up and running again faster. And here point-in-time is used! But not on the production database.

    So the exercise should focus more on restoring the database with another name or on another server, and on having a script ready for a partial restore, if this is possible and the situation arises.

    • I have tried to answer this in the paragraph here: One of the most common questions I have received for this video…

  • Carsten Saastamoinen
    August 28, 2020 1:11 pm

    USE master;
    GO
    DROP DATABASE BackupDB;
    GO
    CREATE DATABASE BackupDB
    ON PRIMARY
    (NAME = BackupDB_sys,
    FILENAME = 'c:\Databaser\BackupDB_sys.mdf',
    SIZE = 5MB,
    MAXSIZE = 5MB,
    FILEGROWTH = 10%),

    FILEGROUP BackupDB_FILEGROUP_1
    (NAME = BackupDB_fg1_1,
    FILENAME = 'c:\Databaser\BackupDB_fg1_1.ndf',
    SIZE = 2MB,
    MAXSIZE = 5MB,
    FILEGROWTH = 10%),

    FILEGROUP BackupDB_FILEGROUP_2
    (NAME = BackupDB_fg2_1,
    FILENAME = 'c:\Databaser\BackupDB_fg2_1.ndf',
    SIZE = 2MB,
    MAXSIZE = 5MB,
    FILEGROWTH = 10%),

    FILEGROUP BackupDB_FILEGROUP_3
    (NAME = BackupDB_fg3_1,
    FILENAME = 'c:\Databaser\BackupDB_fg3_1.ndf',
    SIZE = 2MB,
    MAXSIZE = 5MB,
    FILEGROWTH = 10%)

    LOG ON
    (NAME = BackupDB_log,
    FILENAME = 'c:\Databaser\BackupDB.ldf',
    SIZE = 2MB,
    MAXSIZE = 5MB,
    FILEGROWTH = 10%);
    GO
    USE BackupDB;

    CREATE TABLE dbo.t1
    (
    i INT NOT NULL
    CONSTRAINT PK_t1 PRIMARY KEY
    ) ON BackupDB_FILEGROUP_1;

    CREATE TABLE dbo.t2
    (
    i INT NOT NULL
    CONSTRAINT PK_t2 PRIMARY KEY,
    i_t1 INT NOT NULL
    CONSTRAINT FK_t2_t1 FOREIGN KEY REFERENCES dbo.t1 (i)
    ) ON BackupDB_FILEGROUP_2;

    CREATE TABLE dbo.t3
    (
    i INT
    ) ON BackupDB_FILEGROUP_3;
    GO
    INSERT INTO dbo.t1 VALUES
    (11),
    (12),
    (13);

    INSERT INTO dbo.t2 VALUES
    (21, 11),
    (22, 11),
    (23, 13);

    INSERT INTO dbo.t3 VALUES
    (31),
    (32),
    (33);
    GO
    -- Full backup of database
    --BACKUP DATABASE BackupDB TO DISK = 'c:\rod\FullBackupDB.bak' WITH FORMAT;
    -- or Filegroup backup
    BACKUP DATABASE BackupDB FILEGROUP = 'Primary' TO DISK = 'c:\rod\Primary.bak' WITH FORMAT;
    BACKUP DATABASE BackupDB FILEGROUP = 'BackupDB_FILEGROUP_1' TO DISK = 'c:\rod\BackupDB_FILEGROUP_1.bak' WITH FORMAT;
    BACKUP DATABASE BackupDB FILEGROUP = 'BackupDB_FILEGROUP_2' TO DISK = 'c:\rod\BackupDB_FILEGROUP_2.bak' WITH FORMAT;
    BACKUP DATABASE BackupDB FILEGROUP = 'BackupDB_FILEGROUP_3' TO DISK = 'c:\rod\BackupDB_FILEGROUP_3.bak' WITH FORMAT;

    -- Log backup for both
    BACKUP LOG BackupDB TO DISK = 'c:\rod\BackupDB_log1.bak' WITH FORMAT;
    GO
    INSERT INTO dbo.t1 VALUES
    (14),
    (15),
    (16);

    INSERT INTO dbo.t2 VALUES
    (24, 15),
    (25, 16),
    (26, 12);
    GO
    BACKUP LOG BackupDB TO DISK = 'c:\rod\BackupDB_log2.bak' WITH FORMAT;
    GO
    USE BackupDB;

    INSERT INTO dbo.t1 VALUES
    (17),
    (18),
    (19);

    INSERT INTO dbo.t2 VALUES
    (27, 15),
    (28, 16),
    (29, 19);

    INSERT INTO dbo.t3 VALUES
    (34),
    (35),
    (36);

    DROP TABLE IF EXISTS #Tid;

    SELECT GETDATE() AS Tid
    INTO #Tid;

    TRUNCATE TABLE dbo.t2;
    GO
    -- Start solve the problem
    BACKUP LOG BackupDB TO DISK = 'c:\rod\BackupDB_log3.bak' WITH FORMAT, COPY_ONLY;
    GO
    -- test many times
    DROP DATABASE IF EXISTS BackupDB1;
    GO
    -- If full backup
    --RESTORE DATABASE BackupDB1
    --  FILEGROUP='Primary',
    --  FILEGROUP='BackupDB_FILEGROUP_2' FROM DISK = 'c:\rod\FullBackupDB.bak'
    --  WITH PARTIAL, NORECOVERY,
    --  MOVE 'BackupDB_sys' TO 'c:\rod\BackupDB_CopyPrimary.mdf',
    --  MOVE 'BackupDB_fg1_1' TO 'c:\rod\BackupDB_Copy_fg1_1.mdf',
    --  MOVE 'BackupDB_fg2_1' TO 'c:\rod\BackupDB_Copy_fg2_2.mdf',
    --  MOVE 'BackupDB_fg3_1' TO 'c:\rod\BackupDB_Copy_fg3_3.mdf',
    --  MOVE 'BackupDB_log' TO 'c:\rod\BackupDB_Copy.ldf';

    -- Or if filegroup backup
    RESTORE DATABASE BackupDB1 FILEGROUP='primary'
    FROM DISK = 'c:\rod\Primary.bak'
    WITH PARTIAL, NORECOVERY,
    MOVE 'BackupDB_sys' TO 'c:\rod\BackupDB_CopyPrimary.mdf',
    MOVE 'BackupDB_fg1_1' TO 'c:\rod\BackupDB_Copy_fg1_1.mdf',
    MOVE 'BackupDB_fg2_1' TO 'c:\rod\BackupDB_Copy_fg2_2.mdf',
    MOVE 'BackupDB_fg3_1' TO 'c:\rod\BackupDB_Copy_fg3_3.mdf',
    MOVE 'BackupDB_log' TO 'c:\rod\BackupDB_Copy.ldf';

    RESTORE DATABASE BackupDB1 FILEGROUP='BackupDB_FILEGROUP_2'
    FROM DISK = 'c:\rod\BackupDB_FILEGROUP_2.bak' WITH NORECOVERY;

    -- Log restore for both - full or filegroup
    RESTORE LOG BackupDB1 FROM DISK = 'c:\rod\BackupDB_log1.bak' WITH NORECOVERY;

    RESTORE LOG BackupDB1 FROM DISK = 'c:\rod\BackupDB_log2.bak' WITH NORECOVERY;

    DECLARE @Tid DATETIME = (SELECT Tid FROM #Tid);

    RESTORE LOG BackupDB1 FROM DISK = 'c:\rod\BackupDB_log3.bak' WITH RECOVERY, STOPAT = @Tid;
    GO
    USE BackupDB1;

    SELECT *
    FROM dbo.t2;
    GO
    USE BackupDB;

    SELECT *
    FROM dbo.t1;

    SELECT *
    FROM dbo.t2;

    SELECT *
    FROM dbo.t3;
    GO
    USE BackupDB;

    INSERT INTO dbo.t2 (i, i_t1)
    SELECT i,
    i_t1
    FROM BackupDB1.dbo.t2;
    GO
    USE BackupDB;

    SELECT *
    FROM dbo.t1;

    SELECT *
    FROM dbo.t2;

    SELECT *
    FROM dbo.t3;
    GO

  • Carsten,

    First, thank you for your point of view. A civil debate is what makes our profession stronger and better.
    You are partially correct here in that the database tables must be inspected and verified in a disaster scenario. And thank you for posting the example scripts. More examples are always a good thing. Another good point you make is about having data spread across filegroups, something I believe developers and DBAs need to be more aware of when building complex software systems.
    Now, I must disagree with your statement that 'a' method 'must' always be the correct one. Over the years I have observed that the one constant in the world is that 'every solution is contingent upon the context of the problem, constrained by the environment in which it lives'. For example, you state the database backup 'must' be restored on the same production server with a different name. In many cases, this is not possible. Some systems only allocate enough storage for the database plus a little extra room for growth and some breathing room. If the database is 500 GB and the storage allowed is 900 GB, then you cannot restore that backup there. Also, if the database has to remain online, restoring the backup would put extra pressure on the server, slow down the system, and saturate the network connection, bringing work to a halt.
    Since a software application is usually the mechanism that actually runs the SQL code to input the data, the truncate scenario, if the application is built properly, is highly unlikely in a real-world environment. I would think this would be more of a problem in development, where a DBA can run untested scripts directly against the database, which would not happen in production. The more common issue I see is a specific record being deleted, which can easily be fixed by retrieving the data from the backup using a point-in-time restore on a test server and either merging it back into the production database or letting the application operator re-enter the data via the application interface.

  • Carsten Saastamoinen
    August 29, 2020 1:41 pm

    Dan!

    Just a small correction. I wrote – quote Carsten – "The correct method must always be to restore a copy of the database – on the same server, just with a different name, or on a different server, depending on where the capacity is available."

    My clear position when I discuss solutions is that there can be several possibilities, and one can be just as good as another in terms of correctness. But there are also solutions that are wrong or too simple. Many read posts without being critical. When discussing online, the argument is too often that "xxx says so". Not even a proof can convince them that it is wrong. How many times have you met the comment that COUNT(*) should be changed to COUNT(1) – it is Oracle that has or had that problem, not SQL Server. Or that … EXISTS (SELECT * … should be changed to … EXISTS (SELECT pk …. It is not a problem to use * in COUNT(*) or EXISTS for SQL Server, neither in terms of correctness nor performance, but using SELECT * in the projection of the final result of a SELECT statement is a major performance issue.

    My reaction to this post is that the example is too simple, and I miss comments on how to make a correct solution. In my world, it is wrong to restore to a point in time on a production server because you don't know whether you lose data. A comment that it is necessary to examine all the other tables is missing – so the restore is only a very small part of the work. One of the most important tasks for a DBA is to make sure there is no data loss.

  • Hi Carsten,

    Just saying that sometimes it is not possible to cover everything in every conversation.

    When someone asks for directions, we may say: take a left at the crossroads.

    We do not say: take a left at the crossroads, but make sure that there are no other people driving through the crossroads, and do not go over 15 miles per hour; additionally, make sure that people are following the rules there too… and if someone is ahead of you, please let them go, and if people are walking, they have the right of way… and so on.

    It is understood that a DBA should not have any data loss and should try things out in the development system, as mentioned multiple times in the blog post.

    Being an optimistic and positive person, I feel that this blog and video teach us a feature which we should all know. Let us learn from it and apply it if we ever need to in the future.

    • Thanks Mike,

      I do appreciate what Carsten and Dan are doing. They are actually helping the community be better.

      Thanks for stopping by and adding your bit.

      I personally feel that developers and DBAs should always try their solution on a separate server/environment before they deploy to production.

  • Carsten Saastamoinen
    August 29, 2020 4:19 pm

    Hi CallMeMike,

    the heading is "Undo Human Errors in SQL Server", and restoring to a point in time is only the first 10% of the job!

    It would be "covering everything in every conversation" if we were talking about how to examine and insert data into all the other tables in the correct order depending on FKs, … We are talking about "Undo Human Errors in SQL Server" and not only "Several ways to Restore".

  • Hi Carsten,

    What are you expecting as an outcome from this conversation?

  • Carsten Saastamoinen
    August 29, 2020 10:06 pm

    Hi CallMeMike,

    I hope that the people who read this will use point-in-time restore the right way (not on a production database) and have learned that there is much more work to do to finish the job "Undo Human Errors in SQL Server" in a professional way.

  • Carsten Saastamoinen
    August 29, 2020 10:12 pm

    When I teach developers and DBAs, I meet again and again course participants who come up with claims that they have read online. So many read posts uncritically. And I see it also in posts on the web.

  • Carsten,

    The particular thing you were so critical about was clear from the beginning for many of us, and I am sure for the readers of the blog as well. However, I am glad that you got to express yourself on this forum.

    I appreciate Pinal for being open and staying positive throughout the conversation.

  • Carsten Saastamoinen-Jakobsen
    August 30, 2020 1:21 pm

    CallMeMike,

    Ankur
    August 9, 2020 4:46 pm
    I reckon SELECT COUNT(0) would be slightly faster than SELECT COUNT(*)


    Pinal Dave
    August 9, 2020 5:20 pm
    The suggestion is to not use SELECT COUNT(*) at all.

    The above shows why it is important to discuss. Discussing is not the same as being critical! I would categorize the above example as a mistake that should be corrected in people's mindset. Time must be spent on fixing something that gives better performance, not on something that makes absolutely no difference!

    In all the cases where I have discussed, I have also attached scripts that show examples, not just made claims. This applies, for example, to the clustering key in a nonclustered index and to COUNT(*) instead of using system tables.

    But it is obviously not in good tone to correct mistakes! So I will stop doing that!

  • Hi Carsten,

    I hope you are happy after the conversation.

    Best wishes,

