SQL SERVER – Recovery Models and Selection

SQL Server offers three recovery models: full recovery, simple recovery, and bulk-logged recovery. The recovery model determines how much data loss is acceptable in the event of a failure, and whether and how the transaction log can be backed up.

Select Simple Recovery Model if:
* Your data is not critical.
* Losing all transactions since the last full or differential backup is not an issue.
* Data is derived from other data sources and is easily recreated.
* Data is static and does not change often.
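The criteria above can be checked and applied directly; a minimal sketch, assuming the same example database (Pubs) used later in the article:

```sql
-- Check the current recovery model (SQL Server 2005 and later)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'Pubs';

-- Switch to the simple recovery model; the log is truncated
-- automatically at each checkpoint, so log backups are not possible
ALTER DATABASE Pubs SET RECOVERY SIMPLE;
GO
```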

Select Bulk-Logged Recovery Model if:
* Data is critical, but logging large data loads bogs down the system.
* Most bulk operations are done off hours and do not interfere with normal transaction processing.
* You need to be able to recover to a point in time (note that a log backup containing bulk-logged operations can only be restored to its end, not to a point in time within it).
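A common pattern is to switch to bulk-logged only for the duration of the off-hours load, bracketing it with log backups to keep the exposure window small; a sketch, assuming hypothetical table and backup path names:

```sql
-- Log backup before the switch, so the bulk-logged window is isolated
BACKUP LOG Pubs TO DISK = 'D:\Backup\Pubs_pre_load.trn';
ALTER DATABASE Pubs SET RECOVERY BULK_LOGGED;
GO

-- Bulk operations here are minimally logged
-- (e.g. BULK INSERT, SELECT INTO, index rebuilds)
BULK INSERT dbo.StagingTable FROM 'D:\Feeds\daily.txt';

ALTER DATABASE Pubs SET RECOVERY FULL;
GO
-- This log backup contains the bulk-logged extents; it can only be
-- restored in full, not to a point in time within it
BACKUP LOG Pubs TO DISK = 'D:\Backup\Pubs_post_load.trn';
```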

Select Full Recovery Model if:
* Data is critical and no data can be lost.
* You always need the ability to do a point-in-time recovery.
* Bulk-logged activities are intermixed with normal transaction processing.
* You are using replication and need the ability to resynchronize all databases involved in replication to a specific point in time.
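Under the full model, a point-in-time recovery restores the last full backup without recovering, then rolls the log chain forward and stops at the chosen moment; a sketch with hypothetical backup paths and timestamp:

```sql
-- Restore the last full backup, leaving the database restoring
RESTORE DATABASE Pubs FROM DISK = 'D:\Backup\Pubs_full.bak'
    WITH NORECOVERY;

-- Roll the log forward, stopping just before the failure or bad transaction
RESTORE LOG Pubs FROM DISK = 'D:\Backup\Pubs_log.trn'
    WITH STOPAT = '2010-06-01 14:30:00', RECOVERY;
```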

You can switch from any recovery model to any other at any time, but before or after the switch you may need to take additional full or transaction log backups to ensure you have a complete backup chain.
ALTER DATABASE Pubs SET RECOVERY FULL
GO
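Note that after switching from simple to full, the log chain does not begin until a data backup is taken; until then the database effectively still behaves as if in simple recovery, and log backups will fail. A sketch, with hypothetical backup paths:

```sql
ALTER DATABASE Pubs SET RECOVERY FULL;
GO
-- Start the log chain immediately with a full backup
BACKUP DATABASE Pubs TO DISK = 'D:\Backup\Pubs_full.bak';
-- From here on, regular log backups are possible
BACKUP LOG Pubs TO DISK = 'D:\Backup\Pubs_log.trn';
```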

Microsoft TechNet
Reference: Pinal Dave (http://blog.SQLAuthority.com)


8 thoughts on “SQL SERVER – Recovery Models and Selection”

  1. I have a 65 GB database which is updated with additional, new information every day through a bcp command (takes 15 minutes; data updated from an external .txt file). Users access the database for queries only. I really don’t need any recovery, since if the bulk insert fails one day, I can redo it in 15 minutes.

    How do I see what my recovery model is and switch it to Simple (because my LDF log file is 21 GB and growing, and I have only 20 GB left on my server)? At this rate I might run out of disk space in 6 months or less. Then I need to truncate this huge file and reclaim the space.

    Many thanks

  2. select [name], recovery_model_desc
    from master.sys.databases

    use databasename
    go

    BACKUP LOG databasename WITH TRUNCATE_ONLY

    DBCC SHRINKFILE('databasename log file', 5)

    Good Luck!
    Sri

  3. hi..
    Correction in the full recovery model section:
    it should be “full recovery” instead of “bulk-logged”.

    * Full recovery activities are intermixed with normal transaction processing.

  4. Hi DBA’s
    Greetings..
    I have some trouble accessing tables in the secondary filegroup after a RESTORE.
    Here is my process:
    With SQL 2005 Express, I have a database with 20+ tables in the PRIMARY filegroup and 1 table for images in a SECONDARY filegroup.
    For ease of file transfer, I back up the database with the PRIMARY filegroup only.

    And when I restore the backup with the RESTORE mydb FILEGROUP='PRIMARY' WITH PARTIAL option, I can access the data in the primary files, BUT not the image table in the SECONDARY filegroup, as I have not backed it up or restored it. I get an error:
    “The query processor is unable to produce a plan for the table or view ‘prodImages’ because the table resides in a filegroup which is not online.”

    My question is: is there any way that I can still access the image table from the secondary filegroup EVEN THOUGH I HAVE RESTORED the PRIMARY filegroup only?
    Is it possible in any way?

    Thanks..

  5. I find many DBAs insist on using the FULL recovery model with regular log file backups. Personally I find this is quite often overkill, possibly because this is what DBAs are taught on most SQL courses. However, that is only because there is hardly anything to teach about the SIMPLE recovery model, and people seem to forget that it is a valid option in many cases.

    Naturally it depends on what the database is used for. If it is used for financial transactions or a live voting system, then FULL backups are probably more crucial.

    If your SQL Server runs a website where you perhaps post to your blog once a week and have others make comments on it, then you could probably afford to go without transaction log backups. In some cases I have even found WCMS systems running their own backups within the web-based system. Because of the complexity of some of these systems, their own backup systems are far better than running a SQL backup, as they sometimes span various tables in various databases, and if you had to restore one database while another is out of sync, the whole website becomes unavailable. Therefore running full backups with transaction log backups merely slows the system down for no benefit, as those backups will never realistically be used for a restore.

    I think it is therefore useful to point out when the simple recovery model is suitable and give a few examples of where it can be used.

    In addition to the points you listed above, I would add:
    – The data is not critical and can easily be recreated
    – The data is used for test or development, or a snapshot for reporting
    – Data is static and does not change
    – Losing any or all transactions since the last backup is not a problem
    – Data is derived and can easily be recreated
    – Customer agrees that nightly backups are suitable

    In some cases when a data loss occurred, I have spent many hours (even on finance systems) recovering a database to a point in time, only to find that at that point the database had already suffered the loss of data. I then spent more and more time going to different points in time to find the exact time at which the data loss occurred. By the time I had finished, another day had passed, and the number of transactions lost during this time was more than the number I would have lost by simply going back to the previous night’s full backup, restoring the system, and issuing an apology explaining the system failure and asking customers to re-post their transactions.

    The question is therefore truly, is data loss really that bad? Ultimately I think there should be some sort of matrix assessment to establish whether a simple recovery model does not outweigh the performance hit and trouble of running FULL backups with regular log file backups.

    Since most serious systems today tend to be using RAID 5, the chances are you are only ever going to restore when data is accidentally deleted, rather than when a disk failure occurs.

  6. Pingback: SQL SERVER – Weekly Series – Memory Lane – #033 | Journey to SQL Authority with Pinal Dave
