Simple-Talk.com has published my very first article on their site. The article introduces Change Data Capture, a new feature in SQL Server 2008. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational ‘change tables’ rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
Perhaps the best thing about the technical domain is that most tasks can be accomplished in more than one way. It is always useful to know the various methods of performing a single task. Today, we will look at four different ways to find out the recovery model of any database.
Method 1) Right-click the database >> go to Properties >> go to Options. On the right side, you will find the recovery model.
Method 2) Click the Databases node in Object Explorer. In the Object Explorer Details pane, you can see the Recovery Model column.
Method 3) This is a very easy method, and it returns the information for all databases in a single script.
SELECT name AS [Database Name], recovery_model_desc AS [Recovery Model] FROM sys.databases
Method 4) This method returns information for only one database at a time.
SELECT 'ADVENTUREWORKS' AS [Database Name], DATABASEPROPERTYEX('ADVENTUREWORKS', 'RECOVERY') AS [Recovery Model]
Of all the above methods, I prefer Method 3. It is accurate and can be used in T-SQL code. Besides, I can use it for many other purposes as well.
I would like to know which one is your favorite method.
I maintain a spreadsheet of questions sent by users, and from it I single out topics to write about and share my knowledge and opinion. I do not write about an issue until I find it appealing enough to cross that threshold. Today, the question that crossed the threshold is: what is the difference between NORECOVERY and RECOVERY when restoring a database?
The fact is that one can easily go to MSDN and learn the difference, so I headed to MSDN and read up on it. This is what Books Online says:
NORECOVERY specifies that roll back not occur. This allows roll forward to continue with the next statement in the sequence.
In this case, the restore sequence can restore other backups and roll them forward.
RECOVERY (the default) indicates that roll back should be performed after roll forward is completed for the current backup.
Recovering the database requires that the entire set of data being restored (the roll forward set) is consistent with the database. If the roll forward set has not been rolled forward far enough to be consistent with the database and RECOVERY is specified, the Database Engine issues an error.
Books Online explains the subject very clearly and provides a succinct explanation of the difference.
In other words –
While performing a RESTORE operation, if you are restoring a sequence of backup files, always use the NORECOVERY option, as it keeps the database in a state where more backup files can be restored. It also keeps the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and make it operational.
It is also important to be acquainted with the sequence in which a full database backup is restored.
First, restore the full database backup, the differential database backup, and all transaction log backups WITH NORECOVERY. After that, bring the database online using the WITH RECOVERY option.
Following is a sample restore sequence:
RESTORE DATABASE YourDatabase FROM full_database_backup WITH NORECOVERY;
RESTORE DATABASE YourDatabase FROM differential_backup WITH NORECOVERY;
RESTORE LOG YourDatabase FROM log_backup WITH NORECOVERY; -- Repeat this until you restore the last log backup
RESTORE DATABASE YourDatabase WITH RECOVERY;
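To make the sequence above concrete, here is a hedged sketch using a hypothetical database name (MyDB) and hypothetical backup file paths — adjust both to your environment:

```sql
-- Restore the full backup first; NORECOVERY leaves the database in the RESTORING state
RESTORE DATABASE MyDB
FROM DISK = N'D:\Backup\MyDB_Full.bak'
WITH NORECOVERY;

-- Then the most recent differential backup, still WITH NORECOVERY
RESTORE DATABASE MyDB
FROM DISK = N'D:\Backup\MyDB_Diff.bak'
WITH NORECOVERY;

-- Then every log backup taken after that differential, in order
RESTORE LOG MyDB
FROM DISK = N'D:\Backup\MyDB_Log1.trn'
WITH NORECOVERY;

-- Finally, run roll back and bring the database online
RESTORE DATABASE MyDB WITH RECOVERY;
```

If RECOVERY is accidentally specified too early, no further backups can be applied and the restore sequence must be started over from the full backup.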
I hope the restore sequence and the difference between the recovery options are now clear to you all.
When it comes to storing images in a database, there are two common methods. I had previously blogged about the same subject on my visit to Toronto. With SQL Server 2008, we have a new method: FILESTREAM storage. However, the answer to when to use FILESTREAM and when to use the other methods is still vague in the community.
Let us look at the two traditional methods first, along with their advantages and disadvantages.
Method 1) Store images in the file system and store a pointer in the database
This is quite an old method, and you can find it implemented in many places, even though SQL Server 2005 was released five years ago.
The advantage of this method is that there is no limit on the size of the image files, and it is the fastest way to retrieve images.
The drawback of this method is that application and database security act separately, and the application/OS has to take backups of the images.
Method 2) Store images in a database
This was introduced in SQL Server 2005, where images (binary data) can be stored in the database.
The advantage of this method mirrors the disadvantage of Method 1: security and permissions are handled by the database.
The disadvantage of this method mirrors the advantage of Method 1: retrieval is slower, and the maximum size is limited to 2 GB.
On observing Method 1 and Method 2, it is clear that they complement each other, but both are powerful methods in their own right.
Although both methods are very good, each has limitations that only the other can cover. This is where FILESTREAM storage comes into action.
FILESTREAM storage can hold BLOBs (Binary Large OBjects) or image files larger than 2 GB, and it stores them within the database itself. It can be backed up just like a regular database backup, and the security of the BLOB is maintained along with the database application. On top of all this, it is as fast as the file system. This is what makes it superior to the other methods described earlier.
Summary: We can use FILESTREAM storage to overcome the traditional limitations of storing BLOB, varbinary, or image data in SQL Server.
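As a minimal sketch of how FILESTREAM storage is set up, here is an example with hypothetical database, filegroup, and path names. It assumes FILESTREAM has already been enabled at the instance level (via SQL Server Configuration Manager and sp_configure 'filestream access level'):

```sql
-- Database with a FILESTREAM filegroup; all names and paths are hypothetical
CREATE DATABASE PhotoDB
ON PRIMARY
    (NAME = PhotoDB_data, FILENAME = N'C:\Data\PhotoDB.mdf'),
FILEGROUP PhotoFS CONTAINS FILESTREAM
    (NAME = PhotoDB_fs, FILENAME = N'C:\Data\PhotoFS')
LOG ON
    (NAME = PhotoDB_log, FILENAME = N'C:\Data\PhotoDB.ldf');
GO

USE PhotoDB;
GO

-- A FILESTREAM table requires a ROWGUIDCOL column with a unique constraint;
-- the VARBINARY(MAX) FILESTREAM column keeps the BLOB on the file system
-- while remaining part of the database for backup and security purposes.
CREATE TABLE dbo.Photos
(
    PhotoID UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    PhotoName NVARCHAR(255),
    PhotoData VARBINARY(MAX) FILESTREAM
);
```

Because the BLOB lives in the FILESTREAM filegroup, a regular BACKUP DATABASE command includes it automatically, which is exactly the advantage described above.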
Standby servers – A standby server is a server that can be brought online when the primary server goes offline and the application needs continuous (high) availability. There is always a need for a mechanism that moves data and objects from the primary server to the secondary (standby) server. This mechanism usually involves moving backups from the primary server to the secondary server using T-SQL scripts. Often, database wizards are used to set up this process.
We will now glance at the various types of standby servers.
Hot Standby – Hot standby can be achieved in SQL Server using SQL Server 2005 Enterprise Edition and later Enterprise editions. SQL Server 2005 introduced database mirroring, which can be configured for automatic failover in a disaster situation. With synchronous mirroring, the database is replicated to both servers simultaneously. This is a little expensive but provides the best high availability. In this case, both the primary and standby servers have the same data at all times.
Warm Standby – In warm standby, automatic failover is not configured. This is usually set up using log shipping or asynchronous mirroring. The warm standby may lag by a few seconds or minutes, which results in the loss of the latest updates when the primary server fails and the standby server must take over. A warm standby that is lagging by a few transactions can sometimes be brought up to date by applying the most recent transaction log.
Cold Standby – Cold standby servers need to be switched over manually, and sometimes all the backups as well as the required OS need to be applied. A cold standby simply physically replaces the previous server.
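For the mirroring-based hot and warm standby configurations described above, you can check each database's current mirroring role and safety level from the catalog views (this sketch works on SQL Server 2005 and later; it returns no rows if no database is mirrored):

```sql
-- List mirrored databases with their role, state, and safety level.
-- Safety level FULL = synchronous (hot standby candidate);
-- safety level OFF  = asynchronous (warm standby).
SELECT d.name                          AS [Database],
       m.mirroring_role_desc           AS [Role],
       m.mirroring_state_desc          AS [State],
       m.mirroring_safety_level_desc   AS [Safety]
FROM sys.database_mirroring AS m
JOIN sys.databases AS d
    ON d.database_id = m.database_id
WHERE m.mirroring_guid IS NOT NULL;
```

Running this on both partners is a quick way to confirm which server currently holds the principal role before and after a failover.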
I am curious to know what you all think about these three types of standby servers. I am eager to know what kind of server setup you have at your workplace. If you can share these details, it will be very useful to all the readers. If I find your information interesting, I will create a new blog post with due credit to you.
This article revolves around a trivial yet common issue. For some people, the current topic might appear insignificant. But I have been asked this question innumerable times, particularly by people who frequent forums or run blogs related to storage and high availability, which instigated me to write this article.
Here goes this frequently asked question.
Question: Is it possible to restore a database if one of its .mdf (primary data file) or .ndf (secondary data file) files is missing?
Answer: In one word, the answer is NO. All the .mdf and .ndf files are mandatory to attach or restore a database successfully. Even if you have all the transactions stored in the .ldf file, you will not be able to restore the database completely.
I hope this answer removes the cloud of doubt hovering over my targeted readers.
This article is based on a real-life experience of the author while working with database backup and restore during his consultancy work for various organizations. We will go over the following important concepts of database backup and restore:
Conventional Backup and Restore
Split File Backup and Restore
Mirror File Backup
Understanding FORMAT Clause
Miscellaneous details about Backup and Restore
Conventional and Split File Backup and Restore
Just a day before, while working on one of my projects, I had to take a backup of a 14 GB database. My hard drive lacked sufficient space at that moment. Fortunately, I had two 8 GB USB drives with me. The question was how to take the backup in two equal parts, each of 7 GB, so I could fit one on each USB drive. A conventional backup produces one large backup in a single file. However, the SQL Server BACKUP command can take backups in two or more split parts.
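A split backup can be taken by listing multiple backup devices; SQL Server stripes the backup roughly evenly across them. The database name and file paths below are hypothetical placeholders for the scenario described above:

```sql
-- Split one backup across two files (roughly 7 GB each for a 14 GB database)
BACKUP DATABASE MyDB
TO  DISK = N'E:\Backup\MyDB_Part1.bak',
    DISK = N'F:\Backup\MyDB_Part2.bak'
WITH INIT, STATS = 10;
GO

-- Restoring a split backup requires ALL parts to be present
RESTORE DATABASE MyDB
FROM DISK = N'E:\Backup\MyDB_Part1.bak',
     DISK = N'F:\Backup\MyDB_Part2.bak'
WITH RECOVERY;
```

Note that neither part is usable on its own; losing one USB drive makes the whole backup unrestorable, so both parts should be kept together.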
It goes without saying that database backup is the most important task for any Database Administrator (DBA). Naturally, large organizations always have a team of DBAs who execute database backup tasks. No matter how big or small an organization is, the importance of database backup remains the same across the board. It's a common practice in several organizations to upload the backup to a remote location for additional safety. I totally vouch for this safety measure of keeping an additional backup at a remote/satellite location. This redundancy comes in handy whenever a catastrophe surfaces abruptly and a proper local backup is not available.
While I was searching online for a very simple solution for one of my clients, I came across a small but feature-rich application from SqlBackupAndFTP.com. This practical application has been created by Pranas.NET (http://pranas.net). I have made extensive use of this tool for some time now, and undoubtedly, this tool is very impressive by all standards!
To put it simply, SQLBackupAndFTP is MS SQL Server backup software. This tool performs several tasks such as running scheduled backups of SQL Server or SQL Server Express databases, zipping the backups, storing them on a network or on an FTP server, removing old backups, and finally, sending an e-mail confirmation on a job's success or failure. Besides, SQLBackupAndFTP Free is freeware and is free of any spyware. Best of all, this ingeniously simple tool is extremely affordable for all. You just need to install it on the server and you can get up and running really fast.
Given below are some important tasks which this tool can perform using daily scheduler:
• Backup SQL Server Database
• Zip the backups
• Encrypt the backups
• FTP the backups to a remote FTP server
• Move the backups to a local area network folder
• Send the final status of the tasks by e-mail
Let us now see how we can take backup and move it to the desired folder.
Image 1: First, connect to the desired server.
Image 2: As you can see, all the settings are very easy, as described in the following image. Pay attention to these simple, one-screen settings. From database selection to job scheduling, you can configure everything from this page.
Image 3: This screen describes all the steps being performed. The user can easily view the status of all the jobs – success or failure – on this page.
Image 4: The following screen displays the settings for advanced users who have to deal with large databases.
Image 5: This screen displays the e-mail with the final status of the tasks, which is sent to the user after task completion.
Now, a question that might baffle your mind is: how is this tool better than the maintenance tasks available in SQL Server? Well, here goes the answer. Maintenance tasks are easy to set up for backups; however, SQLBackupAndFTP has an integrated solution for encryption, FTP, and e-mail, which makes it superior to maintenance tasks in every respect.
I would like to present my unbiased view of this tool, and at the same time give my readers a broad perspective on backup tools in general. Comparing it with other backup tools, I have found that it is not the fanciest tool for managing backups or checking their health. However, it smoothly performs the job in a few steps, compared to native SQL Server backup, which takes numerous steps to accomplish the same job.
SQLBackupAndFTP is best suited for developers/DBAs who just want to ensure that they have a scheduled job that takes care of backups and moves them to the appropriate place. In fact, it is ideal for any SQL Server database whose backups need to be sent every day to a remote FTP server. It saves maintenance time on any SQL Server version. Besides, it is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for backup.
This tool is compatible with almost all the known SQL Server versions. It works with SQL Server 2008 (all versions) and many of the previous versions.
While testing this tool, I noticed a few limitations. It does not support differential or transaction log backups. I also found that it only works with local SQL Server instances and cannot connect to another SQL Server on the LAN. When I contacted support, I learned that they are planning to release an updated version in April 2009 that will remove these limitations.
I strongly recommend this tool to all the DBAs. They must try it as it is absolutely free and does exactly what it promises. You can download your free copy of the tool from here.
Please share your experience about using this tool. I am eager to receive your feedback regarding this article.
The most interesting thing about writing a blog at SQLAuthority.com is the follow-up questions. Just a day after I wrote the article SQL SERVER – Restore Master Database – An Easy Solution, I received an email from a user asking why backing up the master database is important.
The master database contains all the system-level information for the server. Information about all the login accounts, system configurations, and the information required to access all the other databases is stored in the master database. If the master database is damaged, it will be difficult to use any other database in SQL Server, and that makes it the most important database in SQL Server.
Let us understand the importance of the master database using an example. We will take the example of a SQL Server DBA and follow his timeline. Make sure to follow it carefully, as I have a small question at the end of the timeline.
9:00 AM – DBA takes backup of the master database.
10:00 AM – DBA creates new Database named AfterMaster.
11:00 AM – DBA restores the master database backup taken at 9:00 AM.
12:00 PM – I have following two questions for the DBA :
Question 1) What will be the state of the database AfterMaster? Will the AfterMaster database be in an active state after restoring the master database?
Question 2) What should be the next step after restoring the master database?
Let us understand the answers to these questions.
Answer 1) Once the master database is restored, it will have no record of the AfterMaster database in its system tables, and SQL Server will not recognize it.
Answer 2) If the master database is restored from a backup, all the operations performed after the last master database backup should be repeated in order to bring SQL Server back to its current operational state. In our case, the database files (.mdf and .ldf) of the AfterMaster database will still exist on the server. They should be reattached to the server. You can search for how to attach .mdf and .ldf files at Search@SQLAuthority.com.
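As a minimal sketch, reattaching the orphaned AfterMaster files would look like the following; the file paths are hypothetical and should be replaced with the actual data directory of your instance:

```sql
-- Reattach the data and log files left behind after the master restore.
-- Paths are placeholders; point them at the real .mdf and .ldf locations.
CREATE DATABASE AfterMaster
ON (FILENAME = N'C:\Data\AfterMaster.mdf'),
   (FILENAME = N'C:\Data\AfterMaster_log.ldf')
FOR ATTACH;
```

After the attach succeeds, the database reappears in sys.databases and is fully operational, because all of its own data and metadata live in its .mdf/.ldf files; only the server-level record of it was lost with the master restore.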
It is clear from our example that the master database contains user logins, files, filegroups, and server-wide settings.
In summary, it is extremely important to take regular backups of the master database.
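Backing up master uses the same BACKUP DATABASE command as any user database (only a FULL backup is allowed for master); the path below is a hypothetical placeholder:

```sql
-- Take a full backup of the master database; path is a placeholder
BACKUP DATABASE master
TO DISK = N'D:\Backup\master.bak'
WITH INIT;
```

Scheduling this after every server-level change (new logins, new databases, configuration changes) keeps the window of operations that must be repeated after a master restore as small as possible.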
Just a day ago, I received the following email from Siddhi, and I found it interesting, so I am sharing it with all of you.
I have seen many blogs from you on SQL Server, and I have always found them useful and easy to understand. Thanks for all the information you provide.
I have one query about shrinking NDF and MDF files.
Can we shrink NDF and MDF files? If we do so, is there any data loss?
I have been shrinking the .LDF files every now and then but I am not too sure about NDF and MDF files.
Can you please answer my query.
Waiting for your early response.
Shrinking MDF and NDF files is possible, and there is no data loss.
However, it is not always advisable to shrink these files, as they usually grow again. There are cases, though, when one database is split into multiple databases, or a large table is dropped, where the MDF and NDF files can contain a large amount of empty space. That is when they should be shrunk. Shrinking a database can take many hours, but the data itself is preserved.
Following is the script to shrink the whole database:
DBCC SHRINKDATABASE (dbName)
Following is the script to shrink a single file:
DBCC SHRINKFILE (logicalLogFileName)
To find the logicalLogFileName, run the following command:
USE dbName EXEC sp_helpfile
Let us understand this using database AdventureWorks.
/* Shrink Whole AdventureWorks Database */
DBCC SHRINKDATABASE (AdventureWorks)
GO
/* Get the Logical File Name */
USE AdventureWorks
EXEC sp_helpfile
GO
/* Shrink MDF File of AdventureWorks Database */
DBCC SHRINKFILE (AdventureWorks_Data)
GO
The following image of the same process shows that, when the whole process is done, there will be a resultset with information about the new state of the database files.