[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. As Wikipedia says, it features a fast and flexible data warehousing tool used for data extraction, transformation, and loading (ETL). The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data.
In this episode of the Notes from the Field series I asked SSIS expert Andy Leonard a very crucial question – how does one learn the SSIS Data Flow Task? Andy was very kind to answer the question and provided plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.
If you know SQL Server, you’re likely aware of SQL Server Integration Services (SSIS). What you might not realize is that SSIS is a development platform that allows you to create and perform some interesting Control Flow Tasks. In the first blog post in this series, I showed how to use the Execute SQL Task. Now, let’s look at the Data Flow Task. When developing solutions with SSIS, I use a handful of Control Flow tasks:
Execute SQL Task
Data Flow Task
Execute Package Task
File System Task
Execute Process Task
This list is also a good approximation of the order in which I use these tasks – from most-used to least-used. In this article I provide a basic example of configuring the SSIS Data Flow Task, shown in Figure 1:
Figure 1: SSIS Data Flow Task
The SSIS Data Flow Task is a very special task. It is the only task to have its own tab in the Integrated Development Environment (IDE) as shown in Figure 2:
Figure 2: The Data Flow Tab
If you click on the tab, you will note a new SSIS Toolbox containing Data Flow-specific components, as shown in Figure 3:
Figure 3: Data Flow SSIS Toolbox
SSIS Data Flows are typically used to move data from one location to another. The data flow accomplishes data movement by first reading data into Data Flow Buffers. Think of a buffer as a region of memory SSIS uses to hold data rows as the rows are processed by the data flow. In Figure 4, I have configured an OLE DB Source Adapter to pump data rows into the data flow:
Figure 4: Configuring an OLE DB Source Adapter
The data is often transformed while being moved from one location to another. The SSIS data flow components that perform transforming operations are called Transformations, and they are joined to other data flow components by Data Flow Paths. An example of a transformation is the Derived Column Transformation, as shown in Figure 5:
Figure 5: Adding a Derived Column Transformation and a Data Flow Path
You can use transformations to perform many operations (e.g., you can manipulate values of columns in rows, you can remove or redirect rows based on column values, etc.) on the data as it flows through the data flow task. For example, the Derived Column Transformation permits you to manipulate (transform) existing data or to combine existing data to create new columns, as shown in Figure 6:
Figure 6: Creating a New Column with the Derived Column Transformation
I created a new column named “UpperCaseName” in the Derived Column Transformation. I used SSIS Expression Language to define the transform – “UPPER([Name])” in this case.
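For reference, here are a few more expressions in the same SSIS Expression Language flavor (the FirstName and LastName columns are illustrative, not part of this demo):

```
UPPER([Name])
[FirstName] + " " + [LastName]
(DT_WSTR, 10)(DT_DBDATE)GETDATE()
```

The first upper-cases an existing column, the second concatenates two columns into a new derived column, and the third casts today's date into a 10-character string.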
Now I need to land these rows into a table. I’ll use an OLE DB Destination Adapter – connected from the Derived Column Transformation via data flow path – to accomplish loading our transformed rows into a table, as shown in Figure 7:
Figure 7: Configuring an OLE DB Destination Adapter
Once the OLE DB Destination Adapter is configured, you can execute either the package or the Data Flow Task as shown in Figure 8:
Figure 8: Test Execution Successful!
In this article, I shared an introduction to the SSIS Data Flow Task and some of its functionality. Although I barely scratched the surface of Data Flow capabilities, you should now be able to compose and test your first SSIS Data Flow!
If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.
[Note from Pinal]: This is the 56th episode of the Notes from the Field series. Whether you are a DBA or a Developer, there is always a situation when you want to prove yourself by building a small proof of concept of your idea. However, most of the time, it is way more complicated than we think. Building a proof of concept requires many different resources and skills. Above all, there are chances that what we have built is not up to the mark and we have to rebuild the example one more time. Trust me, tasks which look simple at the beginning are never that simple.
In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their careers – how to build proofs of concept and how to maximize the power of Azure. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.
Whether you know it or not, cloud services are here and they are changing the way we provide information technology services. For example, in many information technology shops it can take weeks if not months to get an instance of SQL Server up and running. Here are some minimal action items that must be completed before DBAs get access to a server to install SQL Server. You have to order a physical server, your procurement team must approve the order, and the server has to be shipped. Once the server is received, it must be racked in the data center, cables must be connected, and the data center team needs to document their changes. Then the operations team needs to install and configure Windows. I could keep going, but there are a lot of things that should be done to a server before the DBA team gets its hands on it. What are you going to do if you’re a DBA and you need an instance up in 30 minutes for a proof of concept? It’s becoming more common that the cloud is the answer.
Every time I need a server for a proof of concept, I jump to Windows Azure. I can quickly build a Windows Azure machine with SQL Server preinstalled within 30 minutes. In this tip, I am going to walk through the steps to create your first Windows Azure machine.
On the left hand side, click on Virtual Machines and then the add button on the bottom of the left side of the screen. This will load our wizard for creating our first virtual machine.
Now that the wizard is loaded, as you can see below, we can select Virtual Machine and create it from the gallery. In the gallery we will be able to select one of many images that already include SQL Server baked in.
Looking at the SQL Server images, you will see you can access Enterprise Edition, Standard Edition and Web Edition for versions from SQL Server 2014 down to SQL Server 2008 R2.
Next you can customize your image by release date. This will allow you to have different service packs or CUs. You can also select between two different tiers and sizes. You will have to create a user name and password and you will want to keep this credential as it will be your first account.
Next you will be able to select more machine configuration options. You will get to determine where the Azure Virtual Machine is located. Below you will see I am using my MSDN Subscription.
Next, you will get to configure additional extensions to help automate or secure your virtual machine.
Finally, you will see your server being provisioned. What once used to take weeks or months can now be done in the cloud in minutes.
Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.
The best way one can learn SQL Server is by trying out things on their own, and I am no different. I am constantly trying to explore the various options one can use when working with SQL Server. In the same context, when I was playing around with backup and restore commands, I made a mistake and unfortunately restarted SQL Server. After that I was unable to start the SQL Server service. If I start the service, it doesn’t give any error but stops automatically.
Whenever I have any weird problems with SQL Server, I always look at the ERRORLOG files for that instance. If you don’t know the location of the ERRORLOG, you should refer to Balmukund’s blog (Help: Where is SQL Server ErrorLog?)
This is what I found in the ERRORLOG just before the stop:
2014-10-28 00:20:39.02 spid9s Starting up database 'model'.
2014-10-28 00:20:40.01 spid9s The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run.
2014-10-28 00:20:40.04 spid9s Error: 927, Severity: 14, State: 2.
2014-10-28 00:20:40.04 spid9s Database 'model' cannot be opened. It is in the middle of a restore.
The error and behavior I am seeing make sense, because to start SQL Server we need the master, model and tempdb databases. You might think that msdb is also a system database and would be needed for the SQL engine? Well, you might have been tricked. MSDB is needed for the SQL Server Agent service, not the SQL Server service. So, my master is fine, but model has some problem. Every new database is created using model, including tempdb, so the SQL service is refusing to start. Since the model database is not recovered successfully, SQL Server cannot create the tempdb database, and understandably, the instance of SQL Server does not start.
So I called up Balmukund – these are the perks of having a good friend to rely on. He never says “no”, but he also doesn’t give the complete solution to the problem. He gives a hint and asks me to research further. This time also the magical words were – “use trace flag 3608 and restore model with recovery”.
I followed his advice and performed the below steps.
1. Start SQL Server with trace flag 3608 using net start command
Net Start MSSQL$SQL2014 /T3608
In my case, SQL2014 is the name of the instance. If you have a default instance, then the service name would be MSSQLServer. For a named instance, it is MSSQL$InstanceNameHere.
2. After starting with trace flag 3608, I verified the same from Errorlog as well.
Further, I also found below message in ERRORLOG.
Recovering only master database because traceflag 3608 was specified. This is an informational message only. No user action is required.
3. Connected to SQL Instance using SQLCMD by below command.
SQLCMD -S .\SQL2014 -E
You can read about the parameters of SQLCMD in Books Online here.
“1>” means we are connected to the SQL instance. I then executed the below command (hit Enter at the end of each line):
RESTORE DATABASE Model WITH RECOVERY
4. Once the command is executed successfully, we will come back to “1>” again. We can type exit to come out of SQLCMD
5. Now stop SQL Service
Net Stop MSSQL$SQL2014
6. And start again without trace flag.
Net Start MSSQL$SQL2014
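Collected in one place, the whole sequence from the steps above looks like this (SQL2014 is the instance name from this example; the RESTORE statement and GO are typed at the SQLCMD prompt):

```
Net Start MSSQL$SQL2014 /T3608
SQLCMD -S .\SQL2014 -E
RESTORE DATABASE Model WITH RECOVERY
GO
exit
Net Stop MSSQL$SQL2014
Net Start MSSQL$SQL2014
```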
Now my SQL instance came up happily and I was unblocked. After some time I got a call from Balmukund asking if SQL had started, and I told him that my next blog was ready on the same topic. He finally asked, how did that happen? And my answer was – I ran the wrong command. My command was:
BACKUP DATABASE model TO DISK = 'Full.bak'
GO
BACKUP LOG model TO DISK = 'Log.trn' WITH NORECOVERY
My request to my readers is: please DO NOT run the above command on your SQL instance and restart SQL, or else you will need to follow the above steps on a production server. Learning never stops when working with SQL Server.
During my recent visit to a customer site for a session on backups, they asked me to find the cause of an error while restoring a differential backup. Though this seemed to be a completely admin-related topic and I had gone there for some other session, I took the challenge head-on. These are wonderful ways to explore and learn SQL Server better. The error they showed me was:
Msg 3136, Level 16, State 1, Line 39
This differential backup cannot be restored because the database has not been restored to the correct earlier state.
Msg 3013, Level 16, State 1, Line 39
RESTORE DATABASE is terminating abnormally.
Over there, I explained the details and correlation of the various backup types, i.e. Full, Differential and Transaction Log backups. I will refrain from rehashing them here.
Recently, one of my friends asked: if we have a differential backup, how can we find the full backup on top of which the differential backup can be restored? If we go back to basics, a differential backup has all the changes made in the database since the last full backup was taken.
Let us understand this concept using an example:
CREATE DATABASE SQLAuthority
GO
USE SQLAuthority
GO
CREATE TABLE t1 (i INT)
GO
BACKUP DATABASE SQLAuthority TO DISK = 'E:\temp\F1.bak'
GO
INSERT INTO t1 VALUES (1)
GO
BACKUP DATABASE SQLAuthority TO DISK = 'E:\temp\D1.bak' WITH DIFFERENTIAL
GO
INSERT INTO t1 VALUES (2)
GO
BACKUP DATABASE SQLAuthority TO DISK = 'E:\temp\D2.bak' WITH DIFFERENTIAL
GO
INSERT INTO t1 VALUES (3)
GO
BACKUP DATABASE SQLAuthority TO DISK = 'E:\temp\F2.bak'
GO
INSERT INTO t1 VALUES (4)
GO
BACKUP DATABASE SQLAuthority TO DISK = 'E:\temp\D3.bak' WITH DIFFERENTIAL
Once the script has been run, we have the below backups.
Looking at the backup chain, it is clear that D3 is valid for F2. On the other hand D1 and D2 are valid and restorable on top of F1. Let us drop the database and try to restore D3 on top of F1.
USE MASTER
GO
DROP DATABASE SQLAuthority
GO
RESTORE DATABASE SQLAuthority FROM DISK = 'E:\temp\F1.bak' WITH NORECOVERY
GO
RESTORE DATABASE SQLAuthority FROM DISK = 'E:\temp\D3.bak' WITH NORECOVERY
Here is the output.
Processed 296 pages for database 'SQLAuthority', file 'SQLAuthority' on file 1.
Processed 6 pages for database 'SQLAuthority', file 'SQLAuthority_log' on file 1.
RESTORE DATABASE successfully processed 302 pages in 0.213 seconds (11.076 MB/sec).
Msg 3136, Level 16, State 1, Line 43
This differential backup cannot be restored because the database has not been restored to the correct earlier state.
Msg 3013, Level 16, State 1, Line 43
RESTORE DATABASE is terminating abnormally.
This means that the first restore was successful and the next one raised an error, which means this is not a valid differential backup to restore on top of F1. How would we figure out the correct restore sequence? Well, there are multiple ways.
1. Have a look at the SQL Server ERRORLOG, where we have successful backup messages. Here is what we saw in the ERRORLOG while running the above backups.
As highlighted above, we can find the full backup LSN from the message of the differential backup.
2. Have a look at Standard Reports to find previous backup events.
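3. Query the backup history that SQL Server keeps in msdb. A differential backup records the LSN of its base full backup in the differential_base_lsn column of msdb.dbo.backupset, and that value must match the checkpoint_lsn of the full backup it belongs to. Here is a sketch, run against the instance that took the backups:

```sql
SELECT backup_set_id,
       type,                  -- D = full, I = differential
       backup_finish_date,
       checkpoint_lsn,        -- full backup: the LSN its differentials must match
       differential_base_lsn  -- differential: LSN of its base full backup
FROM msdb.dbo.backupset
WHERE database_name = 'SQLAuthority'
ORDER BY backup_finish_date;
```

Matching the differential_base_lsn of D3 against the checkpoint_lsn of F2 confirms the valid pair.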
Hopefully this blog demystifies the messages in the ERRORLOG and shows the usefulness of the logging capability of SQL Server. Do let me know if you have ever encountered these errors in your environments.
Errors are the best way to learn how SQL Server works, and as DBAs we are bound to see many of them from time to time. One of the primary functions of a DBA includes creating backups and, most importantly, trying to automate them using jobs and maintenance plans.
Here is a typical scenario which a DBA can encounter. One fine day they notice that some backup jobs are failing for no reason. Normal troubleshooting always starts with an error message. Recently, one of my blog readers sent me an email which was worth a look.
I am getting the below error. What is the cause and solution?
Msg 3023, Level 16, State 2, Line 1
Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed.
I pinged him on Twitter and asked for more details. He informed me that they have a job which runs and fails with the error described above. I asked him to get more details about the job and post back. I also asked him to check details from my good friend Balmukund’s blog – a query to find what is running at the same time the job runs. He didn’t come back to me – that means his issue might be resolved.
But that left me curious to find the possible causes of the error Msg 3023, Level 16, State 2. Reading the message again, it looks like two parallel backups would cause the error. So I ran two parallel backup commands for a database which was fairly big in size (100 GB). As soon as the two full backups started, I could see that only one backup was making progress (session id 57) and the other (session id 58) was waiting for the first one to finish.
This means the error is not raised and the backup is waiting. But as soon as I cancelled the query (session 58), I got the below message.
Another possible cause of the error is performing a shrink operation in parallel with a backup operation. (Shrink is NOT something I recommend, but people never listen.)
Here is the text
Msg 3140, Level 16, State 5, Line 1
Could not adjust the space allocation for file 'SQLAuthority'.
Msg 3023, Level 16, State 2, Line 1
Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed.
Depending on who came first, here is the behavior. If a backup is started when either add or remove file operation is in progress, the backup will wait for a timeout period, then fail. If a backup is running and one of these operations is attempted, the operation fails immediately.
Solution: Find the conflicting operation and retry your operation after the conflicting operation has been stopped or has finished.
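One way to spot the conflicting operation is to check what is currently running. Here is a sketch using the sys.dm_exec_requests DMV; a shrink typically reports its command as DbccFilesCompact:

```sql
SELECT session_id, command, percent_complete, wait_type, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%'
   OR command LIKE 'RESTORE%'
   OR command LIKE 'Dbcc%';  -- shrink operations show up as DbccFilesCompact
```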
Learning from error messages is a great way to understand what happens inside SQL Server. Do let me know what you have learnt from error messages in your environments in the recent past.
While doing community work I travel a lot, speak at various conferences and get a chance to meet many new faces. The best part is that I get a chance to hear a variety of issues which people face while using SQL Server. This blog post is an outcome of one such interaction with a DBA from one of the organizations I had to meet.
After the conference a young guy came to me and said – “I found a bug in SQL Server restore”. I was amazed by his confidence and asked him to tell me more before concluding. He said that to restore a database from a backup, you need to have the same database created before the restore. I told him that there was something not right with the test he was performing, because that doesn’t sound correct. I gave him my email address and asked him to contact me so we could find out more. I was eagerly waiting for his mail, as this was on top of my mind and I was restless for two days. Finally the mail landed –
He sent repro steps in an email:
Create new database.
Take a backup of the database.
Detach the database.
Restore from backup taken in step 2. This step would fail.
I followed the same steps
CREATE DATABASE SQLAuth
GO
BACKUP DATABASE SQLAuth TO DISK = 'C:\Temp\SQLAuth.bak' WITH FORMAT
GO
sp_detach_db 'SQLAuth'
GO
RESTORE DATABASE SQLAuth FROM DISK = 'C:\Temp\SQLAuth.bak'
GO
As soon as I ran the last command, the restore, I got the below error:
Msg 3142, Level 16, State 1, Line 7
File "SQLAuth" cannot be restored over the existing "E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\SQLAuth.mdf". Reissue the RESTORE statement using WITH REPLACE to overwrite pre-existing files, or WITH MOVE to identify an alternate location.
Msg 3142, Level 16, State 1, Line 7
File "SQLAuth_log" cannot be restored over the existing "E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\SQLAuth_log.ldf". Reissue the RESTORE statement using WITH REPLACE to overwrite pre-existing files, or WITH MOVE to identify an alternate location.
Msg 3119, Level 16, State 1, Line 7
Problems were identified while planning for the RESTORE statement. Previous messages provide details.
Msg 3013, Level 16, State 1, Line 7
RESTORE DATABASE is terminating abnormally.
The error message is very clear about the cause of the restore failure. Since we detached the database, the mdf and ldf files are still available at the location where the database was created. It’s good that SQL Server does not overwrite the files by itself unless we explicitly tell it to.
If you want to overwrite the files, you can add the WITH REPLACE clause to the command as shown below:
RESTORE DATABASE SQLAuth FROM DISK = 'C:\Temp\SQLAuth.bak' WITH REPLACE
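If you would rather leave the detached files untouched and restore the backup as a copy instead, the WITH MOVE clause relocates the data and log files. The logical file names below come from the error messages above; the database name and target paths are illustrative:

```sql
RESTORE DATABASE SQLAuth_Copy FROM DISK = 'C:\Temp\SQLAuth.bak'
WITH MOVE 'SQLAuth' TO 'C:\Temp\SQLAuth_Copy.mdf',
     MOVE 'SQLAuth_log' TO 'C:\Temp\SQLAuth_Copy_log.ldf'
```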
If we don’t detach the database and instead perform the restore on top of the existing database, like below:
CREATE DATABASE SQLAuth
GO
BACKUP DATABASE SQLAuth TO DISK = 'C:\Temp\SQLAuth.bak' WITH FORMAT
GO
RESTORE DATABASE SQLAuth FROM DISK = 'C:\Temp\SQLAuth.bak'
GO
Then we will get a slightly different message as shown below:
Msg 3159, Level 16, State 1, Line 5
The tail of the log for the database "SQLAuth" has not been backed up. Use BACKUP LOG WITH NORECOVERY to backup the log if it contains work you do not want to lose. Use the WITH REPLACE or WITH STOPAT clause of the RESTORE statement to just overwrite the contents of the log.
Msg 3013, Level 16, State 1, Line 5
RESTORE DATABASE is terminating abnormally.
Again, this is a safety mechanism where a user has to confirm their actions. Recall the situation when you have an existing file in Windows and you paste the same file at the same location – you always get a warning. SQL Server is no different, and I was pleasantly relieved by the fact that this was not a bug inside SQL Server. I am glad the DBA sent me this information, because it made me revalidate and play around with backups in SQL Server.
Once upon a time there was a SharePoint consultant named Pepper. She wanted to learn about SQL Server administration/DBA work. While she was a master SharePoint consultant at her office in Bethel, she grew curious to learn SQL Server. She wanted to be independent when dealing with SQL Server chores like creating and maintaining databases, taking backups, creating jobs and the like. Busy as she is with her current SharePoint work, she would sometimes squeeze in some time to poke around in SQL Server and learn about it. Our coffee discussions would turn into SQL discussions and how SharePoint uses SQL Server as its backend.
Last week, I got a call from Pepper around noon… Now that does not happen often, and I was sure there was something fishy…
Pepper: Tony, is it normal for a database to get into the RESTORING state after a backup? I answered in my usual tone of sarcasm:
Me: depends on what state you wanted it to be in
Pepper: I am in no mood to play ‘take-my-hint’. I need to get back to testing something, please tell me what I did wrong.
Me: OK, OK… hmmm, tell me everything you did so I can help.
Pepper: Let me ping you.
Here is what I learnt from her chat messages. Pepper wanted to test a few things which involved taking a database backup and a transaction log backup. So she scrambled around on the net and got the syntax quickly, without bothering to read much. She did her test runs of code and ran the backup:
BACKUP DATABASE [BKP]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\full_backup.bak'
WITH NOFORMAT, NOINIT, NAME = N'BKP-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
Then, after some more testing from SharePoint, she ran the below to take a T-log backup:
BACKUP LOG [BKP]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\tlog_backup.trn'
WITH NO_TRUNCATE, NOFORMAT, NOINIT, NAME = N'BKP-TLog Backup',
SKIP, NOREWIND, NOUNLOAD, NORECOVERY, STATS = 10
Now this is where I spotted the mistake – the “NORECOVERY” clause. As per the MSDN documentation:
Backs up the tail of the log and leaves the database in the RESTORING state. NORECOVERY is useful when failing over to a secondary database or when saving the tail of the log before a RESTORE operation.
To perform a best-effort log backup that skips log truncation and then take the database into the RESTORING state atomically, use the NO_TRUNCATE and NORECOVERY options together.
After I explained what that clause does, the obvious question popped out.
Pepper: Tony, all that is nice to know. How do I get out of the RESTORING state?
I gave her the following command, which she happily executed, and voila, the DB was back online.
RESTORE DATABASE BKP WITH RECOVERY
Pepper: I owe you a coffee. Starbucks @ 5?
I was thinking in my head – one more midday coffee was going to be spent over a tail log backup discussion. As expected, she asked questions, and I decided to share them here.
Question #1: Can this happen from user interface also?
Answer #1: Yes. Here is the option in SQL Server Management Studio (SSMS)
Question #2: But why would someone do that? Why would someone take production database into restoring state?
Answer #2: These kinds of log backups are called tail log backups. Imagine a situation where a DBA has configured log shipping. As a part of a DR drill, the application team wants to move the production workload to the secondary server. Once the DR drill is complete, the old primary should again take the primary role. If we don’t want to reinitialize the log shipping via a full backup, then here are the steps:
Disable all log shipping jobs (on primary and secondary).
Restore all pending transaction log backups which are not yet applied on the secondary, with norecovery.
Take a tail log backup with norecovery. This would leave the primary database in the restoring state.
Restore this tail log backup on the secondary database using the “with recovery” clause.
This would bring the secondary open for read/write activities and testing.
Once testing completes, take a tail log backup from the current primary (the initial secondary).
Restore that backup with recovery on the current secondary (the initial primary).
Enable all log shipping jobs which were disabled in the first step.
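The tail log portion of these steps can be sketched in T-SQL. The database name and backup path here are hypothetical:

```sql
-- On the current primary: back up the tail of the log and
-- leave the database in the RESTORING state
BACKUP LOG ProdDB TO DISK = 'E:\Backup\ProdDB_tail.trn'
WITH NO_TRUNCATE, NORECOVERY

-- On the secondary: apply the tail log backup and bring the
-- database online for read/write activity
RESTORE LOG ProdDB FROM DISK = 'E:\Backup\ProdDB_tail.trn'
WITH RECOVERY
```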
That was an eye-opener for Pepper and at least, she paid for my coffee.
Backups are extremely important for any DBA. Think of any disaster, and a backup will come to the rescue of users in the adverse situation. Similarly, it is very critical that we keep our backups safe as well. If your backup falls into the hands of bad people, it is quite possible that it will be misused and become a serious security issue. Well, in this blog post we will see a practical scenario showing how we can use backup encryption to improve the security of the backup.
Database Backup Encryption is a brand new and long expected feature that is available now in SQL Server 2014. You can create an encrypted backup file by specifying the encryption algorithm and the encryptor (either a Certificate or Asymmetric Key).
The ability to protect a backup file with a password has existed for many years. If you have used SQL Server for a long time, you might remember the WITH PASSWORD option for the BACKUP command. The option prevented unauthorized access to the backup file.
However, this approach did not provide reliable protection. In that regard, Greg Robidoux noted on MSSQLTips: “Although this does add a level of security if someone really wants to crack the passwords they will find a way, so look for additional ways to secure your data.“
To protect a backup file, SQL Server 2008 introduced the transparent data encryption (TDE) feature; thus, a database had to be transparently encrypted before backup. Then, starting with SQL Server 2012, the PASSWORD and MEDIAPASSWORD parameters are no longer used when creating backups. Even so, data encryption and backup file encryption are two different scenarios.
In case a database is stored locally, there is no need to encrypt it before backup. Fortunately, in SQL Server 2014 these are two independent processes. Along with data encryption, it is possible to encrypt a backup file based on a certificate or an asymmetric key. The supported encryption algorithms are AES 128, AES 192, AES 256 and Triple DES.
To illustrate the above, I will create an encrypted backup of the AdventureWorks database. Also, you can back up directly to Azure, and if needed, you may restore the encrypted backup file on Azure.
To protect the backup file we need to create an encryptor: either a Certificate or Asymmetric Key. Then, we need to pass this encryptor to the target SQL Server to restore the backup. For this, the encryptor must be exported from the source SQL Server and imported to the target SQL Server. There are no problems with the certificate in this regard. It is more complicated with asymmetric keys.
Taking into account that a BACKUP ASYMMETRIC KEY command is not available, and that we cannot simply create a duplicate of an asymmetric key (as we can for a symmetric key), the only approach is to create the asymmetric key outside SQL Server with the sn.exe utility and then import it into SQL Server (CREATE ASYMMETRIC KEY ‘keyname‘ FROM FILE = ‘filename.snk‘). After that we can use this asymmetric key to encrypt the backup file on the source instance. Further, we need to use the same *.snk file to create the asymmetric key on the target instance (and restore the backup file).
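As a sketch of that flow (the key name and file path are hypothetical; sn.exe ships with the Windows SDK):

```sql
-- Key pair generated outside SQL Server, e.g.: sn.exe -k C:\Keys\BackupKey.snk
-- Run on the source instance, then repeat on the target instance
-- using the same .snk file:
CREATE ASYMMETRIC KEY BackupAsymKey
FROM FILE = 'C:\Keys\BackupKey.snk'
```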
In our example we will not use asymmetric keys; we will use a certificate. Moreover, behind the scenes a certificate is a pair of public/private keys.
Let’s create the server certificate and use it to encrypt the backup file.
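A minimal T-SQL sketch of that (the certificate name and paths are illustrative):

```sql
USE master
GO
-- Needed once per instance if a database master key does not exist yet
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!'
GO
CREATE CERTIFICATE BackupCert
WITH SUBJECT = 'Backup encryption certificate'
GO
BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backup\AdventureWorks_encrypted.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert)
GO
```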
The certificate will be protected with the database master key, because we didn’t specify the ENCRYPTION BY statement.
This is exactly what we need. Only certificates protected by the database master key can be used for encryption purposes. Otherwise, if we protect the certificate with a password, for instance ENCRYPTION BY PASSWORD = ‘strongpassword‘, the following error appears while attempting to encrypt the backup file:
“Cannot use certificate ‘CertName’, because its private key is not present or it is not protected by the database master key.”
Encrypted backups (like regular backups) can be created either locally on the hard drive or in Azure Storage.
Instead of writing tons of SQL code, I will use the convenient Back Up wizard in dbForge Studio for SQL Server. The wizard allows you to create a database backup in several clicks.
Step 1: Set up the DB connection and the backup file location.
Step 2: Set up the media set.
Step 3: Select the encryption algorithm and certificate.
If you don’t want to pay extra attention to transferring the backup file to Windows Azure, you can back up directly to Azure.
After the script execution, the blob (with the backup) appears in the required container.
If you have already created a backup with the same name in the same container, you can get the following error: “There is currently a lease on the blob and no lease ID was specified in the request.”
Further, you can restore the backup file on Windows Azure.
Obviously, it is a good practice to encrypt a backup file while transferring it. This, for instance, helps avoid data leaks while transferring backups from one data center to another.