SQL SERVER – What is Biml and How Can it Help Me with SQL Server BI? – Notes from the Field #036

[Note from Pinal]: This is the 36th episode of the Notes from the Field series. One of the common questions I receive every other day is – I have learned BI but I feel that there is no automation in the BI field, what should I do? How do I go beyond learning BI? How can I fill the gap between BIDS and SSDT? If you have been reading this blog, you know that when it is about BI, I always take help from Linchpin People, who are BI experts. I requested Reeves from Linchpin People to help me answer this unique question.

In this episode of the Notes from the Field series database expert Reeves Smith explains what Biml is and how it can help you with SQL Server BI. Read the experience of Reeves in his own words.


With all of the new technologies to learn and implement, I wanted to introduce you to some of the benefits of spending the time to learn Business Intelligence Markup Language (Biml). You can use this XML-based language to create and automate Microsoft SQL Server Business Intelligence (BI) objects. Biml currently supports BI objects like SSIS packages and SSAS dimensions, measures, and cubes.

After a quick overview of Biml, I’ll demonstrate how to use it by providing a walkthrough example.

Overview

You can use Biml to create tables, flat files, SSIS packages, and SSAS models. The language is human readable, unlike the XML within dtsx files and SSAS objects. This readability can help reduce the learning curve compared with most programming languages.

Biml will not teach you SQL Server Integration Services (SSIS) or SQL Server Analysis Services (SSAS), but it can enhance and increase your productivity with these tools in certain use cases. Biml is not a silver bullet that solves every SSIS/SSAS development project problem, but I will discuss some of the scenarios where Biml excels.

The Biml language is supported within two Integrated Development Environments (IDEs): Business Intelligence Development Studio (BIDS) and SQL Server Data Tools (SSDT), with the BIDS Helper add-in (free) or the Mist application from Varigence (a purchased product). Currently, BIDS Helper supports BIDS for SQL Server 2005, 2008, and 2008 R2, and SQL Server Data Tools for SQL Server 2012. This article will focus on the BIDS Helper add-in and its capabilities with SSIS.

Simple Walk Through

This walkthrough will show how to create a Biml file that creates an SSIS package to move data from the AdventureWorks database to a staging database. The following script file will create all of the needed objects. If you have the AdventureWorks database installed and a table called HumanResources.Department, you can skip Listing 1.

CREATE DATABASE [AdventureWorks]
GO
USE [AdventureWorks]
GO
CREATE SCHEMA [HumanResources] AUTHORIZATION [dbo]
GO
CREATE TABLE [HumanResources].[Department]
(
[DepartmentID] [smallint] IDENTITY(1,1) PRIMARY KEY NOT NULL,
[Name] [nvarchar](50) NOT NULL,
[GroupName] [nvarchar](50) NOT NULL,
[ModifiedDate] [datetime] DEFAULT (GETDATE()) NOT NULL
)
ON [PRIMARY]
GO
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Engineering',N'Research and Development')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Tool Design',N'Research and Development')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Sales',N'Sales and Marketing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Marketing',N'Sales and Marketing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Purchasing',N'Inventory Management')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Research and Development',N'Research and Development')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Production',N'Manufacturing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Production Control',N'Manufacturing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Human Resources',N'Executive General and Administration')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Finance',N'Executive General and Administration')
GO

Listing 1 – AdventureWorks Database Objects

CREATE DATABASE [AdventureWorksStage]
GO
USE [AdventureWorksStage]
GO
CREATE SCHEMA [HumanResources] AUTHORIZATION [dbo]
GO
CREATE TABLE [HumanResources].[Department]
(
[DepartmentID] [smallint] PRIMARY KEY NOT NULL,
[Name] [nvarchar](50) NOT NULL,
[GroupName] [nvarchar](50) NOT NULL,
[ModifiedDate] [datetime] NOT NULL
)
ON [PRIMARY]
GO

Listing 2 – AdventureWorksStage Database Objects

To check that you have BIDS Helper installed within your Visual Studio environment, select the Tools menu and choose Options. The BIDS Helper option will display in the left pane if it is installed, as you can see in Figure 1. If you need help installing the BIDS Helper add-in, see Step 2 in the Stairway to Biml: Biml Basics – Level 2 (link).

Figure 1 – Options windows with BIDS Helper installed

After verifying that BIDS Helper is installed, create a new SSIS project: from the File menu, select New, then Project, as Figure 2 shows.

Figure 2 – New Integration Services Project

Due to a Visual Studio limitation, right-clicking on the Miscellaneous folder will not bring up the Biml context menu shown in Figure 3. Instead, in the Project window, select the project folder or the SSIS Packages folder, right-click, and select Add New Biml File. A new Biml file (BimlScript.biml) is added to the Miscellaneous folder.

Figure 3 – Biml Context Menu

Replace all of the code within the Biml File with the code from Listing 3.

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Connections>
    <OleDbConnection Name="AdventureWorks" ConnectionString="Provider=SQLNCLI10;Integrated Security=SSPI;Initial Catalog=AdventureWorks;Data Source=localhost;" />
    <OleDbConnection Name="AdventureWorksStage" ConnectionString="Provider=SQLNCLI10;Integrated Security=SSPI;Initial Catalog=AdventureWorksStage;Data Source=localhost;" />
  </Connections>
  <Packages>
    <Package Name="HumanResources_Department_Biml" ConstraintMode="Linear">
      <Tasks>
        <Dataflow Name="Data Flow Task">
          <Transformations>
            <OleDbSource Name="OLE DB Source" ConnectionName="AdventureWorks">
              <ExternalTableInput Table="[HumanResources].[Department]" />
            </OleDbSource>
            <OleDbDestination Name="OLE DB Destination" ConnectionName="AdventureWorksStage">
              <InputPath OutputPathName="OLE DB Source.Output" />
              <ExternalTableOutput Table="[HumanResources].[Department]" />
            </OleDbDestination>
          </Transformations>
        </Dataflow>
      </Tasks>
    </Package>
  </Packages>
</Biml>

Listing 3 – Sample Biml code

Note: The connection information in the Biml file must point to the SQL Server instances that contain the AdventureWorks and AdventureWorksStage databases. If you are using another version of SQL Server, the provider information (Provider=SQLNCLI10;) might need to be changed.
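For example, if your instances are running SQL Server 2012 with SQL Server Native Client 11.0 installed, the connection would use the newer provider. A sketch (adjust the Data Source for your environment):

<OleDbConnection Name="AdventureWorks" ConnectionString="Provider=SQLNCLI11;Integrated Security=SSPI;Initial Catalog=AdventureWorks;Data Source=localhost;" />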

After replacing the code in the Biml file with the code in Listing 3, right-click the file and choose the Check Biml for Errors menu option shown in Figure 4.

Figure 4 – Check Biml for Errors

If everything is configured correctly, you will receive the message shown in Figure 5.

Figure 5 – No errors

If you receive any errors, you will need to correct them before selecting the Generate SSIS Package menu option.

Select the Generate SSIS Package menu option to create a new Integration Services package. A working package named HumanResources_Department_Biml, as defined by the Package Name attribute within the Biml file, will be created within the Packages folder.

Additional Walk-Through Examples

To add an Execute SQL task that truncates the stage table prior to the load, add the following code after the <Tasks> node and before the <Dataflow Name="Data Flow Task"> node:

<ExecuteSQL Name="Truncate Table" ConnectionName="AdventureWorksStage">
  <DirectInput>TRUNCATE TABLE [HumanResources].[Department]</DirectInput>
</ExecuteSQL>
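Because the package's ConstraintMode is set to Linear, Biml chains the tasks with precedence constraints in document order, so the truncate always runs before the load. The resulting <Tasks> section should look like this (data flow contents collapsed):

<Tasks>
  <ExecuteSQL Name="Truncate Table" ConnectionName="AdventureWorksStage">
    <DirectInput>TRUNCATE TABLE [HumanResources].[Department]</DirectInput>
  </ExecuteSQL>
  <Dataflow Name="Data Flow Task">
    ...
  </Dataflow>
</Tasks>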

At this point in the walkthrough Biml has not saved you much effort, but what if you wanted to create additional SSIS packages for all of the HumanResources tables within the AdventureWorks database? How much development effort would it take to update the Biml file to point to each of the tables in the HumanResources schema and select Generate SSIS Package? Would those packages be consistent with the packages built before them? Would the testing already done on the first package benefit the newly created packages?

At this point, try copying the Biml code into a new Biml file and modifying it to move data from other tables. All of the table objects will need to exist, so you might have to add some staging tables to your stage database.
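For instance, assuming the full AdventureWorks database with its HumanResources.Shift table and a matching staging table that you have created, the only edits needed in the copied file are the package Name and the two table references inside the data flow. A minimal sketch of the changed transformations:

<OleDbSource Name="OLE DB Source" ConnectionName="AdventureWorks">
  <ExternalTableInput Table="[HumanResources].[Shift]" />
</OleDbSource>
<OleDbDestination Name="OLE DB Destination" ConnectionName="AdventureWorksStage">
  <InputPath OutputPathName="OLE DB Source.Output" />
  <ExternalTableOutput Table="[HumanResources].[Shift]" />
</OleDbDestination>

Remember to give the copied package a new Name as well, so the generated .dtsx files do not collide.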

Practical Use

The first question that comes up after starting a discussion on Biml is, "Why would I use Biml? I can create all of the SSIS packages I need in the current IDE."

One use case for Biml is the automation of similar design patterns like adding more packages to the walkthrough above. SSIS packages typically move data from a source to a destination. This pattern is repeated for each source and destination and can produce multiple SSIS packages. With some SSIS design patterns, the only items that change are the source table name and the destination table name.  What would it be like to spend the time to properly design and test a package to move data from one table to another and then duplicate that package with ease and only change the relevant information?  Biml enables this type of development.  With Biml, you are able to focus the development effort on design and remove some of the repetitive work.

As the variation between packages increases, Biml can become a less viable option. With Biml Script (a scripting language within Biml), however, you are able to create designs that vary and adapt to changing Extract-Transform-Load (ETL) requirements. Biml Script enables you to programmatically change items within the file without using cut and paste. (Biml Script was not demonstrated within this article.)

Looking Ahead

Biml is much more robust than what was demonstrated in this article, but I wanted to start the discussion with a simple example to get you familiar with Biml. Adding Biml Script can create programmatic solutions that automate SSIS package development. In a future article I will demonstrate how to add Biml Script to the existing code to increase package automation.

To follow along with the upcoming articles, install the AdventureWorks database from the following URL:. This database will also give you additional options to test and experiment with.

Side Note: The Mist IDE is able to extend Biml even further by updating multiple SSIS packages. This enables a maintenance option that has never been available within the SSIS development environment. It's really cool, but something I will have to discuss in a future article.

Stay Tuned.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Round Up From Notes from the Field of Blog Posts of Tim Radney

We have completed 35 episodes of the Notes from the Field series. I have been blogging for over 8 years, and I have blogged about pretty much everything SQL and lots of other concepts. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinion. Following that thought process, I started this new series. In this series, we have notes from various experts in the database world.

My friends at Linchpin People have graciously decided to support me in my new initiative. Linchpin People are database coaches and wellness experts for a data driven world. This series has been excellent: from the very first episode of the Notes from the Field series, we have received tremendous response. We are also very fortunate that database expert Tim Radney (partner at Linchpin People) has shared many insights about the daily issues DBAs and developers face in their careers. By the way, congratulations to Tim on his Microsoft MVP award.

Tim has been amazing in supporting this series. Here is the time when we can help out Tim. Tim would like to know which of his Notes from the Field posts is your favorite.

Gift from Pinal

Here is a small contest. I have my favorite blog post from the following list. If your favorite post and my favorite choice are the same, I will give you a USD 50 Amazon Card.

Remember, you need to tell me your favorite post and the reason why you liked it! The contest ends at Midnight GMT on July 16.

Please leave your answer in the comments area.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Automate Database Operations for Success – Notes from the Field #035

[Note from Pinal]: This is the 35th episode of the Notes from the Field series. Doing things faster and easier is always our goal. We all want to do the things that generate the maximum return on investment with the least effort. For database administrators, this is quite often a catch-22 situation.

In this episode of the Notes from the Field series database expert Brian Kelley explains how to automate various database administrator tasks for the success of the business and our efforts. Read the experience of Brian in his own words.


In the Linchpin People mindset, it’s not about how busy you are, but how valuable you are. You don’t increase your value to your organization by your busyness. You do so by accomplishing the work. There are two parts to this:

  • Accomplish more work.
  • Accomplish more important work.

Initially, if your path follows that of most people, you'll be asked to do more. This is your opportunity to accomplish more work. As you succeed with the additional work that you receive, you will likely be given opportunities to take on more important work. If this never happens, you're probably at the wrong organization. Let's assume you are at a good one and you're given more important tasks. Obviously, if you succeed at the more important work, you'll continue to be given more opportunities. And you'll likely be given tasks and projects that are more and more important to your organization. This is how you become a linchpin.

So how do you complete more work? One part of the answer is automation. Since we're IT professionals, automation should be near and dear to us. I recently wrote about being a "lazy" DBA. I used the word lazy to indicate that there are some manual tasks we don't want to repeat. A "lazy" engineer or IT pro tries to automate these tasks in order to reduce the time spent on them. That frees up the IT pro to have more time for the more important work.

Let’s look at some things that we should automate as DB Pros:

Database Administration:

Build scripts that can do the following:

  • Check disk space on servers.
  • Check database available space.
  • Check security on key servers and databases.
  • Verify backups were taken properly.
  • Perform test restores of key backups.
  • Parse the SQL error log on each SQL Server for important information (failed logins, use of sp_configure, etc.).
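For the first item above (checking disk space on servers), a minimal sketch using the sys.dm_os_volume_stats function (available from SQL Server 2008 R2 SP1 onward) could look like this:

-- Sketch: report total and free space for each volume hosting database files.
SELECT DISTINCT
       vs.volume_mount_point,
       vs.total_bytes / 1048576 AS TotalMB,
       vs.available_bytes / 1048576 AS AvailableMB
FROM   sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;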

For instance, if I want to check backups to ensure a full backup has run on every database within the last 24 hours, I might automate running the following query to report back the databases where I do not have the proper backup:

SELECT D.name,
       BS.database_name,
       ISNULL(CONVERT(VARCHAR, BS.LastBackup, 106), 'No recent backup') AS LastBackup
FROM   master.sys.databases AS D
       LEFT JOIN (SELECT database_name,
                         MAX(backup_finish_date) AS LastBackup
                  FROM   msdb.dbo.backupset
                  WHERE  type = 'D'
                  GROUP  BY database_name) AS BS
              ON D.name = BS.database_name
WHERE  BS.database_name IS NULL
       OR BS.LastBackup < DATEADD(HOUR, -24, GETDATE())
ORDER  BY D.name;

We should also use automation, such as Policy-Based Management or custom scripts, to enforce settings. Some examples that we should consider:

  • database ownership
  • recovery models
  • membership in key roles (sysadmin, securityadmin, db_owner, etc.)
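For the first of these, a minimal sketch that flags databases not owned by the expected login (sa is an assumption here; substitute your organization's standard owner):

-- Sketch: flag databases whose owner differs from the expected login.
SELECT name,
       SUSER_SNAME(owner_sid) AS owner_name
FROM   sys.databases
WHERE  SUSER_SNAME(owner_sid) <> 'sa';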

And here, if I knew every user database on a given server should be in full recovery mode, I could ensure it by scheduling the following script:

DECLARE cursdbs CURSOR FAST_FORWARD FOR
SELECT name
FROM   sys.databases
WHERE  state_desc = 'ONLINE'
       AND recovery_model_desc <> 'FULL'
       AND name NOT IN ('master', 'tempdb', 'msdb', 'model');
DECLARE @DBName SYSNAME;
DECLARE @SQL NVARCHAR(MAX);
OPEN cursdbs;
FETCH NEXT FROM cursdbs INTO @DBName;
WHILE (@@FETCH_STATUS = 0)
BEGIN
    PRINT 'ALTERING DATABASE: ' + @DBName;
    SET @SQL = 'ALTER DATABASE [' + @DBName + '] SET RECOVERY FULL;';
    EXEC (@SQL);
    FETCH NEXT FROM cursdbs INTO @DBName;
END
CLOSE cursdbs;
DEALLOCATE cursdbs;

You do want to review the output. After all, if you just switched a database to the full recovery model, you want to ensure you restart the log backup chain with a full or differential database backup.
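For instance, restarting the chain can be as simple as the following (the database name and backup path are illustrative):

-- Sketch: take a full backup to re-establish the log backup chain
-- after switching a database to the full recovery model.
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_Full.bak'
WITH INIT, CHECKSUM;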

Database Development:

Encourage continuous integration methods to include database code. This will require tests to validate no new code “breaks the build.” Make sure that these builds come directly from source control.

If you are doing tests that require restores of databases and the application of scripts, write the automation to do these tasks. It makes the tasks repeatable, it reduces the possibility of error, and it frees you up so you don’t have to manually run each step.

With that said, write scripts for anything you will have to repeat when developing a solution. For instance, you might need scripts to:

  • Add, delete, or change data.
  • Change security on a database or server.
  • Encrypt / decrypt setup data.
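As a simple illustration of the first of these, making such scripts repeatable (idempotent) means they are safe to run more than once; the table and values below are hypothetical:

-- Sketch: idempotent data change; running it twice inserts only once.
IF NOT EXISTS (SELECT 1 FROM dbo.AppSetting WHERE SettingName = N'FeatureX')
BEGIN
    INSERT INTO dbo.AppSetting (SettingName, SettingValue)
    VALUES (N'FeatureX', N'Off');
END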

Can you automate too much?

Yes, you can. Note that in both cases I did include some reporting. If you build automation where you’re not doing any checking, that’s wrong. Automation eliminates you from having to do tedious steps. It doesn’t remove your responsibility/accountability. If you don’t have anything to check, you don’t actually know if the work was completed successfully. Don’t assume. Check.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSIS Component Error Outputs – Notes from the Field #034

[Notes from Pinal]: Errors confuse people and stop operations. Developers often search online for the error itself rather than for best practices. Lots of people think that SSIS is all about arranging various operations together in one logical flow. The understanding is absolutely correct, but the implementation is not as easy as it seems.

Linchpin People are database coaches and wellness experts for a data driven world. In this 34th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about understanding SSIS component errors.


When working in the SSIS data flow, you’ll notice that many sources and transformations and some destinations have a built-in output to handle errors.  The error output allows the SSIS developer to create a separate path through which error rows can be directed.  In this post, we’ll briefly discuss the essentials and design patterns for using error outputs on data sources and transformations.

Overview

Error outputs are a secondary path through which the data flow can send rows that do not conform to data type, length, or transformation standards defined by the ETL developer.  That’s a lengthy way to say that it’s where you can send your junk data.  In the SSIS designer, clicking on a source or transformation will often show not one but two possible outputs: the primary output (the “good” data, indicated by the blue line) and the error output (identified by the red line).  As shown on the flat file source below, when selecting a source or transformation, those that have an available error output will appear with both output connectors ready for selection.

Using this design will allow you to handle errors such as:

  • Incorrectly typed data
  • String data longer than its specification
  • Invalid transformation logic
  • Relational constraint errors at the destination

Error outputs provide a great way to handle two things at once: providing a secondary path through which bad data can be either triaged or cleaned up, and preventing the package from failing based on just a few bad rows of data.

When connecting an error output to a downstream component, you'll be prompted with a configuration window in which you can select the error output options.  The most important thing to remember here is that the default behavior is to fail the component upon any error, even when you connect the source or transformation's error output.  The default options are shown below.

As mentioned, you must explicitly set the behavior to Redirect row for the column(s) that will be handled by the error output.  You can do this on a column-by-column basis, if you want to handle error conditions on that level, but in most cases I see that all columns are set to either fail or redirect as a group.  In the screenshot below, I’ve set all columns to redirect upon error or truncation.

Practical use

So what happens when we connect a component’s error output?  In most cases, any rows in error would be redirected to that output, which we can use as a secondary path for cleanup or triage.  Assuming we change the error and truncation behavior to Redirect row, errors that are captured by the error output would not result in a failure of that source.

There are several design patterns surrounding the use of error outputs on sources or transformations.  Among the most common:

Cleanse inline.  If possible, the data can be sanitized inline using other SSIS components.  If the ETL can be built to handle most of the known data deficiencies, this is usually the cleanest way to handle error rows.  Very often, data quality tools such as SQL Server Data Quality Services can be useful for inline cleansing.

Send to triage.  If the package does not have cleansing logic, or there are rows that still cannot be cleansed using that logic, the bad data can be sent to triage for manual review or cleansing.  Though this pattern does require manual intervention, it also allows us to audit and review the reasons for those failures by preserving the data.

Ignore. It is possible to redirect error rows into the bit bucket, resulting in a loss of data.  In rare cases, this is acceptable; however, in most cases, there should be some measure of data capture to improve the quality of the ETL process.

Which is the best pattern to use?  As always, it depends.  Factors such as the criticality of the data, the types of errors expected, the frequency of the ETL process, and many others must be used to decide how, if at all, to use the error paths in SSIS.

Caveats

As with any ETL function, there are a few caveats and cautions around using error outputs.  First of all, I don’t recommend adding an error output path to every component that supports it simply because it’s available.  Redirecting error rows is not always appropriate – sometimes it makes sense to fail the package rather than trying to programmatically clean up or triage error data.

Additionally, you'll need to familiarize yourself with the different types of error outputs generated by each component.  For example, if you use the error output of the flat file source, it will not break apart the data into individual columns (even for those elements not in error for that row).  Because the error output is a relatively low-level function, what you get is a single string with all of the data in it; if you have processes that consume this data downstream of the error output, you'll need to do some transformation on the errored rows before you can do any cleanup.

Finally, it is important to note that not all components support error outputs.  Because of the differences in the types of operations performed by each source, destination, or transformation component, some of them do not include an error output path.

Conclusion

In this post, we’ve briefly covered the error output facilities for the SSIS data flow, along with a few design patterns for implementing these in a package.  In a future post, we’ll talk more about implementing an end-to-end solution for handling error rows in an SSIS data flow.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Keeping MSDB System Database Lean and Fit – Notes from the Field #033

[Notes from Pinal]: I have been working with SQL Server for almost 10 years. The most ignored databases on any database server are the system databases. We hardly pay attention to them. Just for the sake of it, run a query against your master database and see how many user tables already exist there. I am sure that unless you have strict policies in place, there will be a handful of tables there. Similarly, ask a DBA about the exact importance of the msdb database, and you will find that many DBAs do not know. I often talk about this with my friend Tim. I asked him if he could help us out with tricks to keep the msdb system database at its optimal settings.

Linchpin People are database coaches and wellness experts for a data driven world. In this 33rd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words how to keep the msdb system database at optimal settings.


When doing a review of a client's database server, one of the checks I perform is to see how much backup history is being kept within msdb. SQL Server does a fantastic job logging all the backups and restores we perform, and depending on how your routine is configured, you are probably only keeping enough backup files to meet your SLAs. What most DBAs or admins don't think about is all that backup and restore history being retained within msdb.

The tables within msdb that hold this history include:

  • restorefile
  • restorefilegroup
  • restorehistory
  • backupfile
  • backupfilegroup
  • backupmediafamily
  • backupmediaset
  • backupset
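Before purging anything, a quick way to gauge how much history you are keeping is to count the rows and find the oldest entry in backupset (a minimal sketch):

SELECT COUNT(*) AS BackupSetRows,
       MIN(backup_start_date) AS OldestHistory
FROM   msdb.dbo.backupset;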

Microsoft SQL Server has a system stored procedure that helps us maintain this history; however, it is not scheduled to run by default. We have to create a process to run the stored procedure on a routine basis. You will need to know how much history is enough to keep for your organization. I have seen this range from 14 days to 90 days.

The stored procedure is sp_delete_backuphistory within the msdb database.

This stored procedure takes a single date parameter. This date is the cutoff date for the history to keep; any history older than the date provided will be deleted.

Over the years I have found MSDB ranging from several megabytes to nearly 20 GB. I recall one client who had a database server with hundreds of databases being log shipped to a secondary with 15-minute log backups, and the system had been in place for many years. MSDB had grown to almost 20 GB. The client complained that if they ever tried to restore a database within the GUI, it would lock up SSMS.

A simple script to purge this history, keeping only the last 30 days, is below.

USE msdb
GO
DECLARE @CutOffDate DATETIME
SET @CutOffDate = CONVERT(VARCHAR(10), DATEADD(dd, -30, GETDATE()), 101)
EXEC sp_delete_backuphistory @CutOffDate
GO

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

[Notes from Pinal]: When it is about High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when users have to select the most optimal solution for their organization, they are often confused. Most people even know the salient features of the various options, but when they have to figure out one single option to use, they are often not sure which to choose. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand.

Linchpin People are database coaches and wellness experts for a data driven world. In this 32nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability options for your SQL Server.


Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible.  To meet these demands we have to design a solution to provide High Availability (HA). Microsoft SQL Server, depending on your edition, provides you with several options.  These could be database mirroring, log shipping, failover clusters, availability groups, or replication.

Each possible solution comes with pros and cons.  No single solution fits all scenarios, so understanding which solution meets which need is important.  As with anything IT related, you need to fully understand your requirements before trying to solve the problem.  When it comes to building an HA solution, you need to understand which risk your organization needs to mitigate the most.

I have found that most are concerned about hardware failure and OS failures. Other common concerns are data corruption or storage issues.  For data corruption or storage issues you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication or availability groups with a secondary replica.  Failover clustering and virtualization with shared storage do not provide redundancy of the data.

I recently created a chart outlining some pros and cons of each of the technologies that I posted on my blog.

I like to use this chart to help illustrate how each technology provides a certain number of benefits.  Each of these solutions carries with it some level of cost and complexity.  As database professionals, we should all be familiar with these technologies so we can make the best possible choice for our organization.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

[Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject, and not everyone knows much about it. The matter of fact is that there is very little information available on this subject online. This is why, when a very common question related to AlwaysOn comes up, people get confused.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers, related to planned and unplanned Availability Group failovers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


Whenever a disaster occurs, it will be a stressful scenario regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology or the first time you are going through a disaster without a proper run book. Today, we're going to help you establish a run book for creating a planned failover with availability groups.

To make today's session simple, we're going to have two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover.  We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two-replica Availability Group where each replica is in a different location. Therefore, we will be covering asynchronous commit mode (no automatic failover). The following is a breakdown of the availability group utilized today.

Seeing the following screen might be scary the first time you come across an unplanned failover.  It looks like the test database used in this Availability Group is not functional, and currently it isn't. The database status is Not Synchronizing, which makes sense because the primary replica went down, so it couldn't synchronize. With that said, we can still fail over and make the database functional while we troubleshoot why we lost the primary replica.

To start, we are going to right-click on the availability group that needs to be restarted and select Failover.

This will bring up the following wizard, which will walk you through several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS).

You are going to see warning messages simply because we are in asynchronous commit mode and cannot guarantee no data loss when we fail over.

Just in case you missed it, you get another screen warning you about potential data loss because we are in asynchronous mode.

Next we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case, we only have two replicas so this is trivial.

In order to fail over, you must connect to the replica that will become the primary.  The following screen shows that the connection has been made successfully.

Next, you will see the final summary screen. Once again, this reminds you that the failover action can cause data loss, as we're using asynchronous commit mode due to the distance between the instances used for disaster recovery.

Finally, once the failover is completed you will see the following screen.

If you have followed along this far, you might be wondering what T-SQL scripts are generated by clicking through all the sections of the wizard. If you have used database mirroring in the past, you might be surprised.  It's not too different, which makes sense because the data is being replicated via SQL Server endpoints, just like good old database mirroring.

Now we're going to take a look at how to do a failover with just T-SQL. First, we'll need to open a new query window and run our query in SQLCMD mode. Just in case you haven't used SQLCMD mode before, we will show you how to enable it below.

Now you can run the following statement. Notice that we connect to the replica we want to become the primary after the failover and specify forced failover to allow data loss. We can use the same script to fail back when our primary instance comes back online.

-- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
:Connect SQL2012PROD1
ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
GO
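One follow-up step worth adding to your run book: after a forced failover, data movement for the databases on the secondary replica is suspended and must be resumed per database once you are ready. A sketch (the server and database names below are illustrative):

-- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
-- Sketch: resume suspended data movement on a secondary replica
-- after a forced failover (names are illustrative).
:Connect SQL2012PROD2
ALTER DATABASE [TestDB] SET HADR RESUME;
GO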

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)