SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

[Note from Pinal]: SSIS is a very well explored subject; however, there are so many interesting elements that every time we read about it, we learn something new. One such concept is the parent-child ETL architecture in SSIS.

Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion on how to understand SSIS parameters in parent-child ETL architectures.


In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures.

A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern.  Simply put, this is a pattern in which packages are executed by other packages.  An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages.  For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic.

When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package.  In older versions of SSIS, this process was possible but not necessarily simple.  When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages.  Package configurations, while effective, were not the easiest tool to work with.  Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose.

In the example I will use for this demonstration, I’ll create two packages: one intended for use as a child package, and the other configured to execute said child package.  In the parent package I’m going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop.  The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package.

Configuring the Child and Parent Packages

When you create a new package, you’ll see the Parameters tab at the package level.  Clicking over to that tab allows you to add, edit, or delete package parameters.

As shown above, the sample package has two parameters.  Note that I’ve set the name, data type, and default value for each of these.  Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution.  In this example, I have one parameter that is required, and the other is not.

Let’s shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package.  Using the execute package task, you can easily map variable values in the parent package to parameters in the child package.


The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package.  Note that there is no value mapped to the child package parameter named pOutputFolder.  Since this parameter has the Required property set to False, we don’t have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child package.

The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it.  I’m using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here).  For each iteration of the loop, a different client ID value will be passed into the child package parameter.
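To give a concrete (and purely hypothetical) sketch of that setup, the Execute SQL Task that fills the object variable could run a query along these lines, with the full result set stored in the object variable and iterated by the Foreach ADO Enumerator; the Invoices table and ClientID column are illustrative, not from the original demo:

-- Hypothetical query used to populate the object variable for the Foreach Loop
SELECT DISTINCT ClientID
FROM dbo.Invoices
ORDER BY ClientID;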


The final step is to configure the child package to actually do something meaningful with the parameter values passed into it.  In this case, I’ve modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client’s data.  Additionally, I’ll use both the pClientID and pOutputFolder parameters to dynamically build the output filename.

As shown, the pClientID is used in the WHERE clause, so we only get the current client’s invoices for each iteration of the loop.
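As a minimal sketch of what that source query could look like (the dbo.Invoices table and its columns are hypothetical), the ? placeholder below would be mapped to the pClientID package parameter on the OLE DB source’s Parameters page:

-- Hypothetical parameterized source query in the child package
SELECT InvoiceID, ClientID, InvoiceDate, TotalAmount
FROM dbo.Invoices
WHERE ClientID = ?;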

For the flat file connection, I’m setting the ConnectionString property using an expression that references both of the parameters for this package, as shown above.

Parting Thoughts

There are many uses for package parameters beyond a simple parent-child design pattern.  For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters.  Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL.
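For instance, here is a rough sketch of supplying a package parameter value through T-SQL when the project is deployed to the SSIS catalog; the folder, project, package, and parameter names are placeholders, not from the demo above:

-- Create an execution for the deployed package (names are hypothetical)
DECLARE @execution_id BIGINT;
EXEC SSISDB.catalog.create_execution
     @folder_name = N'Demo',
     @project_name = N'ParentChildDemo',
     @package_name = N'ChildPackage.dtsx',
     @execution_id = @execution_id OUTPUT;

-- object_type 30 = package parameter (20 would be a project parameter)
EXEC SSISDB.catalog.set_execution_parameter_value
     @execution_id = @execution_id,
     @object_type = 30,
     @parameter_name = N'pClientID',
     @parameter_value = 42;

EXEC SSISDB.catalog.start_execution @execution_id = @execution_id;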

You can also have project parameters as well as package parameters.  Project parameters work in much the same way as package parameters, but the parameters apply to all packages in a project, not just a single package.

Conclusion

Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list.  Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)


Developer’s Life – Disaster Lessons – Notes from the Field #039

[Note from Pinal]: This is the 39th episode of the Notes from the Field series. What is the best solution you have when you encounter a disaster in your organization? Many of you would answer that in this scenario you would have a standby machine or an alternative that you can plug in. Now let me ask a second question: what would you do if you, as an individual, face a disaster?

In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our careers, one that is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times.


Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication (http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/).

Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People – I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query.

Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters. Whether we caused them or someone else did. If you are interested in learning about some of my observations in these lessons you can see more where I talk about lessons from disasters on my blog.

For now, though, onto how I almost got my vehicle hit by a train…

The Train Crash That Almost Was….

My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children – so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the building. On the way there, there is a very small stop-sign-controlled railroad crossing. Only two small freight trains a day pass through there – actually the same train, making a journey south and then back north. That’s it. This is a small rural road, and there is barely ever a second car driving in the neighborhood when I am there. The stop sign is pretty much there only for the train crossing.

When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I rolled through it. Sometimes I’d do a quick look and come to an “almost” stop there, but keep on going. I let my impatience and complacency take over, because most of the time I was going there long after the train was done for the day or in between its runs. This habit became pretty well established after a couple of years of driving the route, and the behavior was reinforced a bit by the success ratio. I saw others from the neighborhood doing it as well whenever I happened to be there at the same time as another car.

Well. You already know where this ends up from the title and the backstory. A few months ago I came to that little crossing, and I started the normal routine. I’d pretty much stopped looking, in some respects, because of the pattern I’d gotten into.  For some reason I looked, heard and saw the train slowly approaching, slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but once I started breathing again I sat there thinking about lessons for IT professionals from the situation as I watched the cars loaded with sand and propane slowly labor down the tracks…

Here are Those Lessons…

  • It’s easy to get stuck in a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed – it’s really hard to break that. Make sure you are setting the right routines and habits TODAY. What almost dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that.
  • Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later.
  • Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today?

In the IT world – trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s the time you are most likely to get stuck in a bad situation. Don’t let yourself get complacent, and don’t let your team get complacent. Avoiding that complacency leads to being proactive. And a proactive environment spends less money on consultants troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving things for your customers.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Andy Defines Basic RDBMS: Isolation in Processes – Notes from the Field #038

[Note from Pinal]: This is a new episode of the Notes from the Field series. I usually give an introductory note; however, this time there is no need for one. This note is from Andy, and as we all know, he is an amazing person when it comes to understanding the fundamentals. He has written this blog post in such an interesting way that you must read it to understand the very basic fundamentals of isolation.


When I think about SQL Server isolation in processes, it reminds me of eggs. Our family raises chickens for their eggs. Fresh eggs are very tasty, but there’s always the risk of a bad egg.

In the image above, I am preparing to scramble five eggs. I have cracked and opened five eggs (you can tell by the number of eggshells), but only four egg yolks are shown. “Why are there only four yolks, Andy?” I’m glad you asked.

My process for opening the eggs involves first dropping the contents of the egg into the mug, examining the contents, then – if satisfactory – adding them to the bowl for mixing. You don’t have to do this to make five scrambled eggs; you can crack the eggs right over the bowl.

But what happens when you open a bad egg? You risk ruining the entire batch of mostly good eggs. If you crack the eggs over the bowl, you have to pick five good eggs in a row to get a batch that’s ready to scramble. You may get lucky and it may only require five eggs. But the risk of a bad egg is ever present. You could get one good egg, followed by a bad egg. Now you have to throw both the good and bad egg out and begin again. The next time you may get three good eggs and then one bad egg. Now you’ve wasted three good eggs.

Isolating the eggs is a good practice. If I first empty the egg into a mug, I have constrained the process so that I only risk one egg at a time. In so doing, I have successfully mitigated risk to the least possible unit. I will only throw out the bad eggs without risking contamination of the good eggs in the bowl.

Isolation is generically defined as, “the process of separating somebody or something from others, or the fact of being alone and separated from others.”

In database terms, isolation is one of the four defining properties (i.e., atomic, consistent, isolated, durable—remember these by using the acronym ACID) of a Relational Database Management System (RDBMS). Similar to isolating bad eggs from the good, RDBMS isolation keeps individual database transactions from intermingling with each other during execution. It’s not that other transactions are bad, we just want to keep them separated so that data from one transaction doesn’t corrupt data from another transaction.

Are isolated transactions completely unaffected by each other? No, unlike the example of completely isolating a bad egg from the mix of good eggs, RDBMS isolation doesn’t prevent one transaction from influencing or impeding another transaction. An example of influence is resource contention; an example of impedance is locking. Isolation simply guarantees the results of the transaction will not be affected by concurrently executing transactions.
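As a small, made-up illustration of that impedance (the dbo.Accounts table exists only for this sketch), a second session reading a row under the default READ COMMITTED isolation level waits until the first session’s open transaction completes:

-- Session 1: modify a row and leave the transaction open
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

-- Session 2: this read is blocked until session 1 commits or rolls back
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;

-- Session 1: releasing the lock lets session 2 continue
COMMIT TRANSACTION;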

You can learn more about these properties from this awesome post by my friend, Pinal Dave: SQL Server – ACID (Atomicity, Consistency, Isolation, Durability).

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Beginning Contained Databases – Notes from the Field #037

[Note from Pinal]: This is a new episode of the Notes from the Field series. Logins and users are a very confusing concept. Just yesterday I wrote about the difference between a login and a user. In the latest versions of SQL Server we can also have a user without a login. This concept is not easy to understand and needs a clear example.

In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face related to logins and users. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.


One of the downfalls of database mirroring is synchronizing database users with logins. If you are using SQL Server accounts instead of Windows domain accounts, it can be a tougher challenge to ensure that the unique identifiers (SIDs) match up. In SQL Server 2012 and up we can leverage contained databases to mitigate this problem with Availability Groups. Contained database users are stored inside the database instead of leveraging the instance logins. This allows the database itself to manage authentication and authorization for the database. Therefore, you don’t need to fix orphaned users when you fail over.
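As a rough sketch of the problem contained users avoid, a query along these lines lists the login-based SQL users in the current database whose SIDs no longer match a server login (the filters may need adjusting for your environment):

-- Hypothetical orphaned-user check: SQL users whose SID has no matching login
SELECT dp.name AS database_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
AND dp.type = 'S'                 -- SQL users
AND dp.authentication_type = 1;   -- instance (login-based) authentication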

Here is how you can enable and leverage Contained Databases.

Step One: Enable Contained Database on the instances that will host the database. In an Availability Group this would be all the replicas for the Availability Group.

This can be configured in SSMS or via T-SQL as shown below.

-- Enable advanced options so the contained database setting is visible
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
-- Allow contained database authentication at the instance level
EXEC sp_configure 'contained database authentication', 1
GO
RECONFIGURE
GO

Step Two: Enable Contained Database on the primary replica. This is also known as the read/write replica.

This can be done via SSMS or T-SQL

USE [master]
GO
ALTER DATABASE [AdventureWorks2012] SET CONTAINMENT = PARTIAL WITH NO_WAIT
GO

Step Three: Create a Contained User
In order to have a contained user, we’re going to create a new user inside the database without a login. This makes the user a contained user. You will also need to add the needed security for your account; for the purpose of this weekly tip we will skip this part. Creating the contained user (which SSMS presents as a SQL user with password) can be done with SSMS or T-SQL as shown below.

USE [AdventureWorks2012]
GO
CREATE USER [MyContainedUser] WITH PASSWORD=N'!LPPeople!', DEFAULT_SCHEMA=[dbo]
GO
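If you want to confirm that the user really is contained, a quick check of sys.database_principals (SQL Server 2012 and later) should report DATABASE authentication for it:

USE [AdventureWorks2012]
GO
-- authentication_type_desc should show DATABASE for a contained user
SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE name = N'MyContainedUser';
GO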

Step Four: Test connectivity.

Finally, we can test connectivity utilizing SSMS.  Make sure you change the default database to the database where the user is contained, as shown below.


Now, make sure you select the Connection Properties tab and select the database where you created the contained user. In our example, this will be AdventureWorks2012.

 

When the connection is successful with a contained database user, you should see the user name and the database right next to the instance in Object Explorer, as shown below.

 

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is Biml and How Can it Help Me with SQL Server BI? – Notes from the Field #036

[Note from Pinal]: This is the 36th episode of the Notes from the Field series. One of the common questions I receive every other day is: I have learned BI, but I feel that there is no automation in the BI field – what should I do? How do I go beyond learning BI? How can I fill the gap between BIDS and SSDT? If you have been reading this blog, you know that when it is about BI, I always take help from Linchpin People, who are BI experts. I requested Reeves from Linchpin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains what Biml is and how it can help you with SQL Server BI. Read the experience of Reeves in his own words.


With all of the new technologies to learn and implement, I wanted to introduce you to some of the benefits of spending the time to learn Business Intelligence Markup Language (Biml). You can use this XML-based language to create and automate Microsoft SQL Server Business Intelligence (BI) objects. Biml currently supports BI objects like SSIS packages and SSAS dimensions, measures, and cubes.

After a quick overview of Biml, I’ll demonstrate how to use it by providing a walkthrough example.

Overview

You can use Biml to create tables, flat files, SSIS packages, and SSAS models. This language is human readable, unlike the XML that is represented within dtsx files and SSAS objects. This readability can help reduce the learning curve required for most programming languages.

Biml will not teach you SQL Server Integration Services (SSIS) or SQL Server Analysis Services (SSAS), but it can enhance and increase your productivity with these tools in certain use cases. Biml is not a silver bullet that can solve all of your SSIS/SSAS development project problems, but I will discuss some of the scenarios where Biml excels.

The Biml language is supported within two Integrated Development Environments (IDEs): Business Intelligence Development Studio (BIDS) and SQL Server Data Tools (SSDT), with either the free BIDS Helper add-in or the Mist application from Varigence (a purchased product).  Currently BIDS Helper supports SQL Server 2005, 2008, and 2008 R2 BIDS, and SQL Server 2012 SQL Server Data Tools. This article will focus on the BIDS Helper add-in and its capabilities with SSIS.

Simple Walk Through

This walkthrough will show how to create a Biml file that creates an SSIS package to move data from the AdventureWorks database to a staging database. The following script file will create all of the needed objects. If you have the AdventureWorks database installed and a table called HumanResources.Department, you can skip Listing 1.

CREATE DATABASE [AdventureWorks]
GO
USE [AdventureWorks]
GO
CREATE SCHEMA [HumanResources] AUTHORIZATION [dbo]
GO
CREATE TABLE [HumanResources].[Department]
(
[DepartmentID] [smallint] IDENTITY(1,1) PRIMARY KEY NOT NULL,
[Name] [nvarchar](50) NOT NULL,
[GroupName] [nvarchar](50) NOT NULL,
[ModifiedDate] [datetime] DEFAULT (GETDATE()) NOT NULL
)
ON [PRIMARY]
GO
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Engineering',N'Research and Development')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Tool Design',N'Research and Development')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Sales',N'Sales and Marketing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Marketing',N'Sales and Marketing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Purchasing',N'Inventory Management')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Research and Development',N'Research and Development')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Production',N'Manufacturing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Production Control',N'Manufacturing')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Human Resources',N'Executive General and Administration')
INSERT INTO [HumanResources].[Department] ([Name], [GroupName]) VALUES (N'Finance',N'Executive General and Administration')
GO

Listing 1 – AdventureWorks Database Objects

CREATE DATABASE [AdventureWorksStage]
GO
USE [AdventureWorksStage]
GO
CREATE SCHEMA [HumanResources] AUTHORIZATION [dbo]
GO
CREATE TABLE [HumanResources].[Department]
(
[DepartmentID] [smallint] PRIMARY KEY NOT NULL,
[Name] [nvarchar](50) NOT NULL,
[GroupName] [nvarchar](50) NOT NULL,
[ModifiedDate] [datetime] NOT NULL
)
ON [PRIMARY]
GO

Listing 2 – AdventureWorksStage Database Objects

To check that you have BIDS Helper installed within your Visual Studio environment, select the Tools menu option and select Options. The BIDS Helper option will display on the left dropdown window if it is installed, as you can see in Figure 1. If you need help installing the BIDS Helper add-in see: Step 2 in the Stairway to Biml: Biml Basics – Level 2 (link).

Figure 1 – Options windows with BIDS Helper installed

After verifying that BIDS Helper is installed, create a new SSIS project: from the File menu, select New, then Project, as Figure 2 shows.

Figure 2 – New Integration Services Project

Due to a Visual Studio limitation, right clicking on the Miscellaneous folder will not bring up the Biml context menu, which you see in Figure 3. Instead, from the Project Window, select the Project folder or the SSIS Packages folder, right click and select Add New Biml File. A new Biml file (BimlScript.biml) is added to the Miscellaneous folder.

Figure 3 – Biml Context Menu

Replace all of the code within the Biml File with the code from Listing 3.

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Connections>
    <OleDbConnection Name="AdventureWorks" ConnectionString="Provider=SQLNCLI10;Integrated Security=SSPI;Initial Catalog=AdventureWorks;Data Source=localhost;" />
    <OleDbConnection Name="AdventureWorksStage" ConnectionString="Provider=SQLNCLI10;Integrated Security=SSPI;Initial Catalog=AdventureWorksStage;Data Source=localhost;" />
  </Connections>
  <Packages>
    <Package Name="HumanResources_Department_Biml" ConstraintMode="Linear">
      <Tasks>
        <Dataflow Name="Data Flow Task">
          <Transformations>
            <OleDbSource Name="OLE DB Source" ConnectionName="AdventureWorks">
              <ExternalTableInput Table="[HumanResources].[Department]" />
            </OleDbSource>
            <OleDbDestination Name="OLE DB Destination" ConnectionName="AdventureWorksStage">
              <InputPath OutputPathName="OLE DB Source.Output" />
              <ExternalTableOutput Table="[HumanResources].[Department]" />
            </OleDbDestination>
          </Transformations>
        </Dataflow>
      </Tasks>
    </Package>
  </Packages>
</Biml>

Listing 3 – Sample Biml code

Note: The connection information in the Biml file must point to the SQL Server instances that contain the AdventureWorks and AdventureWorksStage databases. If you are using another version of SQL Server the provider information might need to be changed: Provider=SQLNCLI10;

After replacing the code in the Biml file with the code in Listing 3, right click the file and choose the Check Biml for Errors menu option shown in Figure 4.

Figure 4 – Check Biml for Errors

If everything is configured correctly, you will receive the message shown in Figure 5.

Figure 5 – No errors

If you receive any errors, you will need to correct them before selecting the Generate SSIS Package menu option.

Select the Generate SSIS Package menu option to create a new Integration Services package. A working package named HumanResources_Department_Biml, as defined by the Package Name attribute within the Biml file, will be created within the Packages folder.

Additional Walk-Through Examples

To add an Execute SQL task that truncates the stage table prior to the load, add the following code after the <Tasks> node and before the <Dataflow Name="Data Flow Task"> node.

<ExecuteSQL Name="Truncate Table" ConnectionName="AdventureWorksStage">
 <DirectInput>TRUNCATE TABLE [HumanResources].[Department]</DirectInput>
</ExecuteSQL>

At this point in the walkthrough Biml has not saved you much effort, but what if you wanted to create additional SSIS packages for all of the HumanResources tables within the AdventureWorks database? How much development effort would it take to update the Biml file to point to each of the tables in the HumanResources schema and select Generate SSIS Package? Would those packages be consistent with the packages built before? Would the prior testing of the previous package benefit the newly created packages?

At this point, copy and paste the Biml code into a new Biml file and change it to move data from other tables. All of the table objects will need to exist, so you might have to add some staging tables to your stage database.
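As a starting point, a simple query like the one below (run against the AdventureWorks database from the walkthrough) lists the tables in the HumanResources schema that you would be repeating the Biml pattern for; each one would also need a matching staging table:

-- Enumerate the HumanResources tables to cover with additional packages
USE [AdventureWorks];
GO
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE s.name = N'HumanResources'
ORDER BY t.name;
GO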

Practical Use

The first question that comes up after starting a discussion on Biml is, “Why would I use Biml? I can create all of the SSIS packages I need in the current IDE.”

One use case for Biml is the automation of similar design patterns like adding more packages to the walkthrough above. SSIS packages typically move data from a source to a destination. This pattern is repeated for each source and destination and can produce multiple SSIS packages. With some SSIS design patterns, the only items that change are the source table name and the destination table name.  What would it be like to spend the time to properly design and test a package to move data from one table to another and then duplicate that package with ease and only change the relevant information?  Biml enables this type of development.  With Biml, you are able to focus the development effort on design and remove some of the repetitive work.

As the variation of the packages increases between each package, Biml can become a less viable option. With Biml Script (a scripting language within Biml), you are able to create designs that vary and can adapt to changing Extract-Transform-Load (ETL) requirements.  Biml Script enables you to programmatically change items within the file without using cut and paste. (Biml Script was not demonstrated within this article.)

Looking Ahead

Biml is much more robust than what was demonstrated in this article but I wanted to start the discussion with a simple example to get you familiar with Biml. Adding Biml Script can create programmatic solutions that can automate SSIS package development. In a future article I will demonstrate how to add Biml Script to the existing code to increase package automation.

To follow along with the upcoming articles, install the AdventureWorks database from the following URL:. This database will also give you additional options to test and experiment with.

Side Note: The Mist IDE is able to extend Biml even further by updating multiple SSIS packages. This enables a maintenance option that has never been available within the SSIS development environment. It’s really cool, but something I will have to discuss in a future article.

Stay Tuned.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Round Up From Notes from the Field of Blog Posts of Tim Radney

We have completed 35 episodes of the Notes from the Field series. I have been blogging for over 8 years, and I have blogged about pretty much everything SQL and lots of other concepts. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinion. Following the same thought process, I started this new series, Notes from the Field. In this series, we have notes from various experts in the database world.

My friends at Linchpin People have graciously decided to support me in my new initiative.  Linchpin People are database coaches and wellness experts for a data driven world. This series has been excellent: from the very first episode of the Notes from the Field series, we have received a tremendous response. We are also very fortunate that database expert Tim Radney (partner at Linchpin People) has shared many insights about the daily issues DBAs and developers face in their careers. By the way, congratulations to Tim on his Microsoft MVP award.

Tim has been amazing in supporting this series. Now is the time when we can help Tim out. Tim would like to know which of his Notes from the Field posts is your favorite.

Gift from Pinal

Here is a small contest. I have my own favorite blog post from the following list. If your favorite post and my favorite choice are the same, I will give you a USD 50 Amazon gift card.

Remember, you need to tell me your favorite post and the reason why you liked it! The contest ends at Midnight GMT on July 16.

Please leave your answer in the comments area.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Automate Database Operations for Success – Notes from the Field #035

[Note from Pinal]: This is the 35th episode of the Notes from the Field series. Doing things faster and easier is always our goal. We all want to do things which generate the maximum return on investment with the least effort. Quite often this is a catch-22 situation when it comes to database administrators.

In this episode of the Notes from the Field series, database expert Brian Kelley explains how to automate various database administrator tasks for the success of the business and our efforts. Read the experience of Brian in his own words.


In the Linchpin People mindset, it’s not about how busy you are, but how valuable you are. You don’t increase your value to your organization by your busyness. You do so by accomplishing the work. There are two parts to this:

  • Accomplish more work.
  • Accomplish more important work.

Initially, if your path follows most people, you’ll be asked to do more. This is your opportunity to accomplish more work. As you succeed with the additional work that you receive, you will likely be given opportunities to take on more important work. If this never happens, you’re probably at the wrong organization. Let’s assume you are at a good one and you’re given more important tasks. Obviously, if you succeed at the more important work, you’ll continue to be given more opportunities. And you’ll likely be given tasks and projects that are more and more important to your organization. This is how you become a linchpin.

So how do you complete more work? One part of the answer is automation. Since we’re IT professionals, automation should be near and dear to us. I recently wrote about being a “lazy” DBA. I used the word lazy to indicate there are some manual tasks we don’t want to repeat. A “lazy” engineer or IT pro tries to automate these tasks in order to reduce the time spent on them.  That frees up the IT pro to have more time for the more important work.

Let’s look at some things that we should automate as DB Pros:

Database Administration:

Build scripts that can do the following:

  • Check disk space on servers (see the sketch after this list).
  • Check database available space.
  • Check security on key servers and databases.
  • Verify backups were taken properly.
  • Perform test restores of key backups.
  • Parse the SQL error log on each SQL Server for important information (failed logins, use of sp_configure, etc.).
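As a rough sketch of the first two checks, a query against sys.dm_os_volume_stats (available in SQL Server 2008 R2 SP1 and later) reports the free space on every volume that hosts a database file:

-- Free space on each volume that holds a data or log file
SELECT DISTINCT
vs.volume_mount_point,
vs.total_bytes / 1048576 AS total_mb,
vs.available_bytes / 1048576 AS free_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
ORDER BY vs.volume_mount_point;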

For instance, if I want to check backups to ensure a full backup has run on every database within the last 24 hours, I might automate running the following query to report back the databases where I do not have the proper backup:

SELECT D.name,
       BS.database_name,
       ISNULL(CONVERT(VARCHAR, BS.LastBackup, 106), 'No recent backup') AS LastBackup
FROM   master.sys.databases AS D
       LEFT JOIN (SELECT database_name,
                         MAX(backup_finish_date) AS LastBackup
                  FROM   msdb.dbo.backupset
                  WHERE  type = 'D'
                  GROUP  BY database_name) AS BS
              ON D.name = BS.database_name
WHERE  BS.database_name IS NULL
        OR BS.LastBackup < DATEADD(HOUR, -24, GETDATE())
ORDER  BY D.name;

We should also use automation like policy based management or custom scripts to enforce settings. Some examples that we should consider:

  • database ownership
  • recovery models
  • membership in key roles (sysadmin, securityadmin, db_owner, etc.)

And if I know every user database on a given server should be in the full recovery model, I can ensure that by scheduling the following script:

-- Cursor over every online user database that is not already in FULL recovery
DECLARE cursdbs CURSOR FAST_FORWARD FOR
  SELECT name
  FROM   sys.databases
  WHERE  state_desc = 'ONLINE'
         AND recovery_model_desc <> 'FULL'
         AND name NOT IN ('master', 'tempdb', 'msdb', 'model');
DECLARE @DBName SYSNAME;
DECLARE @SQL NVARCHAR(MAX);
OPEN cursdbs;
FETCH NEXT FROM cursdbs INTO @DBName;
WHILE (@@FETCH_STATUS = 0)
BEGIN
    PRINT 'ALTERING DATABASE: ' + @DBName;
    SET @SQL = 'ALTER DATABASE [' + @DBName + '] SET RECOVERY FULL;';
    EXEC (@SQL);
    FETCH NEXT FROM cursdbs INTO @DBName;
END
CLOSE cursdbs;
DEALLOCATE cursdbs;

You do want to review that output. After all, if you just switched the DB to full recovery mode, you want to ensure you restart the log backup chain with a full or differential database backup.
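For example, a hypothetical follow-up step (the database name and backup path are placeholders) is to take a fresh full backup of any database the script just switched, so that subsequent log backups have a valid base:

-- Hypothetical full backup to restart the log backup chain
BACKUP DATABASE [YourUserDatabase]
TO DISK = N'D:\Backups\YourUserDatabase_Full.bak'
WITH INIT, CHECKSUM, STATS = 10;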

Database Development:

Encourage continuous integration methods to include database code. This will require tests to validate no new code “breaks the build.” Make sure that these builds come directly from source control.

If you are doing tests that require restores of databases and the application of scripts, write the automation to do these tasks. It makes the tasks repeatable, it reduces the possibility of error, and it frees you up so you don’t have to manually run each step.
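A minimal sketch of such an automated test restore might look like the following; the database name, backup file, and logical/physical file names are all placeholders you would normally derive from msdb history or RESTORE FILELISTONLY:

-- Hypothetical scripted test restore to a throwaway database
RESTORE DATABASE [AdventureWorks_Test]
FROM DISK = N'D:\Backups\AdventureWorks2012_Full.bak'
WITH MOVE N'AdventureWorks2012_Data' TO N'D:\Data\AdventureWorks_Test.mdf',
MOVE N'AdventureWorks2012_Log' TO N'D:\Logs\AdventureWorks_Test.ldf',
REPLACE, STATS = 10;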

With that said, write scripts for anything you will have to repeat when developing a solution. For instance, you might need scripts to:

  • Add, delete, or change data.
  • Change security on a database or server.
  • Encrypt / decrypt setup data.

Can you automate too much?

Yes, you can. Note that in both cases I did include some reporting. If you build automation where you’re not doing any checking, that’s wrong. Automation eliminates you from having to do tedious steps. It doesn’t remove your responsibility/accountability. If you don’t have anything to check, you don’t actually know if the work was completed successfully. Don’t assume. Check.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)