SQL SERVER – The Easy Functions of SQL Server – Notes from the Field #062

[Note from Pinal]: In this episode of the Notes from the Field series database expert Kathi Kellenberger explains easy and amazing functions of SQL Server. Kathi is an amazing instructor, and she was one of the first SQL authors I read early in my career. One reason I love SQL Server is that her writing instilled a love for this technology in me. Today she brings a relatively unknown topic for database experts. Read the experience of Kathi in her own words.


Have you ever tried to format a datetime value in T-SQL? I have seen some pretty crazy code that looked something like this:

DECLARE @date DATETIME = GETDATE();
SELECT CAST(YEAR(@date) AS CHAR(4)) + '/' +
       RIGHT('0' + CAST(MONTH(@date) AS VARCHAR(2)), 2) + '/' +
       RIGHT('0' + CAST(DAY(@date) AS VARCHAR(2)), 2) + ' ' +
       RIGHT('0' + CAST(DATEPART(HOUR, @date) AS VARCHAR(2)), 2) + ':' +
       RIGHT('0' + CAST(DATEPART(MINUTE, @date) AS VARCHAR(2)), 2);

Starting with SQL Server 2012, there is an even easier method: the FORMAT function. The FORMAT function can do the same thing as the previous code, but it is much easier to write. The FORMAT function is one of five functions added with SQL Server 2012 that I like to call The Easy Functions, listed in the table below.
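As a quick taste before the table, here is a one-line sketch that produces the same output as the script above (FORMAT uses .NET-style format strings):

DECLARE @date DATETIME = GETDATE();
-- One FORMAT call replaces all of the CAST/RIGHT string handling.
SELECT FORMAT(@date, 'yyyy/MM/dd HH:mm');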

The Easy Functions

Category | Function | Definition
---------|----------|-----------
Logical  | CHOOSE   | Selects one expression from a list
Logical  | IIF      | Inline IF: evaluates an expression and returns one expression for true and another for false
String   | CONCAT   | Adds strings together, ignoring NULL values
String   | FORMAT   | Formats a value as a string, including "C" for currency and "P" for percent
Date     | EOMONTH  | Returns the last day of the month

The following script demonstrates the logical functions.

USE AdventureWorks2014; --Or 2012
GO
SELECT CustomerID,
       COUNT(*) AS OrderCount,
       CHOOSE(COUNT(*), 'Bronze', 'Silver', 'Gold', 'Platinum') AS MemberLevel,
       AVG(TotalDue) AS AvgOrder,
       IIF(AVG(TotalDue) > 1000, 'High', 'Low') AS Priority
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The query returns the count of orders for several customers. A MemberLevel is assigned based on the number of orders the customer has placed, and a Priority is assigned based on the average order. If the average order is over $1000, then the customer has a high priority.

Take a look at the CHOOSE expression. The first argument is an integer, the count of the orders. After the order count, there is a list of values. The list of values is actually a 1-based array, and the first argument is an index into the array.

The IIF function requires three arguments. The first argument is an expression to check, the average order. The second argument is a value to return if the expression is TRUE. The third argument is a value to return for FALSE.
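Both logical functions are easy to test standalone. Here are a couple of quick checks; note that CHOOSE returns NULL when the index falls outside the list:

SELECT CHOOSE(2, 'Bronze', 'Silver', 'Gold', 'Platinum') AS Pick2, -- Silver
       CHOOSE(9, 'Bronze', 'Silver', 'Gold', 'Platinum') AS Pick9, -- NULL: index is past the end of the list
       IIF(1500 > 1000, 'High', 'Low') AS Priority1500;            -- High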

The next example demonstrates the string functions.

SELECT C.CustomerID, P.FirstName, P.MiddleName, P.LastName,
       CONCAT(P.FirstName, ' ' + P.MiddleName, ' ', P.LastName) AS FullName,
       FORMAT(SUM(TotalDue), 'C') AS TotalSales,
       FORMAT(SUM(TotalDue) / SUM(SUM(TotalDue)) OVER(), 'P') AS PercentOfSales,
       FORMAT(MIN(OrderDate), 'yyyy.MM.dd') AS FirstOrderDate
FROM Sales.SalesOrderHeader AS SOH
JOIN Sales.Customer AS C ON SOH.CustomerID = C.CustomerID
JOIN Person.Person AS P ON P.BusinessEntityID = C.PersonID
WHERE C.CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY C.CustomerID, P.FirstName, P.MiddleName, P.LastName;

The query uses the CONCAT function to build a FullName column. The CONCAT function ignores NULL values, so FullName is returned even for the customers whose MiddleName is NULL. To make sure an extra space doesn’t show up when the MiddleName is missing, the space is combined with MiddleName as one of the arguments: NULL added to a space is NULL, which CONCAT then ignores.
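You can see this trick in a minimal standalone sketch, with literal values standing in for the name columns:

-- ' ' + NULL evaluates to NULL, and CONCAT ignores NULL arguments,
-- so no extra space appears when the middle name is missing.
SELECT CONCAT('Kathi', ' ' + NULL, ' ', 'Kellenberger') AS FullName; -- Kathi Kellenberger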

There are three examples of FORMAT. The first formats TotalSales as currency, in this case USD. The next, PercentOfSales, formats a complex expression comparing each customer’s sales to the overall total. It uses a window aggregate function to calculate the total sales for the query. (Window aggregates are a great topic for another post!) The final example formats the FirstOrderDate in a custom format.
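FORMAT also accepts an optional third argument for culture; when it is omitted, the session language determines the defaults. A quick sketch:

SELECT FORMAT(1234.5, 'C') AS DefaultCulture,         -- $1,234.50 on a us_english session
       FORMAT(1234.5, 'C', 'de-DE') AS GermanCulture; -- 1.234,50 €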

Here is one last easy function: EOMONTH.

SELECT CustomerID,
       MIN(OrderDate) AS FirstOrderDate,
       EOMONTH(MIN(OrderDate)) AS LastDayOfMonth,
       EOMONTH(MIN(OrderDate), 1) AS LastDayOfNextMonth
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The EOMONTH function returns the last day of the month for the supplied date argument. The second example, LastDayOfNextMonth, uses the optional parameter that adds another month.
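The month offset can also be negative to move backward. A quick sketch with a fixed date:

SELECT EOMONTH('20141215') AS ThisMonthEnd,      -- 2014-12-31
       EOMONTH('20141215', -1) AS PriorMonthEnd; -- 2014-11-30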

I hope that using The Easy Functions will make your life easier!

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Iterating over Metadata with BimlScript – Notes from the Field #061

[Note from Pinal]: This is the 61st episode of the Notes from the Field series. One of the common questions I receive every other day is: I have learned BI, but I feel that there is no automation in the BI field; what should I do? How do I go beyond learning BI? How can I fill the gap between BIDS and SSDT? If you have been reading this blog, you know that when it is about BI, I always take help from Linchpin People, who are BI experts. I requested Reeves from Linchpin People to help me answer this unique question.

In this episode of the Notes from the Field series database expert Reeves Smith explains what Biml is and how it can help with SQL Server BI. Read the experience of Reeves in his own words.


In previous articles (What is Biml and The Why of Biml), I’ve discussed Business Intelligence Markup Language (Biml) and given some examples of projects that would benefit from its use. I also demonstrated a quick way to get introduced to Biml without learning a lot of C#. As you grow in your understanding of Biml and BimlScript you will want to make your Biml files more dynamic with a programmatic metadata-driven approach. This article will build on the previous articles and add looping with metadata to build more dynamic Biml projects.

Overview

Starting where the previous article left off, I’ll demonstrate how to add a little C# to make a more automated method of metadata retrieval. If you did not walk through the other articles, I would suggest that you review them before you read this one. This article builds on the Biml in Listing 1 and lets you create multiple integration services packages from one Biml file.

Listing 1 – Semi-Automated Common Source Package Biml

Example – Automated Source Packages

This example shows you how to connect to a database and select the metadata to iterate over. Then you can create multiple SQL Server Integration Services (SSIS) packages from one Biml file. The example in the previous article manually updated the metadata and reran each Biml file. This is an effective approach, but you can build on it to create a more robust solution. Take a look at the C# code in Listing 2.

Listing 2 – C# Replacement Code

Replace the first line of code from Listing 1 with the code in Listing 2, which is an excerpt from Listing 3.

  • The first line adds the namespace to the Biml file, enabling functions to be called without a fully qualified name.
  • The second line creates a string variable that contains a connection string to the database with the metadata.
  • The third line creates a connection object (actually an AstOleDbConnection node) that will be used by the ImportDB function.
  • The fourth line creates a result set with the metadata from the connection object. This object contains all of the metadata from the AdventureWorks2012 database. The ImportDB function has three parameters. The first parameter is a schema filter; in the example the filter is on the Person schema. The second parameter is a table filter and is blank in the example, so it is ignored. The last parameter is a list of ImportOptions; here the views have been excluded. To combine import options, separate them with “|”. Example of multiple options:

ImportOptions.ExcludeForeignKey | ImportOptions.ExcludeViews

The example includes an object that contains all of the metadata that you will loop over. You have to place the loop within the <Packages> node because a Biml file can only have one Packages collection. The foreach loop will create multiple <Package> nodes.

A tableName string was built below the foreach loop to reduce the code that you have to alter from the example in the previous article.
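Since the listings appear as images, here is a rough sketch of how these pieces fit together, following the standard BimlScript pattern; the connection string is a placeholder, and the package body is elided:

<#@ import namespace="Varigence.Biml.CoreLowerer.SchemaManagement" #>
<#
    // Placeholder connection string for the database holding the metadata.
    var connectionString = "Provider=SQLNCLI11;Server=localhost;Initial Catalog=AdventureWorks2012;Integrated Security=SSPI;";
    // Connection node (an AstOleDbConnection) used by the ImportDB function.
    var sourceConnection = SchemaManager.CreateConnectionNode("Source", connectionString);
    // Person schema only, no table filter, views excluded.
    var importResult = sourceConnection.ImportDB("Person", "", ImportOptions.ExcludeViews);
#>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Packages>
        <# foreach (var table in importResult.TableNodes) {
               var tableName = table.Name; #>
        <Package Name="Load_<#=tableName#>" ConstraintMode="Linear">
            <!-- The tasks from Listing 1 go here, parameterized by tableName. -->
        </Package>
        <# } #>
    </Packages>
</Biml>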

Listing 3 – Complete Code

Note: The code in listing 3 has the ConnectionString truncated to save space.

If you receive the error shown in Figure 1, it means that the tables do not exist in the stage database. To check this, add the Person filter to the table parameter in the ImportDB function and return metadata for only one table. The prior articles used this table, so it should exist.

Figure 1 – Error Message

Summary

This article added some more C# to help you refine your Biml files. I hope you can start to see the power of Biml and how it can help remove the repetitive work from your next Integration Services project.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Year 2014 Reflections by Tim Radney – Notes from the Field #060

[Notes from Pinal]: New Year’s resolutions are a funny thing. Everybody makes them on January 1st but breaks them within a few days. I used to make resolutions too, but I stopped the practice a while ago. Tim Radney is a great friend, and I asked him what could replace resolutions for the new year. He immediately said: reflections on previous years. I really liked how promptly he answered me. If you think about it, it does make sense to look back and see how much we have accomplished in a year. We can build the next year’s plan based on what we have done and what we should have done. I think it is a great way to start a new year. I requested that he give an example of reflecting on previous years.

In this new year, I would also like to congratulate Tim on joining SQLskills and beginning a new chapter in his career. I will repeat a statement I told my friend: Tim is not like my brother, he is my brother.


With the beginning of every year, many of us reflect back on the previous year and, while doing so, set goals or resolutions for the upcoming new year. I hope that all of you reading this can reflect back on a prosperous year full of good health and knowledge gained.

Being fairly active on social media such as Facebook and Twitter, each year I see the resolutions people make for themselves, ranging from improving their health, writing a book, reading more books, blogging more, learning a new skill, or finding a better job. Whatever goals or resolutions you may have for yourself, I recommend establishing smaller milestones to help you measure your progress toward your goal. I have found success by treating these goals as a bunch of smaller tasks. That makes them much more manageable.

The year 2014 was very good to me. I made many new friends in the SQL community, was awarded SQL Server MVP by Microsoft, released my third book, spoke at numerous SQL conferences including the PASS Summit, attended the Microsoft MVP Summit, was awarded D&B MVP, ran a half marathon, and decided to change employers for the first time in 17 years.

None of those things would have been possible without the SQL community and without setting goals for myself. I am one LUCKY guy, but at the same time I work hard and seize the opportunities that are given to me. I have been lucky and blessed that others have invested their time in me. Others have seen things in me that they could train, encourage, and mentor, and they have helped push me to become stronger in my skills. I encourage you all to do the same for others.

Let 2015 be the year that you find a mentor of your own if you don’t have one, and start mentoring others if you aren’t already. We have a saying among SQL speakers: if you want to learn something really well, get up and speak about it. The same can be said about helping others grow. If you want to really grow in your career, help others grow in theirs as well. The rewards are endless.

I don’t know what all 2015 will hold for me, but I will continue to study, read and learn as much as possible. As I learn new tips and tricks I will be blogging and tweeting to share my finds. I encourage you to do the same and help continue to make our SQL community great.

A special thank you to my very good friend Pinal Dave for helping to make 2014 a stellar year for me. I value our friendship a great deal and it was great to finally meet the rest of your family in Seattle.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Tuning – Is It Really a Top Skill for a SQL Server Consultant? – Notes from the Field #059

[Note from Pinal]: This is the 59th episode of the Notes from the Field series. I get over 100 emails every day, and nearly 90 of them relate to performance tuning. The emails come from a variety of IT professionals. There are quite a few application developers (.NET, Java, PHP, etc.) who build applications every day but are not always experts in SQL Server. When I read those emails, I feel that the world indeed needs plenty of SQL Server performance tuning consultants. I put these questions to SQL Server guru Brian Moran. He is that rare person who knows the consulting world from both the technology and the business perspective.

Today is Christmas Day and we have received a wonderful gift from Brian!

In this episode of the Notes from the Field series database expert Brian Moran explains in very simple words the reality of the SQL Server Consultant world. Read the experience of Brian in his own words.


December and January are times of the year when many people reflect on personal and career goals. It would be silly and naïve for me to pretend that I know what’s best for your career so take this post with a grain of salt. Maybe you want to be a fireman or an astronaut? I don’t have many words of wisdom if that is your hope.

But, I do have a lot of knowledge for data professionals who aspire to be independent consultants.

I focus more on the business side of running my consulting business these days so I’m not as technical as I used to be. But I was a SQL MVP for many years and was a pretty good tuning architect. I’ve been on the PASS Board twice and have been involved in the SQL Server space for almost 25 years. Over the years I’ve been part of the leadership team that built two successful consulting companies and I’ve coached and mentored dozens of technologists over the past two decades as they got their start in consulting.

This post is for folks who love the engine side of being a database pro and want to be an independent consultant. I’m not talking to the BI folks or the other myriad of specialties that exist in #SQLFamily. But do you love the engine? I mean really love working with the core engine? Do you want to be a consultant one day?

World class and amazing performance tuning skills are consistently on the short list of skills that I’ve seen successful independent SQL Server consultants have over the past 20 years. Are there other skills out there that allow a consultant to have a great career? Of course. You can do well with any specialty that is hard to master and that companies absolutely must have access to in order to be successful. HA/DR, consolidation, virtualization, security, and a wide variety of other topics come to mind. But here is one element of performance tuning that tends to create opportunities for consultants to maximize their rates and have a successful career.

Top performance tuning consultants can have a pretty quick impact on a problem, and companies often need access to that skill on short notice for critical needs. Sometimes tuners can solve a problem in minutes or hours. And long term performance tuning work is often measured in days or weeks, rather than the weeks or months that some other types of projects require.

Here is some basic math to think about. Let’s say you want to bill $250/hr. Is that expensive? Well, like any SQL question the answer is it depends. Let’s look at it this way.

  • Assume you can solve a critical problem for a customer in 20 hours. $250/hr for 20 hours is $5,000.
  • Imagine you are competing against someone who says they can solve the problem for $125/hr.
  • 50% less sounds like a big savings for the customer, right? But, it’s really only a net savings of $2,500.

Do you think many companies care about a difference of $2,500 when they desperately need to solve an urgent problem impacting critical business systems? Probably not. They probably care more about picking the person they believe is the best fit and most likely to solve the problem.

But now let’s imagine that you are talking to a potential customer about a project that might take 6 months to solve working full time. That’s about 1,000 billable hours. Let’s make the same assumptions: you are charging $250/hr and someone else is offering $125/hr. Now the 50% rate differential yields a net savings of $125,000.

Will many companies care about saving $125,000 if they can? Yep. They sure do.

I find that many folks never fully think through this side of the consulting equation early in their career. That is, within reason, companies mostly don’t care about the rate for performance tuners as long as you are awesome and can get the job done. Don’t get me wrong: it’s quite possible to charge premium rates for long term projects, and you can certainly be a successful consultant while focusing on some other aspect of SQL Server. But almost 25 years in the business have taught me that loving performance tuning and wanting to be an independent consultant can be a match made in heaven.

If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Creating the SSIS Catalog – Notes from the Field #058

[Notes from Pinal]: SSIS is a well explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is creating the SSIS catalog.

Linchpin People are database coaches and wellness experts for a data driven world. In this 58th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) explains how to create the SSIS catalog.


With the release of SQL Server 2012, Integration Services (SSIS) received a significant overhaul.  The most notable change was the addition of the SSIS catalog, a new deployment and storage mechanism that helps to centralize and standardize the storage, execution, and logging of SSIS packages.  In this post, I will briefly illustrate how to set up the SSIS catalog on a new installation of SQL Server 2012 or 2014.

The SSIS Catalog

When connecting to an instance of SQL Server using SSMS 2012 or 2014, you’ll notice that there is a new node in the tree, entitled Integration Services Catalogs.  By default, this node will be empty as shown below.

Before we can begin deploying SSIS packages to the catalog on this instance, we need to create the catalog.  It is worth noting here that, although the plural node name of Integration Services Catalogs implies that we may have more than one catalog, we can only create one SSIS catalog per SQL Server instance.

Creating the SSIS Catalog

To create the SSIS catalog, simply right click the Integration Services Catalogs node and choose Create Catalog.

Next, you’ll be presented with the Create Catalog dialog, which allows you to set the options for creating the catalog.  There are really only two things that you need to configure here:

  • Enable automatic execution of Integration Services stored procedure at SQL Server startup. This option will enable the automatic execution of a cleanup stored procedure when SQL Server starts.  This stored procedure will check for and update any packages whose status in the catalog was left in limbo due to an unexpected shutdown of SQL Server.  I don’t know of any downsides to enabling this, so I recommend that you leave this box checked.
  • Encryption password. Because you can store sensitive information (such as database or FTP passwords) in the SSIS catalog, this database requires an encryption password to properly secure those assets.

You’ll also notice that the name of the catalog database (SSISDB) is presented in this window, but it is not editable.  In the current versions (2012/2014) of SSIS, the name of the catalog database must be set to SSISDB.  Also, the SSIS catalog requires enabling CLR integration, so this option is shown but is also not editable.

Once you’ve configured the encryption password and selected the option of whether to allow the cleanup stored procedure to run, you’re ready to create the catalog.  Simply click the OK button in this window to create the catalog.  After so doing, you can browse to the Object Explorer in SSMS and see that we now have the SSISDB catalog listed under the Integration Services Catalogs node.  The catalog is now ready for use!

One final note on the SSIS catalog: If you browse the list of databases after creating the catalog, you’ll notice that there is a new user database named SSISDB in the list of databases.  The SSIS catalog is contained in its own database on the SQL Server instance on which it is installed.  Unlike prior versions of SSIS which stored the SSIS assets in MSDB, the SSIS catalog uses a dedicated user database to better isolate the SSIS storage.
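If you would like to verify the result from a query window, a couple of quick checks confirm the catalog’s prerequisite and presence (a sketch using standard system objects):

-- CLR integration must be enabled for the SSIS catalog.
EXEC sp_configure 'clr enabled';

-- The catalog lives in its own user database named SSISDB.
SELECT name, create_date
FROM sys.databases
WHERE name = N'SSISDB';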

Conclusion

With a new installation of SQL Server 2012 or 2014, the SSIS catalog is not installed by default.  This brief post demonstrated how to easily and quickly set up the SSIS catalog on a new installation of SQL Server.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Baselines and Performance Monitoring – Why? – Notes from the Field #058

[Note from Pinal]: This is the 58th episode of the Notes from the Field series. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human. He is always looking for an opportunity to improve the lives of DBAs and Developers.

In a recent communication with Mike, I asked him: what are baselines and performance monitoring, actually? How do we define what is normal and what is not? Where do we draw the line when we talk about performance? Database expert Mike Walsh decided to help us with the answers to these questions.

Read the entire story in his own words.


The last time it was my turn to post here on SQL Authority, I suggested that DBAs make a proactive investment in their environments.

I can’t stress this enough – procrastination is our enemy as technologists. Especially as DBAs. We have things we should do on a regular basis to keep our environments moving along. I gave a few suggestions of places to start in that post three months ago.  If you’ve not started down the road on those things, stop reading this and do those things first.

If you’ve started being proactive in those areas already? Keep reading and let’s talk about being proactive in another area:

Baselines and Performance Monitoring – Why?

It’s really important to know what “normal” looks like for a lot of reasons when it comes to performance in your environment.

The answers to these questions all require you to know what normal looks like:

  • When will this environment exhaust the resources available in our current setup?
  • Things are slow – what’s the cause?
  • Can we consolidate onto fewer servers?

Without having a sense of what normal looks like for server performance – you can’t accurately answer those questions. For some of them you need to watch what normal looks like over time. And normal can look different on different servers.

Regularly looking at your performance metrics and comparing them over time is an important step in being proactive. The process will teach you about your environment and let you see trends develop. It will also teach you more about SQL Server performance as you dig into the various counters and understand what they mean.

Baselines and Performance Monitoring – An Approach

There are a lot of approaches to take here. You can run your favorite perfmon counters on a regular basis and look at the results in Excel. You can use a performance and event monitoring tool like SQL Sentry – and look at the data it collects over time. Or any number of approaches in between.

While I often advise my consulting clients to use several of these approaches – a good place for you to start learning and doing is with a free tool called PAL – Performance Analysis of Logs. This tool is available on Codeplex and I describe how to use this tool in a couple blog posts and videos on the Linchpin People blog (http://www.linchpinpeople.com/sql-server-performance-baseline-presentation-seacoast-sql/ is a presentation I give on baselines and baseline metric collection; http://www.linchpinpeople.com/how-create-sql-server-baseline-using-pal/ shows a couple videos on how to use the PAL tool and create a perfmon template).

Rather than regurgitate what those resources, and the posts I link to in them, describe, I’ll suggest a few action items that you can follow right now. Those links will help you on your journey. Even if your environment has no issues, the baseline can prove valuable to you in the future, and the process is guaranteed to teach you something about SQL Server performance you didn’t know before you started. I’ve been working with SQL Server for 16 years and I still learn something every time I engage in serious baseline study.

Baselines and Performance Monitoring – Action Plan

Some suggested next steps you can take to get on the road to being proactive about performance:

  • Learn about the PAL tool – use the links here to start.
  • Collect “Fake” Performance Data – Start with a test environment if you aren’t sure, watch the impact and get comfortable with the process.
  • Collect Real Performance Data – Once you are comfortable with the approach, do the steps in production. Track data for a day, a couple of days, or a week, depending on the sample interval you choose (see the T-SQL sketch after this list for a quick taste of the data).
  • Analyze The Data – Run the performance metrics through the PAL tool and look for alerts. Keep in mind that not all alerts require immediate action, but look at the alerts and learn about them. Read the tips in the PAL tool, look at blogs like this one to understand what is going on and see if you have issues.
  • Rinse and Repeat – Try this again in a month or a few months. What’s changed? Are you doing more activity (maybe batches/second or transactions/sec or user count) and seeing your resources still performing well? That’s great. Do you see performance slowing down but activity not really that much higher? Dig in and see why you are falling off your baseline.
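If you want a feel for this kind of data before building a perfmon template, you can also sample one of the same counters from inside SQL Server. Here is a minimal T-SQL sketch; Batch Requests/sec is a cumulative counter, so a rate requires two samples and a delta:

DECLARE @start_value BIGINT,
        @start_time DATETIME2 = SYSDATETIME();

-- First sample of the cumulative counter.
SELECT @start_value = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Batch Requests/sec';

WAITFOR DELAY '00:00:10'; -- sample interval

-- Second sample: the delta divided by elapsed seconds is the actual rate.
SELECT (cntr_value - @start_value) * 1.0
       / DATEDIFF(SECOND, @start_time, SYSDATETIME()) AS BatchRequestsPerSec
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Batch Requests/sec';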

The point is: get started looking at your baselines. You may not have an urgent and immediate need for a good set of baselines today, but when that day comes, you’ll wish you had taken the time to collect that data. I can’t tell you how often we get called in to help a customer experiencing performance concerns where a baseline would have helped us at least get a sense of what changed from when things were good. We can still help get them out of their mess all the same, but with a few baselines over time we can sometimes more quickly figure out where the most beneficial changes will come from.

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Basics of the SSIS Data Flow Task – Notes from the Field #057

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. As Wikipedia says, it features a fast and flexible data warehousing tool used for data extraction, transformation, and loading (ETL). The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data.

In this episode of the Notes from the Field series I asked SSIS expert Andy Leonard a very crucial question: how does one learn the SSIS Data Flow Task? Andy was very kind to answer the question and provide plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


If you know SQL Server, you’re likely aware of SQL Server Integration Services (SSIS). What you might not realize is that SSIS is a development platform that allows you to create and perform some interesting Control Flow Tasks. In the first blog post in this series, I showed how to use the Execute SQL Task. Now, let’s look at the Data Flow Task. When developing solutions with SSIS, I use a handful of Control Flow tasks:

  • Execute SQL Task
  • Data Flow Task
  • Script Task
  • Execute Package Task
  • File System Task
  • Execute Process Task

This list is a good approximation of which tasks I use most, too – from most-used to least-used. In this article I provide a basic example of configuring the SSIS Data Flow Task, shown in Figure 1:


Figure 1: SSIS Data Flow Task

The SSIS Data Flow Task is a very special task. It is the only task to have its own tab in the Integrated Development Environment (IDE) as shown in Figure 2:


Figure 2: The Data Flow Tab

If you click on the tab, you will note a new SSIS Toolbox containing Data Flow-specific components, as shown in Figure 3:


Figure 3: Data Flow SSIS Toolbox

SSIS Data Flows are typically used to move data from one location to another. The data flow accomplishes data movement by first reading data into Data Flow Buffers. Think of a buffer as a region of memory SSIS uses to hold data rows as the rows are processed by the data flow. In Figure 4, I have configured an OLE DB Source Adapter to pump data rows into the data flow:


Figure 4: Configuring an OLE DB Source Adapter

The data is often transformed while being moved from one location to another. The SSIS data flow components that perform transforming operations are called Transformations, and they are joined to other data flow components by Data Flow Paths. An example of a transformation is the Derived Column Transformation, as shown in Figure 5:


Figure 5: Adding a Derived Column Transformation and a Data Flow Path

You can use transformations to perform many operations (e.g., you can manipulate values of columns in rows, you can remove or redirect rows based on column values, etc.) on the data as it flows through the data flow task. For example, the Derived Column Transformation permits you to manipulate (transform) existing data or to combine existing data to create new columns, as shown in Figure 6:


Figure 6: Creating a New Column with the Derived Column Transformation

I created a new column named “UpperCaseName” in the Derived Column Transformation. I used SSIS Expression Language to define the transform – “UPPER([Name])” in this case.

Note: SSIS Expression Language is very powerful and very difficult to learn. For more information, please see Linchpin People’s SSIS Expression Language series.

Now I need to land these rows into a table. I’ll use an OLE DB Destination Adapter – connected from the Derived Column Transformation via data flow path – to accomplish loading our transformed rows into a table, as shown in Figure 7:


Figure 7: Configuring an OLE DB Destination Adapter

Once the OLE DB Destination Adapter is configured, you can execute either the package or the Data Flow Task as shown in Figure 8:


Figure 8: Test Execution Successful!

In this article, I shared an introduction to the SSIS Data Flow Task and some of its functionality. Although I barely scratched the surface of Data Flow capabilities, you should now be able to compose and test your first SSIS Data Flow!

:{>

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)