SQL SERVER – The Basics of the File System Task, Part 1 – Notes from the Field #071

[Note from Pinal]: This is a new episode of the Notes from the Field series. I usually give an introductory note; however, this time there is no need for one. This note is from Andy, and as we all know, he is an amazing person when it comes to understanding the fundamentals. He has written this blog post in such an interesting way that you must read it to understand the very basics of the File System Task.



Many data integration scenarios involve reading data stored in flat files or performing extracts from a relational (or legacy) system into flat files. Learning how to configure and use the SQL Server Integration Services (SSIS) File System Task will support your efforts when loading data to and from flat files. Remember: SSIS is a software development platform. With “SQL Server” included in the name, it is easy for people to mistake SSIS for a database tool or accessory, but Control Flow Tasks put that confusion to rest.

SSIS provides several Control Flow tasks. Here is a list that provides a good approximation of which tasks I use most, from most-used to least-used:

In this article I provide a basic example of configuring the SSIS File System Task, shown in Figure 1:
Figure 1: SSIS File System Task

The File System Task provides one way to implement an SSIS Design Pattern for source file archival. When you first open the File System Task Editor, you will note several properties in the property grid. Whenever you see an Operation property in an SSIS task editor, know that that property drives the other property selections. Options for the Operation property of the SSIS File System Task are shown in Figure 2:
Figure 2: SSIS File System Task Operation Property Options

The Operation options are:

  • Copy directory
  • Copy file (default)
  • Create directory
  • Delete directory
  • Delete directory content
  • Delete file
  • Move directory
  • Move file
  • Rename file
  • Set Attributes

I stated the Operation property drives the other property selections. Take a look at the File System Task Editor when I change the Operation option from “Copy file” (Figure 2) to “Delete file” as shown in Figure 3:
Figure 3: The File System Task Editor with the “Delete file” Operation Selected

See? Fewer properties are required for the “Delete file” operation. The available properties differ even more for the “Set Attributes” operation, shown in Figure 4:
Figure 4: The File System Task Editor with the “Set Attributes” Operation Selected

The Operation property changes the editable properties, exposing some and hiding others. With flexibility comes complexity. Even though the File System Task is complex, I’ve found the task is stable and extremely useful. Let’s look at a practical example: using the File System Task to archive a flat file.

To begin configuring the SSIS File System Task for file archival, select the “Move file” operation as shown in Figure 5:
Figure 5: SSIS File System Task with the “Move file” Operation Selected

Using the IsSourcePathVariable and IsDestinationPathVariable properties extends the flexibility of the File System Task and further changes the list of available properties in the property grid, as shown in Figure 6:
Figure 6: Opting to Use Variables for Source and Destination Paths

Note the SourceConnection and DestinationConnection properties are hidden and the SourceVariable and DestinationVariable properties are available in their place. For the sake of simplicity, we’re not going to use SSIS Variables to define the source and destination paths in this article. Reset the IsSourcePathVariable and IsDestinationPathVariable properties to False. Click the SourceConnection property dropdown, and click “<New connection…>” as shown in Figure 7:
Figure 7: Selecting a New Connection for the SSIS File System Task SourceConnection Property

The File Connection Manager Editor displays with the “Usage type” property set to “Existing file” as shown in Figure 8:
Figure 8: File Connection Manager Editor

Click the “Browse…” button and select a flat file you wish to move, as shown in Figure 9:
Figure 9: Selecting the Source File

Once the File Connection Manager Editor is configured, it should appear similar to that shown in Figure 10:
Figure 10: A Configured File Connection Manager Editor

Click the OK button to close the File Connection Manager Editor. Next, click the DestinationConnection property in the SSIS File System Task Editor. As with the SourceConnection property, click the dropdown and select “<New connection…>”. When the File Connection Manager Editor displays, select the “Existing folder” Usage type, as shown in Figure 11:
Figure 11: Configuring the Destination File Connection Manager

Note the warning at the bottom of the File Connection Manager Editor window: “Folder name must be specified.” Click the “Browse…” button and navigate to the directory where you wish to archive the source file as shown in Figure 12:
Figure 12: Configuring the Location of the DestinationConnection File Connection

Click the OK button in the “Browse For Folder” dialog to return to the File Connection Manager Editor, as shown in Figure 13:
Figure 13: A File Connection Manager Configured for the DestinationConnection Property

Click the OK button to close the File Connection Manager Editor and return to the File System Task Editor, as shown in Figure 14:
Figure 14: An SSIS File System Task Configured to Archive a File

The SSIS File System Task is now configured to archive a file. Let’s test it! Click the OK button to close the File System Task Editor. Press the F5 key or select SSIS->Start Debugging to test your work. My result is shown in Figure 15:
Figure 15: Successful Test Execution of the SSIS File System Task

Viewing the source and destination directories, we see the file was successfully moved – shown in Figure 16:
Figure 16: The File, Moved!

One tricky part when configuring the SSIS File System Task to move a file is realizing that you need to select the actual file for the source and the directory for the destination.

As I stated earlier, the SSIS File System Task is powerful, flexible, and robust. This article has demonstrated one way you can use the File System Task to archive files. Archiving files after loading the data they contain is a common practice in data integration.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using Raw Files in SSIS – Notes from the Field #070

[Note from Pinal]: SSIS is a very well-explored subject; however, there are so many interesting elements that every time we read about it, we learn something new. One such concept is using raw files in SSIS.

Linchpin People are database coaches and wellness experts for a data-driven world. In this 70th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to use raw files in SSIS.


SQL Server Integration Services is well designed for retrieving and processing data on the fly, directly in the data flow pipeline. However, there are circumstances that occasionally require the persistence of result sets in SSIS for use during package execution. For such cases, one option is to use SSIS raw files.

The raw file in SSIS is a special kind of binary file, optimized for and intended for use only through SSIS. This special file is intended to store temporary result sets on disk to allow reuse of data across multiple packages and multiple executions.

Although these are not tools one would likely use on a daily basis, the raw file tools in SSIS are helpful in a number of scenarios:

  • The same result set is reused several times in different data flows (or even different packages)
  • Due to size or query complexity, retrieving a particular set of reference data takes a significant amount of time
  • Due to business rules, one or more interim result sets are required during the ETL process

In cases such as these, using raw files to store result sets can be a handy solution.

To create and use raw files, SSIS comes with two components – a raw file source and a raw file destination – to allow reading from and writing to these files. I’ll start by showing the destination (ordinarily I would start with the source, but since we need to create a raw file with the destination component before we can use the source component, I’m changing the typical demonstration order). As shown below, this example uses a data flow with an OleDB source – connected to AdventureWorks – for sending sales data into a raw file destination.
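
The actual source query is not shown in the original screenshots; as a rough illustration, an OLE DB source feeding a raw file destination might issue something like this against AdventureWorks:

SELECT soh.SalesOrderID,
       soh.OrderDate,
       soh.CustomerID,
       soh.TotalDue
FROM Sales.SalesOrderHeader AS soh;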


On the raw file destination, we’ve got a handful of configuration options, including:

  • The type of file name (either direct or from an SSIS variable)
  • A selector to choose the file name or the variable supplying that file name
  • An option to select the write method (create, overwrite, append, etc.)
  • A column pane to allow source-to-target mappings


Note that on the Write Option selection, there are some limitations with respect to whether the specified output file already exists. For example, if you choose the Create Once option, you’ll see a design-time error if the file already exists. Similarly, if you choose either the Append or the Truncate and Append option, you’ll see an error if the file does not yet exist. To make the design-time experience easier, there is a button in the UI of the destination to create the initial raw file in case you want to use either of the Append options without having previously created the file.

The raw file source, the simpler of the two raw file components, has just a couple of configuration options: the type of file name, a selector to choose the file name or the variable supplying that file name, and a data grid to allow the selection of the columns to be included from the raw file.


As shown in use below, the raw file source feeds the data flow pipeline, ultimately sending its data into the OleDB destination.


Final thoughts

One thing you’ll want to keep in mind is that there is no automatic purge process to empty or remove the raw files after use. Therefore, if there is a concern about persisting the files, either because of disk space or data sensitivity, there should be a follow-on process that truncates or deletes the raw files after they are no longer needed. Also, if you want to track how recently the raw file was loaded, you could either check the update stamp of the file (using a script task), or use a date stamp column in the process that loads the raw file to allow storage of the load date in the data within the file.
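
For the date stamp approach, a minimal sketch (assuming you control the query that feeds the raw file destination) is to stamp each row in the source query itself:

SELECT soh.SalesOrderID,
       soh.TotalDue,
       SYSDATETIME() AS LoadDateTime  -- records when the row was written to the raw file
FROM Sales.SalesOrderHeader AS soh;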

Conclusion

The raw file source and destination in SSIS can be used to address situations where temporary file system storage of result sets is required in an ETL process. Although not an everyday set of tools, the raw file source and destination are worth knowing how to use in cases where persisting interim result sets is needed.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Protecting a Proactive Attitude – Notes from the Field #069

[Note from Pinal]: This is the 69th episode of the Notes from the Field series. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human. He is always looking for an opportunity to improve the lives of DBAs and Developers.

Being proactive is a good thing. There are few things that cannot go wrong, and being proactive is one of them. Being proactive is not all about SQL scripts and deployments; it is also about attitude and mindset. If you know Mike, you are aware that he understands human psychology well. He knows how DBAs and Developers think and what their attitudes toward technology are. I asked him one question – “How do you protect a proactive attitude?” – and database expert Mike Walsh was kind enough to guide me with the answer.

Read the entire story in his own words.


My recent posts here have been about being proactive, and I am going to continue on that theme today, because it is that important. We perform a lot of WellDBA Exams™ – where we go into a client shop, review their existing setup, look at performance, and write up our findings.

Each time we do one of these, we find the same types of findings almost everywhere. In fact, we have a free resource that guides you through a checklist, so you can evaluate your own environment against the 7 most common findings we encounter and see detailed information on what to do about them.

You can grab that free resource here.

In this post I want to zoom in on a proactive attitude again, and some ways to protect your proactive time. Then I will suggest a plan using that checklist to go through your environment and ensure the big things are taken care of.

Protecting a Proactive Attitude

I know I talk about this in just about every post I share here, but I am trying to cement this thought – a good DBA is a proactive DBA. You cannot just wait for a problem to arise, fight the problem, and then sit back and wait for the next problem.

I know you don’t do that. I know most DBAs don’t do that, at least not on purpose. But in this interrupt-driven world, users and other IT teams are constantly clawing at their DBA with requests. These requests have to be met, because customers are expecting them to be done. But I challenge you: many requests don’t need to be done as fast as most DBAs do them.

You should develop a plan to protect your proactive time.

A few tips on how to preserve that time:

  • Schedule it! Make a block of time on your calendar each day, each week, whatever you can do. Guard that time and shut down your e-mail and IM clients (if you are allowed) and focus on doing proactive tasks to make your life easier.
  • Talk to management. When you are reactive, you spend more time, you are less effective, and mistakes happen. If you get management to guard your time and people’s access to you for a bit of proactive time, it will make you more efficient.
  • Work from home if you can. A lot of times when I was a full-time DBA, I could be more proactive when working from home, as the “drop by” visits can’t happen as easily that way.
  • Consider a WellDBA exam from Linchpin People, or a health check from another firm. Sometimes having an outside firm come in, audit your environment and tell you about your misses is the best way to get proactive time prioritized. Plus that approach helps you know where to focus your proactive time.
  • Just do it – Sounds harsh, but sometimes you just have to do it. If you don’t start trying to be more proactive, you’ll always be reactive, and that is when mistakes happen. This is when we can come in, do an Exam, and uncover the findings to which the DBAs will say, “Yeah, I know… but I didn’t have time.”

A Proactive Plan

Download our checklist, or find other SQL Server diagnostic scripts and approaches out there. There are several. Our checklist is less focused on scripts and more focused on the how, what, and why of these 7 areas, and it then suggests an approach to resolve the findings.

Here is the link again; it really is free and not some gimmick to get you on a “to be harassed” list.

Then start going through the checklist. We’ve ordered it in terms of our typical priorities. For example, if you aren’t able to recover your environment, the other 6 points don’t matter as much. Make sure you are really secure in each area, and make sure you really understand the how and why of each area.

Move on to the next question and repeat the process. It is that simple. Some of the tasks will take more time than others to investigate. Some will take more time than others to fix. But this is a place to get started. And sometimes a place to start is all that is needed.

So, get some time set aside for being proactive, review your environment and get to the point where at least those 7 critical items are being taken care of! All the best on your journey!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Top Offenders in SQL Server 2012 – Notes from the Field #068

[Note from Pinal]: This is the 68th episode of the Notes from the Field series. Performance tuning gets easier with SQL Server 2012. For the first time, you can capture a workload and find your top offenders inside of Management Studio without having to write a single line of T-SQL or use any third-party tools.


In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their careers – how to find top offenders in SQL Server. Linchpin People are database coaches and wellness experts for a data-driven world. Read the experience of John in his own words.


One of the biggest mistakes with performance tuning is not knowing where to start. Sometimes performance tuning can be like finding a needle in a haystack. This is because you might not have a great process to help you find your top offenders. In the short five-minute video below, you will see how to utilize extended events to aggregate and group completed statements so you can find your top offenders without having to write any T-SQL code.
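
The video does everything through the SSMS user interface, so no T-SQL is required; purely for reference, a roughly equivalent event session created in T-SQL might look like the sketch below (the session name and file target are hypothetical, not taken from the video):

-- Hypothetical session capturing completed statements for later aggregation
CREATE EVENT SESSION [TopOffenders] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'TopOffenders.xel');

ALTER EVENT SESSION [TopOffenders] ON SERVER STATE = START;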

Are your servers running at optimal speed or are you facing any SQL Server Performance Problems? If you want to get started with the help of experts read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Basics of the Execute Package Task – Notes from the Field #067

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the key essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.


In this episode of the Notes from the Field series, I asked SSIS Expert Andy Leonard a very crucial question – What are the basics of the Execute Package Task and where do we start with it? Andy was very kind to answer the question and provided plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


Learning how to configure and use the SQL Server Integration Services (SSIS) Execute Package Task gives you a great starting point for understanding SSIS package orchestration. I advocate writing small, functional SSIS packages that perform a unit of work. That’s a great idea for many reasons. But it begs the question: “How do I string together the execution of these packages?” Remember: SSIS is a software development platform. With “SQL Server” included in the name, it is easy for people to mistake SSIS for a database tool or accessory, but Control Flow Tasks put that confusion to rest.

SSIS provides several Control Flow tasks. Here is a list that provides a good approximation of which tasks I use most, from most-used to least-used:

In this article I provide a basic example of configuring the SSIS Execute Package Task, shown in Figure 1:

Figure 1: SSIS Execute Package Task

The Execute Package Task provides one way to implement an SSIS Design Pattern for SSIS package execution known as the Parent-Child pattern. When an SSIS package uses the Execute Package Task to start another SSIS package, the package with the Execute Package Task is called the Parent and the package started by the Execute Package Task is called the Child.

The Execute Package Task changed between SQL Server 2008 R2 Integration Services and SQL Server 2012 Integration Services. The changes support the SSIS 2012 (and 2014) Catalog. There is a new property called ReferenceType which defaults to “Project Reference” in packages executing in Project Deployment Mode (the default mode for building SSIS 2012 and SSIS 2014 SSIS packages), as shown in Figure 2:

Figure 2: Options for the Execute Package Task ReferenceType Property
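
As an aside, a package deployed to the SSIS Catalog can also be started directly from T-SQL. Below is a minimal sketch using the catalog’s stored procedures; the folder, project, and package names are hypothetical placeholders:

DECLARE @execution_id BIGINT;

-- Hypothetical folder/project/package names; substitute your own
EXEC SSISDB.catalog.create_execution
     @folder_name  = N'MyFolder',
     @project_name = N'MyProject',
     @package_name = N'Parent.dtsx',
     @execution_id = @execution_id OUTPUT;

EXEC SSISDB.catalog.start_execution @execution_id = @execution_id;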

Project Reference is used to execute an SSIS package – a Child package – in the same SSIS project as the package that contains the Execute Package Task – the Parent package. When Project Reference is selected, the next property in the property grid is PackageNameFromProjectReference, a dropdown containing a list of all the SSIS packages in the SSIS project.

Setting the ReferenceType property to External Reference is a way to execute SSIS packages that are stored in the file system or the msdb database. When External Reference is selected the next properties in the property grid change to reflect this backwards-compatible functionality, as shown in Figure 3:

Figure 3: Setting the ReferenceType Property to External Reference

The Location and Connection properties are used to specify an OLE DB Connection (to the msdb database) or a File Connection (to the location of the dtsx file). SQL Server locations also require the name of the SSIS package; the File System option does not because the SSIS package is the only thing in the file. The External Reference ReferenceType setting is useful for importing SSIS solutions that contain earlier versions of the Execute Package Task because External Reference behaves exactly like previous versions of the Execute Package Task.

The remaining properties in the property grid – Password and ExecuteOutOfProcess – are shared between ReferenceType options. Password is used if the SSIS package is password-protected (if the ProtectionLevel property of the SSIS package is set to either EncryptSensitiveWithPassword or EncryptAllWithPassword). ExecuteOutOfProcess is a setting that determines whether the package will be executed as part of the current process or a new process.

When executing SSIS packages in Project Deployment Mode (using the Project Reference ReferenceType), Parameter Bindings are enabled and allow values to be passed from the Parent package to the Child package, as shown in Figure 4:

Figure 4: Binding a Parent Package Parameter to a Child Package Parameter

Parameter Bindings are disabled when the ReferenceType property is set to External Reference.

Once configuration is complete, click the OK button to close the Execute Package Task Editor. You can test execution by pressing the F5 key or selecting “Start Debugging” from the SSIS dropdown menu. A successfully-executed Execute Package Task will appear as shown in Figure 5:

Figure 5: A Successful Execution!

The Execute Package Task drives a powerful data integration architecture pattern: Parent-Child execution. Using SSIS Precedence Constraints and Sequence Containers with the Execute Package Task, a data integration developer can develop SSIS “driver” packages that call Child SSIS packages in any combination of parallel and serial orders of execution.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Invoking a Stored Procedure from Azure Mobile Services – Notes from the Field #066

[Note from Pinal]: This is the 66th episode of the Notes from the Field series. Azure Mobile Services is a very important topic, and not many people know about it. When I read this article, I found it humorous at points, and some of the examples reminded me of my past experience. If you are in data security, you will have a great time reading these notes, but if you are not, you will still love it.

In this episode of the Notes from the Field series database expert Kevin Hazzard explains Invoking a Stored Procedure from Azure Mobile Services. Read the experience of Kevin in his own words.



One of the questions I’m often asked is whether it’s possible to call SQL stored procedures from Azure Mobile Services. The answer is yes and it’s probably easier than you think. In case you don’t know, Azure Mobile Services is a way to very simply expose an HTTP service over data stored in an Azure SQL database. By default, Azure Mobile Services exposes the SQL tables directly as resources. So the HTTP methods GET, POST, PUT and DELETE will essentially be mapped to SQL operations on the underlying tables.

While this simple mapping mechanism is good for resource-oriented access to the tables, the logic to produce a usable Web API is often a bit more complex than that. Stored procedures can provide an interesting abstraction layer that allows us to use the efficiencies of the SQL Server query engine to reduce round trips to and from Internet clients, for example. Or stored procedures might be used to hide normalization peculiarities from clients, or to apply advanced parameter-handling logic. Whatever the case may be, it is helpful from time to time to be able to invoke stored procedures from the HTTP API that Azure Mobile Services provides.

Let’s start by assuming that an Azure Mobile Service exists with some data that we would like to expose via a stored procedure. For the purposes of this example, my service is called MobileWeatherAlert, and it is backed by a table in an Azure SQL Database named [MobileWeatherAlert_db]. It’s really helpful that Azure Mobile Services uses schema separation in the underlying database to manage all of its data. That schema separation allows us to expose many separate middle-tier services from one common database if needed. So, in my weather database, there’s a schema called [MobileWeatherAlert] corresponding perfectly to the name of the service that it supports. That schema contains a table called [Observation], which is used to collect weather data by [City].


Figure 1 shows a very simple stored procedure called [GetObservationsForCity] that I’d like to be able to call from the service API.
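
The procedure appears in the original post only as a screenshot, so here is a minimal reconstruction of what it might look like; the parameter type and the selected columns other than [City] are assumptions for illustration:

-- Hypothetical reconstruction of the procedure shown in Figure 1
CREATE PROCEDURE [MobileWeatherAlert].[GetObservationsForCity]
    @City NVARCHAR(50)       -- parameter type assumed
AS
BEGIN
    SET NOCOUNT ON;

    SELECT [City],
           [ObservationDate],    -- assumed column
           [Temperature]         -- assumed column
    FROM [MobileWeatherAlert].[Observation]
    WHERE [City] = @City;
END;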

There are a number of places where this procedure might be invoked. For this example, I’ll implement a custom API in the mobile service called observation.


Figure 2 shows the dialog in the Azure management console where the custom API will be created.

For this simple example, I’ll only implement the HTTP GET method in the API to invoke the stored procedure. For simplicity of the example, I’ll open up access to everyone to avoid having to pass any sort of credentials. Now I can add a bit of JavaScript to the API to make the stored procedure call.


Figure 3 demonstrates adding that JavaScript to the API via the Azure management console.

Lines 1 through 9 in the script encompass the get function that will be invoked when the HTTP GET method is used to call the service. The parameters passed to the JavaScript function are the request and response objects. From the request object, line 2 shows how to obtain a reference to the mssql object which exposes a query function for making calls into the database. Line 3 demonstrates how to call the query function to execute the [GetObservationsForCity] stored procedure, passing a single parameter for the City by which to filter. It’s important to note here that the schema in which the stored procedure resides is not named in the EXEC call. This is counter-intuitive, in my opinion, and is likely to trip up novices as they experiment with this functionality. Since we are invoking the GET method for the MobileWeatherAlert service, there’s an implicit assumption used in the preparation of the SQL statement that objects will reside in a similarly-named database schema.

Notice also on Line 3 that the request object passed into the JavaScript function exposes a query property that conveniently contains an object named city which will be parsed directly from the URL.


Figure 4 shows how that URL might be passed from Postman, a really excellent Google Chrome plug-in that allows the invocation of nearly any sort of HTTP-oriented web service or API.

Finally, in lines 4 through 6 of the JavaScript method, the success function that processes the results of the SQL query logs the results and returns them to the caller with an HTTP 200 (OK) response. I’ve included a call to the console.log() function to show how easy it is to log just about anything when you’re debugging your JavaScript code in Azure Mobile Services. After invoking an API or custom resource method that logs something, check out the logs tab of the mobile service in the management console to see what got saved. Of course, you’ll want to do minimal logging in production, but while you’re testing and debugging, the log is a valuable resource.

In studying the URL and its output in Figure 4, remember that the JavaScript for the observation API didn’t have to do any special parsing of the row set returned by SQL Server to produce this result. Simply returning that data from SQL Server caused the API to emit JavaScript Object Notation (JSON) which has arguably become the lingua franca of the Internet for expressing data.

In closing, I’ll share a couple of thoughts. If you’re interested in building a simple query interface on top of a mobile service, you don’t have to use stored procedures as shown here. Azure Mobile Services implements fairly rich OData support directly on table resources. With OData, filtering, sorting, and pagination of SQL data are built in, so to speak. Also, the web way of doing services (sometimes called RESTful, based on Dr. Roy Fielding’s dissertation and the HTTP standards that flowed from it) assumes that we’ll use HTTP in the way it was intended: accessing and linking resources at a more basic level, using the HTTP methods GET, POST, PUT, and DELETE as a complete, fully-functional language for accessing those resources. Database people inherently understand and respect this access pattern better than many programmers working in traditional programming languages like C# and Java. After all, we’re accustomed to using four basic methods to manipulate data in our databases: SELECT, INSERT, UPDATE, and DELETE. Yet, as database people, we also know that giving software developers strict table-level access can cause all sorts of performance problems. For those situations, where you know that some complex database operation could be performed much more efficiently with a bit of T-SQL code, a stored procedure or a view may be just the prescription your developers need. Hopefully, this article has helped you understand how to invoke programmatic resources in a SQL Azure database, and perhaps it will help you along the way to making the correct architectural choices in the design of your modern, data-driven web applications.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Basics of the SSIS Script Task – Notes from the Field #065

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the key essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.


In this episode of the Notes from the Field series, I asked SSIS Expert Andy Leonard a very crucial question – What are the basics of the SSIS Script Task and where do we start with it? Andy was very kind to answer the question and provided plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


Learning how to configure and use the SQL Server Integration Services (SSIS) Script Task gives you a great starting point for learning how to use SSIS Control Flow tasks. You’ll find that SSIS Control Flow tasks provide powerful development capabilities that may surprise you. Let there be no mistake: SSIS is a software development platform. With “SQL Server” included in the name, it is easy for people to mistake SSIS for a database tool or accessory, but Control Flow Tasks put that confusion to rest.

SSIS provides several Control Flow tasks. Here is a list that provides a good approximation of which tasks I use most, from most-used to least-used:

This article will focus on a basic example of configuring the SSIS Script Task, shown in Figure 1. People often think the SSIS Script Task is just for generating log messages, but let’s look at some of its more powerful capabilities.


Figure 1: SSIS Script Task

The Script Task supports Microsoft Visual C# and Microsoft Visual Basic languages. You can choose which language you will use prior to clicking the Edit Script button. Once you’ve clicked the Edit Script button, you cannot change the Script Language property.

At the heart of the Script Task is the ability to read a variable value from the SSIS package into the script and then write a message out of the Script Task. Because the Script Task can interact with SSIS variables, you can use .Net code to manipulate and respond to variable values.

For the purpose of our demonstration, we will add code to the Script Task to create a message for SSIS logging.

Messaging is a key component of enterprise architecture. Messages are generated (published, raised, produced, etc.) by Senders and received (subscribed, consumed, requested, etc.) by Listeners. Messaging is a complex topic. Entire books are dedicated to the topic.

Add variables to the ReadOnlyVariables and ReadWriteVariables properties by clicking the ellipsis inside the value textbox for each property. When you click the ellipsis, a Select Variables dialog appears, as you see in Figure 2.


Figure 2: Adding Read Only Variables to the Script Task

SSIS variables use a two-part naming convention: <Namespace>::<VariableName>. I added the SSIS variables System::TaskName and System::PackageName. The selected variables are in the System namespace and contain the name of the task and package, respectively.

Click the Edit Script button to open the script editing window as shown in Figure 3:


Figure 3: The Script Task Script Editor

Add the following code beneath the instruction to Add your code here:

Dim sPackageName As String = _
    Dts.Variables("System::PackageName").Value.ToString()
Dim sMsg As String = "I am " & sPackageName
Dts.Events.FireInformation(1001, sPackageName, sMsg, "", 0, True)

Listing 1: VB.Net Code

The first line of the code in Listing 1 creates a VB.Net variable named sPackageName and initializes the value of this variable, setting it to the value of the SSIS Variable named System::PackageName. This is one way to read SSIS Variable values into a Script Task’s script.

The second line of the code in Listing 1 creates a VB.Net variable named sMsg and initializes the value of this variable with the text, “I am” plus the value contained in the sPackageName VB.Net variable.

The third line of the code shown in Listing 1 calls the FireInformation method on the Dts.Events object. FireInformation accepts six arguments:

  1. InformationCode – an integer you can use to identify custom messages sent from the SSIS package.
  2. SubComponent – a string you can use to populate the source of the message.
  3. Description – a string containing the message you wish to send.
  4. HelpFile – a string containing the location of a Help file (should a help file exist).
  5. HelpContext – an integer pointing to the help context (should a help context exist).
  6. FireAgain – a Boolean value indicating whether the event should subsequently fire.

My VB code includes a reference to the FireAgain argument. Note: The MSDN documentation states: “Because firing of an event may be expensive, the run-time engine provides a mechanism for suppressing events that you are not interested in. Every event firing method has a FireAgain parameter. If the value of this variable is false, after the method returns, the caller will not fire this event again for the duration of the current execution.” I understand this to mean that setting FireAgain to False would suppress future FireInformation events in my Script Task, but testing indicates this is not correct. This is important to know because you may opt to use the FireAgain argument to manage when you raise events. I do not use the FireAgain argument for this purpose; I typically manage the decision to raise events in conditional logic surrounding the event-raising code.

A test execution of the package reveals an Information message recorded on the Progress / Execution Results tab as shown in Figure 4.


Figure 4: the Information Message on the Progress / Execution Results Tab

Information messages generated from Script Tasks will appear in SSIS logs and the SSIS Catalog. With this information, you can monitor the progress of SSIS execution while the package is running, or view the historical status of variable values or package messages after the package execution has completed.
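
For example, once a package has run from the SSIS Catalog, you can read its logged messages back with a query along these lines (a simple sketch; in practice you would filter on a specific operation_id):

-- Recent messages logged to the SSIS Catalog, newest first
SELECT om.operation_id,
       om.message_time,
       om.message
FROM SSISDB.catalog.operation_messages AS om
ORDER BY om.message_time DESC;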

The SSIS Script Task can accomplish much more than generating log messages. This brief introduction and basic example have demonstrated how to get started configuring and using the SSIS Script Task. As you can see, SSIS Script Tasks give you development capabilities you may not have been aware of.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The DBA goes to MARS – Maintenance, Architecture, Research and Support – Notes from the Field #063

[Note from Pinal]: This is the 63rd episode of the Notes from the Field series. Storytelling is an art very few have mastered in their lives. When I requested Stuart to share his ideas about his famous blog post about MARS, I really had not expected the response I received in this episode of Notes from the Field. It is very critical for any DBA to be proactive in their operations; however, are we proactive enough? Well, you need to read this entire episode of Notes from the Field to grasp the concept.

In this episode of the Notes from the Field series, database expert Stuart Ainsworth explains MARS in a very captivating story.



Database administration can be a complex job, particularly in large-scale environments; DBAs are responsible for a variety of tasks, and there’s a wealth of knowledge on how to administer and support SQL Server. For this post, however, I wanted to focus not on a specific code-related or configuration tip, but rather on a broader perspective of work in general. As a manager and a consultant, I’ve found that most SQL Server shops have a common problem: we focus too much on reacting to problems and not enough on preventing them.

In the past, I’ve blogged about what I call the MARS paradigm: Maintenance, Architecture, Research and Support. Most system administration tasks can be broadly categorized into one of these four areas, and each of these four areas can be conceptually defined in terms of Actions (Proactive\Reactive) and Results (Delayed\Immediate).

Think about any given day, and how much time you spend working in each of the 4 areas; which ones are given priority? If you’re like most shops, you probably spend most of your time working support cases (“my query is slow”, or “can you show me where to find this information in the database?”) instead of focusing on maintenance or architecture. A healthy IT shop will find ways to balance the need for immediate reaction against the need for proactive long-term results; however, the question that needs to be answered is: why do we react, and how do we become more proactive?

Why Database Administration Is Usually Reactive:

In part, the reactive nature of most IT departments is a cultural thing. Fighting fires can be fun; people who solve problems are usually lauded as heroes, while people who prevent problems often go unnoticed. Think about it this way: if a database is corrupted and a restore is required, who gets the credit for saving the day? The DBA who did the restore or the DBA who set up the backup plan years ago (assuming they’re not the same DBA)? Administrators who are quick to act are viewed as responsive, even though their quick fixes are usually built on a foundation that was established invisibly some time ago.

Furthermore, most database administrators are thorough in their work; support issues are usually an opportunity for them to dig into a problem and find the best solution. Maintenance issues, on the other hand, address problems that don’t yet exist, and therefore don’t yet need a solution. However, constantly providing support pulls resources away from building a better foundation. If a DBA is spending time constantly fighting fires, they’re not spending time monitoring the environment to avoid future problems. So how do you move from a reactive environment to a proactive environment?

Moving From a Reactive Approach to a Proactive Approach:

First, track your work efforts for some period of time (30 days).  Find a way to categorize your work into one of the four broad areas defined above:

  • Maintenance: efforts needed to maintain system performance and stability
  • Architecture: efforts to add new features, functionality, or capability
  • Research: efforts to learn “something new”
  • Support: efforts to sustain the existing functionality or customer requests

Ideally, your work is being tracked in some form of service desk software, but if not, then I’d suggest you start with something simple (like a whiteboard with sticky notes).

Second, figure out a metric for work effort; it could be time spent on an issue, or the number of issues, or some other measure of complexity.  The goal is to have a reasonable method of comparison between categories.  Based on your collection efforts, you should be able to figure out where you’re spending most of your time; if more than 50% of your time is being spent in Support or Research, it’s time to consider becoming more proactive.

Becoming more proactive can be a never-ending journey, but I have a few suggestions on where to start:

  1. Prioritize accordingly. Not every support case deserves to be treated as your top priority; make sure that maintenance and architecture issues are included in your escalation plans.
  2. Set it, but don’t forget it. SQL Server offers a lot of tools for automating maintenance; make sure that you allocate time to review successful executions as well as failures.  Understanding when a system is performing outside of spec is nearly as critical as being notified when it fails.
  3. Communicate with interested parties. Let your manager and your peers know about your maintenance and architecture efforts; the best metric would be a reduction in support calls and cases.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Easy Functions of SQL Server – Notes from the Field #062

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains some easy and amazing functions of SQL Server. Kathi is an amazing instructor; she was one of the first SQL authors I read early in my career, and her writing instilled my love for this technology. Today she brings a relatively unknown topic for database experts. Read the experience of Kathi in her own words.


Have you ever tried to format a datetime value in T-SQL? I have seen some pretty crazy code that looked something like this:

DECLARE @date DATETIME = GETDATE();
SELECT CAST(YEAR(@date) AS CHAR(4)) + '/' +
RIGHT('0' + CAST(MONTH(@date) AS VARCHAR(2)),2) + '/' +
RIGHT('0' + CAST(DAY(@date) AS VARCHAR(2)),2) + ' ' +
RIGHT('0' + CAST(DATEPART(HOUR,@date) AS VARCHAR(2)),2) + ':' +
RIGHT('0' + CAST(DATEPART(MINUTE,@date) AS VARCHAR(2)),2);

Starting with SQL Server 2012, there is an even easier method: the FORMAT function. The FORMAT function can do the same thing as the previous code, but it is much easier to write. The FORMAT function is one of five functions added in SQL Server 2012 that I like to call The Easy Functions. The Easy Functions are listed in the table below.

The Easy Functions

Category  Function  Definition
Logical   CHOOSE    Selects one expression from a list
          IIF       Inline IF: evaluates an expression and returns one of two expressions for true or false
String    CONCAT    Adds strings together
          FORMAT    Formats a string, including “C” for currency and “P” for percent
Date      EOMONTH   Returns the last day of the month
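
For instance, the lengthy CAST/RIGHT code at the top of this article collapses to a single FORMAT call that produces the same yyyy/MM/dd HH:mm output:

DECLARE @date DATETIME = GETDATE();
SELECT FORMAT(@date, 'yyyy/MM/dd HH:mm');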

The following script demonstrates the logical functions.

USE AdventureWorks2014; --Or 2012
GO
SELECT  CustomerID, COUNT(*) AS OrderCount,
CHOOSE(COUNT(*), 'Bronze','Silver','Gold','Platinum') AS MemberLevel,
AVG(TotalDue) AS AvgOrder,
IIF(AVG(TotalDue) > 1000, 'High','Low') AS Priority
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;


The query returns the count of orders for several customers. A MemberLevel is assigned based on the number of orders the customer has placed, and a Priority is assigned based on the average order. If the average order is over $1000, then the customer has a high priority.

Take a look at the CHOOSE expression. The first argument is an integer, the count of the orders. After the order count, there is a list of values. The list of values is actually a 1-based array, and the first argument is an index into the array.
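
One detail worth knowing: if the index falls outside the list, CHOOSE returns NULL rather than raising an error. A customer with five or more orders in the query above, for example, would get a NULL MemberLevel:

SELECT CHOOSE(5, 'Bronze', 'Silver', 'Gold', 'Platinum') AS MemberLevel; -- returns NULL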

The IIF function requires three arguments. The first argument is an expression to check, the average order. The second argument is a value to return if the expression is TRUE. The third argument is a value to return for FALSE.
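
IIF is effectively shorthand for a two-branch CASE expression; these two statements return the same result:

SELECT IIF(2 > 1, 'High', 'Low');
SELECT CASE WHEN 2 > 1 THEN 'High' ELSE 'Low' END;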

The next example demonstrates the string functions.

SELECT C.CustomerID, P.FirstName, P.MiddleName, P.LastName,
CONCAT(P.FirstName, ' ' + P.MiddleName, ' ', P.LastName) AS FullName,
FORMAT(SUM(TotalDue),'C') AS TotalSales,
FORMAT(SUM(TotalDue)/SUM(SUM(TotalDue)) OVER(),'P') AS PercentOfSales,
FORMAT(MIN(OrderDate),'yyyy.MM.dd') AS FirstOrderDate
FROM Sales.SalesOrderHeader AS SOH
JOIN Sales.Customer AS C ON SOH.CustomerID = C.CustomerID
JOIN Person.Person AS P ON P.BusinessEntityID = C.PersonID
WHERE C.CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY C.CustomerID, P.FirstName, P.MiddleName, P.LastName;


The query uses the CONCAT function to build a FullName column. The CONCAT function ignores NULL values. Notice that rows 3 and 4 returned the FullName even though the MiddleName values are NULL. To make sure an extra space doesn’t show up when the MiddleName is missing, the space is combined with MiddleName as one of the arguments. NULL added to a space is NULL, which will be ignored.
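
You can see this NULL behavior in isolation with a quick test:

-- ' ' + NULL evaluates to NULL, which CONCAT ignores: returns 'Kathi Kellenberger'
SELECT CONCAT('Kathi', ' ' + NULL, ' ', 'Kellenberger');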

There are three examples of FORMAT. The first example formats TotalSales as currency, in this case USD. The next example, PercentOfSales, formats a complex expression comparing each customer’s sales to the overall total. It uses a window aggregate function to calculate the total sales for the query. (Window aggregates is a great topic for another post!) The final example formats the FirstOrderDate in a custom format.

Here is one last easy function: EOMONTH.

SELECT CustomerID, MIN(OrderDate) AS FirstOrderDate,
EOMONTH(MIN(OrderDate)) AS LastDayOfMonth,
EOMONTH(MIN(OrderDate),1) AS LastDayOfNextMonth
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;


The EOMONTH function returns the last day of the month for the supplied date argument. The second example, LastDayOfNextMonth, uses the optional parameter that adds another month.
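
With a fixed date, the behavior is easy to see:

SELECT EOMONTH('20150210') AS LastDayOfMonth;        -- returns 2015-02-28
SELECT EOMONTH('20150210', 1) AS LastDayOfNextMonth; -- returns 2015-03-31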

I hope that using The Easy Functions will make your life easier!

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Iterating over Metadata with BimlScript – Notes from the Field #061

[Note from Pinal]: This is the 61st episode of the Notes from the Field series. One of the common questions I receive every other day is: “I have learned BI, but I feel that there is no automation in the BI field. What should I do? How do I go beyond learning BI? How can I fill the gap between BIDS and SSDT?” If you have been reading this blog, you know that when it comes to BI, I always take help from Linchpin People, who are BI experts. I requested Reeves from Linchpin People to help me answer this unique question.

In this episode of the Notes from the Field series, database expert Reeves Smith explains what Biml is and how it can help you with SQL Server BI. Read the experience of Reeves in his own words.



In previous articles (What is Biml and The Why of Biml), I’ve discussed Business Intelligence Markup Language (Biml) and given some examples of projects that would benefit from its use. I also demonstrated a quick way to get introduced to Biml without learning a lot of C#. As you grow in your understanding of Biml and BimlScript you will want to make your Biml files more dynamic with a programmatic metadata-driven approach. This article will build on the previous articles and add looping with metadata to build more dynamic Biml projects.

Overview

Starting where the previous article left off, I’ll demonstrate how to add a little C# to make a more automated method of metadata retrieval. If you did not walk through the other articles, I would suggest that you review them before you read this one. This article builds on the Biml in Listing 1 and lets you create multiple integration services packages from one Biml file.


Listing 1 – Semi-Automated Common Source Package Biml

Example – Automated Source Packages

This example shows you how to connect to a database and select the metadata to iterate over. Then you can create multiple SQL Server Integration Services (SSIS) packages from one Biml file. The example in the previous article manually updated the metadata and reran each Biml file. This is an effective approach, but you can build on it to create a more robust solution. Take a look at the C# code in Listing 2.


Listing 2 – C# Replacement Code

Replace the first line of code from Listing 1 with the code in Listing 2, which is an excerpt from Listing 3.

  • The first line will add the namespace to the Biml file and enable the calling of functions without a fully qualified name.
  • The second line creates a string variable that contains a connection string of the database with the metadata.
  • The third line creates a connection object (actually an AstOleDbConnection node) that will be used by the ImportDB function.
  • The fourth line creates a result set with the metadata from the connection object. This object contains all of the metadata from the AdventureWorks2012 database. The ImportDB function has three parameters. The first parameter is a schema filter, and in the example the filter is on the Person schema. The second parameter is a table filter and is blank in the example, so it will be ignored. The last parameter is a list of ImportOptions, and the views have been excluded. To add more import options, use the “|” operator followed by additional options. An example of multiple options:

ImportOptions.ExcludeForeignKey | ImportOptions.ExcludeViews

The example includes an object that contains all of the metadata that you will loop over. You have to place the loop within the <Packages> node because a Biml file can only have one Packages collection. The foreach loop will create multiple <Package> nodes.

A tableName string was built below the foreach loop to reduce the code that you have to alter from the example in the previous article.


Listing 3 – Complete Code

Note: The code in listing 3 has the ConnectionString truncated to save space.

If you receive the error shown in Figure 1, it means that the tables do not exist in the stage database. To check this, add the Person filter to the table parameter in the ImportDB function and return metadata for only one table. The prior articles used this table, so it should exist.


Figure 1 – Error Message

Summary

This article added some more C# to help you refine your Biml files. I hope you can start to see the power of Biml and how it can help remove the repetitive work from your next Integration Services project.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)