SQL SERVER – Adventure Works for SQL Server 2012 RC0 – Sample Database

Microsoft has just released the AdventureWorks database for SQL Server 2012 RC0. I am very happy that I will now be able to play with this new sample database and base my various demo scripts on it. Here is the link to download the AdventureWorks 2012 RC0 database.

Download Adventure Works for SQL Server 2012 RC0

Please note that this is just a data file. You will have to follow my earlier blog post to create a database from a data file without a log file; read it here.
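
If you are not sure how to do this, one common T-SQL approach is to attach the data file and let SQL Server rebuild a new log file. Here is a minimal sketch; the file name and path are illustrative, so substitute the location where you saved the downloaded .mdf:

-- Attach the downloaded data file and let SQL Server rebuild a new log file.
-- The database must have been shut down cleanly for the log rebuild to succeed.
CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'C:\Data\AdventureWorks2012_Data.mdf')
FOR ATTACH_REBUILD_LOG;
GO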

Additionally, this is a sample database, so install it in your development environment. Recently I have seen it on a few production servers, and when I asked the production DBAs why it existed there, they told me it was ‘for practice’. I suggest you do your practice on your development box and not on your production box.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – Download Whitepaper 5 Tips for a Smooth SSIS Upgrade to SQL Server 2012

Microsoft SQL Server 2012 Integration Services (SSIS) provides significant improvements in both the developer and administration experience. This article provides tips that can help make the upgrade to Microsoft SQL Server 2012 Integration Services successful. The tips address editing package configurations (specifically connection strings), converting configurations to parameters, converting packages to the project deployment model, updating Execute Package tasks to use project references, and parameterizing the PackageName property.

  • TIP #1: Edit Package Configuration and Data Source after upgrading
  • TIP #2: Convert to project deployment model using Project Conversion Wizard
  • TIP #3: Update Execute Package Task to use project reference and use parameter to pass data from parent package to child package
  • TIP #4: Parameterize PackageName property of Execute Package Task to dynamically configure which child package to run at execution time
  • TIP #5: Convert package configuration to parameter when possible

Download Whitepaper: 5 Tips for a Smooth SSIS Upgrade to SQL Server 2012

The above text is taken from the original MSDN documentation.
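
As a quick, hedged illustration of TIP #5: once packages are converted to the project deployment model, values that used to live in package configurations can be supplied as parameters in the SSISDB catalog and set with T-SQL. The folder, project, and parameter names below are purely illustrative:

-- Set a project parameter on a deployed SSIS project (all names are hypothetical).
EXEC [SSISDB].[catalog].[set_object_parameter_value]
    @object_type     = 20,   -- 20 = project parameter, 30 = package parameter
    @folder_name     = N'ETL',
    @project_name    = N'SalesLoad',
    @parameter_name  = N'SourceConnectionString',
    @parameter_value = N'Data Source=PRODSQL;Initial Catalog=Sales;Integrated Security=SSPI;';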

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Where Can YOU Get My Books – SQL Server Interview Question and Answers

Earlier this month I released my third book, SQL Server Interview Question and Answers. The focus of this book is ‘master the basics’. If you rate yourself 10 out of 10 in SQL Server, this book is not for you; but if you want to learn the fundamentals or refresh them, this book is for YOU. I was overwhelmed by the love you all showed this book on its release date, which led our three-digit inventory to run out of stock. Read the detailed blog post about it here: A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available.

Well, we learned a lesson from that experience and have made sure that the inventory does not run out any more. Since then, the book has become available through multiple outlets. It is available pretty much anywhere in the USA and India, and additionally wherever Amazon ships internationally.

I have created a dedicated page listing where one can get this book: Details of SQL Server Interview Question and Answers. Even so, I keep getting the common question – where can one get this book?

You can get this book from:

USA: Amazon

India: Flipkart | IndiaPlaza | Crossword

In India, you can now walk into any Crossword store and ask for this book; if they do not have it, you can ask them to get one for you.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Using expressor Composite Types to Enforce Business Rules

One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated.

Much like a class in higher-level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data cannot assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you.

Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database.

Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table.
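
To see why the engine, not the application, supplies these values, here is a rough T-SQL sketch of how such system-maintained columns are typically declared; the table name is invented for illustration and this is not the actual AdventureWorks definition:

-- Simplified illustration of system-maintained columns (not the real AdventureWorks DDL).
CREATE TABLE dbo.EmployeeDemo
(
    EmployeeID     int IDENTITY(1,1) NOT NULL PRIMARY KEY,               -- value generated by the engine
    rowguid        uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID(), -- value generated by a default
    VacationHours  smallint NOT NULL DEFAULT 0,
    SickLeaveHours smallint NOT NULL DEFAULT 0,
    ModifiedDate   datetime NOT NULL DEFAULT GETDATE()
);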

Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application.

By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content.

When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type.

If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application.

While working in expressor Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns.

The value of an alternative composite type is even more apparent when you want to insert into or update the table.  In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system.

And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns.

By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated.
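
In T-SQL terms, this update composite type corresponds to a statement that touches only those four columns. A rough sketch using the column names from the walkthrough; the accrual amounts and the key value are invented for illustration:

-- Monthly accrual update touching only the columns exposed by the update composite type.
DECLARE @EmployeeID int = 1;   -- key value supplied by the upstream application
UPDATE HumanResources.Employee
SET    VacationHours  = VacationHours + 8,
       SickLeaveHours = SickLeaveHours + 4,
       ModifiedDate   = GETDATE()
WHERE  EmployeeID = @EmployeeID;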

Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationalIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded.

Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition.

Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box.

The schema editor is updated accordingly.
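
For comparison, the same business rule could also be enforced inside the database with a check constraint. This is only a sketch of the database-side equivalent, with an illustrative constraint name, not something the expressor tooling generates:

-- Database-side equivalent of the 'S, M, D' rule defined on the composite type.
ALTER TABLE HumanResources.Employee
ADD CONSTRAINT CK_Employee_MaritalStatus_SMD
    CHECK (MaritalStatus IN ('S', 'M', 'D'));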

There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator.

Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents.

In this posting, I’ve given an overview of the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – 2012 RC0 Various Resources and Downloads

Microsoft SQL Server 2012 Release Candidate 0 (RC0)
Microsoft SQL Server 2012 RC0 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization.

Microsoft SQL Server 2012 Express RC0
Microsoft SQL Server 2012 Express RC0 is a powerful and reliable free data management system that delivers a rich set of features, data protection, and performance for embedded applications, lightweight Web Sites, applications, and local data stores.

Microsoft SQL Server 2012 Semantic Language Statistics RC0
The Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012 RC0.

Microsoft SQL Server 2012 Release Candidate 0 (RC0) Manageability Tool Kit
The Microsoft SQL Server 2012 Release Candidate 0 (RC0) Manageability Tool Kit is a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012 Release Candidate 0 (RC0).

Microsoft SQL Server 2012 PowerPivot for Microsoft Excel 2010 Release Candidate 0 (RC0)
Microsoft PowerPivot for Microsoft Excel 2010 provides ground-breaking technology: fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example.

Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data.
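
To give a sense of what that coding would look like elsewhere, here is a rough T-SQL sketch that performs the same reshaping from a staging table in which every CSV field was loaded as text; the staging table name is hypothetical, and this is not how expressor Studio itself does it:

-- Hypothetical staging approach: every CSV field loaded as text, then converted in the query.
SELECT
    CAST(EMP_ID    AS int)      AS EmployeeID,
    CAST(STRT_DATE AS datetime) AS StartingDate,    -- '13-JAN-93' resolved via the server's two digit year cutoff
    CAST(END_DATE  AS datetime) AS TerminationDate, -- '24-JUL-98 17:00' keeps the time portion
    JOB_ID                      AS JobDescription,
    CAST(DEPT_ID   AS int)      AS DepartmentID,
    CAST(REPLACE(REPLACE(SALARY, '$', ''), ',', '') AS decimal(10, 2)) AS FinalSalary
FROM dbo.TerminatedEmployees_Staging;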

Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts.

First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file.

Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.

To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button.

In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.

Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

Again, a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard.


I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.
Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type.

But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 for the starting date and 1999-12-31 23:59 for the termination date).

As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file.

I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values.

For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed.

And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box.

These entries allow the string to be properly converted into a decimal value.

By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped as well.

I then select the Write Table operator and in its Properties panel specify the database connection, select Normal for the “Mode,” and enable the “Truncate” and “Create Missing Table” options.  If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.

The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine-tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table.
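
For readers who prefer to see the result as DDL, the target table would end up with roughly this shape; the table name is invented for illustration and the exact wizard-generated definition may differ:

-- Approximate shape of the target table (the wizard-generated DDL may differ).
CREATE TABLE dbo.TerminatedEmployees
(
    EmployeeID      int         NOT NULL,
    StartingDate    datetime    NOT NULL,
    TerminationDate datetime    NOT NULL,
    JobDescription  varchar(40) NULL,
    DepartmentID    int         NOT NULL,
    FinalSalary     money       NULL
);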

This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables.

Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support reading data from and writing data to Salesforce.com.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQLAuthority News – SQL Server 2008 SP3 Available to Download

This news is one week late but, from my perspective, still very useful. Please note that these downloads are for SQL Server 2008 and will not work with SQL Server 2008 R2.

SQL Server 2008 Service Pack 3

  • Enhanced upgrade experience from previous versions of SQL Server to SQL Server 2008 SP3. In addition, we have increased the performance & reliability of the setup experience.
  • In SQL Server Integration Services, logs will now show the total number of rows sent in Data Flows.
  • Enhanced warning messages when creating the maintenance plan if the Shrink Database option is enabled.
  • Resolved a database issue with transparent data encryption enabled, making the database available even if the certificate is dropped.
  • Optimized query outcomes when an indexed spatial data type column is referenced by the DTA (Database Tuning Advisor).
  • Superior user experience with sequence functions (e.g., ROW_NUMBER()) in a parallel execution plan.
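
For anyone who has not used it, ROW_NUMBER() is one of the sequence (ranking) functions that last fix refers to. A quick illustrative query; the table and column names are hypothetical:

-- Number employees within each department by salary, highest first.
SELECT  DepartmentID,
        EmployeeID,
        Salary,
        ROW_NUMBER() OVER (PARTITION BY DepartmentID ORDER BY Salary DESC) AS SalaryRank
FROM    dbo.EmployeePay;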

Microsoft SQL Server 2008 SP3 Express Edition
SQL Server 2008 SP3 Express is a free edition of SQL Server that is ideal for learning and building desktop and small server applications, and for redistribution by ISVs.

Microsoft SQL Server 2008 Service Pack 3 Feature Pack
The Feature Pack is a collection of stand-alone install packages that provide additional value for SQL Server 2008 SP3.

Microsoft SQL Server 2008 SP3 Reporting Services Report Builder
Microsoft SQL Server 2008 Reporting Services Report Builder 2.0 delivers an intuitive, Office-like report authoring environment enabling business and power users to leverage their experience with Microsoft Office 2007 products.

Microsoft SQL Server 2008 SP3 Reporting Services Add-in for Microsoft SharePoint Technologies
The Microsoft SQL Server 2008 SP3 Reporting Services Add-in for Microsoft SharePoint Technologies is a Web download that provides features for running a report server within a larger deployment of Windows SharePoint Services 3.0 or Microsoft Office SharePoint Server 2007.

Reference: Pinal Dave (http://blog.SQLAuthority.com)