SQL SERVER – New SQL Server 2012 Functions – Webinar by Rick Morelan

My friend Rick Morelan is a wonderful speaker, and listening to him is a delight. Rick is one of those speakers who can explain a very complex subject in very simple words. He has attained over 30 Microsoft certifications in applications, networking, databases and .NET development, including MCDBA, MCTS, MCITP, MCAD, MOE, MCSE and MCSE+.

Here is the chance for everyone who has not listened to Rick Morelan before, as he is presenting an online webinar on New SQL Server 2012 Functions. Whether you’re a database developer or an administrator, you love the power of SQL functions. The functions in SQL Server give you the power to accelerate your applications and database performance. Each version of SQL Server adds new functionality, so come and see Rick Morelan explain what’s new in SQL Server 2012! This webinar will focus on the new string, time and logical functions added to SQL Server 2012.

Register for the webinar now to learn:

  • SQL Server 2012 function basics
  • String, time and logical function details
  • Tools to accelerate the SQL coding process

Tuesday June 11, 2013 
7:00 AM PDT / 10:00 AM EDT
11:00 AM PDT / 2:00 PM EDT

Secret Hint: Here is something I would like to tell everyone: there is a quiz coming up on SQLAuthority.com, and those who attend the webinar will find it very easy to solve.

Register for webinar

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – XML Data Type- SQL Queries 2012 Joes 2 Pros Volume 5 – XML Querying Techniques for SQL Server 2012

This chapter is abstract from the Beginning SQL 2012 – Joes 2 Pros Volume 5

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Why buy this Book: SQL is full of relational data with one-to-many relationships, and so is XML. The marriage between XML and SQL turns out to be far easier than people expect. People often tell me at the end of reading these 5 books that they were surprised this one is their favorite.

What will I learn after reading this book: XML Data Type, Shredding XML, Parsing XML, XQuery Extensions, XPATH, Binding XML to SQL tabular results, XML Namespaces, and XML Indexes.

All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the SQL 2012 series Volume 5 in the file SQLQueries2012Vo5Chapter5.0Setup.sql. If you need help setting up then look in the “Free Videos” section on Joes2Pros under “Getting Started” called “How to install your labs”

XML Data Type

Integers hold numbers with no decimal points, VARCHARs hold strings of varying length, and the Geography data type holds a position on the earth. Introduced in SQL Server 2005, the XML data type holds and understands valid XML strings.

If an XML document is essentially one long character string, then why should there be a separate XML data type? Like the Geography data type, XML has some built-in functions and methods to help with searching and querying the data inside. For example, the XML data type can detect the difference between valid XML strings (either a valid XML fragment or a well-formed XML document) and strings which aren’t valid XML.
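As a minimal sketch of this validation behavior (the variable names are just for illustration), assigning a broken string to an XML variable fails, while a well-formed fragment succeeds:

DECLARE @GoodXml XML, @BadXml XML
-- A well-formed fragment converts to the XML data type without complaint
SET @GoodXml = '<Category ID="1"/>'
-- A string that is not valid XML raises an XML parsing error
SET @BadXml = '<Category ID="1">'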

XML as a Field

You can declare a field as an XML data type at the time you create a table in SQL Server, or you can add an XML field later. This would potentially give each record in your table its own well-formed XML data.

How might this capability be a benefit? Suppose you need to store credit history data for each customer. Some customers have one credit reference, some have many, and a few have none at all. These one-to-many relationships could be defined with a new table called dbo.CreditHistory, since a single field does not implement multiple relationships very well. However, XML would allow for this through a series of related tags and without the need to create a separate table.
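For illustration only, a hypothetical table along those lines (the table and field names are assumptions, not objects in the JProCo database) might look like this, with zero, one, or many references stored per customer:

CREATE TABLE CustomerCredit (
CustomerID INT PRIMARY KEY,
CustomerName VARCHAR(50),
CreditHistory XML NULL)
GO
INSERT INTO CustomerCredit VALUES
(1, 'Mark Williams',
'<CreditHistory>
<Reference Agency="Agency1" Score="720"/>
<Reference Agency="Agency2" Score="715"/>
</CreditHistory>'),
(2, 'Lee Young', NULL)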

Our first example will add an XML field to an existing table. The CurrentProducts table (JProCo.dbo.CurrentProducts) has 7 fields and 485 records.

This ALTER TABLE statement code adds a nullable XML field named CategoryCodes to the CurrentProducts table.

ALTER TABLE CurrentProducts
ADD CategoryCodes XML NULL
GO

Messages
Command(s) completed successfully.

0 rows

If we re-run our SELECT statement, we can see the new field showing in the table. Now we want to add some data to the new column. Run this code to populate the CategoryCodes field for ProductID 1 with a well-formed XML document.

UPDATE CurrentProducts
SET CategoryCodes =
'<Root>
<Category ID = "1"/>
<Category ID = "4"/>
</Root>'
WHERE ProductID = 1

Messages
(1 row(s) affected)

0 rows

By running the SELECT statement again, we see the newly populated record in the CategoryCodes field. An XML hyperlink shows for Product 1. If we click the XML hyperlink, we see this XML result stored in that cell.

SELECT * FROM CurrentProducts

Recall the situation we described earlier where a credit history could be stored as one field within a customer’s record. Our results here show how this data could appear and how adding an XML field to an existing table would save us from having to create a separate table to contain each customer’s credit history data.

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Executing Dynamic SQL – SQL Queries 2012 Joes 2 Pros Volume 4 – Query Programming Objects for SQL Server 2012

This chapter is abstract from the Beginning SQL 2012 – Joes 2 Pros Volume 4

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Why buy this Book: As you get further into SQL you will discover areas you like more than others. Maybe you’re good at queries or performance tuning. It all comes down to writing code, but that code needs to be in the right place. Code can become a function, stored procedure, trigger, cursor, or script. What type of code is right for the different SQL objects, and how to handle errors, is what this book is all about.

What will I learn after reading this book:  Constraints, Triggers, Views, Stored Procedure Techniques, Functions, Error Handling, Dynamic SQL, and Cursors.

All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the SQL 2012 series Volume 4 in the file SQLQueries2012Vo4Chapter14.0Setup.sql. If you need help setting up then look in the “Free Videos” section on Joes2Pros under “Getting Started” called “How to install your labs”

Exec Dynamic SQL

The words “I want to” make very little sense without context. However, the words “I want to have lunch with you” give context and the sentence makes sense. Just as incomplete sentences don’t make sense in spoken languages, neither do partial SQL statements. The following statement is a complete statement that SQL can understand:

SELECT * FROM MgmtTraining

That previous statement will return the seven records from the MgmtTraining table. Before we dive into Dynamic SQL, let’s break this simple SELECT statement into its two pieces. If we run these pieces one at a time, the query does not run, because SQL does not understand incomplete statements. The statement has two parts and we need both of them in order to run:

SELECT *

Will not run by itself and neither will:

FROM MgmtTraining

There is a big difference in the next two statements:

SELECT * FROM MgmtTraining
'SELECT * FROM MgmtTraining'

The first statement will run but the second will not. The single quotes around the second line make it a string and not a complete statement. If we could unwrap the second statement’s string and treat its contents as code, we could execute it as a SQL statement. One way to do this is by using the EXEC command. EXEC stands for execute and turns a string into running SQL code when the string is placed in a set of parentheses:

EXEC ('SELECT * FROM MgmtTraining')

Dynamic DML Statements

Many of us have used variables before to change how a predicate is run. For example, LastName = @LastName might get a different result set from the Employee table when the value for @LastName changes. We can also use variables to change which table we are going to query from.
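As a quick sketch of that second idea (the table name reuses MgmtTraining from this chapter; the variable names are just for illustration), the table itself can come from a variable once we build and execute the string. Changing the value of @TableName changes which table the statement queries:

DECLARE @TableName NVARCHAR(50) = 'MgmtTraining'
DECLARE @SQL NVARCHAR(200) = 'SELECT * FROM ' + @TableName
EXEC (@SQL)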

Beginning with a simple query let’s convert it to a Dynamic statement. Using the query from the first section of this chapter we will first declare the SELECT * and FROM MgmtTraining as @variables and set them equal to their corresponding lines of code. To see if we can reconstruct the query from variables we will add a print statement that concatenates the two variables into a message.

DECLARE @Select NVARCHAR(50) = 'SELECT * '
DECLARE @From NVARCHAR(50) = 'FROM MgmtTraining'
PRINT (@Select + @From)

MESSAGES
SELECT * FROM MgmtTraining

0 rows

This shows the same basic SELECT statement from the MgmtTraining table that we started with. The message is the same as the query. Our goal is to create Dynamic SQL, so simply change the PRINT command to EXEC. Instead of just printing a message SQL executes the (@Select + @From) string as a SQL statement.

DECLARE @Select NVARCHAR(50) = 'SELECT * '
DECLARE @From NVARCHAR(50) = 'FROM MgmtTraining'
EXEC (@Select + @From)

  ClassID ClassName                ClassDurationHours ApprovedDate
1 1       Embracing Diversity      12                 2007-01-01…
2 2       Interviewing             6                  2007-01-15…
3 3       Difficult Negotiations   30                 2008-02-12…
4 4       Empowering Others        18                 2012-08-01…
5 8       Passing Certifications   13                 2012-08-01…
6 9       Effective Communications 35                 2012-08-01…

7 rows

Using sp_executesql

A straightforward observation would be that not all queries are as simple as SELECT * FROM Employee. Sometimes seeing every employee is too much data. We may want to see just the employees with a LastName of Brown, Adams, or Smith:

SELECT * FROM Employee WHERE LastName = 'Brown'
SELECT * FROM Employee WHERE LastName = 'Adams'
SELECT * FROM Employee WHERE LastName = 'Smith'

Hard coding the LastName into the criteria is not efficient when the value for LastName is constantly changing. It would be better to use a variable for LastName and have the query use that variable. To return all the employees with a last name of Smith we are not going to hard code ‘Smith’; we will use the variable @EmpLastName. We can then change the way this query runs just by changing the value of the variable. The following code shows our idea, but it is not yet complete and will not run:

DECLARE @SQL NVARCHAR(200)
SET @SQL = 'SELECT * FROM Employee WHERE LastName = @EmpLastName'
DECLARE @EmpLastName NVARCHAR(30)
EXEC (@SQL, @EmpLastName = 'Smith')

Since there is a new variable, @EmpLastName, it would seem we should declare it and set its value for use by the query. In this case we declare @EmpLastName as an NVARCHAR(30) data type and attempt to pass in the value Smith within the EXEC command. The following example shows the EXEC statement did not accept the variable value and resulted in an error message.

DECLARE @SQL NVARCHAR(200)
SET @SQL = 'SELECT * FROM Employee WHERE LastName = @EmpLastName'
DECLARE @EmpLastName NVARCHAR(30)
EXEC (@SQL, @EmpLastName = 'Smith')

Messages
Msg 102, Level 15, State 3, Line 4

Incorrect syntax near ‘)’.

0 rows

The EXEC command can only execute a concatenated string and is not capable of swapping variable values. Our goal is to execute the Dynamic SQL, and swap out the @EmpLastName variable for a set value (such as ‘Smith’).

To swap out variables at run time, use the sp_executesql system stored procedure. It can swap out dynamic parameters right before execution. This code will execute the @SQL query and replace @EmpLastName with Smith.

DECLARE @SQL NVARCHAR(200)
SET @SQL = 'SELECT * FROM Employee WHERE LastName = @EmpLastName'
EXEC sp_executesql @SQL, N'@EmpLastName NVARCHAR(50)', @EmpLastName = 'Smith'

  EmpID LastName FirstName HireDate    LocationID ManagerID Status
1 11    Smith    Sally     1989-04-01… 1          NULL      Active
2 14    Smith    Janis     2009-10-18… 1          4         Active

2 rows
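Because only the parameter value changes, the same @SQL string can be reused for any of the last names we hard coded earlier. A minimal sketch, reusing the Employee query from above:

DECLARE @SQL NVARCHAR(200)
SET @SQL = 'SELECT * FROM Employee WHERE LastName = @EmpLastName'
-- Same dynamic statement, different parameter values
EXEC sp_executesql @SQL, N'@EmpLastName NVARCHAR(50)', @EmpLastName = 'Adams'
EXEC sp_executesql @SQL, N'@EmpLastName NVARCHAR(50)', @EmpLastName = 'Brown'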

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Introduction to GUIDs – SQL Queries 2012 Joes 2 Pros Volume 3 – Advanced Query Tools and Techniques for SQL Server 2012

This chapter is abstract from the Beginning SQL 2012 – Joes 2 Pros Volume 3

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Why buy this Book: We often learn good practices from the people that come before us. Sometimes we later learn why those practices make sense. But there are so many exceptions to these rules. Instead of memorizing the exceptions, you can learn what is really going on with SQL performance and storage. It’s actually a few simple parts that you can test.

What will I learn after reading this book: How data types are stored, spatial data types, Indexes and performance, GUIDs, Transactions and Locks, Isolation levels and database concurrency.

All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the SQL 2012 series Volume 3 in the file SQLQueries2012Vol3Chapter10.1Setup.sql. If you need help setting up then look in the “Free Videos” section on Joes2Pros under “Getting Started” called “How to install your labs”.

Introduction to GUIDs

Numbers make great ID fields, and for the longest time the integer was king for fulfilling this requirement. We often start counting from one when populating data in a table. If we buy one of our competitors sometime in the future, we might want to place their data into the database at our headquarters. If the company that we purchased also started their tables with the number one, how can we merge our tables together and maintain uniqueness?

In hindsight, it would have been nice if the company we purchased had even numbered identity fields and we had all odd numbered identity fields. Well, that might work for two businesses merging. What if a third company was purchased and we needed to add their database to ours as well? There is no telling how big any new project will scale over the next 10 or 20 years.
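For illustration, here is roughly what that hindsight plan would have looked like (both tables are hypothetical): our identity seeds at 1 and increments by 2 to produce odd values, while the purchased company's seeds at 2 to produce even values.

CREATE TABLE OurProducts (
ProductID INT IDENTITY(1,2),     -- 1, 3, 5, ...
ProductName VARCHAR(25))
GO
CREATE TABLE TheirProducts (
ProductID INT IDENTITY(2,2),     -- 2, 4, 6, ...
ProductName VARCHAR(25))
GO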

Here comes the GUID data type to save the day! If our ID field uses the GUID data type, and another table we want to merge into a data warehouse also uses this data type, the values are guaranteed to be unique no matter how many computers we gather data from.

GUID is an acronym for Globally Unique Identifier and is a unique 16-byte number. The terms GUID and UNIQUEIDENTIFIER are often used interchangeably within the SQL Server community.

Recap of Identity Integers

The UNIQUEIDENTIFIER data type was new to SQL Server 7.0 and is often used with auto-generated fields, similar to the use of an INT in an IDENTITY column.

The IDENTITY property is going to insert values in an incremental order, unlike the random order produced by a UNIQUEIDENTIFIER data type using the NEWID() function.

Attempting to merge two tables whose keys were generated by an IDENTITY column into a single table is very likely to end up with gaps or conflicting values (duplicates). Let’s create a very simple table and insert five rows of test data to review how this works. The results are seen here:

CREATE TABLE TestInt (
IntID INT IDENTITY(1,1),
IntName VARCHAR(25))
GO
INSERT INTO TestInt VALUES
('One'), ('Two'), ('Three'), ('Four'), ('Five')
SELECT * FROM TestInt

  IntID IntName
1 1     One
2 2     Two
3 3     Three
4 4     Four
5 5     Five

5 rows

Introduction of UNIQUEIDENTIFIER Data Type

Like an integer, the GUID is a number. However, instead of being a base 10 number like the integer, it is a hexadecimal number (Base 16). All GUIDs have a format of 8hex-4hex-4hex-4hex-12hex as shown here: B8DC0F5E-E4EF-4EA4-BC39-40721AFE680D

A big advantage of the GUID data type over an INT IDENTITY column is that GUIDs are unique across all tables, databases, and computers over any foreseeable time period (hundreds of years or more). This is very useful when combining records from multiple SQL Servers into a single data warehouse, as it avoids any conflicting entries based on this field.
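As a rough sketch of that warehouse scenario (the ServerA_TestGuid and ServerB_TestGuid names are made up for illustration), GUID-keyed rows from two servers can be combined with no ID translation and no risk of key collision:

-- Rows from both sources keep their original GUIDs
SELECT GuidID, IntName FROM ServerA_TestGuid
UNION ALL
SELECT GuidID, IntName FROM ServerB_TestGuid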

We can quickly see exactly what a GUID looks like and how to easily generate this unique hexadecimal number by using the following code sample. When viewing the results notice how the same code generates a different value each time the NEWID() function is called.

SELECT NEWID() AS 'GUID-A'
SELECT NEWID() AS 'GUID-B'
SELECT NEWID() AS 'GUID-C'

  GUID-A
1 E005915A-6A34-45E9-A57A-DF638BED1A18

  GUID-B
1 952E6FD0-6B14-48BE-BA2B-685B7CDDC5E2

  GUID-C
1 4B73C91D-AE84-41B2-97D2-233860A78032

3 rows

Whenever the NEWID() function is called, it will return a value of the UNIQUEIDENTIFIER data type (i.e., a GUID). Now let’s create another simple table using a UNIQUEIDENTIFIER (GUID) instead of an IDENTITY column, and then insert the same VARCHAR() values in each of the five rows of test data that we used for the TestInt table. It is important to remember that GUID values are not generated automatically by SQL Server, so it is recommended to supply a DEFAULT of NEWID() when creating the table and to use the DEFAULT keyword when inserting data into it.

CREATE TABLE TestGuid (
GuidID UNIQUEIDENTIFIER DEFAULT NEWID(),
IntName VARCHAR(25))
GO
INSERT INTO TestGuid VALUES
(DEFAULT, 'One'), (DEFAULT, 'Two'), (DEFAULT, 'Three'), (DEFAULT, 'Four'), (DEFAULT, 'Five')
SELECT * FROM TestGuid

  GuidID                               IntName
1 6F6EAB2C-FF82-4057-BF9F-ECC3BA62CF6C One
2 BBC1B87E-0E0B-4D55-ABB9-0D585653CA7F Two
3 CB8379E0-AB76-4AC4-B5E0-24DE9C3DFD1F Three
4 B97F7594-3E3E-42B7-B268-A0CAD0F30914 Four
5 8DD0310C-7844-4C08-9B0D-A266F3F3A11F Five

5 rows

When comparing the results of the TestInt table to the TestGuid table, we can see that the first column in both tables contains unique values, that they share a common field name for the second column, and that the IntName values match for each respective row in the two tables.
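One quick way to see this side by side (simply joining the two test tables on the shared IntName values) is a sketch like the following:

SELECT i.IntID, g.GuidID, i.IntName
FROM TestInt AS i
INNER JOIN TestGuid AS g
ON i.IntName = g.IntName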

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Click to Download Scripts

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Identity Fields Review – SQL Queries 2012 Joes 2 Pros Volume 2 – The SQL Query Techniques Tutorial for SQL Server 2012

This chapter is abstract from the Beginning SQL 2012 – Joes 2 Pros Volume 2

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Why buy this Book: The beta of this book actually existed for a year and was tested and used in my classroom. Its purpose back then was to help the students do the individual steps that built the skills they needed to pass the Microsoft test. It worked, and then many went out for their SQL interviews. They did well enough to get the job, but they told me about one or two questions they were not able to answer. A year of collecting this data and turning it into lessons doubled the size of this book from 300 pages to 600 pages. This book is designed to make the query questions and query skills of professional work seem much easier.

What will I learn after reading this book: Query Aggregations, TOP, Ranked, Tiles queries, Set operators, CTEs, recursion, basic and correlated subqueries, Using MERGE and OUTPUT.

All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the SQL 2012 series Volume 2 in the file SQLQueries2012Vol2Chapter3.2Setup.sql. If you need help setting up then look in the “Free Videos” section on Joes2Pros under “Getting Started” called “How to install your labs”

Identity Fields Review

The CurrentProducts table has 480 records, and the next record will be ProductID 481. ProductID is the first field and it is an identity field. SQL Server generates an error when you attempt to enter a value into the identity field.

INSERT INTO CurrentProducts VALUES
(481, 'Yoga Mtn Getaway 5 Days', 875, '9/1/2009', 0, 'Medium-Stay')

Messages
Msg 8101, Level 16, State 1, Line 1
An explicit value for the identity column in table ‘CurrentProducts’ can only be specified when a column list is used and IDENTITY_INSERT is ON.

0 rows

Remove the 481 value from the insert statement and then insert the record with the following statement.

INSERT INTO CurrentProducts VALUES
('Yoga Mtn Getaway 5 Days', 875, '9/1/2009', 0, 'Medium-Stay')

Messages
(1 row(s) affected)

0 rows

 

Product 481 has been added.

SELECT * FROM CurrentProducts

Now let’s delete the yoga product in order to simulate the deletion scenario.

DELETE FROM CurrentProducts
WHERE ProductName LIKE '%yoga%'

Next, attempt to reinsert the yoga product we just deleted. Note the identity property of the ProductID field keeps track of all records you insert and delete. It knows the most recently inserted record was 481, so this next statement will get a ProductID of 482.

INSERT INTO CurrentProducts VALUES
('Yoga Mtn Getaway 5 Days',875.00,'9/1/2009',0,
'Medium-Stay')

SELECT * FROM CurrentProducts

Overriding Identity Fields

We don’t want this gap between ProductID values 480 and 482. We want the yoga product to retain its original ProductID value of 481. The deleted record has been re-inserted, causing SQL Server to jump ahead to the next unused ProductID value.

DELETE FROM CurrentProducts
WHERE ProductName LIKE '%yoga%'

To accomplish this goal, we must temporarily halt the automatic identity so we can manually assign the correct ProductID value when we insert our record. Delete the yoga record again so we can try another type of insertion. Our next step is to temporarily set the IDENTITY_INSERT property to ON. Now we will attempt to run the INSERT statement for the yoga record.

SET IDENTITY_INSERT CurrentProducts ON
INSERT INTO
CurrentProducts VALUES
(481,'Yoga Mtn Getaway 5 Days',875.00,'9/1/2009',0,
'Medium-Stay')

Messages
Msg 8101, Level 16, State 1, Line 2
An explicit value for the identity column in table ‘CurrentProducts’ can only be specified when a column list is used and IDENTITY_INSERT is ON.

0 rows

The error message prompts us to include a column list whenever we manually insert record(s) into a table with an identity field. In other words, when manually inserting records, we have to pass the values by name and not by position; data can only be added to the columns we specify. Find the names of all the fields in the table and list them in parentheses after the INSERT INTO keyword but before the VALUES keyword.

After you’ve added the column list, enclose the list in parentheses and move the keyword VALUES to follow the column list. Now reattempt to insert the yoga record; SQL Server gives you confirmation (“1 row(s) affected”).

SET IDENTITY_INSERT CurrentProducts ON
INSERT INTO
CurrentProducts (
ProductID, ProductName, RetailPrice, OriginationDate, ToBeDeleted, Category) VALUES
(481,'Yoga Mtn Getaway 5 Days',875.00,'9/1/2009',0,
'Medium-Stay')

Run a SELECT statement and confirm the yoga records show in the table.

SELECT * FROM CurrentProducts

SQL Server will allow you to use IDENTITY_INSERT with just one table at a time in your database. After you’ve completed the needed work, it’s very important to set IDENTITY_INSERT back to OFF. A best practice recommendation is to write the OFF statement, as a reminder, at the same time you write the ON statement.

SET IDENTITY_INSERT CurrentProducts OFF

Messages
Command(s) completed successfully.

0 rows
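Putting the pieces together, here is a minimal sketch of the full pattern recommended above, using the same CurrentProducts columns shown in this chapter:

SET IDENTITY_INSERT CurrentProducts ON
INSERT INTO CurrentProducts (
ProductID, ProductName, RetailPrice, OriginationDate, ToBeDeleted, Category)
VALUES (481,'Yoga Mtn Getaway 5 Days',875.00,'9/1/2009',0,'Medium-Stay')
SET IDENTITY_INSERT CurrentProducts OFF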

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Click to Download Scripts

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Query Writing Strategy – SQL Queries 2012 Joes 2 Pros Volume 1 – The SQL Queries 2012 Hands-On Tutorial for Beginners

This chapter is abstract from the Beginning SQL 2012 – Joes 2 Pros Volume 1

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Why Buy this Book: Whether you are a tester, developer, or administrator of SQL, there are some basic terms and skills they are all expected to know. The core of design, permissions, queries, and SQL objects is often spread across many separate books. But what if you want a proficient base across all of these disciplines? If you are starting out or are self-taught, this will help fill in the pieces you may not have known were missing.

What will I learn after reading this book: Queries with all types of joins, creating tables and stored procedures, transactions, login permissions, and workplace tips.

Here is something I have yet to find in any book. When enthusiastic SQL students do this, they experience a revelation. The number of errors drops significantly and the speed at writing queries increases immediately. Knowing how to narrow down what we are looking for amongst a vast list of choices helps immensely.

All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the SQL 2012 series Volume 1 in the file SQLQueries2012Vol1Chapter4.0Setup.sql. If you need help setting up then look in the “Free Videos” section on Joes2Pros under “Getting Started” called “How to install your labs”

Query Writing Strategy

When visiting a new restaurant, we will ask to see the menu, because we want to see all they have to offer. The odds are that we might be interested in half of the items, but only need a few dishes for our group at the table. Looking at the menu is like starting a query with a ‘SELECT *’ statement. Once we have looked at all the fields, we narrow our choice(s) to only the items we want at the time.

Sometimes restaurants have multiple menus. My favorite restaurant has a kids’ menu, an adult menu, a gluten-free menu and a drink menu. After looking these over, I narrowed what I liked down to a few items. The menu organizes what we have to choose from; then we pick the ones we want. We can write queries this way, using the same two phases.

Phase I: Organize. When building a new query from many tables, we often find ourselves wondering, “Where do I start?” First, lay the steps out by identifying which tables contain the essential data. Second, get all the table joins working with a basic ‘SELECT *’ statement. Third, add any basic filtering criteria.

Phase II: Itemize. Once all joins and criteria, such as SELECT, FROM and WHERE are working, we are ready for Phase II. This entails going back and changing our ‘SELECT *’ to an itemized SELECT field list as the final step.

Let’s say we know what fields we want in our query even before we start. We begin by writing a SELECT statement with just one of the two tables needed in the FROM clause. When completed, the FROM clause will have both tables, but for now we just want to get the Location table working. By using the ‘SELECT *’ strategy, we remove any possible errors from the SELECT list. From there, we can focus on the more complicated logic used for joining tables together. We can add tables one at a time until everything is working. This is the Organize phase.

After our query is organized and working, we can go back and itemize the SELECT field list to display only the fields that are necessary. This is done during Phase II (Itemize). The steps for this system are broken down as follows:

Phase I: ORGANIZE (write a SELECT * query statement with joins)

--Test first table logic
SELECT *
FROM Location
--Test second table with join
SELECT *
FROM Location INNER JOIN Employee
ON Location.LocationID = Employee.LocationID
--Test all tables with criteria
SELECT *
FROM Location INNER JOIN Employee
ON Location.LocationID = Employee.LocationID
WHERE [State] = 'WA'

Phase II: ITEMIZE (itemize the SELECT field list)

--Choose the fields
SELECT FirstName, LastName, City, [State]
FROM Location INNER JOIN Employee
ON Location.LocationID = Employee.LocationID
WHERE [State] = 'WA'

Book On Amazon | Book On Flipkart

Kit on Amazon | Kit on Flipkart

Click to Download Scripts

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Beginning SQL 2012 – Basics of CONVERT and FORMAT Function – Abstract from Joes 2 Pros Volume 5

This chapter is abstract from the Beginning SQL 2012 – Joes 2 Pros Volume 5

You can get the Five part SQL Server 2012 Joes 2 Pros Combo Kit for complete reference. 

Book On Amazon | Book On Flipkart

OPENXML has been around longer than the XML data type in SQL Server. OPENXML requires you to use a series of system stored procedures and a variable to keep track of a handle. Using the numbered handle was a formality; the real work was in the patterns that got the right values from the right nodes. In this post we will shred XML into nodes with an XPath without needing to use OPENXML or keep track of numbered handles. This is possible with the nodes() method of the XML data type.
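For contrast, here is a rough sketch of the older OPENXML pattern this post moves away from (the XML is abbreviated and the column choices are just for illustration); note the system stored procedures and the numbered handle:

DECLARE @Doc NVARCHAR(MAX) = '<Music><Song TitleID="13159"><Title>Red-Red Wine</Title></Song></Music>'
DECLARE @Handle INT
-- Parse the document and get back a numbered handle
EXEC sp_xml_preparedocument @Handle OUTPUT, @Doc
-- OPENXML uses the handle plus a rowpattern to shred the nodes
SELECT TitleID, Title
FROM OPENXML(@Handle, '/Music/Song', 2)
WITH (TitleID INT '@TitleID', Title VARCHAR(100) 'Title')
-- Release the handle when finished
EXEC sp_xml_removedocument @Handle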

All supporting files are available with a free download from the www.Joes2Pros.com web site. This example is from the “SQL Queries 2012 Joes 2 Pros Volume 5” in the resource folder in a file named Lab6.3StarterSetup.sql. If you need help setting up then look in the “Free Videos” section on Joes2Pros under “Getting Started” called “How to install your labs”.

The nodes() Method

Take a look at the following code. It has three song records in XML stored in an XML data type variable called @Doc. If we run the code and SELECT it we will see the link appear in the result set. Note: If you don’t want to re-type this then open the Lab6.3Starter.sql from the resource folder or SQL 2012 series Volume 5 from Joes2Pros.com.

DECLARE @Doc XML
SET @Doc='<Music>
 <Song TitleID="13159">
 <WriterName>Neil Diamond</WriterName>
 <Title>Red-Red Wine</Title>
 <Singer OrderID="1">
 <BandName>Neil Diamond</BandName>
 </Singer>
 <Singer OrderID="2">
 <BandName>UB40</BandName>
 </Singer>
 </Song>
 <Song TitleID="13160">
 <WriterName>Prince</WriterName>
 <Title>Manic Monday</Title>
 <Singer OrderID="1">
 <BandName>The Bangles</BandName>
 </Singer>
 </Song>
 <Song TitleID="13161">
 <WriterName>Roy Orbison</WriterName>
 <Title>Pretty Woman</Title>
 <Singer OrderID="1" BandName="Roy Orbison">
 <BandName>Roy Orbison</BandName>
 </Singer>
 <Singer OrderID="2">
 <BandName>Van Halen</BandName>
 </Singer>
 </Song>
 </Music>'
SELECT @Doc

Right now it looks like the three nodes are in one record in our result set. If we open the link and collapse the top-level nodes, we notice we have three songs, or three nodes, at the /Music/Song level.

How do we turn this into three records of XML fragments, as opposed to one complete well-formed XML document? The three /Music/Song level nodes will represent our three records, and the nodes() method will produce the result needed. Instead of selecting @Doc directly, let’s SELECT FROM @Doc with the nodes() method. The nodes() method will need to know the XPath. We want to see all three songs, and since each song is at the /Music/Song level we will use the following XPath:

SELECT
 FROM @Doc.nodes('/Music/Song')

The FROM clause normally expects a named table to produce a tabular result. Because the nodes() method in this example is in the FROM clause it needs to represent and look like a table. To do this we will simply alias the method. In this example we will alias this AS a table named Songs.

SELECT
 FROM @Doc.nodes('/Music/Song') AS Songs

Sometimes tables have no records, but they always have fields. A table may be empty, yet it must have at least one field, and most tables have many. In other words, a table is guaranteed to have a collection of columns even if it has no records.

Currently our Songs table does not have any defined way to refer to the collection of fields that make up this table alias. If we look back at the original XML there are several field options from the different elements and attributes such as TitleID, WriterName, BandName, etc. We need to specify to our Songs table that all these possible fields are inside a collection which we will call SongRow:

SELECT
 FROM @Doc.nodes('/Music/Song') AS Songs(SongRow)

SongRow is like a suitcase that holds all of our field names for the table Songs. The first value we want to pull out is going to be the Title. From the Songs table, SELECT the SongRow collection and query the (‘Title’) element:

SELECT SongRow.query('Title')
 FROM @Doc.nodes('/Music/Song') AS Songs(SongRow)

To recap the nodes() method: alias it as a table name, with the column collection (or field collection) that holds each row named in parentheses. Then, in the SELECT statement, specify the column(s).

Combining value() and nodes() methods

The nodes() method can shred an XML data type into a tabular stream while allowing us to identify the rowpattern that makes up each node. Let’s start this section where we left off.

By specifying that our node is at the /Music/Song level we returned three XML fragments. The first record is <Title>Red-Red Wine</Title>, which is a very small XML fragment showing us that song’s title in an XML element. What if we wanted Red-Red Wine as VARCHAR data without XML? In other words, we don’t need the data with tags showing as XML fragments. We really want parsed values, not XML fragments, to make up our three rows. This is a simple change in the query. Change the SELECT list to use the value() method instead of query() and add a second argument that specifies the data type. The value() method needs a singleton [1], so add that to the Title in the first parameter as seen in the following code:

SELECT SongRow.value('Title[1]', 'VARCHAR(100)')
 FROM @Doc.nodes('/Music/Song') AS Songs(SongRow)

By using the value() method with the nodes() method we were able to pull VARCHAR data from the XML stream. We want to pull out another field from the SongRow column collection. The <Title> element worked and is a child of <Song>. The <TitleID> is also a child of <Song> as an attribute. The nice thing about an attribute is they are guaranteed to be singletons.

It makes sense for the ID to be the first field listed in a query, so let’s make TitleID our first field by placing it in front of Title in the SELECT list, dropping Title down to the second line. TitleID is an INT, and we will alias it AS TitleID and alias Title AS Title:

SELECT SongRow.value('@TitleID', 'INT') AS TitleID,
 SongRow.value('Title[1]','VARCHAR(100)') AS Title
 FROM @Doc.nodes('/Music/Song') AS Songs(SongRow)

Since attributes are guaranteed to be singletons it is not necessary to specify [1] in the query. This code now pulls out two fields from the SongRow collection since we used two different value() methods (one for each field).

We were able to pull out the TitleID as well as the Title using the value() method and nodes() method together. Let’s take a moment and look at the XML we are working from.

Let’s use what we know and pull out the first BandName for each song. BandName is not a direct child of the /Music/Song node; it is a grandchild. Since the specified node level is /Music/Song, how do we reach down two more levels for BandName? In the value() method we can specify the additional levels needed. In this case it is Singer[1]/BandName[1]. These are elements, so we need to specify the singleton to pull the first Singer and first BandName of each song:

SELECT SongRow.value('@TitleID', 'INT') AS TitleID,
 SongRow.value('Title[1]','VARCHAR(100)') AS Title,
 SongRow.value('Singer[1]/BandName[1]', 'VARCHAR(100)')
 AS BandName
 FROM @Doc.nodes('/Music/Song') AS Songs(SongRow)

Using the value() method along with the nodes() method we have been able to shred our XML without using OPENXML or handles.
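If we also want to keep those parsed rows, a small extension (the @Songs table variable is an assumption, not part of the original example, and it must run in the same batch where @Doc is declared) is to insert the shredded values into a table:

DECLARE @Songs TABLE (TitleID INT, Title VARCHAR(100), BandName VARCHAR(100))

INSERT INTO @Songs (TitleID, Title, BandName)
SELECT SongRow.value('@TitleID', 'INT'),
 SongRow.value('Title[1]', 'VARCHAR(100)'),
 SongRow.value('Singer[1]/BandName[1]', 'VARCHAR(100)')
 FROM @Doc.nodes('/Music/Song') AS Songs(SongRow)

SELECT * FROM @Songs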

Reference: Pinal Dave (http://blog.SQLAuthority.com)