SQL SERVER – Different Methods to Extract Scale Part From Decimal Number

One of my blog readers asked me, “Is there a way to extract only the number after the decimal point?” For example, in the decimal number 45.90, 45 is the integer part and 90 is the fractional part – the digits covered by the scale of the data type.

There are several methods to get this done.

Let us create this dataset:

CREATE TABLE #Numbers (value DECIMAL(16, 3));
INSERT INTO #Numbers (value)
VALUES (100.00);
INSERT INTO #Numbers (value)
VALUES (-23.89);
INSERT INTO #Numbers (value)
VALUES (390.077);
INSERT INTO #Numbers (value)
VALUES (12.87);
INSERT INTO #Numbers (value)
VALUES (390.1);

Method 1: Truncate the number to its integer part and subtract that from the original number
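One way to sketch this method (the original code was not preserved here, so this is an illustrative query) is to truncate with the third argument of ROUND, which truncates instead of rounding and so handles negative values such as -23.89 correctly:

```sql
SELECT value,
       value - ROUND(value, 0, 1) AS scale_part -- ROUND(..., 0, 1) truncates toward zero
FROM #Numbers;
```

Note that for negative numbers this returns a negative fraction (e.g. -0.890); wrap both terms in ABS if only the digits are wanted.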


Method 2: Convert the number into a string and get the number after the decimal point
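A sketch of this approach (the original listing was lost, so this is illustrative) using STR together with CHARINDEX; the 16 and 3 here match the column's DECIMAL(16, 3) definition:

```sql
SELECT value,
       SUBSTRING(s, CHARINDEX('.', s) + 1, LEN(s)) AS scale_part
FROM #Numbers
CROSS APPLY (SELECT LTRIM(STR(value, 16, 3)) AS s) AS t; -- STR pads on the left, hence LTRIM
```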


Note that the STR function returns the character representation of a number. Its decimal argument (3 here) limits the output to a maximum of 3 digits after the decimal point. By identifying the position of the dot, we are able to extract only the scale part.

Method 3: Use the PARSENAME function
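A sketch of this method (again illustrative, as the original code did not survive). Note that PARSENAME treats its input as a dot-separated string, so the result comes back as a string rather than a number:

```sql
SELECT value,
       PARSENAME(CAST(value AS VARCHAR(20)), 1) AS scale_part -- part 1 is the rightmost piece
FROM #Numbers;
```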


PARSENAME splits the data on the dot, and the part number 1 extracts the rightmost part, which is the scale portion of the decimal number.

The results of the above queries are:


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix @@ServerName Property Value When Incorrect

Our software systems are funny and sometimes give us strange values. I don’t quite venture out trying to fiddle around with system-level properties on my servers but, having said that, this question came from one of my colleagues who was working on his SQL Server instance and found something strange.

When working on a server where an error occurs, I generally ask for the output of @@VERSION. This is a standard practice for me because I don’t have to ask which version is installed or what service pack or CU update has been applied to the server. This output gives me all of that information in black and white.

As I was so used to this kind of output, I was surprised when my friend said that the value returned by his @@SERVERNAME was NULL. I wanted to see it in real life, and he was kind enough to show me via Skype that this was indeed the case. This got me thinking, and I needed to solve this trivial problem. I had gone to bed early as it was a lazy Sunday, but I couldn’t put myself to sleep as this was plaguing my mind; I needed a solution. Finally, after an hour and some help from my SQL buddies, I found the way to crack this problem. Strange, but I am sure many of you have encountered this somewhere and were looking for a solution.

The @@ServerName refers to the local server that SQL Server is running on.  This property is set during installation, but there are instances where @@ServerName may be NULL or may not be correct.

Running the following will show the current T-SQL value for this property:

SELECT @@SERVERNAME;

To Fix the Problem

If the @@ServerName property is NULL, running the following will fix the issue- but only after you restart the SQL Server instance. Yep, fixing this requires an outage.

EXEC sp_addserver '<LocalServerName>', 'local';

If the @@ServerName property is incorrect, run the following to correct the issue:

EXEC sp_dropserver 'old_name';
EXEC sp_addserver 'new_name', 'local';
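To verify the fix after the restart, one handy check (not from the original post) is to compare @@SERVERNAME against the ServerName server property, which always reflects the actual machine and instance name:

```sql
SELECT @@SERVERNAME AS reported_name,              -- value set at install / via sp_addserver
       SERVERPROPERTY('ServerName') AS actual_name; -- always current
-- the two values should match after sp_addserver and an instance restart
```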

Though the solution to this was so simple, I did find the behavior strange. I have yet to come to terms with how this can even happen. If you have ever encountered this error on your server, please let us know how you landed in this situation. I am eager to learn the cause rather than just the solution to the problem.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What are T-SQL Median? – Notes from the Field #090

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains the T-SQL median. Kathi is an amazing instructor; she was the SQL author I read in my early career. One reason I love SQL Server is that her writing instigated a love for this technology in me. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.

SQL Server has a wealth of useful T-SQL functions that let you do just about anything you need to do. One function that is sorely missing is a MEDIAN function. A median is the middle value in a set of values. For example, 3 is the median of the set 1, 3, 5. If the set is even, the median is calculated or “interpolated” by averaging the two middle values. The median of this set 1, 3, 5, 7 is 4.

You will see median values used frequently even if you do not work with statistics yourself. For example, you could see a news story listing the median household income of a city by neighborhood, the median salary for developers by language, or the median age of a population.

As I mentioned, there is no MEDIAN function in T-SQL. If, however, you are fortunate enough to be running SQL Server 2012 or 2014, there is an easy workaround. In 2012, Microsoft introduced a new function called PERCENTILE_CONT, one of the T-SQL window functions. Given a percent rank and a set of values, PERCENTILE_CONT will return the value ranked at that percent. If there is not an exact value found at the rank, PERCENTILE_CONT will interpolate the answer instead. If you supply 0.5, meaning 50%, PERCENTILE_CONT will return the median value. Here is an example that calculates the median sales by month for each year:

WITH Totals AS (
SELECT YEAR(OrderDate) AS OrderYear,
MONTH(OrderDate) AS OrderMonth,
SUM(TotalDue) AS TotalSales
FROM Sales.SalesOrderHeader
GROUP BY YEAR(OrderDate), MONTH(OrderDate)
)
SELECT OrderYear, OrderMonth, TotalSales,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY TotalSales)
OVER(PARTITION BY Totals.OrderYear) AS MedianSales
FROM Totals
ORDER BY OrderYear, TotalSales;

Before we take a look at the results, let’s review the query. The query has a common table expression (CTE) that calculates the sales for each month and year. In the outer query, the PERCENTILE_CONT function is used with an argument of 0.5. PERCENTILE_CONT is a T-SQL window function, but the syntax for it and a similar function, PERCENTILE_DISC, is different than the other window functions. The function name is followed by the WITHIN GROUP clause. This is where the ORDER BY expression goes, not in the OVER clause. The ORDER BY expression is used by the function to line up the values so that the ranks can be compared to the percentile rank value provided as an argument. The OVER clause supports PARTITION BY, and in this case we would like to get the median within each year, so the PARTITION BY expression is OrderYear.

In 2005, there were orders placed between July and December, six months. Six is even, so the function calculates the average of two middle months as shown in Figure 1.


Figure 1: The Median Sales for 2005

Every year in the data has an even number of months except for 2008. In 2008, there are seven months with sales, so the function can find an exact median value. Figure 2 shows the 2008 sales:


Figure 2: The Median Sales for 2008

You may be wondering what the difference is between PERCENTILE_CONT and PERCENTILE_DISC. PERCENTILE_DISC always returns an exact value found in the set, never a calculation. For calculating the median, PERCENTILE_CONT is the function to use.
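A small self-contained comparison (not from the original article) makes the difference concrete. With the even-numbered set 1, 3, 5, 7, PERCENTILE_CONT interpolates while PERCENTILE_DISC picks an existing value:

```sql
SELECT DISTINCT
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY n) OVER () AS cont_median, -- 4 (interpolated)
    PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY n) OVER () AS disc_median  -- 3 (a value from the set)
FROM (VALUES (1), (3), (5), (7)) AS v(n);
```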

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Knowing the Source Application Using APP_NAME() Function

APP_NAME() function in SQL Server returns the application name of the current session in which it is called.

When you execute the following code

SELECT APP_NAME() AS application_name

You get the following result


Microsoft SQL Server Management Studio – Query

This function can be effectively used to audit where the data comes from. If you call this function from a .NET application, it returns the application name set via the Application Name keyword of the connection string (for SqlClient, the default is “.Net SqlClient Data Provider” if none was specified).

If you want to perform some operation based on where the call originates, you can make use of this function. If you want to audit the source of your data, you can create a table with a column whose default value is APP_NAME(), which will automatically store the name of the application the data came from.
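For example, a minimal sketch of such an auditing column (the table and column names here are illustrative):

```sql
CREATE TABLE AuditedData
(
    id INT,
    source_app SYSNAME NOT NULL DEFAULT APP_NAME() -- captured automatically on insert
);

INSERT INTO AuditedData (id) VALUES (1);

-- source_app now holds the caller's application name,
-- e.g. 'Microsoft SQL Server Management Studio - Query'
SELECT id, source_app FROM AuditedData;
```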

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Converting Standard SSMS File Results to Delimited – Notes from the Field #085

[Note from Pinal]: This is the 85th episode of the Notes from the Field series. SSMS is my favorite IDE. I work with many different databases and IDEs, but I get extreme comfort when I am working with SSMS. That I like SSMS does not mean it is the best tool or that it can accomplish every task I want it to do. Here is one such task: converting file results to comma-delimited results. My friend Kevin came to the rescue and helped me out in this situation.

Let us read how we can achieve this task of converting standard SSMS file results to a delimited result set.


Now and again, clients send me files to load into the database that they’ve generated using the Results To File feature in SQL Server Management Studio (SSMS). If you’re unfamiliar with this feature, let me explain it. As shown in Figure 1, after right-clicking in a query window in SSMS, you may select the Results to File option from the context menu. With this option enabled, the next time the query is run, a dialog box will appear allowing you to choose the generated file’s location.


Figure 1 – An easy way to send the results of a query in SSMS to a file.

By default, the file that’s emitted will be column-aligned otherwise known as fixed-width columns. The first row will contain the column names and a second row will have groups of hyphen characters acting as a sort of text-based underscoring. At the end of the file, a count of the emitted rows will also be included. All of these features can be seen in Figure 2, where I’ve hidden the middle rows so you can see the beginning and the end of the file.


Figure 2 – A standard, column-aligned result file with trailing row count.

When my clients ask me to load these types of files using SQL Server Integration Services (SSIS), there are a few problems to overcome. First of all, fixed-width data is not simple to load. The column specification of the flat-file source has to be updated manually to include the exact width of each column which can be time-consuming and prone to errors. Secondly, the row count at the end of the file isn’t data that can be loaded so it will cause an exception or load incorrectly, forcing us to add special handling logic in the package for that. Lastly, the literal word NULL appearing throughout the file whenever there is a missing value doesn’t really convey the absence of data. Those will also have to be handled with custom code.

To address the first problem, I sometimes ask my clients to re-run their queries setting the output format to comma or tab delimited. In SSMS, this can be done by right-clicking in the query window, selecting the Query Options… item from the context menu and choosing a different Output format in the Results / Text section of the dialog box. The second problem can be solved by adding the SET NOCOUNT ON directive to the start of the query that generated the file. I may also ask my clients to include that change. The third problem, where SSMS emits literal NULL strings whenever values are missing, can also be handled with the use of ISNULL or COALESCE to do NULL detection in the original query. This is yet one more change I must ask my clients to make.
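For illustration, the query-side changes described above might look like this (the table and column names are hypothetical):

```sql
SET NOCOUNT ON;  -- suppresses the trailing "(n rows affected)" line in the file

SELECT OrderID,
       ISNULL(ShipRegion, '') AS ShipRegion  -- emit an empty string instead of the literal NULL
FROM dbo.Orders;
```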

As an experienced consultant, I understand that asking my clients to make so many modifications to their workflow is often not worth their time and trouble. That’s what they hire me for. It’s best to find ways to make whatever my clients provide work for them, especially if it can be automated. To that end, I wrote a C# function called ConvertColumnAlignedFileToDelimitedFile that helps to overcome all of these problems.

public static int ConvertColumnAlignedFileToDelimitedFile(
    string sourcePath, string targetPath, string delimiter = "\t")
{
    int lineNdx = 0;
    using (StreamWriter writer = File.CreateText(targetPath))
    {
        string header = null;
        int[] columnWidths = null;
        foreach (string line in File.ReadLines(sourcePath, Encoding.UTF8))
        {
            if (lineNdx == 0)
                header = line; // save the header for subsequent processing
            else if (lineNdx == 1)
                columnWidths = ProcessSeparatorRow(line, header,
                    delimiter, writer);
            else
            {
                // stop processing on an empty line
                if (line.Length == 0) break;
                ProcessDataRow(line, columnWidths, delimiter, writer);
            }
            lineNdx++;
        }
    }
    return lineNdx - 2; // subtract the header and separator rows
}

Figure 3 – A function that converts a column-aligned file to a delimited file.

The function takes the name of a source file, a target file and the delimiter that will be inserted between values. There’s no magic here but there is an interesting trick that takes advantage of some metadata lurking in the column-aligned output file. Look at Figure 2 again. That second row containing hyphen characters actually uncovers a key constraint that we otherwise wouldn’t know: the maximum length of each column. Each block of hyphens is separated by a space so if we count the length of each hyphen group, we’ll know how to parse the entire file. I’ve provided a helper function called ProcessSeparatorRow that reads the metadata buried in the hyphen groups, writes out the header row and returns the column widths to the calling function.

private static int[] ProcessSeparatorRow(string line,
    string header, string delimiter, StreamWriter writer)
{
    string[] columns = line.Split(' ');
    int[] columnWidths = new int[columns.Length];
    for (int columnNdx = 0, startPosition = 0;
        columnNdx < columnWidths.Length; columnNdx++)
    {
        columnWidths[columnNdx] = columns[columnNdx].Length;
        int length =
            (startPosition + columnWidths[columnNdx] <= header.Length)
            ? columnWidths[columnNdx]
            : header.Length - startPosition;
        string name = header.Substring(startPosition, length).Trim();
        startPosition += columnWidths[columnNdx] + 1;
        writer.Write(name); // emit the column name into the header row
        writer.Write((columnNdx < columnWidths.Length - 1)
            ? delimiter : Environment.NewLine);
    }
    return columnWidths;
}

Figure 4 – A function that processes the metadata in the separator row.

For the data rows beyond the header and the separator, when an empty line is encountered, the processing just stops. That will handle the trailing row count problem discussed earlier. Writing out the non-empty rows in the remainder of the file is straightforward with the helper function called ProcessDataRow shown in Figure 5.

private static void ProcessDataRow(string line,
    int[] columnWidths, string delimiter, StreamWriter writer)
{
    for (int columnNdx = 0, startPosition = 0;
        columnNdx < columnWidths.Length; columnNdx++)
    {
        int length =
            (startPosition + columnWidths[columnNdx] <= line.Length)
            ? columnWidths[columnNdx]
            : line.Length - startPosition;
        string value = line.Substring(startPosition, length).Trim();
        if (value.Equals("NULL", StringComparison.InvariantCultureIgnoreCase))
            value = String.Empty; // convert literal NULLs to empty strings
        startPosition += columnWidths[columnNdx] + 1;
        writer.Write(value);
        writer.Write((columnNdx < columnWidths.Length - 1)
            ? delimiter : Environment.NewLine);
    }
}

Figure 5 – Process a normal row of data by writing it out using the specified delimiter.

Running the conversion function with the output.rpt sample file shown in Figure 2 produces the tab-delimited file shown in Figure 6. The tab characters appear as orange-colored arrows in the output. Notice also that wherever the word NULL appeared in the original file, it has been converted to an empty string in the output. A couple of lines of C# code in the ProcessDataRow function handled that quite nicely.


Figure 6 – The converted file with tab delimiters and detected NULLs.

In closing, let me say that while the C# code shown in this article is somewhat stand-alone, it can easily be put into a Script Task or a Script Component in SSIS. I often use code like this within SSIS to do file preparation or validation before loading the data into staging tables. However, having a command-line utility to convert SSMS results files into delimited files with NULL detection is also quite handy. I’ll even admit that I’ve invoked such a utility as a job step in SQL Agent from time to time. Remember: the job of a good consultant is to make the difficult seem both easy and repeatable. Hopefully, you’ll find this code useful in a variety of situations, too.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #018 – Script to Remove Special Characters – Script to Parse Alpha Numerics

If you ask me, there are certain questions not appropriate for an interview as they are often very subjective. There are some questions which really do not help to identify whether the candidate has the necessary skills. Here is one such question; if you asked it to me in an interview, I might not get it right either, as it requires some experience in writing scripts, as well as trial-and-error mechanics, to get right.

Question: How do you write a script that removes all special characters and parses only the alphanumeric values?

Answer: Honestly, it is not that easy to write this script. However, if your stars are not bright, you may end up with an interviewer who believes writing this script is the ultimate test. Do not get me wrong: writing this script is not a bad test; I just believe it is not the best question for an interview.

CREATE FUNCTION dbo.UDF_ParseAlphaChars
(
@string VARCHAR(8000)
)
RETURNS VARCHAR(8000)
AS
BEGIN
DECLARE @IncorrectCharLoc SMALLINT
SET @IncorrectCharLoc = PATINDEX('%[^0-9A-Za-z]%', @string)
WHILE @IncorrectCharLoc > 0
BEGIN
SET @string = STUFF(@string, @IncorrectCharLoc, 1, '')
SET @IncorrectCharLoc = PATINDEX('%[^0-9A-Za-z]%', @string)
END
RETURN @string
END

You can test the above function with the following:

-- Test
SELECT dbo.UDF_ParseAlphaChars('AB"_I+{D[]}4|:e;"5,<.F>/?6')

The above query will return the following result set:

ABID4e5F6

You can read about this function and additional comments in my earlier blog post over here: Function to Parse AlphaNumeric Characters from String

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Generate Different Random Number for Each Group Using RAND() Function

You may know that the RAND() function in SQL Server generates a random float value between 0 and 1 (excluding 0 and 1).

Let us create the following dataset

CREATE TABLE #random(no INT)
INSERT INTO #random(no)
VALUES (1),(1),(1),(2),(3),(3)

If you want to generate a random value, you can use the following code

SELECT no, RAND() AS random_number FROM #random

which results in

no random_number
 1 0.370366365964781
 1 0.370366365964781
 1 0.370366365964781
 2 0.370366365964781
 3 0.370366365964781
 3 0.370366365964781

Please note that when you execute it, you may get a different random number than what I got in column 2, but the value will be the same for every row.

What if you want to generate a random number in such a way that it resets for each group value (in this case, the column no)?
Did you know that RAND() accepts a seed value as well?

If you execute the following code,

SELECT no, RAND() AS random_number, RAND(no) AS random_number_reset FROM #random

the result is

no random_number random_number_reset
 1 0.58334760467751 0.713591993212924
 1 0.58334760467751 0.713591993212924
 1 0.58334760467751 0.713591993212924
 2 0.58334760467751 0.713610626184182
 3 0.58334760467751 0.71362925915544
 3 0.58334760467751 0.71362925915544

Please note that when you execute it, you may get different random numbers than what I got for columns 2 and 3. If you look at the result, the values of the second column are the same for all rows, whereas the values of the third column are the same within a group (column no) and different across groups. So if you want to generate a random number differently for each group, you can make use of this method.
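As an aside (not part of the original question), if you instead need a different random number on every row, a common trick is to seed RAND with a per-row value such as a checksum of NEWID():

```sql
SELECT no,
       RAND(CHECKSUM(NEWID())) AS random_number -- NEWID() changes per row, so each row gets a fresh seed
FROM #random
```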

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Split Comma Separated List Without Using a Function

The best way to learn something is to revisit some older blogs and then look for a more efficient way to work on the same problem. Recently I was looking at the forums and found a question come up again and again around making a comma-separated list. This is not new on this blog, and if you search you will find a few posts. Long ago I wrote the blog posts below to show the values stored in a table as a comma-separated list; this is not a rehash of what was written before, but a newer way to solve a related problem.

SQL SERVER – Creating Comma Separate Values List from Table – UDF – SP
SQL SERVER – Comma Separated Values (CSV) from Table Column

Recently someone asked me the reverse of that. They were storing comma-separated values in a column of a table. This table was getting populated from some source via an SSIS package. Here is the simplified version of the problem I was presented with.

DECLARE @t TABLE
(
EmployeeID INT,
Certs VARCHAR(8000)
)
INSERT @t VALUES (1,'B.E.,MCA, MCDBA, PGDCA'), (2,'M.Com.,B.Sc.'), (3,'M.Sc.,M.Tech.')


This is how the data looks in the table.


With the above data available, they want it split into individual values. Searching for possible solutions, here is the solution query I came up with:

SELECT EmployeeID,
LTRIM(RTRIM(m.n.value('.[1]','varchar(8000)'))) AS Certs
FROM
(
SELECT EmployeeID, CAST('<XMLRoot><RowData>' + REPLACE(Certs,',','</RowData><RowData>') + '</RowData></XMLRoot>' AS XML) AS x
FROM @t
) t
CROSS APPLY x.nodes('/XMLRoot/RowData')m(n)

Output looks like below.


Hope this would help you in your project. Let me know if you are able to find a different solution to the same problem. The best way to learn is to learn from each other.
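As one alternative of the kind the author invites: on SQL Server 2016 and later (released after this post was written), the built-in STRING_SPLIT function handles this without the XML trick, though the order of items within a list is not guaranteed:

```sql
SELECT t.EmployeeID,
       LTRIM(s.value) AS Certs       -- trim the space that follows each comma
FROM @t AS t
CROSS APPLY STRING_SPLIT(t.Certs, ',') AS s;
```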

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Walking the Table Hierarchy in Microsoft SQL Server Database – Notes from the Field #076

[Note from Pinal]: This is the 76th episode of the Notes from the Field series. Hierarchy is one of the most important concepts in SQL Server, but there are no clear tutorials for it. I have often observed that this simple concept is ignored or poorly handled due to a lack of understanding.

In this episode of the Notes from the Field series database expert Kevin Hazzard explains Table Hierarchy in Microsoft SQL Server Database. Read the experience of Kevin in his own words.


When you need to manage a set of tables in Microsoft SQL Server, it’s good to know the required order of operations. The order could be hard-coded into the process but such approaches tend to fail when the database schema evolves. Instead, I prefer to use the catalog view named [sys].[foreign_keys] to discover the relationships between tables dynamically. Long ago, I wrote a function called [LoadLevels] that I’ve used in hundreds of processes to make them reusable and more resilient. The code for that function is shown in Listing 1:

-- ==========================================================================
-- Description: Get the load levels by tracing foreign keys in the database.
-- License:     Creative Commons (Free / Public Domain)
-- Rights:      This work (Linchpin People LLC Database Load Levels Function,
--              by W. Kevin Hazzard), identified by Linchpin People LLC, is
--              free of known copyright restrictions.
-- Warranties:  This code comes with no implicit or explicit warranties.
--              Linchpin People LLC and W. Kevin Hazzard are not responsible
--              for the use of this work or its derivatives.
-- ==========================================================================
CREATE FUNCTION [dbo].[LoadLevels]()
RETURNS @results TABLE
(
[SchemaName] SYSNAME,
[TableName] SYSNAME,
[LoadLevel] INT
)
AS
BEGIN
WITH [key_info] AS
(
SELECT
[parent_object_id] AS [from_table_id],
[referenced_object_id] AS [to_table_id]
FROM [sys].[foreign_keys]
WHERE [parent_object_id] <> [referenced_object_id]
AND [is_disabled] = 0
),
[level_info] AS
(
SELECT -- anchor part
[st].[object_id] AS [to_table_id],
0 AS [LoadLevel]
FROM [sys].[tables] AS [st]
LEFT OUTER JOIN [key_info] AS [ki] ON
[st].[object_id] = [ki].[from_table_id]
WHERE [ki].[from_table_id] IS NULL
UNION ALL
SELECT -- recursive part
[ki].[from_table_id],
[li].[LoadLevel] + 1
FROM [key_info] AS [ki]
INNER JOIN [level_info] AS [li] ON
[ki].[to_table_id] = [li].[to_table_id]
)
INSERT @results
SELECT
OBJECT_SCHEMA_NAME([to_table_id]) AS [SchemaName],
OBJECT_NAME([to_table_id]) AS [TableName],
MAX([LoadLevel]) AS [LoadLevel]
FROM [level_info]
GROUP BY [to_table_id];
RETURN
END

The [LoadLevels] function walks through the table relationships in the database to discover how they’re connected to one another. As the function moves from one relationship to the next, it records the levels where they exist in the hierarchy. A partial output of the function as executed against Microsoft’s AdventureWorks2014 sample database is shown in Figure 1.

Figure 1: Partial output of the [LoadLevels] function against the AdventureWorks2014 database

Ordering to show the highest load levels first, notice that the most dependent table in the AdventureWorks2014 database is [Sales].[SalesOrderDetails]. Since the load levels are zero-based in the function output, that table is eight levels high in the hierarchy. In other words, if I were developing an Extract, Transform & Load (ETL) system for [Sales].[SalesOrderDetails], there are at least seven other tables that need to be loaded before it. For all 71 tables in the AdventureWorks2014 database, the function reveals some interesting facts about the load order:

  • Level 0 – 25 tables, these can be loaded first
  • Level 1 – 8 tables, these can be loaded after level 0
  • Level 2 – 8 tables, these can be loaded after level 1
  • Level 3 – 19 tables, …
  • Level 4 – 7 tables
  • Level 5 – 1 table
  • Level 6 – 1 table
  • Level 7 – 2 tables, these must be loaded last
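These per-level counts can be reproduced with a simple aggregate over the function's output, for example:

```sql
SELECT [LoadLevel], COUNT(*) AS [TableCount]
FROM [dbo].[LoadLevels]()
GROUP BY [LoadLevel]
ORDER BY [LoadLevel]; -- level 0 tables load first, the highest level loads last
```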

The [LoadLevels] function uses two Common Table Expressions (CTE) to do its work. The first one is called [key_info]. It is non-recursive and gathers just the foreign keys in the database that aren’t self-referencing and aren’t disabled. The second CTE is called [level_info] and it is recursive. It starts by left joining the tables in the database to the foreign keys from the first CTE, picking out just those tables that have no dependencies. For the AdventureWorks2014 database, for example, these would be the 25 tables at level zero (0).

Then the recursion begins by joining the output from the previous iteration back to the key information. This time however, the focus is on the target of each foreign key. Whenever matches are found, the reference level is incremented by one to indicate the layer of recursion where the relationship was discovered. Finally, the results are harvested from the [level_info] CTE by grouping the table object identifiers, resolving the schema and table names, and picking off the maximum load level discovered for each entity.

The reason for grouping and selecting the maximum load level for any table becomes clear if you remove the GROUP BY clause and the MAX() operator from the code. Doing that reveals every foreign key relationship in the database. So for example, in the AdventureWorks2014 database, the [Sales].[SalesOrderDetails] table appears in 22 different relationships ranging from three levels high in the hierarchy to eight levels high, output as [LoadLevel] 7 in Figure 1. By grouping and selecting the maximum level for any table, I’m certain to avoid loading tables too early in my dynamic ETL processes.

In summary, you can use the [LoadLevels] function to identify dependencies enforced by foreign key constraints between tables in your database. This information is very useful when developing a process to copy this data to another database while preserving the referential integrity of the source data.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – CONCAT function and NULL values

CONCAT is a T-SQL function introduced in SQL Server 2012. It is used to concatenate values. It accepts two or more parameter values separated by commas, and all parameter values are concatenated into a single string.

A simple example is

SELECT CONCAT('SQL Server',' 2012')

which results in SQL Server 2012

The same can be done using + in the earlier versions

SELECT 'SQL Server'+' 2012'

which results in SQL Server 2012

But did you know the advantage of CONCAT function over +?

SELECT 'SQL Server'+' 2012'+NULL

When you execute the above, the result is NULL

But the CONCAT function will simply ignore NULL values

SELECT CONCAT('SQL Server',' 2012',NULL)

The result is SQL Server 2012

So by using CONCAT function, you do not need to worry about handling NULL values.
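Another convenience of CONCAT worth noting alongside its NULL handling: it implicitly converts non-string arguments to strings, so no explicit CAST is needed:

```sql
SELECT CONCAT('SQL Server ', 2012) -- the INT 2012 is converted automatically; result: SQL Server 2012
```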

How many of you know this?

Reference: Pinal Dave (http://blog.sqlauthority.com)