SQL SERVER – Free SQL Complete Add-in For SSMS

This article covers the main features of ApexSQL Complete, a free SQL complete add-in. You can download the most recent version and check its features while reading the article.

ApexSQL Complete is a free Visual Studio and SQL Server Management Studio add-in that auto-completes SQL statements, speeds up coding with a built-in snippet library, helps manage opened and closed query windows, tracks all executed queries, and more. In this article, we will explain how to use some of the core features.

Auto-complete using the hint list

The Auto-complete feature provides intelligent code completion through the hint list, allowing you to find an appropriate hint and fill in SQL keywords, as well as database, schema, object, parameter, and variable names. As an example, if you start typing “CRE”, the hint list appears, offering the CREATE keyword:


The hint list offers context-sensitive suggestions based on the current cursor position in the query. This means that if you type the USE keyword, the hint list offers a list of databases:


In addition, you can drill down to the column level and choose the specific column directly from the hint list:


If you need additional information about any object from the hint list, hover the mouse pointer over the object for a second; the Object script window appears with the complete script of the object, including extended properties shown in the Description tab:


To view the script for encrypted objects, select the Decrypt encrypted objects option in the add-in options:


The Auto-complete feature will not just help you find the appropriate hint; it will also complete an entire fragment of SQL code, such as an INSERT or JOIN statement:


To auto-complete an INSERT statement, check the Insert full INSERT statement option in the add-in options:


The full INSERT statement will be auto-completed as follows:


Snippets library and using snippets

The Snippets feature allows you to insert custom-made T-SQL code snippets and frequently used SQL statements into SQL scripts. SQL coding with snippets can be approached in two ways: using snippets from the built-in library (which are also editable) or creating new snippets.

New snippets can be created from the Snippet tab in the add-in options, or directly from the query window.

Snippet library

The built-in T-SQL snippet library contains over 200 snippets:


To use any of the predefined snippets, just pick it from the hint list and it will be automatically inserted into the query.

Create a new SQL snippet

Clicking the Add command in the Snippets tab allows you to create a new snippet:


Another way to create a snippet is to select the New snippet command from the query window context menu:


This opens the same form as when creating a new snippet from the Snippets tab in the add-in options, but automatically adds the SQL code from the query editor to the Code section:


To create a new snippet from a part of SQL code in the query window, select the specific part of code and initiate the New snippet command from the right-click context menu.

Tab navigation

The Tab navigation feature allows you to keep track of all opened and recently closed tabs from the moment the feature is enabled. It can also save the current tab session and recover tabs from the session that was active before an unexpected host application shutdown. To enable the Tab navigation feature, go to the add-in options and enable it under the Tab navigation tab:


In the same tab, you can enable restoring the workspace on startup, or restoring the session that was active before an unexpected host application shutdown. Once this is enabled, all opened and closed tabs are kept, so you can always navigate to any of them and continue working. The Tab navigation pane is shown below the query window:


For each logged query you get details such as the tab caption and the creation date and time, and you can instantly preview the content of the query on the right side of the Tab navigation form. At the top of the form there is a Search field, so you can easily find specific query content.

In the add-in options, you can set the timeframe for keeping closed tabs in the Tab navigation form. All queries older than the value of the Delete tabs older than option, under the Tab navigation tab, are automatically removed.

Executed queries

The Executed queries feature keeps all executed queries saved, so you can easily browse through the list of executed queries and get back to any of them at any time.

To enable the Executed queries feature, check the Log executed queries option in the add-in options, under the Executed queries tab:


All queries executed from the moment the feature is enabled are logged, and you can access them from the Executed queries form, which you can open from the add-in menu:


In the Executed queries form, you can find details about each executed query, including the date of execution, the status (whether it failed or ran successfully), which user executed the query, the related database, and the duration of the execution.

You can search through the content of queries for specific information, and filter the list so it shows only queries executed in a specified time range:


Double-clicking any query in the list opens it in the query editor, so you can continue working on it if needed.

Copy result as

The Copy result as feature allows you to copy query execution results and save them in one of the following formats: CSV, HTML, or XML. As an example, the result of a simple SELECT statement can be copied by right-clicking directly in the results grid:


Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Inserting into Sparse Column Sets and Errors Associated With It

I have written tons of blogs in the past about using sparse columns. The best and lesser-known part of sparse columns is that we can write to the column set directly too. In this blog, let me take a tour of how this can be achieved.

When sparse column values are inserted or updated by using the XML column set, the values that are inserted into the underlying sparse columns are implicitly converted from the xml data type. When inserting data explicitly into individual columns of a table with a column set, you MUST use an explicit column list.

-- Create a table with sparse columns and a column set
CREATE TABLE colSet
(c1 INT SPARSE,
c2 CHAR (20),
c3 DATETIME SPARSE,
cs XML COLUMN_SET FOR ALL_SPARSE_COLUMNS);

--Try to insert a row without specifying a column list
INSERT INTO colSet VALUES (3, 'abc', GETDATE());

As mentioned before, we will encounter the following error because the column names are not mentioned explicitly.

Msg 213, Level 16, State 1, Line 9

Column name or number of supplied values does not match table definition.

The correct version would be as shown below:

--Try to insert a row by specifying a column list - successful
INSERT INTO colSet (c1, c2, c3) VALUES (3, 'abc', GETDATE());

Step 2: let us try to insert data into the column set itself; you have to use XML syntax to reference the columns.

-- Insert two rows by specifying the column set - successful.
-- Note we are specifying c2 and cs (the column set); use the table created above.
INSERT colSet (c2, cs)
VALUES (3, '<c1>7</c1><c3>01/01/15</c3>');
INSERT colSet (c2, cs)
VALUES (4, '<c1>7</c1><c3>Feb 10 2015 7:05PM</c3>');
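
Once the rows are in, it is worth seeing how the column set surfaces the sparse values. As a quick check against the colSet table created above: SELECT * on a table with a column set returns the non-sparse columns plus the column set as XML, while the individual sparse columns can still be queried by name.

```sql
-- SELECT * returns c2 plus the column set cs as XML;
-- the sparse columns c1 and c3 do not appear individually.
SELECT * FROM colSet;

-- The sparse columns can still be referenced explicitly.
SELECT c1, c2, c3 FROM colSet;
```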

Be careful not to insert into a sparse column both directly via an explicit column name and via the column set XML syntax.

--Create a second table with sparse columns and a column set
CREATE TABLE colSet2
(c1 INT,
c2 CHAR (20) SPARSE,
c3 DATETIME SPARSE,
cset XML COLUMN_SET FOR ALL_SPARSE_COLUMNS);

--Attempt to insert into the table both via column name and column set
--This will fail because c2 is referenced twice: explicitly in the column list and in the column set
INSERT colSet2 (c2, cset)
VALUES ('SQLAuthority', '<c2>SQLAuthority</c2><c3>01/01/08</c3>');

We will encounter the following error in this case.

Msg 360, Level 16, State 1, Line 7

The target column list of an INSERT, UPDATE, or MERGE statement cannot contain both a sparse column and the column set that contains the sparse column. Rewrite the statement to include either the sparse column or the column set, but not both.

Let us go ahead and clean up the tables we used.

-- Clean up
DROP TABLE colSet;
DROP TABLE colSet2;

Though I have written quite a few blogs on sparse columns, do let me know if you are using any of these in your environments. What are those scenarios? Drop a comment and let us learn together.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

Interview Question of the Week #029 – Difference Between CHARINDEX vs PATINDEX

Question: What is the difference between CHARINDEX vs PATINDEX?

Answer: We can use either CHARINDEX or PATINDEX to search character data in SQL Server. Both functions return the starting position of the pattern you specify.


Both functions take two arguments. With PATINDEX, you must include percent signs before and after the pattern, unless you are looking for the pattern at the start (omit the first %) or at the end (omit the last %) of the column. For CHARINDEX, the pattern cannot include wildcard characters. The second argument is a character expression, usually a column name, in which SQL Server searches for the specified pattern.

Example of CHARINDEX:
USE AdventureWorks2014;
SELECT CHARINDEX('important', DocumentSummary)
FROM Production.Document
WHERE Title = 'Repair and Service Guidelines';

Example of PATINDEX:
USE AdventureWorks2014;
SELECT PATINDEX('%imp_rtant%', DocumentSummary)
FROM Production.Document
WHERE Title = 'Repair and Service Guidelines';

PATINDEX is CHARINDEX + wildcard search. Which one to use depends on your business need.
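
A quick self-contained illustration of the difference (the sample string here is mine, not from AdventureWorks): CHARINDEX treats the underscore as a literal character and finds nothing, while PATINDEX interprets it as a single-character wildcard.

```sql
SELECT CHARINDEX('in_ex', 'Find index') AS char_result,  -- 0: no literal 'in_ex' in the string
       PATINDEX('%in_ex%', 'Find index') AS pat_result;  -- 6: 'index' matches i, n, any char, e, x
```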

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Find If Queries are Run in Parallel?

Technology innovations over the years have made personal computing and the infrastructure inside our datacenters ever more powerful. Gone are the days when our laptops came with a single processor and a single core. I wonder if such configurations are even sold in the market anymore.

Talking about multi-cores on our desktops and servers, these days software like SQL Server uses them to the max. When working with SQL Server, there are a number of settings that influence parallelism. Check the blogs “SQL SERVER – MAXDOP Settings to Limit Query to Run on Specific CPU”, “SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type” and many other posts on this topic.

Having said that, I have seen people struggle to identify parallel queries in their environments. So here is the first shot at this requirement.

SELECT p.query_plan, q.[text]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS p
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS q
WHERE cp.cacheobjtype = 'Compiled Plan'
AND p.query_plan.value('declare namespace
p="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
max(//p:RelOp/@Parallel)', 'float') > 0

Queries that run in parallel can be found with the above query. Remember, if a query runs in parallel, it is a query that SQL Server thinks is expensive enough to run in parallel. The max degree of parallelism and cost threshold for parallelism settings drive this behavior. Max degree of parallelism should be configured to match the number of physical processors in the server, if required.
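
Both settings live in sp_configure; here is a quick sketch of how to view and change them (the numbers below are placeholders for illustration, not recommendations):

```sql
-- These are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Current values
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';

-- Change them (placeholder values)
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;
```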

The next step is to understand what to do when you find them. Look for ways to make them run more efficiently if they run often and their performance during business hours is critical. Check indexing recommendations in DTA, simplify the query, and remove ORDER BY and GROUP BY clauses if they aren’t necessary – these are some steps to guide you.

Another way to find parallelism is to get queries where the time spent by the workers is more than the query's elapsed execution time. You can use the below method to do the same:

SELECT qs.*, q.[text]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) AS q
WHERE qs.total_worker_time > qs.total_elapsed_time

I hope these two scripts will be of use, and that you have something similar in your environments. Please share scenarios where you saw parallelism perform slower, and how you found them. Do let me know via comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Property IsLocked is Not Available for Login

I play and experiment with SQL Server to the max. Sometimes I am not sure what I did, and I land in something undesired. Then the troubleshooting starts and I get to learn something new. Sometimes these learnings are based on error messages that you folks send me. Recently I faced the below error while opening the properties of the sa account in SQL Server Management Studio.


Here is the text of the error message.

Property IsLocked is not available for Login ‘[sa]’. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)

It was clear that there was something wrong with the SQL login, but interestingly it was NOT happening with all other SQL login accounts. I used Profiler to find out what was happening in the background to cause this error. I nailed it down to the fact that the query SSMS runs to check this property was returning NULL instead of 0 or 1.


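The property named in the error message maps to the LOGINPROPERTY function, so the failing check can be reproduced with a one-liner (a sketch; SSMS's exact query may differ):

```sql
-- Expected: 1 if the login is locked out, 0 if not;
-- a NULL here is what triggers the SSMS error above
SELECT LOGINPROPERTY('sa', 'IsLocked') AS IsLocked;
```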

Now, it was interesting to find out why it was returning NULL. Later I realized that it happens only with those SQL logins that have “Enforce password policy” enabled. Here is the query to check for such accounts.

SELECT name
FROM sys.sql_logins
WHERE is_policy_checked = 1;

A quick solution would be to disable the password policy check for such an account using the T-SQL below.

USE [master]
GO
ALTER LOGIN [sa] WITH CHECK_POLICY = OFF;

A few things to check if you run into such an issue on a regular basis and don’t want to disable the password policy:

  1. Check if the SQL Server service account (the account running the SQL Server service in services.msc) is locked out. To verify that, try logging in to some machine using the service account.
  2. Check if password policy validation is failing due to an issue with domain controller connectivity.

If it’s the first issue, you need to contact the domain administrator to unlock the service account. For the second issue, you can check the security ring buffers.

SELECT  DATEADD(ms, dorf.[timestamp] - dosi.ms_ticks, GETDATE()) AS Date_Time
,CAST(record AS XML).value('(//SPID)[1]', 'bigint') AS SPID
,CAST(record AS XML).value('(//ErrorCode)[1]', 'varchar(255)') AS Error_Code
,CAST(record AS XML).value('(//CallingAPIName)[1]', 'varchar(255)') AS CallingAPIName
,CAST(record AS XML).value('(//APIName)[1]', 'varchar(255)') AS WindowsAPIName
FROM sys.dm_os_ring_buffers dorf
CROSS JOIN sys.dm_os_sys_info dosi
WHERE dorf.ring_buffer_type = 'RING_BUFFER_SECURITY_ERROR'

In my case it was the first issue, so unlocking the account fixed it for me. Have you ever faced such an issue? What was the solution that worked for you? Let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What are T-SQL Median? – Notes from the Field #090

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains the T-SQL median. Kathi is an amazing instructor; she was the SQL author I read in my early career. The reason I love SQL Server is that her writing instigated a love for this technology in me. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.

SQL Server has a wealth of useful T-SQL functions that let you do just about anything you need to do. One function that is sorely missing is a MEDIAN function. A median is the middle value in a set of values. For example, 3 is the median of the set 1, 3, 5. If the set has an even number of values, the median is calculated, or “interpolated”, by averaging the two middle values. The median of the set 1, 3, 5, 7 is 4.

You will see median values used frequently even if you do not work with statistics yourself. For example, you could see a news story listing the median household income of a city by neighborhood, the median salary for developers by language, or the median age of a population.

As I mentioned, there is no MEDIAN function in T-SQL. If, however, you are fortunate enough to be running SQL Server 2012 or 2014, there is an easy workaround. In 2012, Microsoft introduced a new function called PERCENTILE_CONT, one of the T-SQL window functions. Given a percent rank and a set of values, PERCENTILE_CONT will return the value ranked at that percent. If there is not an exact value found at the rank, PERCENTILE_CONT will interpolate the answer instead. If you supply 0.5, meaning 50%, PERCENTILE_CONT will return the median value. Here is an example that calculates the median sales by month for each year:

WITH Totals AS (
SELECT YEAR(OrderDate) AS OrderYear,
MONTH(OrderDate) AS OrderMonth,
SUM(TotalDue) AS TotalSales
FROM Sales.SalesOrderHeader
GROUP BY YEAR(OrderDate), MONTH(OrderDate))
SELECT OrderYear, OrderMonth, TotalSales,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY TotalSales)
OVER(PARTITION BY Totals.OrderYear) AS MedianSales
FROM Totals
ORDER BY OrderYear, TotalSales;

Before we take a look at the results, let’s review the query. The query has a common table expression (CTE) that calculates the sales for each month and year. In the outer query, the PERCENTILE_CONT function is used with an argument of 0.5. PERCENTILE_CONT is a T-SQL window function, but the syntax for it and a similar function, PERCENTILE_DISC, is different than the other window functions. The function name is followed by the WITHIN GROUP clause. This is where the ORDER BY expression goes, not in the OVER clause. The ORDER BY expression is used by the function to line up the values so that the ranks can be compared to the percentile rank value provided as an argument. The OVER clause supports PARTITION BY, and in this case we would like to get the median within each year, so the PARTITION BY expression is OrderYear.

In 2005, there were orders placed between July and December, six months. Six is even, so the function calculates the average of the two middle months, as shown in Figure 1.


Figure 1: The Median Sales for 2005

Every year in the data has an even number of months except for 2008. In 2008, there are seven months with sales, so the function can find an exact median value. Figure 2 shows the 2008 sales:


Figure 2: The Median Sales for 2008

You may be wondering what the difference is between PERCENTILE_CONT and PERCENTILE_DISC. PERCENTILE_DISC always returns an exact value found in the set, never a calculation. For calculating the median, PERCENTILE_CONT is the function to use.
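
To see the difference on the tiny example set from the beginning of the article (1, 3, 5, 7), here is a quick sketch using a temporary table of my own:

```sql
-- Demo values; the set has an even number of elements
CREATE TABLE #vals (v INT);
INSERT #vals VALUES (1), (3), (5), (7);

SELECT DISTINCT
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY v) OVER () AS cont_median, -- 4: average of 3 and 5
    PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY v) OVER () AS disc_median  -- 3: an actual value from the set
FROM #vals;

DROP TABLE #vals;
```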

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Generating Meaningful Test Data with dbForge Data Generator for SQL Server

Recently I was faced with the necessity of generating meaningful SQL data for testing purposes. During my search for a proper application for the task, I found that Devart, a recognized developer of database solutions, has released dbForge Data Generator for SQL Server, a GUI tool for fast generation of meaningful SQL data for development and testing purposes. Devart stated that with the tool, I would be able to:

  • Populate SQL Server tables with millions of rows of test data
  • Generate meaningful data that looks just like real data
  • Use 100+ predefined generators with sensible configuration options
  • Customize built-in generators
  • Create user-defined generators
  • Preview generation results
  • Save data population script for future reference
  • Automate data generation with the command-line interface

It sounded great, so I decided to give the tool a quick test. Moreover, Devart offers a 30-day free trial for most of its products, including Data Generator for SQL Server.

The first thing I noticed after running Data Generator was the clear, user-friendly interface. The start page of the tool includes quick access to the main features and allows you to quickly run recent projects, if any.


After starting a new data generation, I was prompted to select a connection and database, as well as to set default data generation options. I decided to generate meaningful data for a few columns of the AdventureWorksTest database.


On the Options page, I selected to generate 2000 rows, and cleared the Include Null values option.


After that, Data Generator showed the generation document. I selected the following columns of the Person.Contact table for data population:

  • ContactID
  • Title
  • First Name
  • LastName
  • EmailAddress

Now, the time has come for selecting generators for each column. In dbForge Data Generator for SQL Server, all generators are split into several categories for easier look-up.


I was really impressed with the collection of generators the application offers. The tool automatically detects and offers appropriate generators for a given column. Also, Data Generator for SQL Server includes multiple ways to make the generated data more customized. For instance, you can:

  • Set the range of generated values for different SQL data types
  • Define the percentage of NULLs to be generated
  • Set the value distribution mode
  • Immediately preview data that will be generated

Data Generator suggested the following generators:

  • IDs for the ContactID column. Additionally, I selected the Random by timestamp value distribution mode to get more complex numbers.
  • Honorific Prefix (Title) for the Title column
  • First Name (male) for the FirstName column
  • Last Name for the LastName column
  • Email for the EmailAddress column. This generator is based on the regular expression \c{3,8}(\d{0,3}|.\c{3,10})@nowhere.com, which can be customized right in the Fill settings section of the Column generation settings. For instance, you can modify the regular expression so that the domain name is “mailservice.com” instead of “nowhere.com”.


Data Generator also allows you to modify existing generators and create your own, so in the future you won’t need to modify the same generator over and over again. I opened the Email.xml file, located at C:\Users\Public\Documents\Devart\dbForge Data Generator for SQL Server\Data Generators, modified the regular expression so that the domain name would consist of a combination of letters, updated the description and name, and saved the file as MyEmail.xml:


The following screenshot shows the data generation document with the result of my manipulations:

I was quite satisfied with the results shown in the Preview, so I was ready to populate the target database with data. Data Generator offers several ways to do that, namely:

  • Open the data population script in the internal editor
  • Save the SQL script to a file
  • Run the script automatically


I chose the first option, and in a few seconds I got the following population script:


To sum up, I would like to say that the tool did a fantastic job for me. In my opinion, the strongest points of Devart’s Data Generator are:

  1. A simple workflow in a convenient interface. With dbForge Data Generator, the process takes just a couple of minutes and is basically simplified to three simple steps:
  • Selecting required tables
  • Selecting and customizing a required generator
  • Populating tables with test data
  2. An impressive collection of generators. The tool includes over 100 generators that allow you to populate tables with realistic-looking test data.
  3. Flexible customization options. The developers put a lot of effort into making the data more customized to suit specific needs.

I strongly recommend giving dbForge Data Generator for SQL Server a try and evaluating its functionality on your own by downloading the 30-day free trial.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Server principal ‘Login Name’ has granted one or more permission(s). Revoke the permission(s) before dropping the server principal

There are parts of SQL Server where I always like to take expert advice. Sometimes a quick call to a friend can save you tons of time. Take topics around AlwaysOn: though I know the basic steps of building and working with AlwaysOn, hardcore troubleshooting is not my forte per se. But when I encounter questions around it, I put my learning hat on to explore the possible reasons. There was a mail from one of the readers that had been in my Inbox for a couple of weeks, and I wanted to find the answer. His question goes like this:

While working with an AlwaysOn availability group and cleaning it up, I encountered the below error while dropping a login.


Msg 15173, Level 16, State 1, Line 1
Server principal ‘Pinal’ has granted one or more permission(s). Revoke the permission(s) before dropping the server principal.


Trying from the UI also gives the same error:


The next task was to find what was causing the error, so I wrote the below queries, which can tell us the exact objects that are dependent and making our DROP command fail.

SELECT class_desc,*
FROM sys.server_permissions
WHERE grantor_principal_id = (
SELECT principal_id
FROM sys.server_principals
WHERE NAME = N'Pinal')

SELECT name
FROM sys.server_principals
WHERE principal_id IN (
SELECT grantee_principal_id
FROM sys.server_permissions
WHERE grantor_principal_id = (
SELECT principal_id
FROM sys.server_principals
WHERE NAME = N'Pinal'))


Based on the output, there is an endpoint on which Pinal has granted permission. The second result shows that I used my account, i.e. Pinal, to create the endpoint and grant permission to the AppUser account using the below commands:

CREATE ENDPOINT [hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022)        -- typical AlwaysOn endpoint options; the original listing was abbreviated
    FOR DATABASE_MIRRORING (ROLE = ALL);
GRANT CONNECT ON ENDPOINT::[hadr_endpoint] TO AppUser;

As the error message suggested, I revoked permission using the below command:

REVOKE CONNECT ON ENDPOINT::[hadr_endpoint] TO AppUser

After the above step, I was able to delete the login. Yes, this was a great learning for me today. Do let me know if you have ever encountered similar errors in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix – Error: Msg 468, Level 16, State 9, Line 1

Ever since I started writing about error messages, a number of you ping me from time to time to understand why you are getting a particular error. As I always say, error messages are a great way to learn concepts, and we become better if we understand the reason behind every error we get. As I scan through almost every single mail that comes in, some of them catch my attention and I tend to write the experience back as a blog post.

Last week, a fresher who was working for a European client wrote to me with the error mentioned below:

Msg 468, Level 16, State 9, Line 15
Cannot resolve the collation conflict between “SQL_Latin1_General_CP1_CI_AS” and “French_CS_AI” in the equal to operation.

Though the error looked simple and self-explanatory, this kid wanted to mimic the error in his environment. He also added that he was using temp tables when the error surfaced. This statement got me interested, and I wanted to show him a simple script for how this error can be reproduced.

Let me start by checking the server collation:

-- Server Collation
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

As you can see, I am using the default collation on my server at this moment. Now let us create a database with a different collation (French, for example). After that, we will create two tables: the first a normal table, and the second a temp table.

-- Create database with different collation
CREATE DATABASE [DB_Not_Contained] COLLATE French_CS_AI;

-- Create 2 tables, one being a temp table; the temp table goes to tempdb
-- and uses the server collation, not the database collation
USE [DB_Not_Contained]
CREATE TABLE [DemoCollation]
(DemoCollationNM VARCHAR(100))
CREATE TABLE #DemoCollation
(DemoCollationNM VARCHAR(100))

-- Insert records into both tables
INSERT dbo.DemoCollation
VALUES ('Test Join');
INSERT #DemoCollation
VALUES ('Test Join');

Now that the basic setup of data is done, let us make a simple join between these two tables.

-- Now query and try and join both tables having 2 different collations
SELECT p.DemoCollationNM
FROM dbo.DemoCollation p
INNER JOIN #DemoCollation d ON
p.DemoCollationNM = d.DemoCollationNM

Woot, we will receive the error message:

Msg 468, Level 16, State 9, Line 35
Cannot resolve the collation conflict between “SQL_Latin1_General_CP1_CI_AS” and “French_CS_AI” in the equal to operation.

Whenever you encounter this error message, go ahead and check the collations of the tables in question to confirm whether they are different.

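As a quick way to do that check (a sketch using the demo tables above; for a permanent table, query sys.columns in its own database):

```sql
-- Collation of the permanent table's column
SELECT c.name, c.collation_name
FROM sys.columns c
WHERE c.[object_id] = OBJECT_ID('dbo.DemoCollation');

-- Collation of the temp table's column (temp tables live in tempdb)
SELECT c.name, c.collation_name
FROM tempdb.sys.columns c
WHERE c.[object_id] = OBJECT_ID('tempdb..#DemoCollation');
```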

So what are the learnings from these simple steps?

  • We cannot join tables or columns when there is a collation conflict between the objects
  • A temp table uses the same collation as our server (via tempdb) by default, and so do the objects created in it

I know I have given you a hint on how you might potentially mitigate this, but let me see if anyone can give me the solution to this problem. How have you been able to solve it? Have you ever encountered this error?

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #028 – Few SQL Best Practices

Just last week, I moderated a SQL Server interview and heard a very interesting conversation.

Question – What best practices are you following for SQL Server?

When I heard this question, I thought the candidate would answer with many different things, but he was stunned and had little to add to the conversation. That reminded me of one of my older articles, which I believe still adds quite a lot of value to this conversation. Here is that article as an answer to the question.

Answer – 

1. Know your data and business application well.
Familiarize yourself with your data sources; you must be aware of the data volume and distribution in your database.

2. Test your queries with realistic data.
A SQL statement tested with unrealistic data may behave differently when used in production. To ensure rigorous testing, the data distribution in the test environment must also closely resemble that in the production environment.

3. Write identical SQL statements in your applications.
Take full advantage of stored procedures and functions wherever possible. They bring performance gains because their execution plans are cached and reused.

4. Use indexes on the tables carefully.
Be sure to create all the necessary indexes on the tables. However, too many of them can degrade performance.

5. Make an indexed path available.
To take advantage of indexes, write your SQL in such a manner that an indexed path is available to it. Using SQL hints is one of the ways to ensure the index is used.
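As a sketch of such a hint, the query below forces a particular index; the table and index names are hypothetical, and hints should be used sparingly since they override the optimizer's own choices:

```sql
-- Hypothetical table and index names; the hint forces this index to be used.
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_OrderDate))
WHERE OrderDate >= '20150101';
```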

6. Understand the Optimizer.
Understand how the optimizer uses indexes and handles the WHERE, ORDER BY, HAVING, and other clauses.

7. Think globally when acting locally.
Any changes you make in the database to tune one SQL statement may affect the performance of other statements used by applications and users.

8. The WHERE clause is crucial.
The following WHERE clauses would not use the index access path even if an index is available.
E.g. Table1Col1 (comparison operator such as >, >=, <=) Table1Col2; Table1Col1 IS (NOT) NULL; Table1Col1 NOT IN (value1, value2); Table1Col1 != expression; Table1Col1 LIKE '%pattern%'; NOT EXISTS (subquery).
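To illustrate one of these cases with a hypothetical dbo.Orders table, assuming an index on CustomerName:

```sql
-- Leading wildcard: the index on CustomerName cannot be used for a seek.
SELECT OrderID FROM dbo.Orders
WHERE CustomerName LIKE '%dave%';

-- Trailing wildcard only: a range seek on the index is possible.
SELECT OrderID FROM dbo.Orders
WHERE CustomerName LIKE 'dave%';
```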

9. Use WHERE instead of HAVING for record filtering.
Avoid using the HAVING clause along with GROUP BY on an indexed column.
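As a sketch, with a hypothetical dbo.Sales table, the same filter can sit in either clause, but WHERE removes rows before grouping while HAVING filters only after all groups are built:

```sql
-- Filter applied after grouping: every region is grouped first.
SELECT Region, COUNT(*) AS Cnt
FROM dbo.Sales
GROUP BY Region
HAVING Region = 'West';

-- Filter applied before grouping (preferred): fewer rows to aggregate.
SELECT Region, COUNT(*) AS Cnt
FROM dbo.Sales
WHERE Region = 'West'
GROUP BY Region;
```

Reserve HAVING for conditions on aggregates, such as HAVING COUNT(*) > 10.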

10. Specify the leading index columns in WHERE clauses.
For a composite index, the query would use the index as long as the leading column of the index is specified in the WHERE clause.
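For example, with a hypothetical dbo.Person table and a composite index on (LastName, FirstName):

```sql
-- Can seek: the leading index column (LastName) is in the WHERE clause.
SELECT * FROM dbo.Person WHERE LastName = 'Dave';

-- Cannot seek on this index: the leading column is missing.
SELECT * FROM dbo.Person WHERE FirstName = 'Pinal';
```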

11. Evaluate index scan vs. full table scan. (Index Only Searches Vs Large Table Scan, Minimize Table Passes)
If a query selects more than about 15 percent of the rows in a table, a full table scan is usually faster than an index access path. An index is also not used if SQL Server has to perform implicit data conversion. When 15 percent or less of the rows are accessed, the index path works better: index access costs multiple logical reads per row, whereas a full table scan reads all the rows in a page with a single logical read, so the index only pays off when few rows are needed.

12. Use ORDER BY for index scan.
The SQL Server optimizer can use an index scan when the ORDER BY clause is on an indexed column, avoiding an explicit sort step.
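As a sketch, assuming a hypothetical dbo.Orders table with an index on OrderDate:

```sql
-- With an index on OrderDate, this ORDER BY can be satisfied by reading
-- the index in order, so the plan needs no separate Sort operator.
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate;
```

You can confirm the absence of a Sort operator by inspecting the actual execution plan.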

13. Minimize table passes.
Usually, reducing the number of table passes in a SQL query results in better performance. Queries with fewer table passes mean faster queries.

14. Join tables in the proper order.
Always perform the most restrictive search first to filter out the maximum number of rows in the early phases of a multiple table join. This way, the optimizer will have to work with fewer rows in the subsequent phases of join, improving performance.

15. Redundancy is good in the WHERE clause.
Provide as much information as possible in the WHERE clause. It will help the optimizer to clearly infer conditions.

16. Keep it simple, stupid.
Very complex SQL statements can overwhelm the optimizer; sometimes writing multiple, simpler SQL will yield better performance than a single complex SQL statement.

17. You can reach the same destination in different ways.
Each SQL may use a different access path and may perform differently.

18. Reduce network traffic and increase throughput.
Sending a single T-SQL batch instead of multiple individual SQL statements can achieve better performance and reduce network traffic. Stored procedures are better still, as they are stored in SQL Server and their plans are cached.
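As a minimal sketch, a stored procedure (hypothetical name below) bundles the work into one server-side unit, replacing several client round trips:

```sql
-- Hypothetical procedure wrapping work that would otherwise be
-- sent as separate statements from the client.
CREATE PROCEDURE dbo.usp_GetRecentOrders
    @SinceDate DATE
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count messages, reducing network chatter
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= @SinceDate;
END
```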

19. Better Hardware.
Better hardware always helps performance. SCSI drives, a RAID 10 array, multi-processor CPUs, and a 64-bit operating system improve performance by a great amount.

20. Avoid Cursors.
Using SQL Server cursors can result in performance degradation compared with set-based SELECT statements. Try to use a correlated subquery or derived tables if you need to perform row-by-row operations.
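To show the contrast, here is a sketch using a hypothetical dbo.Orders table with a Processed flag; the cursor touches one row per iteration, while the set-based statement does the same work in one pass:

```sql
-- Row-by-row cursor version (slower):
DECLARE @OrderID INT;
DECLARE order_cursor CURSOR FOR SELECT OrderID FROM dbo.Orders;
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Processed = 1 WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- Set-based equivalent: one statement does the same work.
UPDATE dbo.Orders SET Processed = 1;
```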

Reference: Pinal Dave (http://blog.sqlauthority.com)