SQL SERVER – JSON Support for Transmitting Data for SQL Server 2016 – Notes from the Field #114

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains JSON support for transmitting data in SQL Server 2016. Kathi is an amazing instructor; she was the SQL author I read early in my career. One reason I love SQL Server is that her writing instilled a love for this technology in me. Today she brings a very interesting topic for database experts. Read the experience of Kathi in her own words.


My favorite aspect of working with SQL Server is the T-SQL language. I love working with data, writing queries, and tuning queries when they don’t perform well. Whenever a new version of SQL Server is on the way, I can’t wait to hear what has been added to T-SQL.

I was really hoping to see additional T-SQL windowing functionality with 2016, but, so far, nothing has been announced. There is, however, a very interesting programmability enhancement on the way: JSON support.

JSON, which stands for JavaScript Object Notation, is used for transmitting data and is used in many programming languages. JSON looks a bit like XML with square [] and curly {} brackets instead of angle <> brackets.

Here is an example comparing XML to JSON with data from the Production.Product table.

XML:
<Product>
  <Name>HL Road Frame - Black, 58</Name>
  <ListPrice>1431.5000</ListPrice>
</Product>
<Product>
  <Name>HL Road Frame - Red, 58</Name>
  <ListPrice>1431.5000</ListPrice>
</Product>
<Product>
  <Name>Sport-100 Helmet, Red</Name>
  <ListPrice>34.9900</ListPrice>
</Product>
<Product>
  <Name>Sport-100 Helmet, Black</Name>
  <ListPrice>34.9900</ListPrice>
</Product>
<Product>
  <Name>Mountain Bike Socks, M</Name>
  <ListPrice>9.5000</ListPrice>
</Product>

JSON:

{
"Product": [
{
"Name": "HL Road Frame - Black, 58",
"ListPrice": 1431.5
},
{
"Name": "HL Road Frame - Red, 58",
"ListPrice": 1431.5
},
{
"Name": "Sport-100 Helmet, Red",
"ListPrice": 34.99
},
{
"Name": "Sport-100 Helmet, Black",
"ListPrice": 34.99
},
{
"Name": "Mountain Bike Socks, M",
"ListPrice": 9.5
}
]
}

The additions to T-SQL to support JSON are listed below. There is one new clause and four functions:

Name        Type      Purpose
FOR JSON    Clause    Returns JSON from a SELECT statement
OPENJSON    Function  Returns JSON data in tabular format
ISJSON      Function  Returns TRUE if a string is a valid JSON string
JSON_VALUE  Function  Returns a single value from a JSON string
JSON_QUERY  Function  Returns an object or an array from a JSON string

I used the FOR JSON clause to generate the example JSON data above. Here is the code I used:

SELECT TOP(5) Name, ListPrice
FROM Production.Product
WHERE ListPrice > 0 AND Name NOT LIKE '%Seat%'
FOR JSON PATH, ROOT('Product');

There are several options, so be sure to take a look at the documentation for more information. Here is an example script demonstrating the functions:
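For example, FOR JSON AUTO infers the nesting from the SELECT statement itself, and options such as WITHOUT_ARRAY_WRAPPER change the shape of the output. A quick sketch against the same table:

```sql
-- FOR JSON AUTO infers the structure from the SELECT list and table aliases
SELECT TOP(2) Name, ListPrice
FROM Production.Product
WHERE ListPrice > 0
FOR JSON AUTO;

-- WITHOUT_ARRAY_WRAPPER drops the surrounding [ ], which is convenient
-- when you know the query returns a single object
SELECT TOP(1) Name, ListPrice
FROM Production.Product
WHERE ListPrice > 0
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
```

INCLUDE_NULL_VALUES is another option worth knowing: by default, columns that are NULL are simply omitted from the JSON output.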

--Save a string in JSON format to a variable
DECLARE @Books NVARCHAR(MAX) = N'
{"Books":
{"BookList":
[ {"Book":
{ "Title":"Microsoft SQL Server T-SQL Fundamentals",
"Authors":"Itzik Ben Gan"
}
},
{"Book":
{
"Title":"The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling",
"Authors":"Ralph Kimball; Margy Ross"
}
},
{"Book":
{
"Title":"SQL Server Concurrency: Locking, Blocking, and Row Versioning",
"Authors":"Kalen Delaney"
}
}
]
}
}'
;
IF ISJSON(@Books) = 0 BEGIN
    PRINT 'Invalid JSON string';
END
ELSE BEGIN
    --Turn the JSON string into tabular format
    SELECT *
    FROM OPENJSON(@Books,'$.Books.BookList')
    WITH (
        Title NVARCHAR(100) '$.Book.Title',
        Authors NVARCHAR(100) '$.Book.Authors');

    --Return the BookList array in JSON format
    SELECT JSON_QUERY(@Books,'$.Books.BookList') AS [Book Array];

    --Return the second row as JSON
    SELECT JSON_QUERY(@Books,'$.Books.BookList[1]') AS [Second Book];

    --Return the title from the second row
    SELECT JSON_VALUE(@Books,'$.Books.BookList[1].Book.Title') AS [Book Title];
END;

Here are the results:

[Screenshot: results of the OPENJSON, JSON_QUERY, and JSON_VALUE queries]
The most challenging aspect for me when writing the previous example was creating a valid JSON string. The path expression, the string starting with $ that is used with the functions, looks a bit intimidating, but it is actually simple. The $ is the starting point; follow the path in the data to get what you want.

Let’s take a look at the last query:

SELECT JSON_VALUE(@Books,'$.Books.BookList[1].Book.Title') AS [Book Title];

The JSON_VALUE function takes two parameters, the JSON string and a path. In this case, I am specifying the item from the array to extract. It is found in Books and the array is called BookList. I want the item at index 1. The item is called Book and I am retrieving the Title property.
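It helps to remember the division of labor: JSON_VALUE extracts scalar values, while JSON_QUERY extracts objects or arrays. In the default "lax" path mode, using the wrong function returns NULL rather than raising an error. A small sketch, using a made-up JSON string:

```sql
DECLARE @j NVARCHAR(MAX) =
    N'{"Book":{"Title":"Beginning T-SQL","Tags":["T-SQL","SQL Server"]}}';

SELECT
    JSON_VALUE(@j, '$.Book.Title') AS ScalarValue,   -- returns the title
    JSON_VALUE(@j, '$.Book.Tags')  AS WrongFunction, -- NULL: Tags is an array, not a scalar
    JSON_QUERY(@j, '$.Book.Tags')  AS ArrayValue;    -- returns ["T-SQL","SQL Server"]

-- Prefixing the path with 'strict' (e.g. 'strict $.Book.Tags') would raise
-- an error instead of silently returning NULL.
```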

The new functionality will make it easier for developers to work with SQL Server data regardless of the programming language they use.

Kathi Kellenberger is the author of Beginning T-SQL, Expert T-SQL Window Functions in SQL Server, and the Pluralsight course T-SQL Window Functions.

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSIS – Get Started with the For Loop Container – Notes from the Field #113

[Notes from Pinal]: SSIS is a very well-explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is getting started with the For Loop container.

Linchpin People are database coaches and wellness experts for a data-driven world. In this 113th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to get started with the For Loop container.


SQL Server Integration Services is equipped with tasks and containers to make it easy to design and maintain the flow of ETL: which logic should be executed, when should it be executed, and how many times should it occur. Most SSIS developers are familiar with the sequence container and the For Each Loop container, which can be used to group together tasks and execute the same logic a discrete number of times. In addition to these, there is a lesser-known but still very useful container for controlling logic flow: the For Loop container.

Simply put, the For Loop container executes its ETL logic zero to n times. It has three expressions that control how many times the loop will be executed:

  • InitExpression: Used for the initial setting of a starting variable (such as setting a counter variable to zero).
  • EvalExpression: This is the expression that evaluates whether the loop should continue. Of the three expressions described here, this is the only one that requires a value – the others are optional.
  • AssignExpression: This allows the use of an assignment expression, such as incrementing a loop counter.

For those with a programming background, this looks very much like a for() loop statement in the C-derived languages. Functionally, it works in exactly the same way as the for() loop, continuing to execute the contained logic as long as the control condition remains true. This helps to draw a contrast between the For Each Loop and the For Loop in SSIS. The former is list-based, and will execute for every item in the list supplied to it. The latter is value-based, and will execute as long as the EvalExpression is true.
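The container's three expressions map directly onto the pieces of a classic for() loop. As a rough T-SQL analogy (a WHILE loop, since T-SQL has no for() statement), with the variable name chosen to match the example later in this post:

```sql
-- InitExpression: set the starting value
DECLARE @vLoopCounter INT = 0;

-- EvalExpression: keep looping while this remains true
WHILE @vLoopCounter < 10
BEGIN
    -- ...the tasks inside the For Loop container would run here...

    -- AssignExpression: advance the counter each iteration
    SET @vLoopCounter = @vLoopCounter + 1;
END;
```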

In fairness, most ETL loads lean toward the list-based approach, but there are valid cases where a value-based approach is necessary. Some of those include:

  • Processing a fixed subset of data
  • Sampling for test or validation purposes
  • Forcing a “wait state” until some milestone is reached
  • Allowing the loop logic to be executed for some specified amount of time

Configuring the For Loop Container

As noted above, the only value that is required for the For Loop container is the EvalExpression. A very simple For Loop configuration is shown below.

[Screenshot: For Loop Editor with only the EvalExpression set to true]

The above supplies only the required value – a value of true to the EvalExpression. However, this is a very poorly configured For Loop, because the loop will continue executing indefinitely! True will always be true, so there is no logical end to this loop.

A more practical design pattern would use either an initialization expression, an assignment expression, or possibly both, to constrain the number of iterations. A simple example of this is shown below. I set up an SSIS package variable, typed as an Integer and named @vLoopCounter, with a default value of 0. In the For Loop settings, I've used the EvalExpression to check whether this value is less than 10, and I use the AssignExpression to increment the @vLoopCounter value by 1 for every iteration of the loop.

[Screenshot: For Loop Editor with InitExpression, EvalExpression, and AssignExpression configured]

This example works, executing any logic contained in the For Loop exactly ten times. However, this pattern is very static. What if I want to change the evaluation expression to let the loop run more than 10 times? I'd need to open the package and modify it. Fortunately, the configuration expressions allow for the use of both variables and parameters. Below, the package has a couple of updates: an initial value of 1 is set for the @vLoopCounter variable, and the static comparison in the EvalExpression is replaced by the package parameter @pMaxLoops for the maximum number of loops.

[Screenshot: For Loop Editor using the @pMaxLoops package parameter in the EvalExpression]

In the example above, the number of maximum loops can be specified at runtime, making for a more dynamic pattern.

The examples above show only iteration based on the number of times the loop has run. Keep in mind that when using the For Loop container, this logic can be based on any condition we choose: whether a particular set of files exists, how long the For Loop has been running, how many records have been processed, or some other custom metric specified in the expression. Even with a small number of inputs controlling how many times the For Loop container will execute, the possible uses for this are many, and can be as complex as needed.

Conclusion

The For Loop container provides another way to execute repeating logic in SSIS. By using an approach similar to the for() loop in structured programming languages, the For Loop container adds more ETL flexibility through a value-based iterative pattern.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fixing Annoying Network Binding Order Error – Notes from the Field #112

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Ryan Adams explains a very critical error users receive related to the network binding order. Ryan is one guy who spends more time on real-world SQL Server issues than anything else. He has mastered the art of resolving complex errors and documenting them so clearly that the like can't be found anywhere else. In this blog post Ryan addresses a very interesting error related to the network binding order. Read the experience of Ryan in his own words.


One of the more common errors I see when setting up a Windows cluster (usually in preparation for a SQL Server Failover Cluster Instance or a SQL Server Availability Group) is regarding an incorrect network binding order.  You will see this presented as a warning in the Cluster Validation Report.  There are actually two different errors that I have seen with the first being very common and the second being more rare.

Error 1

Rule “Network binding order” generated a warning.

The domain network is not the first bound network.  This will cause domain operations to run slowly and can cause timeouts that result in failures.  Use the Windows network advanced configuration to change the binding order.

What this error means is that the network card used to connect to your domain network is not at the top of the network binding order.  The fix for this is usually pretty easy because we just need to go into Control Panel…Network and Internet…Network Connections and make the change.  Once we get there we need to identify which NIC card is connected to the domain network and that can be seen in the “Network Category” column shown in the screen shot below.  You’ll notice that I have labeled my connections Public and Private and they both show “Domain Network”.  If you are configuring a multi-subnet cluster you will see the exact same thing, but if your cluster is on a single subnet the Private network connection will show “Public Network”.  So in a single subnet it’s the one labeled “Domain Network” that you are targeting and in a multi-subnet cluster it’s your Public connection you are targeting.

[Screenshot: Network Connections window showing the Public and Private adapters and the Network Category column]

In order to change the network binding order we need to go into the advanced settings.  Starting in Windows 2008 this option is hidden.  If you don’t see it hit ALT on your keyboard and the Advanced option pointed out in the previous screen shot will appear.  We need to select that and then go to Advanced Settings.  You will now be presented with the Advanced Settings box shown below.  In the screen shot you will see that my Public network is the second in the binding order and we need to move it to the top by selecting it and hitting the up arrow.  Click OK and go run Cluster validation again to see if it is resolved.

[Screenshot: Advanced Settings dialog showing the network binding order]

There is a chance that either your domain network was already at the top of the binding order, or you ran Cluster Validation again and it failed with the same error.  If that is the case then you either have a ghost network card (which can be caused by NIC changes or driver changes) or the Microsoft Failover Cluster Virtual Adapter is bound before the domain network.  These adapters are not shown in the GUI, but can be found in the registry and other places like the ipconfig /all command.

Unfortunately the network binding order in the registry uses GUIDs instead of friendly names, so we’ll have to do some translating to find and move the domain network to the top.  The first thing we will do is go figure out what the GUID of the domain network NIC is by running the following command from a command prompt.

WMIC Nicconfig Get Description, SettingID

You’ll remember that I renamed my NIC cards to be called Public and Private, but that’s the friendly name and not what will be returned from WMIC.  WMIC returns what is in the “Device Name” column from the very first screen shot above.  In my case it is called “Intel(R) PRO/1000 MT Desktop Adapter”.  You can see this pointed out in the screen shot below where we can see the output of WMIC in the command window.  Note that the GUID starts with A7.

[Screenshot: WMIC output in the command window and the matching GUIDs in the registry Bind value]

Now we just need to open Regedit and head to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage\Bind.  That’s right, we’re getting down and dirty now, so make sure you know what you are doing and above all else back up your registry before you make any changes.  Mistakes in the registry can be costly and destroy a system, so proceed with caution because from here on out the responsibility lies solely with you…not me.  In the screen shot above you can see that my Public Domain Network is next to the last in the list and we need it to be at the top.  As an aside, I have also pointed out where the Microsoft Failover Cluster Virtual Adapter is located since I see this listed above the Public network from time to time.

The fix here is to cut the GUID for the Public Domain Network that starts with A7 and paste it at the top of the list.  Now we can go run Cluster Validation and life should be good unless you get the second error we’ll talk about now.

Error 2

Note that the error message is the same error you got above.  However, it’s a completely different issue.  So let’s say you verified the above and the domain network is first in the list, but the error persists.  Go look in the following file and search for “IsDomainInCorrectBindOrder” to find the warning in the log file.

C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\"Newest Log File Folder By Date"\Detail.txt

Here is a sample of the section you are looking for.

Init rule target object: Microsoft.SqlServer.Configuration.SetupExtension.NetworkBindingFacet
NetworkBindingFacet: Looking up network binding order.
NetworkBindingFacet: Network: 'Production Team' Device: '\Device\{0BF4D354-E6E9-480C-91CF-DC598282C4C1}' Domain: 'LITWARE.COM' Adapter Id: '{0BF4D354-E6E9-480C-91CF-DC598282C4C1}'
NetworkBindingFacet: Network: 'Local Area Connection' Device: '\Device\{4DB91193-72F1-4713-A938-EB73F27CFEC8}' Domain: '' Adapter Id: '{4DB91193-72F1-4713-A938-EB73F27CFEC8}'
NetworkBindingFacet: Network: 'Heart Beat' Device: '\Device\{5AC63784-8088-40F7-93C8-37F9CD03D445}' Domain: '' Adapter Id: '{5AC63784-8088-40F7-93C8-37F9CD03D445}'
NetworkBindingFacet: Network: 'BackUp Network' Device: '\Device\{52AEDCB0-9E8E-4243-9D5D-ED86E602DF23}' Domain: '' Adapter Id: '{52AEDCB0-9E8E-4243-9D5D-ED86E602DF23}'
IsDomainInCorrectBindOrder: The top network interface 'Production Team' is bound to domain 'LITWARE.COM' and the current domain is 'CONTOSO.COM'.
Evaluating rule : IsDomainNetworkTopOfBindings
Rule running on machine: Server1
Rule evaluation done : Warning
Rule evaluation message: The domain network is not the first bound network. This will cause domain operations to run slowly and can cause timeouts that result in failures. Use the Windows network advanced configuration to change the binding order.
Send result to channel: RulesEngineNotificationChannel

The issue here is that the server is joined to the LITWARE.COM domain, but the current domain is that of the currently logged-in user, which happens to be CONTOSO.COM.  Another way to say this is that the server is joined to the LITWARE.COM domain, but you logged in with a user account from the CONTOSO.COM domain to create the cluster.  From a domain perspective these are completely different domains that have been trusted, and it’s possible that they are in different forests too, but again they are trusted.  Technically this configuration is correct, as the public/domain-joined network is indeed at the top of the list.  You have two choices here.  You can safely ignore this warning, or you can log out and back in with a user in the LITWARE.COM domain.

If you are looking for more information around Clustering, AlwaysOn Failover Clusters, or AlwaysOn Availability Groups you can visit my blog at http://www.ryanjadams.com/category/sql-server/alwayson-ag/ for more articles.  Setting up clusters to support SQL Server is complicated, but can yield great benefits if done correctly.  If you’re setting up a new cluster or interested in having us take a look at your current systems we would be happy to work with you.  You can find more information on the services we provide and contact us on our website http://www.linchpinpeople.com/.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Live Query Statistics in 2016 … and More! – Notes from the Field #111

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Mike Lawell explains Live Query Statistics in SQL Server 2016. Execution statistics are at the heart of performance tuning in SQL Server. In earlier versions of SQL Server we only had the privilege of seeing these statistics after the query had executed. However, with SQL Server 2016 the scenario has changed. Here is a very interesting new feature of SQL Server 2016 which can help every performance tuning expert. Read the experience of Mike in his own words.


Your boss has asked you to implement SQL Server AlwaysOn Availability Groups in your environment. Exciting as it may be, you’re an accidental DBA with little to no experience implementing any high availability solution, let alone a fairly new feature like availability groups.

The first thing you say is, fantastic! Then, hmmm, fantastic. You realize this is above your head, but it is a great opportunity to learn.

This happened to one of my clients, where the DBA barely had SQL Server Database Mirroring experience. The product had a required uptime of 99.999% (yeah, right) that they were having problems maintaining. They wanted to improve their availability and business continuity.

The DBA had a lot of anxiety with implementing AGs in this environment without experiencing considerable downtime.

Fortunately, they were able to engage our services.

Welcome to 2016! This is going to be an exciting year for SQL Server professionals with the upcoming release of SQL Server 2016. With this in mind I’m going to talk a little bit about one of the cool new features in SQL Server 2016.

Many of you have had the opportunity to see me talk about one of my favorite topics, Execution Plans. I had the privilege to talk about Execution Plans at PASS Summit 2015, and many SQL Saturdays across the US. You can find my presentation here on my blog site at SQL Server Associates.

Keeping in step with my love of Execution Plans, Live Query Statistics is one of the new features in SQL Server 2016 that gives me a smile.

Live Query Statistics? What’s the benefit in that? I’ve already heard it from some senior DBAs I know. Have you ever had someone ask you to performance tune a query that never completes? It just takes forever, and you know you’re not about to wait for it to complete before you can see the actual execution plan. Well, here’s your chance to look at the operators while they are in progress and as they complete.

Live Query Stats even gives you the ability to see a percentage of completion as the operator is in progress. More about that later.

Here is how you enable Live Query Statistics from Management Studio. Select Query > Include Live Query Statistics.

[Screenshot: Query menu in Management Studio with Include Live Query Statistics selected]

Voila, there you have it. Now it is as simple as running your query.

This screenshot shows the query in process. Notice the first operator “Clustered Index Scan” has completed (100%). The other operators are at 8% or 0%.

[Screenshot: live execution plan with the Clustered Index Scan at 100% and other operators in progress]

Now here is the best part: hover over the operator that is currently in process, the one you think is the issue. Even though it is “in process,” you can see the details about the operator. Notice the Actual Number of Rows information? There is a lot of information here to give you a start on identifying where and what the issue might be with the query.

[Screenshot: tooltip showing details, including Actual Number of Rows, for an in-progress operator]

Once the query has completed you get an execution plan that looks like the screenshot below.

[Screenshot: the completed execution plan]

And there you go. So simple my mom could take advantage of it. Of course mom would be saying, “Michael Mark, you had better not be lying to me. It can’t be that easy!”

Really mom, it’s that simple and it is going to help you performance tune like a pro…oh, wait, sorry mom, performance tuning is a little more complicated.

Wait, but there’s more…. In what versions does this work? Well, of course SQL Server 2016…duh… but wait there’s more!

[Screenshot: Live Query Statistics running against a SQL Server 2014 instance]

Yep, that’s right! I just ran Live Query Statistics against a SQL Server 2014 server. Isn’t that awesome?

I’ve been able to use this new feature with several of the Linchpin People clients (even though SQL Server 2016 isn’t available yet) because it works on SQL Server 2014 installations. It just doesn’t get any better than that.
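Under the covers, the live operator data comes from the sys.dm_exec_query_profiles DMV, which shipped in SQL Server 2014 and explains why the feature works against 2014 instances. You can also query the DMV directly from another session while a profiled query runs; a minimal sketch (the watched session must have profiling enabled, e.g. via SET STATISTICS PROFILE ON or by being launched through Live Query Statistics):

```sql
-- Per-operator progress for queries currently running with profiling enabled
SELECT session_id,
       node_id,
       physical_operator_name,
       row_count,           -- rows produced so far by this operator
       estimate_row_count   -- the optimizer's estimate, for comparison
FROM sys.dm_exec_query_profiles
ORDER BY session_id, node_id;
```

Comparing row_count against estimate_row_count while the query runs is a quick way to spot the operator where the estimates went wrong.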

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

It’s a New Year, Take Advantage of It – Notes from the Field #110

[Note from Pinal]: This is the first episode of Notes from the Field series in the year 2016 and 110th overall.

There is one place where I see the maximum crowd in the new year – the fitness center (gym). It is indeed a good thing that we all want to stay fit and active. However, the reality unveils itself from the second week, when the gyms are empty once again. We all start strong, but only a few finish strong as well. Earlier last year, when I was taking advice from Mike about staying healthy, I narrated this scenario to him. I was expecting that Mike would say he understood my feeling and we would soon start talking about something related to SQL – in reality, that did not happen. When I mentioned this to Mike, he said it is indeed a good thing, and suggested we take advantage of the initial enthusiasm to set a strong base for a healthy future lifestyle. Mike did make a great point. We kept on talking about various ways to stay healthy. When we ended our conversation, just as expected, we talked about SQL. Just as with health, Mike had a wonderful insight about databases.

In this episode of the Notes from the Field series database expert Mike Walsh presents his thoughts about why we should take advantage of the opportunity present at new year and build a stronger base for healthy databases. Trust me, you want to read it!


Us technologists, we are good at what we are good at. We excel at troubleshooting, solving complex challenges and rising to the occasion. We keep our systems up, we rush in to save the day (especially the DBAs). We ace tests, we love digging in and tuning queries. We’re good at our jobs for the most part.

Sometimes, though, some of us aren’t so great at getting things done. Maybe you suffer from that problem? I know I do. I am quick to pick up the next thing before the current thing is done. I am good at putting things on my to-do lists (in fact I actually have a lot of to-do lists in different forms in different places) but I’m not so good at checking things off of the to-do lists.

I get the urgent things done, but the important and not-so-urgent things? They can take a back seat. Are you like that? It seems that many of us technologists are. Not all of course, but we live distraction-filled, device-driven, “NOW!”-controlled lives lately.

This post is a departure from the great guest feature Pinal allows us at Linchpin People to offer. Normally we are answering a technical question, helping point you in the direction of an answer or a better way of doing something. Today, I want to slide back to Professional Development.

It’s a New Year, Take Advantage of It

Worried? Stop and do something about it :-)

On the Gregorian Calendar, this week signifies the start of a new year. You often will see New Year’s resolutions come out this time of year. People say this is finally the year they’ll start eating better, quit a habit, get in shape or do something they’ve been meaning to do.

I’m not asking you to make a resolution. But I am wondering if we can agree that there are things we all meant to do at work last year but never did. Are there services we never took care of? Improvements we left unchecked? If there are, and if you are like me there probably are, make a plan to do something about them. Not a plan that won’t get touched, like my to-do lists all over the place. But start today, be persistent, and let’s get it done.

This is a new year and we have an opportunity to have a reason and push to do something different for a change. Let’s take advantage of it.


Some Ideas

There are a lot of categories we could look at here.

Learning –

Yes, your own growth is important, and it is something we end up neglecting if it isn’t part of your official job description and your employer/manager doesn’t sort of force you along on that path. SQL Server 2016 comes out this year.  Here’s an action plan for you – go to this site (https://www.microsoft.com/en-us/server-cloud/products/sql-server-2016/), learn about what’s coming, download a Community Technology Preview, and play with some of the features to get familiar.

Maintenance –

Especially if you are a DBA – how is your environment? When is the last time you’ve given it a health check? You don’t have to reach out to Linchpin People to have us do a WellDBA Exam – though we’d be happy to help you do that. Look at the free tools out there, spend time with your key SQL Servers – look at how they are configured and running. Are things good here? What can be done differently? What maintenance is missing? Make a list and start knocking things off as you get to them.

Plan Ahead –

How old are your servers? When’s the last time you had that conversation about data archival? What are the business’s plans this year, and can your systems handle those plans? Have a conversation with the technical teams and make sure you are ready for 2016 and beyond.

What Else?

I don’t know about you, but when I read a post like this or hear people talk about procrastination or missed to-do items, I get this feeling inside. This sort of angry, panicky voice that says, “Oh yeah!! I have to do ______.” Are you getting that voice about anything? I am just typing about it. So what I’m going to do is stop writing this post and go get those things done. You should stop reading and start getting those things done, or delegate them, or realize they weren’t important and stop worrying about them. Happy 2016!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Are You Suffering from Unknown SSAS Performance Challenges? – Notes from the Field #109

[Note from Pinal]: This is a new episode of the Notes from the Field series. We build our business applications with zeal and enthusiasm. We believe they will do well for many years, but that is not always the case. After a while, performance starts to go down and everything is no longer in the best shape. We often suffer from unknown SSAS performance issues in our applications without even being aware of them.


In this episode of the Notes from the Field series I asked SQL expert Bill Anton a very crucial question – how do we know if our application is suffering from unknown SSAS performance issues? Bill was very kind to answer the question and provide plenty of information about how a novice developer can work with SSAS.


Ever wondered what’s actually happening on your Analysis Services server? If so, then you’re certainly not alone. Many DBAs think of Analysis Services as this black box implemented by a crazy group of Business Intelligence developers and then left behind once the developers move on to the next project. Down the road, when a performance issue arises, whether it’s a new report taking a long time to load or an issue with the nightly processing job, it’s up to the DBA to try and resolve the issue. There’s just one problem – the DBA hasn’t had the training necessary to know where to start looking.

As a BI consultant I see this situation all the time. So in this post, I’m going to show you how to take the first step in troubleshooting Analysis Services performance issues – which is to run an extended events trace on your Analysis Services server to see what’s actually happening.

Running an Extended Events Trace

Extended Events (xEvents) is an event-handling system providing insight into the behavioral and performance characteristics of an Analysis Services instance. Even though it was first introduced for the database engine in SQL Server 2008, it didn’t make its way into Analysis Services until SQL Server 2012. However, it is the successor to SQL Profiler (which has been deprecated), so unless you’re still working with SSAS 2008 R2 (or below), it’s time to get comfortable with xEvents.

Because there’s no GUI with xEvents for Analysis Services (definitely one of the major deterrents for newcomers) you’ll have to use XMLA commands to start and stop the trace.

Note: to save you time, all scripts shown in the examples below can be downloaded from the link at the bottom of this post.

To start the trace, you can simply open SSMS, connect to the Analysis Services instance, open an Analysis Services XMLA query window (circled in the screenshot below), paste in the command, and hit execute.

[Screenshot: XMLA command to start the trace, pasted into an SSAS XMLA query window]
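For reference, a start-trace command for an SSAS extended events session generally takes this shape; the trace ID, session name, event list, and file name below are illustrative assumptions rather than the exact script from the screenshot:

```xml
<!-- XMLA: create an xEvents trace on the SSAS instance -->
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
        xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
  <ObjectDefinition>
    <Trace>
      <ID>SSAS_xEvents_Trace</ID>
      <Name>SSAS_xEvents_Trace</Name>
      <ddl300_300:XEvent>
        <event_session name="SSAS_xEvents_Session"
                       dispatchLatency="1" maxEventSize="4" maxMemory="4"
                       memoryPartitionMode="none"
                       eventRetentionMode="allowSingleEventLoss"
                       trackCausality="true">
          <!-- completed queries and processing progress -->
          <event package="AS" name="QueryEnd" />
          <event package="AS" name="ProgressReportEnd" />
          <!-- write events to a .xel file (default location: the SSAS log directory) -->
          <target package="Package0" name="event_file">
            <parameter name="filename" value="SSAS_xEvents_Trace.xel" />
          </target>
        </event_session>
      </ddl300_300:XEvent>
    </Trace>
  </ObjectDefinition>
</Create>
```

Which events you capture is up to you; QueryEnd and ProgressReportEnd are a common minimal pair for query and processing analysis.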

When you’re ready to stop the trace, you can execute the following XMLA command…

[Screenshot: XMLA command to stop the trace]
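Stopping the trace is simply a matter of deleting the trace object. A sketch, assuming the trace was created with the hypothetical ID SSAS_xEvents_Trace:

```xml
<!-- XMLA: delete (stop) the xEvents trace -->
<Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <TraceID>SSAS_xEvents_Trace</TraceID>
  </Object>
</Delete>
```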

Once the trace is stopped you can find the output file in the default Analysis Services log directory…

[Screenshot: trace output file in the default Analysis Services log directory]

Note: if you want the output files to be written out to another directory then you will need to specify the full filepath for the filename parameter in the XMLA command to start the trace. You’ll also need to make sure that the Analysis Services service account has the correct permissions for the directory you specify.

Now that we have the trace output, let’s focus on how to extract information from the trace file.

Analyzing an Extended Events Trace

The trace we ran in the previous section contains tons of useful information.

Query-Related

For queries we can see every query executed on the system including the total duration, query text, start/end time, as well as the name of the user who ran the query. It also contains information about the execution of the query (which may be more than you care to know) such as the amount of time spent in the Formula Engine vs the Storage Engine, number of partition scans, number of aggregation hits, number of cache hits – all of which can be helpful when troubleshooting slow query performance.

Processing-Related

If the trace is running while the SSAS database is being processed we can see total processing duration as well as the processing duration by major object (e.g. cube, measure group, partition, aggregation, dimension, etc) – which can be very helpful when determining where to focus your effort as the cube grows and the time it takes to process begins to approach the end of the processing window.

However, before we can start tapping into all of this wonderful information, we first need to load the data from the trace file into a SQL Server table.

The most common way to do this is by using the sys.fn_xe_file_target_read_file function. The statement below uses this function to read in the trace file, shred the (XML) contents, and write the flattened results into a temp table named #xevent_raw_trace_data.

[Screenshot: T-SQL statement that reads the trace file, shreds the XML, and loads #xevent_raw_trace_data]
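As a sketch of that pattern – the file path, column list, and XML field names below are assumptions (the fields available depend on which events the trace captured):

```sql
-- Read the raw .xel file(s); the last three parameters can be NULL
SELECT object_name AS event_name,
       CONVERT(XML, event_data) AS event_xml
INTO   #xevent_raw_trace_data
FROM   sys.fn_xe_file_target_read_file(
           N'C:\Program Files\Microsoft SQL Server\MSAS13.SQL2016\OLAP\Log\SSAS_xEvents_Trace*.xel',
           NULL, NULL, NULL);

-- Shred a few commonly used fields out of the event XML
SELECT event_name,
       event_xml.value('(/event/data[@name="Duration"]/value)[1]',        'bigint')        AS duration_ms,
       event_xml.value('(/event/data[@name="TextData"]/value)[1]',        'nvarchar(max)') AS query_text,
       event_xml.value('(/event/data[@name="NTUserName"]/value)[1]',      'nvarchar(256)') AS nt_user,
       event_xml.value('(/event/data[@name="ApplicationName"]/value)[1]', 'nvarchar(256)') AS app_name
FROM   #xevent_raw_trace_data;
```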

With the data in a table, we’re ready to start extracting useful information.

Below are a few examples to help get you started. They provide some of the more common types of information you’ll want to know about the query activity happening on the server.

Example 1 – number of queries and average query duration by application and user

This query is useful to see who your heaviest users are, which application they’re using most, and what the user experience is like in terms of query performance. For example, if you see a user with a high query count and a high average query duration (e.g. > 30 seconds), it’s probably worth checking up on them to see what they’re trying to do and if there’s a better way for them to do it.

[Screenshot: query showing number of queries and average query duration by application and user]
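A minimal version of such a query might look like this, assuming the QueryEnd events have been shredded into a table with hypothetical columns app_name, nt_user, and duration_ms:

```sql
SELECT app_name,
       nt_user,
       COUNT(*)                  AS query_count,
       AVG(duration_ms) / 1000.0 AS avg_duration_sec
FROM   #xevent_query_data        -- shredded QueryEnd events (hypothetical table)
GROUP  BY app_name, nt_user
ORDER  BY query_count DESC;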

Example 2 – queries that took longer than 10 seconds to complete

This query will show you the worst performing MDX and/or DAX queries generated by your users. It is one of my favorites, and I recommend clients review the results on a regular basis. Typically, what you’ll see is that a small handful of users (or applications) make up the bulk of the slow queries. Those are the users/applications where you’ll want to focus your time and energy in order to have the biggest impact on overall performance and user experience.

[Screenshot: query listing statements that took longer than 10 seconds to complete]
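A sketch of this filter, using the same hypothetical shredded-events table and column names as before:

```sql
SELECT nt_user,
       app_name,
       duration_ms / 1000.0 AS duration_sec,
       query_text
FROM   #xevent_query_data        -- shredded QueryEnd events (hypothetical table)
WHERE  duration_ms > 10000       -- longer than 10 seconds
ORDER  BY duration_ms DESC;
```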

Example 3 – query execution details

This one starts to get a bit more complicated. You might use this query after you’ve isolated a slow MDX/DAX query (from the previous example) and you want to start digging deeper.

[Screenshot: query execution details query]

Conclusion

Extended events traces are the key to unlocking the mystery of what’s going on inside your Analysis Services server. They are the successor to SQL Profiler and should be part of any decent Analysis Services performance monitoring solution.

It might not be pretty (few things are when it comes to XML), but like it or not, extended events are here to stay. Fortunately, it looks like Microsoft will be adding an extended events GUI in Analysis Services 2016.

Here’s the download link to the XMLA scripts and SQL queries shown in this post to help get you started.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How To Improve Performance by Offloading Backups to a Secondary Replica – Notes from the Field #108

[Notes from Pinal]: If we get one server, we want two servers, and if we get two servers, we want four servers. If we know we are going to get only two servers, we try our best to get the maximum out of the servers available. Maximum utilization of resources is always our primary goal. In this blog post we are going to talk about almost the same story, where we try to get the maximum out of our servers. Let us assume that we have two servers; how do we get maximum performance from them? Well, our generic answer would be that we will keep the most important tasks on our primary server and move all the not-so-important tasks to the secondary server. This is common sense and essential too. This is when I reached out to Eduardo and asked him what we can do to make our primary server faster by offloading backups to a secondary replica.

Linchpin People are database coaches and wellness experts for a data driven world. In this 108th episode of the Notes from the Field series database expert Eduardo Castro (partner at Linchpin People) shares a very interesting conversation related to how to improve the performance of SQL Server by offloading backups to a secondary replica.


Microsoft introduced AlwaysOn in SQL Server 2012 as a way to bring a high availability option to scenarios where the database administrator doesn’t have a SAN.

AlwaysOn is based on the concept of availability groups, which support the grouping of several user databases that can fail over together to another server in case of an interruption on the main server. Each availability group defines partners known as availability replicas. Each replica is part of the availability group and is hosted in a separate instance of SQL Server.

The following picture shows a basic configuration with one primary and two replicas:

[Diagram: a basic configuration with one primary replica and two secondary replicas]

One of the main features of AlwaysOn, besides the high availability scenarios, is the option to have active secondary replicas.

Two common questions I get after my sessions on this topic are:

  1. How to improve OLTP performance in scenarios where there is a lot of reporting being done during peak hours?
  2. How to improve the speed of backups without affecting our main server throughput?

This is where AlwaysOn active secondary replicas come into play. Basically, in AlwaysOn you have the option to use your replicas to distribute load away from the primary server and send backups and read-only operations to one of the replicas.

If you are creating the AlwaysOn availability group for the first time, you can configure the backup priority in the Availability Group Wizard. The following picture shows how you can set it up so that backups are run on the secondary replicas.

[Screenshot: Availability Group Wizard page where backup preferences are configured]

If you have already created your AlwaysOn Availability Group and you haven’t configured where the backups are run, you must alter your group using T-SQL as shown below.

ALTER AVAILABILITY GROUP [MyOLTPAvailablityGroup] MODIFY REPLICA ON N'MyOLTPInstanceA' WITH (BACKUP_PRIORITY = 80)

If you need to automate this task you can create a PowerShell script as shown:

Set-SqlAvailabilityReplica -BackupPriority 80 -Path 'SQLSERVER:\Sql\MyServer\MyInstance\AvailabilityGroups\MyOLTPAvailablityGroup\AvailabilityReplicas\MyOLTPInstanceA'

In case you want to configure a read-only secondary after you have created the Availability Group, you must alter the current configuration to include the read-only routing, as shown below:
ALTER AVAILABILITY GROUP MyOLTPAvailablityGroup
MODIFY REPLICA
ON 'MySQLServerName'
WITH (
SECONDARY_ROLE (
READ_ONLY_ROUTING_URL = 'TCP://address:port' )
)
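Note that the routing URL on its own is not enough for read-intent redirection: the secondary must also allow read-only connections, and the primary role needs a routing list. A sketch using the same placeholder names (the primary replica name below is hypothetical):

```sql
-- Allow read-intent connections on the secondary replica
ALTER AVAILABILITY GROUP MyOLTPAvailablityGroup
MODIFY REPLICA ON 'MySQLServerName'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- On the primary role, define where read-intent connections are routed
ALTER AVAILABILITY GROUP MyOLTPAvailablityGroup
MODIFY REPLICA ON 'MyPrimaryServerName'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('MySQLServerName')));
```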

Once you have run the scripts, you can modify the connection string of the applications that are read-only to include the parameter ApplicationIntent=ReadOnly; this way the availability group will redirect the read-only connections to the proper secondary replica.
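For example, a read-intent connection string might look like the following; the listener name and database are hypothetical:

```
Server=tcp:MyAgListener,1433;Database=MyOLTPDatabase;Integrated Security=SSPI;ApplicationIntent=ReadOnly
```

Read-only routing only kicks in when the application connects through the availability group listener and specifies a database that belongs to the group.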

Conclusion

If you want to leverage the full potential of AlwaysOn you should consider its high availability features, but spending some time configuring the secondary read-only replicas and backups will also help you balance the requests on your systems, optimize resource usage, and speed up your SQL Server.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Create a Readable Secondary Server in SQL Server Standard – Notes from the Field #107

[Notes from Pinal]: The basic nature of humans is greedy. When we get one thing, we desire the next. In the early world of SQL Server we got a secondary server as a backup or for high availability. In earlier times the secondary server was not readable; it just served as a backup. At this point our human nature kicked in and we wanted to get more from the server, which was just sitting there most of the time. We wanted to make our secondary server readable. This is when I reached out to Kenneth and asked him what we can do to make our secondary server readable.

Linchpin People are database coaches and wellness experts for a data driven world. In this 107th episode of the Notes from the Field series database expert Kenneth Urena (partner at Linchpin People) shares a very interesting conversation related to how to create a readable secondary server in SQL Server Standard edition.


AlwaysOn Availability Groups are a great technology to create up-to-date, readable secondaries and distribute read-only load to servers that are not involved in read/write operations. There is one main license requirement, though: your servers need to be running SQL Server Enterprise edition.

So, what happens if you need this functionality, but you are running SQL Server Standard Edition? Transactional Replication is a tool you might want to look to for answers.

This post will demystify some of the misconceptions about transactional replication, and review the considerations and tips you need to successfully configure this technology:

Understanding Transactional Replication

Transactional replication is composed of three main roles: the Publisher, the Distributor, and the Subscriber.

[Diagram: the Publisher, Distributor, and Subscriber roles in transactional replication]

This is how it works: the Publisher tracks which objects of the database (articles) are going to be published, then the Distributor gets the changes from the Publisher and makes them available for the Subscriber to consume.

Transactional Replication Requirements

The physical implementation of this technology requires at least 3 databases:

  • The source database: This is your actual production database, and it holds the publication on the Publisher.
  • The distribution database: This database hosts all the article modifications, per subscriber per database, and retains this information only for a configured timeframe.
  • The destination database: This database is where all the data gets replicated, and where you can potentially redirect the read-only queries. It is hosted on the Subscriber.

Configuring Transactional Replication Properly

In replication, the roles can all be hosted by the same server, but that choice may actually cause a worse problem than the one we are trying to solve (load balancing). That is why you should keep the following tips in mind before configuring transactional replication:

  1. Make sure all of the tables in the source database have primary keys; otherwise those articles can’t be included in the publication.
  2. Since the goal is to take load off the primary server, make sure to configure the distributor role on a different server than the primary. If you don’t have the budget for a dedicated distribution server, you can use the subscriber as the distributor; otherwise you will overload the primary during high traffic.
  3. The configuration can be done as follows:
    [Diagram: suggested configuration with the distributor role on a separate server]
  4. If the subscriber is going to host the distribution database, make sure the secondary server runs the same version (or greater) of SQL Server as the publisher.
  5. If you are planning to host publications from different servers, with multiple databases on each server, on the same distributor server, it is possible to configure one distribution database per publisher server as follows:

USE master
GO
EXEC sp_adddistributor @distributor = N'DESKTOP-QVDC9JE\SQL2016', @password = N'secret'
GO
EXEC sp_adddistributiondb @database = N'distribution', @data_folder = N'C:\Program Files\Microsoft SQL Server\MSSQL13.SQL2016\MSSQL\Data', @log_folder = N'C:\Program Files\Microsoft SQL Server\MSSQL13.SQL2016\MSSQL\Data', @log_file_size = 2, @min_distretention = 0, @max_distretention = 72, @history_retention = 48, @security_mode = 1
GO
USE [distribution]
GO
IF (NOT EXISTS (SELECT * FROM sysobjects WHERE name = 'UIProperties' AND type = 'U'))
CREATE TABLE UIProperties(id INT)
IF (EXISTS (SELECT * FROM ::fn_listextendedproperty('SnapshotFolder', 'user', 'dbo', 'table', 'UIProperties', NULL, NULL)))
EXEC sp_updateextendedproperty N'SnapshotFolder', N'C:\Program Files\Microsoft SQL Server\MSSQL13.SQL2016\MSSQL\ReplData', 'user', dbo, 'table', 'UIProperties'
ELSE
EXEC sp_addextendedproperty N'SnapshotFolder', N'C:\Program Files\Microsoft SQL Server\MSSQL13.SQL2016\MSSQL\ReplData', 'user', dbo, 'table', 'UIProperties'
GO
EXEC sp_adddistpublisher @publisher = N'DESKTOP-QVDC9JE\SQL2016', @distribution_db = N'distribution', @security_mode = 1, @working_directory = N'C:\Program Files\Microsoft SQL Server\MSSQL13.SQL2016\MSSQL\ReplData', @trusted = N'false', @thirdparty_flag = 0, @publisher_type = N'MSSQLSERVER'
GO

  6. If you plan to have multiple subscribers on different servers to the same publication, make sure to configure that publication as pull, so the distribution database doesn’t get overloaded.
  7. If your publication contains most of the tables in the source database, it will be quicker to initialize the subscriber from backups. Just make sure the publication is configured to allow initialization from backups as follows:
  • Expand the Replication folder
  • Expand the Local Publication folder
  • Right click over the Publication
  • Click on Properties
  • Select Subscription Options
  • Make sure Allow Initialization from Backup Files is set to True

[Screenshot: Subscription Options page with Allow Initialization from Backup Files set to True]
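The same setting can be flipped with T-SQL on the publisher via sp_changepublication; the publication name below is a hypothetical placeholder:

```sql
-- Run in the publication (source) database on the publisher
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'allow_initialize_from_backup',
     @value       = 'true';
```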

Readable Secondaries and More Balanced Loads – Voila!

Although it may require a bit more setup, transactional replication provides a satisfactory replacement for Enterprise-level AlwaysOn Availability Groups in the Standard Edition of SQL Server. As long as you take care to configure and maintain the tool properly, this technology should go a long way towards helping you create up-to-date, readable secondaries and distribute load in a workable manner.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Getting Started with Project Versions in the SSIS Catalog – Notes from the Field #106

[Notes from Pinal]: We are human and we make mistakes. However, sometimes mistakes are so big that we can’t reverse them. Version control is our rescue when we make mistakes. In my life I have been fortunate a few times when I deleted something important and version control saved me. Similarly, version control is very important for the SSIS Catalog as well.

Linchpin People are database coaches and wellness experts for a data driven world. In this 106th episode of the Notes from the Field series database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation related to how project versions work in the SSIS catalog.


A handy but frequently overlooked feature of the SSIS catalog is project versioning. When the SSIS catalog is installed on a server, by default the catalog is configured to store a limited number of previous versions of each distinct SSIS project deployed to the catalog. Not only does the catalog keep those versions, but the SSMS interface makes it easy to browse – and even revert to – previous versions.

In this post, I’ll show how you can use the version history feature in the SSIS catalog to access previously deployed versions of a project.

Accessing the Version History

The fact that the version history is available doesn’t really stand out when you’re browsing the SSIS catalog. To find it, you have to right-click the project in question, and in the context menu you’ll see the option to show versions as indicated below.

[Screenshot: project context menu showing the option to view versions]

When I access the version history for my project named Testing SSIS Packages, the following dialog is shown. The version history here indicates that this project has ten different versions stored in the history (the default maximum – more on that momentarily). Also, the most recent version is the current version, as indicated by the check in the Current column on the far left.

[Screenshot: Project Versions dialog showing ten stored versions, with the current version checked]

Although this interface does not show what has changed, it does indicate when each version was deployed. There is also a Last Restored column, indicating if an older version was ever restored. By default, the Last Restored value will be null, until and unless that version is changed from a historical version to the current version.
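If you prefer T-SQL over the dialog, the same history is exposed through the SSISDB catalog view catalog.object_versions. A sketch, using the project name from this example:

```sql
SELECT object_name,
       object_version_lsn,
       created_time,
       restored_time
FROM   SSISDB.catalog.object_versions
WHERE  object_name = N'Testing SSIS Packages'
ORDER  BY object_version_lsn DESC;
```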

Why Restore?

In a moment, I’ll demonstrate how to restore an older version of a project. First, though, I like to start with the why before I show the how. Why might you want to restore an older version of a deployed project? There are a few cases when this might be useful, but one of the most practical is for regression testing. Let’s say you deploy a project to your dev/test server (and you’re always testing before deploying to production, right?). If there is a question about a new version of an SSIS project, you can very easily revert to an older version and compare the old behavior to the new behavior. The other obvious case is simple human error – someone mistakenly deployed an incorrect or not-yet-ready project, which could be easily remedied by rolling back using the following method.

Rolling Back to a Prior Version

Reverting to an older version of a project is very easy. In the Project Versions dialog box, simply select the version you want to use as the current version and click the Restore to Selected Version button. You’ll be prompted to confirm that you want to do this, and if you answer in the affirmative, the selected version will then become the current version. As shown below, you can see that I have restored to a month-old version of the project, as indicated by the Current check box and the Last Restored date.
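The same rollback can be scripted with the catalog.restore_project stored procedure; the folder name below is a hypothetical placeholder:

```sql
EXEC SSISDB.catalog.restore_project
     @folder_name        = N'MyFolder',
     @project_name       = N'Testing SSIS Packages',
     @object_version_lsn = 15;   -- the version to make current
```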

[Screenshot: Project Versions dialog after restoring a month-old version, showing the Current check box and Last Restored date]

You might ask, what happened to the version I just replaced? Don’t worry – it’s still there. This does not do a restore in the same way a relational database is overwritten when it is restored. Rather, a configuration setting stored in the SSIS catalog simply points to the Project LSN value of the current version for each project, which may or may not be the most recently deployed version. When a package in this project is executed, the plumbing within the SSIS catalog will execute the version marked as current.

I can easily undo the change I just made by selecting the most recent version again and going through the restore exercise once more, and we’re back to where we started.

[Screenshot: Project Versions dialog after restoring the most recent version as current]

As shown above, we’re back to where we started – the most recent version is the current version, and that version will be used when package(s) in that project are executed. The only difference is that we can see – by referencing the Deployed Time and Last Restored values – that the prior release version (Project LSN = 15) was temporarily restored, and then replaced as current by the more recent version (Project LSN = 16).

One last note on restoring a project: Because the code deployed to the SSIS catalog is grouped at the project (not package) granularity, it is not possible in current versions of SSIS to restore just a single package. However, in SSIS 2016, package-level deployment will return, which will change the behavior of restoring project versions.

Version Retention

As shown in the previous example, there are exactly ten historical versions of this project stored in the catalog. That is no accident – the maximum number of project versions is set to 10 by default. This is a configurable value, however. If you open the catalog properties window in SSMS, you’ll see that the maximum number of stored versions can be configured.

[Screenshot: Catalog Properties window showing the version retention settings]

There are two configurable properties here: the maximum number of versions to keep, and the Boolean value that enables the periodic removal of old versions. I have rarely found a need to stray from the default settings, but they are available for modification should you need to do so.
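These properties can also be inspected and changed with T-SQL; the value 20 below is just an example:

```sql
-- Inspect the current version-retention settings
SELECT property_name, property_value
FROM   SSISDB.catalog.catalog_properties
WHERE  property_name IN (N'MAX_PROJECT_VERSIONS', N'VERSION_CLEANUP_ENABLED');

-- Raise the cap on retained versions from the default of 10
EXEC SSISDB.catalog.configure_catalog
     @property_name  = N'MAX_PROJECT_VERSIONS',
     @property_value = 20;
```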

This Is Not Source Control!

One final thought on versioning: please don’t use this as a substitute for real source control! Just because SSIS will store the version history doesn’t mean you should use it as a means of source control. The versioning functionality is intended as an administrative tool for logging, testing, and, if necessary, emergency rollback. The SSIS catalog version store does not have most of the features of a full-service source control tool. Therefore, you should be checking your code into a proper source control repository even if you maintain the version history in the SSIS catalog.

Conclusion

The version history feature in the SSIS catalog is one of the best-kept secrets of this product. Although you hopefully won’t have to use this on a daily basis, it is a very handy tool to have at your disposal.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Could Not Register Service Principal Name Persists – Notes from the Field #105

[Note from Pinal]: In this episode of the Notes from the Field series database expert Ryan Adams explains a very critical error users receive when registering service principals. Ryan is one guy who spends more time with real-world issues with SQL Server than anything else. He has mastered the art of resolving complex errors and documenting them so well that one can’t find the like anywhere else. In this blog post Ryan addresses a very interesting error related to Service Principal Names. Read the experience of Ryan in his own words.


Have you ever seen the error below in your SQL Server log shortly after startup?  You’ll actually see two of them and you can see the difference between them in the screen shot, but here is the text.

The SQL Server Network Interface library could not register the Service Principal Name (SPN) [ MSSQLSvc/Node1.stars.com:1433 ] for the SQL Server Service.  Windows return code: 0x2098, state: 15.  Failure to register a SPN might cause integrated authentication to use NTLM instead of Kerberos.  This is an informational message.  Further action is only required if Kerberos authentication is required by authentication policies and if the SPN has not been manually registered.

[Screenshot: the two SPN registration messages in the SQL Server log]

So what causes this error and how can you fix it?  The first thing to note is that it is an informational message and not actually an error.  There are two scenarios in which you will see this message.

  1. The first scenario is what you see right out of the box.  SQL Server does not have the rights on a domain service account to register Service Principal Names (SPNs).  The description we see here is very clear in telling us that SQL Server could not register the required SPNs.  It even tells us the exact SPN syntax it attempted to register.  If you want to use Kerberos you have to register the SPNs manually or give the service account the right to perform the change itself.  If you decided to register them manually, then now is a good time to write down the SPNs from the description.
  2. The second scenario is a weird one that throws people off.  If you choose to manually register the SPNs on the service account and restart SQL Server, you’ll still see the same message in the log.  Now why in the world would this message even show up if you already registered the SPNs?  In fact, many folks will see this message and assume they are not using Kerberos, because the message clearly states that it could not register the SPNs.  The assumption is usually that they got the SPN syntax wrong or that the SPNs never got registered.
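If you go the manual route, the SPNs from the message can be registered with the setspn utility. The domain and service account names below are assumptions based on the example server name:

```
REM List SPNs already registered on the service account
setspn -L STARS\SqlSvcAcct

REM Register the SPN from the log message (-S checks for duplicates first)
setspn -S MSSQLSvc/Node1.stars.com:1433 STARS\SqlSvcAcct
```

SQL Server attempts more than one SPN form (hence the two messages in the log), so check each message for the exact syntax to register.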

Just for kicks, let’s jump back over to my test server and take a look at the current connections.  Most folks will add a WHERE clause to the following query to just look at their current connection, but I’m going to caution you about that.  If you’re on the server itself, you won’t get accurate results because you end up using Named Pipes unless it’s disabled.  We are looking to see if there are any Kerberos connections at all so we don’t want to filter the result set.

SELECT * FROM sys.dm_exec_connections
--WHERE session_id = @@SPID

[Screenshot: sys.dm_exec_connections results showing KERBEROS in the auth_scheme column]

Well that’s an interesting result, huh?  I clearly have Kerberos connections despite the message I keep getting in the SQL Server log.  So why is that?  Well it comes down to the semantics of the message.  The message said it couldn’t register the SPNs and that’s true.  It couldn’t register them because you already did it.  So if you ever see this message, make sure you go look at your connections first (using the above query) to see if it is something you need to address or not.  If you see KERBEROS in the auth_scheme column, then you are all set.
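A quick way to summarize the result set is to group on auth_scheme; any rows reporting KERBEROS confirm that Kerberos is in use:

```sql
SELECT auth_scheme,
       net_transport,
       COUNT(*) AS connection_count
FROM   sys.dm_exec_connections
GROUP  BY auth_scheme, net_transport
ORDER  BY connection_count DESC;
```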

If you want the message to go away completely, there is only one way to do that.  You have to give the account running the SQL Server service the permissions to change its own SPNs.  You can do that by opening the properties of the account and heading to the security tab.  You will find an account in the account list called SELF.  Grant that account “Write to Public Information”, restart the SQL Server service, and the message will disappear.  Now you’ll see a new message stating that SQL Server was able to successfully register the required SPNs.

If you are looking for more information about Kerberos and SQL Server, you can visit my blog for more articles.  This is one of the many checks we perform during our WellDBA Exams here at Linchpin People.  If you’re interested in having us take a look at your systems and perform a WellDBA Exam, we would be happy to work with you.  You can find more information here about the Linchpin WellDBA Exams.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)