SQL SERVER – Fixing Annoying Network Binding Order Error – Notes from the Field #112

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Ryan Adams explains a very common error users receive about the network binding order. Ryan is one guy who spends more time on real-world issues with SQL Server than anything else. He has mastered the art of resolving complex errors and documenting them in a way you can't find anywhere else. In this blog post Ryan addresses a very interesting error related to the network binding order. Read the experience of Ryan in his own words.


One of the more common errors I see when setting up a Windows cluster (usually in preparation for a SQL Server Failover Cluster Instance or a SQL Server Availability Group) is regarding an incorrect network binding order.  You will see this presented as a warning in the Cluster Validation Report.  There are actually two different errors that I have seen with the first being very common and the second being more rare.

Error 1

Rule “Network binding order” generated a warning.

The domain network is not the first bound network.  This will cause domain operations to run slowly and can cause timeouts that result in failures.  Use the Windows network advanced configuration to change the binding order.

What this error means is that the network card used to connect to your domain network is not at the top of the network binding order.  The fix for this is usually pretty easy because we just need to go into Control Panel…Network and Internet…Network Connections and make the change.  Once we get there we need to identify which NIC is connected to the domain network, which can be seen in the "Network Category" column shown in the screen shot below.  You'll notice that I have labeled my connections Public and Private and they both show "Domain Network".  If you are configuring a multi-subnet cluster you will see the exact same thing, but if your cluster is on a single subnet the Private network connection will show "Public Network".  So in a single-subnet cluster it's the connection labeled "Domain Network" that you are targeting, and in a multi-subnet cluster it's your Public connection.


In order to change the network binding order we need to go into the advanced settings.  Starting in Windows 2008 this option is hidden; if you don't see it, hit ALT on your keyboard and the Advanced menu pointed out in the previous screen shot will appear.  Select that and then go to Advanced Settings.  You will now be presented with the Advanced Settings box shown below.  In the screen shot you will see that my Public network is second in the binding order, and we need to move it to the top by selecting it and hitting the up arrow.  Click OK and run Cluster Validation again to see if the warning is resolved.


There is a chance that either your domain network was already at the top of the binding order, or you ran Cluster Validation again and it failed with the same error.  If that is the case then you either have a ghost network card (which can be caused by NIC or driver changes) or the Microsoft Failover Cluster Virtual Adapter is bound before the domain network.  These adapters are not shown in the GUI, but can be found in the registry and in other places such as the output of the ipconfig /all command.

Unfortunately the network binding order in the registry uses GUIDs instead of friendly names, so we’ll have to do some translating to find and move the domain network to the top.  The first thing we will do is go figure out what the GUID of the domain network NIC is by running the following command from a command prompt.

WMIC Nicconfig Get Description, SettingID

You’ll remember that I renamed my NIC cards to be called Public and Private, but that’s the friendly name and not what will be returned from WMIC.  WMIC returns what is in the “Device Name” column from the very first screen shot above.  In my case it is called “Intel(R) PRO/1000 MT Desktop Adapter”.  You can see this pointed out in the screen shot below where we can see the output of WMIC in the command window.  Note that the GUID starts with A7.
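If you have more than a couple of NICs, it can be handy to script this lookup instead of eyeballing the WMIC output. Here is a minimal Python sketch that parses output shaped like WMIC's; the adapter description is taken from the example above, but the GUID values are invented placeholders, not real adapter IDs.

```python
# Sketch: map each adapter description to its SettingID (GUID) from
# "WMIC Nicconfig Get Description, SettingID" style output.
# The sample text below is invented for illustration.
wmic_output = """Description                                 SettingID
Intel(R) PRO/1000 MT Desktop Adapter        {A7C53FD0-1111-2222-3333-444455556666}
Microsoft Failover Cluster Virtual Adapter  {B1D20000-AAAA-BBBB-CCCC-DDDDEEEEFFFF}
"""

def parse_wmic(text):
    adapters = {}
    for line in text.splitlines()[1:]:   # skip the header row
        line = line.strip()
        if not line:
            continue
        # The GUID is the last whitespace-separated token; the rest is the name.
        name, _, guid = line.rpartition(" ")
        adapters[name.strip()] = guid
    return adapters

adapters = parse_wmic(wmic_output)
print(adapters["Intel(R) PRO/1000 MT Desktop Adapter"])
```

On a real server you would feed this the actual WMIC output, then note the GUID of the domain-facing adapter for the registry step that follows.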


Now we just need to open Regedit and head to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage\Bind.  That's right, we're getting down and dirty now, so make sure you know what you are doing and, above all else, back up your registry before you make any changes.  Mistakes in the registry can be costly and destroy a system, so proceed with caution because from here on out the responsibility lies solely with you…not me.  In the screen shot above you can see that my Public Domain Network is next to last in the list and we need it to be at the top.  As an aside, I have also pointed out where the Microsoft Failover Cluster Virtual Adapter is located since I see this listed above the Public network from time to time.

The fix here is to cut the GUID for the Public Domain Network that starts with A7 and paste it at the top of the list.  Now we can go run Cluster Validation and life should be good unless you get the second error we’ll talk about now.
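The Bind value is a REG_MULTI_SZ, in other words an ordered list of \Device\{GUID} strings, and the cut-and-paste step above amounts to moving one entry to the front of that list. The sketch below shows the list manipulation in Python with invented placeholder GUIDs; on a real server you would read and write the value with Regedit, reg.exe, or PowerShell, after backing up the registry.

```python
# Sketch: move the domain network's entry to the top of a bind order list.
# These GUIDs are invented placeholders, not real adapter IDs.
bind_order = [
    r"\Device\{11111111-0000-0000-0000-000000000001}",  # cluster virtual adapter
    r"\Device\{A7C53FD0-1111-2222-3333-444455556666}",  # Public (domain) network
    r"\Device\{22222222-0000-0000-0000-000000000002}",  # Private network
]

def move_to_top(entries, guid):
    """Return a new list with the entry containing `guid` first."""
    target = next(e for e in entries if guid in e)
    return [target] + [e for e in entries if e is not target]

new_order = move_to_top(bind_order, "{A7C53FD0-1111-2222-3333-444455556666}")
print(new_order[0])
```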

Error 2

Note that the error message is the same one you got above; however, it's a completely different issue.  So let's say you verified the above and the domain network is first in the list, but the error persists.  Look in the following file and search for "IsDomainInCorrectBindOrder" to find the warning in the log file.

C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\"Newest Log File Folder By Date"\Detail.txt

Here is a sample of the section you are looking for.

Init rule target object: Microsoft.SqlServer.Configuration.SetupExtension.NetworkBindingFacet
NetworkBindingFacet: Looking up network binding order.
NetworkBindingFacet: Network: 'Production Team' Device: '\Device\{0BF4D354-E6E9-480C-91CF-DC598282C4C1}' Domain: 'LITWARE.COM' Adapter Id: '{0BF4D354-E6E9-480C-91CF-DC598282C4C1}'
NetworkBindingFacet: Network: 'Local Area Connection' Device: '\Device\{4DB91193-72F1-4713-A938-EB73F27CFEC8}' Domain: '' Adapter Id: '{4DB91193-72F1-4713-A938-EB73F27CFEC8}'
NetworkBindingFacet: Network: 'Heart Beat' Device: '\Device\{5AC63784-8088-40F7-93C8-37F9CD03D445}' Domain: '' Adapter Id: '{5AC63784-8088-40F7-93C8-37F9CD03D445}'
NetworkBindingFacet: Network: 'BackUp Network' Device: '\Device\{52AEDCB0-9E8E-4243-9D5D-ED86E602DF23}' Domain: '' Adapter Id: '{52AEDCB0-9E8E-4243-9D5D-ED86E602DF23}'
IsDomainInCorrectBindOrder: The top network interface 'Production Team' is bound to domain 'LITWARE.COM' and the current domain is 'CONTOSO.COM'.
Evaluating rule : IsDomainNetworkTopOfBindings
Rule running on machine: Server1
Rule evaluation done : Warning
Rule evaluation message: The domain network is not the first bound network. This will cause domain operations to run slowly and can cause timeouts that result in failures. Use the Windows network advanced configuration to change the binding order.
Send result to channel: RulesEngineNotificationChannel

The issue here is that the server is joined to the LITWARE.COM domain, but the current domain is that of the currently logged in user, which happens to be CONTOSO.COM.  Another way to say this is that the server is joined to the LITWARE.COM domain, but you logged in with a user account from the CONTOSO.COM domain to create the cluster.  From a domain perspective these are completely different domains that trust each other, and it's possible they are even in different forests, but again they are trusted.  Technically this configuration is correct, as the public/domain-joined network is indeed at the top of the list.  You have two choices here: you can safely ignore this warning, or you can log out and back in with a user from the LITWARE.COM domain.
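If you would rather script the check than eyeball Detail.txt, the search described above is easy to automate. Here is a minimal Python sketch; the log line is the sample from above, and on a real server you would read it from the Detail.txt file instead.

```python
# Sketch: find the IsDomainInCorrectBindOrder line in a setup Detail.txt
# and pull out the bound domain vs. the current (logon) domain.
import re

sample_log = (
    "IsDomainInCorrectBindOrder: The top network interface 'Production Team' "
    "is bound to domain 'LITWARE.COM' and the current domain is 'CONTOSO.COM'."
)

match = re.search(
    r"IsDomainInCorrectBindOrder:.*bound to domain '([^']+)'"
    r".*current domain is '([^']+)'",
    sample_log,
)
bound_domain, current_domain = match.groups()
if bound_domain != current_domain:
    print(f"Logon domain {current_domain} differs from machine domain {bound_domain}")
```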

If you are looking for more information around Clustering, AlwaysOn Failover Clusters, or AlwaysOn Availability Groups you can visit my blog at http://www.ryanjadams.com/category/sql-server/alwayson-ag/ for more articles.  Setting up clusters to support SQL Server is complicated, but can yield great benefits if done correctly.  If you’re setting up a new cluster or interested in having us take a look at your current systems we would be happy to work with you.  You can find more information on the services we provide and contact us on our website http://www.linchpinpeople.com/.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #055 – How to Convert ASCII to DECIMAL or DECIMAL to ASCII?

Question: How do you convert ASCII to DECIMAL or DECIMAL to ASCII?

Answer: 

ASCII – Returns the ASCII code value of the leftmost character of a character expression.

CHAR – Returns the character corresponding to an integer ASCII code value.

Examples:
--Decimal to ASCII
SELECT CHAR(80)+CHAR(73)+CHAR(78)+CHAR(65)+CHAR(76) AS SQLAuthorityAuthor
GO

--ASCII to Decimal
SELECT ASCII('P') AS SQLAuthorityAuthor
UNION ALL
SELECT ASCII('I')
UNION ALL
SELECT ASCII('N')
UNION ALL
SELECT ASCII('A')
UNION ALL
SELECT ASCII('L')
GO
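For comparison, the same round trip outside T-SQL: in Python, chr plays the role of CHAR and ord the role of ASCII for the 0-127 range.

```python
# chr() mirrors T-SQL's CHAR() and ord() mirrors ASCII() for codes 0-127.
codes = [80, 73, 78, 65, 76]
word = "".join(chr(c) for c in codes)    # decimal -> characters
back = [ord(ch) for ch in word]          # characters -> decimal
print(word, back)   # the codes spell out the author's name
```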

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Live Query Statistics in 2016 … and More! – Notes from the Field #111

[Note from Pinal]: In this episode of the Notes from the Field series database expert Mike Lawell explains Live Query Statistics in SQL Server 2016. In earlier versions of SQL Server we could only see the actual execution plan after a query finished executing. With SQL Server 2016 the scenario has changed: you can watch the plan's operators while the query is still running. Here is a very interesting new feature of SQL Server 2016 which can help every performance tuning expert. Read the experience of Mike in his own words.



Welcome to 2016! This is going to be an exciting year for SQL Server professionals with the upcoming release of SQL Server 2016. With this in mind I’m going to talk a little bit about one of the cool new features in SQL Server 2016.

Many of you have had the opportunity to see me talk about one of my favorite topics, Execution Plans. I had the privilege to talk about Execution Plans at PASS Summit 2015, and many SQL Saturdays across the US. You can find my presentation here on my blog site at SQL Server Associates.

Keeping in step with my love of Execution Plans, Live Query Statistics is one of the new features in SQL Server 2016 that gives me a smile.

Live Query Statistics? What’s the benefit in that? I’ve already heard it from some senior DBAs I know. Have you ever had someone ask you to performance tune a query that never completes? It just takes forever and you know you’re not about to wait for it to complete before you can see the actual execution plan? Well, here’s your chance to look at the operators in progress and as they complete.

Live Query Stats even gives you the ability to see a percentage of completion as the operator is in progress. More about that later.

Here is how you enable Live Query Statistics from Management Studio. Select Query > Include Live Query Statistics.


Voila, there you have it. Now it is as simple as running your query.

This screenshot shows the query in process. Notice the first operator “Clustered Index Scan” has completed (100%). The other operators are at 8% or 0%.


Now here is the best part: hover over the operator that is currently in process, the one you think is the issue. Even though it is "in process" you can see the details about the operator. Notice the Actual Number of Rows information? There is a lot of information here to give you a start on identifying where and what the issue might be with the query.


Once the query has completed you get an execution plan that looks like the screenshot below.


And there you go. So simple my mom could take advantage of it. Of course mom would be saying, “Michael Mark, you had better not be lying to me. It can’t be that easy!”

Really mom, it’s that simple and it is going to help you performance tune like a pro…oh, wait, sorry mom, performance tuning is a little more complicated.

Wait, but there’s more…. In what versions does this work? Well, of course SQL Server 2016…duh… but wait there’s more!


Yep, that’s right! I just ran Live Query Statistics against a SQL Server 2014 server. Isn’t that awesome?

I’ve been able to use this new feature with several of the Linchpin People clients (even though SQL Server 2016 isn’t available yet) because it works on SQL Server 2014 installations. It just doesn’t get any better than that.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #053 – What is the Difference Between Deterministic Functions and Nondeterministic Functions?

Some questions are so theoretical that I believe knowing the answer does not add much practical value. Here is one such question, and I am very confident you will agree with my point of view.

Question: What is the Difference Between Deterministic Functions and Nondeterministic Functions?

Answer: 

Deterministic functions always return the same result any time they are called with the same input values, e.g. ABS, DATEDIFF, ISNULL.

Nondeterministic functions may return different results each time they are executed, e.g. NEWID, RAND, @@CPU_BUSY. Functions that call extended stored procedures are nondeterministic, and user-defined functions that create side effects on the database are not recommended.
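The distinction is easy to see outside SQL Server too. In this Python sketch, abs behaves like a deterministic function, while random.random behaves like NEWID() or RAND() with no seed:

```python
import random

# Deterministic: same input, same output, on every call.
assert abs(-42) == abs(-42) == 42

# Nondeterministic: repeated calls with the same (empty) input can differ,
# much like NEWID() or unseeded RAND() in T-SQL.
a, b = random.random(), random.random()
print(a == b)   # almost certainly False
```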

Now you have read the answer – I have a question back to you.

Did you know the difference between deterministic and nondeterministic functions before this blog post? If not, has it ever impacted performance in your daily job?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Could Not Register Service Principal Name Persists – Notes from the Field #105

[Note from Pinal]: In this episode of the Notes from the Field series database expert Ryan Adams explains a very common message users receive when SQL Server cannot register its Service Principal Name. Ryan is one guy who spends more time on real-world issues with SQL Server than anything else. He has mastered the art of resolving complex errors and documenting them in a way you can't find anywhere else. In this blog post Ryan addresses a very interesting message related to Service Principal Names. Read the experience of Ryan in his own words.


Have you ever seen the error below in your SQL Server log shortly after startup?  You’ll actually see two of them and you can see the difference between them in the screen shot, but here is the text.

The SQL Server Network Interface library could not register the Service Principal Name (SPN) [ MSSQLSvc/Node1.stars.com:1433 ] for the SQL Server Service.  Windows return code: 0x2098, state: 15.  Failure to register a SPN might cause integrated authentication to use NTLM instead of Kerberos.  This is an informational message.  Further action is only required if Kerberos authentication is required by authentication policies and if the SPN has not been manually registered.


So what causes this error and how can you fix it?  The first thing to note is that it is an informational message and not actually an error.  There are two scenarios in which you will see this message.

  1. The first scenario is what you see right out of the box.  SQL Server does not have the rights on a domain service account to register Service Principal Names (SPNs).  The description we see here is very clear in telling us that SQL Server could not register the required SPNs.  It even tells us the exact SPN syntax it attempted to register.  If you want to use Kerberos you have to register the SPNs manually or give the service account the right to perform the change itself.  If you decided to register them manually, then now is a good time to write down the SPNs from the description.
  2. The second scenario is a weird one that throws people off.  If you choose to manually register the SPNs on the service account and restart SQL Server, you’ll still see the same message in the log.  Now why in the world would this message even show up if you already registered the SPNs?  In fact, many folks will see this message and assume they are not using Kerberos, because the message clearly states that it could not register the SPNs.  The assumption is usually that they got the SPN syntax wrong or that the SPNs never got registered.

Just for kicks, let’s jump back over to my test server and take a look at the current connections.  Most folks will add a WHERE clause to the following query to just look at their current connection, but I’m going to caution you about that.  If you’re on the server itself, you won’t get accurate results because you end up using Named Pipes unless it’s disabled.  We are looking to see if there are any Kerberos connections at all so we don’t want to filter the result set.

SELECT * FROM sys.dm_exec_connections
--WHERE session_id = @@SPID


Well that’s an interesting result, huh?  I clearly have Kerberos connections despite the message I keep getting in the SQL Server log.  So why is that?  Well it comes down to the semantics of the message.  The message said it couldn’t register the SPNs and that’s true.  It couldn’t register them because you already did it.  So if you ever see this message, make sure you go look at your connections first (using the above query) to see if it is something you need to address or not.  If you see KERBEROS in the auth_scheme column, then you are all set.

If you want the message to go away completely, there is only one way to do that.  You have to give the account running the SQL Server service the permissions to change its own SPNs.  You can do that by opening the properties of the account and heading to the security tab.  You will find an account in the account list called SELF.  Grant that account “Write to Public Information”, restart the SQL Server service, and the message will disappear.  Now you’ll see a new message stating that SQL Server was able to successfully register the required SPNs.

If you are looking for more information about Kerberos and SQL Server you can visit my blog for more articles.  This is one of the many checks we perform during our WellDBA exams here at Linchpin People.  If you're interested in having us take a look at your systems and perform a WellDBA exam, we would be happy to work with you.  You can find more information here about the Linchpin WellDBA exams.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Availability Group and the Six Critical Steps for DBAs to Follow – Notes from the Field #104


Mike’s, “now what am I supposed to do?”, face.

[Note from Pinal]: In this episode of the Notes from the Field series database expert Mike Lawell explains Availability Groups and the six critical steps for DBAs to follow.  A few weeks ago at a conference, I asked how many people knew about availability groups. Only a handful of people raised their hands. I was expecting this, as many have heard about AlwaysOn and Availability Groups but not everyone has implemented them, and most of the time there are no clear guidelines for doing so. In this blog post Mike tells an amazing story about the six critical steps every DBA must know about availability groups. He makes this concept very easy.  Read the experience of Mike in his own words.


Your boss has asked you to implement SQL Server AlwaysOn Availability Groups in your environment. Exciting as it may, you’re an accidental DBA with little to no experience implementing any high availability solution, let alone, a fairly new availability group.

The first thing you say is, fantastic! Then, hmmm, fantastic. You realize this is above your head, but it is a great opportunity to learn.

This happened to one of my clients, where the DBA barely had SQL Server Database Mirroring experience. The product had a required up time of 99.999% (yeah, right), that they were having problems maintaining. They wanted to improve their availability and business continuity.

The DBA had a lot of anxiety with implementing AGs in this environment without experiencing considerable downtime.

Fortunately they were able to engage our services to lead them through the process. We migrated them from mirroring to AGs, onto a new set of four servers in a multi-subnet environment, with approximately a minute of downtime.

1. Research

There are many environments where AGs will simply not work (or have a hidden disaster waiting). For instance, you should not use availability groups in SQL Server 2012 or 2014, if you’re doing cross database transactions or distributed transactions. If you have a failover during a distributed transaction, it is possible that your database will enter a suspect mode (btw, this changes in SQL Server 2016).

How many servers and replicas will you need? Which databases do you want in which AG? What might the licensing costs be? What should/can your quorum mode be? Do your applications even support AGs?

2. Plan

Develop the high level plan to the best of your knowledge. Don’t worry about being wrong, you will miss something. Take the time to think of every application or service, etc. that will connect to the AG. Document all of the jobs, alerts, mail configuration, operators, everything you can think of (find a script to do it).

3. Test the AG Build

Now that you have some idea of what is possible and you think you know what needs to be done, build an AG as a proof of concept in an isolated environment to test your theory of how it should work. Make sure you use the same drive letters as production. You will learn that you will miss important configurations, steps, etc. Tear the AG down and rebuild it until you are familiar with the steps, and write down every step you take. Do this until you're confident you have all of the steps assembled and you haven't missed anything. You will miss something, because most database environments are complex, and many DBAs don't know all of the services connecting to their databases.

4. Script the Implementation

This part really helps in minimizing downtime. You don't want to be clicking through SSMS trying to get all of the configurations correct in production and risk additional downtime at failover or worse.

I built the AGs ahead of time in the production environment and added a test database so I could test the failover between machines before go live.

The scripts were PowerShell scripts pulling the configuration from a CSV file. I learned how to script PowerShell from the internet. Make sure your configuration file is tested. I created a spreadsheet with the important data (generated from a script run against production): database name, data file location, and more. Don't type the configurations by hand if you can avoid it; this is where you get into trouble, especially in complex environments. I have some examples of the PowerShell AG and SQLCMD log shipping scripts I used on my blog at SQLServerAssociates.com.
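The idea of driving the build from a tested configuration file looks roughly like this sketch. Mike's actual scripts were PowerShell; this is a Python illustration, and the column names and rows are invented for the example.

```python
# Sketch: read an AG build configuration from a CSV instead of typing values.
# Column names and rows below are invented for illustration.
import csv
import io

config_csv = io.StringIO(
    "database,data_file_path,replica\n"
    "SalesDB,E:\\SQLData\\SalesDB.mdf,Server1\n"
    "SalesDB,E:\\SQLData\\SalesDB.mdf,Server2\n"
)

rows = list(csv.DictReader(config_csv))
for row in rows:
    # Each row would drive one scripted step (restore, join to AG, etc.).
    print(f"Add {row['database']} on {row['replica']} at {row['data_file_path']}")
```

The point of the pattern is that the configuration lives in one reviewed, tested file rather than in whatever you happen to type during the maintenance window.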

5. Test the Scripts

This is the most important step as even the best plan must be tested. It must work repeatedly in your development environment without errors. Don’t compromise with this step. If you want to avoid extended downtime because you received an error that you’ll have to research to fix, test your scripts.

6. Know your Environment

You must be able to talk confidently about AGs and defend your configuration to management. You will need to document your environment for management, operations, and other DBAs (including yourself). Use Visio diagrams for visuals.

Summary

Migrating your environment to AGs can be a great learning experience, but don't be afraid to engage a consultant to help you do it right. The justification is usually easy to make to management when you equate downtime to lost revenue.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – T-SQL Window Function Framing and Performance – Notes from the Field #103

[Note from Pinal]: In this episode of the Notes from the Field series database expert Kathi Kellenberger explains T-SQL window function framing and performance. Kathi is an amazing instructor; she was the SQL author I read in my early career, and her writing instilled my love for this technology. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.


In my last post to SQL Authority, I explained how to add frames to T-SQL window functions where it is supported. I broke down the syntax and showed where things can go wrong if you rely on the default frame instead of explicitly specifying the frame. Be sure to read that article if you are new to frames or just need a refresher.

When creating a running total, for example, it's very easy to just leave out the frame, and, in many cases, the results will be what you want. There is another downside to using the default frame, however: a pretty big performance penalty. The default frame, when one is not specified, is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. As long as the ORDER BY expression in the OVER clause produces unique values, you will get a valid running total.

In order to perform the window function calculation, such as a running total, SQL Server creates a worktable and populates it with each partition. When RANGE is used, that worktable is always created in tempdb. When ROWS is used instead by specifying the frame, the worktable is created in memory most of the time. The worktable created in memory has no I/O, locking, or contention issues, so it performs much better. Unfortunately, when you look at the graphical execution plan, you won’t see much of a difference between ROWS and RANGE. You can see the difference, however, if you look at STATISTICS IO.

To see the difference for yourself, run this script against the AdventureWorks database. Be sure to enable the Actual Execution Plan before running it as well.

SET STATISTICS IO ON;
GO
PRINT 'Default frame';
SELECT CustomerID, SalesOrderID, TotalDue,
    SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID) AS RunningTotal
FROM Sales.SalesOrderHeader;

PRINT 'ROWS frame';
SELECT CustomerID, SalesOrderID, TotalDue,
    SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID
        ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM Sales.SalesOrderHeader;

First, take a look at the execution plans as shown in the Figure 1. The two queries did not produce identical plans, but the relative cost for each is 50%. So, at least the execution plans report that the queries perform the same.


Figure 1: The graphical execution plans

Now take a look at the messages tab. Here you will see a big difference as shown in Figure 2. The query using the default frame reported a very high number of logical reads from a worktable. The second query, which uses ROWS, reports 0 reads for the worktable. If you turn on Statistics Time, you will also see that the second query runs faster.


Figure 2: The statistics IO

It is often difficult to show performance differences with AdventureWorks tables. They are just too small. Adam Machanic, a SQL Server MVP, published a script that creates a couple of large tables based on AdventureWorks data. One of those tables, bigTransactionHistory, contains 30 million rows. I used bigTransactionHistory to do some performance testing to see if there really was a time difference that could be measured between ROWS and RANGE. For the test, I created a couple of smaller tables, subsets of Adam’s table, which contained 15 and 7.5 million rows respectively. I turned off the grid results so that only the server time would be measured. In addition to comparing ROWS and RANGE, I also tested two traditional methods: using a self-join subquery and a cursor. You can see the results of my tests in Figure 3.


Figure 3: The results of testing running totals

Using an accumulating window aggregate function (WF) outperformed the traditional techniques even with the default frame. You can see, however, that specifying a ROWS frame was incredibly fast.

Whenever frames are supported (FIRST_VALUE, LAST_VALUE and accumulating window aggregates), make sure that you always specify the frame. Not only can you avoid some logic problems caused by RANGE frames, you will almost always see a performance boost by using ROWS. Remember: Use ROWS. Use ROWS. And use ROWS.

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is T-SQL Window Function Framing? – Notes from the Field #102

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains T-SQL window function framing. Kathi is an amazing instructor; she was one of the first SQL authors I read early in my career, and her writing instilled my love for this technology. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.


One of the best kept secrets of T-SQL window functions, also called windowing or windowed functions, is framing. Framing was introduced with SQL Server 2012, and it allows the set of rows, or window, to be defined very granularly. For example, when calculating a running total, each row needs to see an increasing number of values to perform the calculation. Row 1 needs to see just row 1. Row 2 needs to see rows 1 and 2. Row 3 needs to see rows 1, 2, and 3. Framing allows each row to have a distinct set of rows for the window function calculation.

NOTE: In my last SQL Authority post, I explained the OVER clause. Be sure to review that article if the OVER clause is new to you.

Framing is used in a very specific set of circumstances: with accumulating window aggregates and with FIRST_VALUE and LAST_VALUE. Accumulating window aggregates are the aggregate functions you use every day along with an OVER clause containing an ORDER BY. You can calculate a running total, for example. FIRST_VALUE lets you pull in any column from the first row of the frame. LAST_VALUE lets you pull in any column from the final row of the frame. By default, if you don’t specify the frame in the OVER clause when it is supported, it will consist of the rows starting with row 1 of the partition and continue to the current row. Often, the default frame will give you the results you are looking for. By explicitly specifying the frame, you can achieve even more functionality, avoid some logical errors, and get better performance.
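To make those three situations concrete, here is a minimal sketch against AdventureWorks' Sales.SalesOrderHeader (the same table used in the example later in this post):

```sql
-- Framing applies to accumulating aggregates, FIRST_VALUE, and LAST_VALUE.
SELECT CustomerID, SalesOrderID, TotalDue,
       -- Accumulating aggregate: a running total per customer
       SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID
                          ROWS UNBOUNDED PRECEDING) AS RunningTotal,
       -- FIRST_VALUE: any column from the first row of the frame
       FIRST_VALUE(SalesOrderID) OVER(PARTITION BY CustomerID
                                      ORDER BY SalesOrderID) AS FirstOrder,
       -- LAST_VALUE: any column from the final row of the frame; the frame
       -- must reach the end of the partition to return the true last row
       LAST_VALUE(SalesOrderID) OVER(PARTITION BY CustomerID
                                     ORDER BY SalesOrderID
                                     ROWS BETWEEN CURRENT ROW
                                          AND UNBOUNDED FOLLOWING) AS LastOrder
FROM Sales.SalesOrderHeader;
```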

Framing has several keywords you should learn; Table 1 lists each one.

ROWS: A positional operator used to define the frame.
RANGE: A logical operator used to define the frame. It is not fully implemented as of SQL Server 2014, and it is the default frame type.
CURRENT ROW: The row where the result of the calculation is returned; it is also used as a boundary.
UNBOUNDED PRECEDING: The first row of the partition, often used as the lower boundary.
UNBOUNDED FOLLOWING: The final row of the partition, often used as the upper boundary.
# PRECEDING: A number of rows before the current row, often used as the lower boundary. Only supported with ROWS.
# FOLLOWING: A number of rows following the current row, often used as the upper boundary. Only supported with ROWS.
BETWEEN: Used to connect the two boundaries.

Table 1: Framing terms

Here is the default frame, when one isn’t specified:

RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW

This means that the rows in the window will consist of the first row of the partition and continue to the current row. This works for calculating a running total as long as the ORDER BY columns in the OVER clause are unique. If they are not, because RANGE evaluates the data logically, you can get results that don't make sense.

When I speak on this topic, I tell my audience they need to learn three things: Use ROWS. Use ROWS. And use ROWS. Instead of relying on the default frame, to calculate a running total, use this frame:

ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW

If that is too much to type, you can also abbreviate it like this:

ROWS UNBOUNDED PRECEDING

If you would like to calculate a three month moving average, for example, use this frame:

ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
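Dropped into a full query, a three-month moving average might look like the sketch below. The MonthlySales table (with SalesMonth and SalesAmount columns, one row per month) is hypothetical, not part of AdventureWorks:

```sql
-- Each row averages its own month and the two preceding months.
SELECT SalesMonth, SalesAmount,
       AVG(SalesAmount) OVER(ORDER BY SalesMonth
                             ROWS BETWEEN 2 PRECEDING
                                  AND CURRENT ROW) AS ThreeMonthMovingAvg
FROM MonthlySales
ORDER BY SalesMonth;
```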

I’ll save the performance differences for another post, but I would like to show you some logical problems found by leaving out the frame. This example runs in the AdventureWorks database. The ORDER BY column is not unique, and I have chosen a customer with multiple orders on the same date.

SELECT CustomerID, SalesOrderID, CAST(OrderDate AS DATE) AS OrderDate, TotalDue,
       SUM(TotalDue) OVER(ORDER BY OrderDate
                          ROWS UNBOUNDED PRECEDING) AS RunningTotal,
       SUM(TotalDue) OVER(ORDER BY OrderDate) AS DefFrameRunningTotal,
       FIRST_VALUE(SalesOrderID) OVER(ORDER BY OrderDate) AS FirstOrder,
       LAST_VALUE(SalesOrderID) OVER(ORDER BY OrderDate
                                     ROWS BETWEEN CURRENT ROW
                                          AND UNBOUNDED FOLLOWING) AS LastOrder,
       LAST_VALUE(SalesOrderID) OVER(ORDER BY OrderDate) AS DefFrameLastOrder
FROM Sales.SalesOrderHeader
WHERE CustomerID = 29586
ORDER BY OrderDate;

Here are the results:


Take a look at the RunningTotal column. The value increases until you get to the final row. The correct frame is specified in the OVER clause. Now take a look at the values in DefFrameRunningTotal. Notice that, instead of continually increasing, it repeats values. This problem is caused by the default frame, which uses RANGE and therefore looks at the value of OrderDate instead of just the position of the row. If the OrderDate values are the same, the rows belong together in the same window when RANGE is used. You can solve this problem by using ROWS or by making sure that the ORDER BY expression in the OVER clause is unique. (Use ROWS. Use ROWS. And use ROWS.)
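The second fix, making the ORDER BY expression unique, can be sketched by adding a tiebreaker column to the same query:

```sql
-- Adding SalesOrderID as a tiebreaker makes the ORDER BY unique, so even
-- the default RANGE frame produces a strictly increasing running total.
SELECT CustomerID, SalesOrderID, TotalDue,
       SUM(TotalDue) OVER(ORDER BY OrderDate, SalesOrderID) AS RunningTotal
FROM Sales.SalesOrderHeader
WHERE CustomerID = 29586
ORDER BY OrderDate, SalesOrderID;
```

Even so, the ROWS frame remains the better habit, since it also performs better.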

Take a look at the FirstOrder column. In each case, the value returned came from the first row of the partition. Even without specifying the frame, it works correctly. The LastOrder column looks correct as well. Notice that I specified a frame with ROWS that starts with the current row and goes up to the end of the partition. When leaving out the frame as shown in DefFrameLastOrder, the results don’t make much sense at all. Remember, the default frame only goes up to the current row. But since RANGE is a logical operator, it actually will go to a row with the same ORDER BY value as the current row. By default, the frame doesn’t go all the way to the end of the partition where the last value can be found. To correctly use LAST_VALUE, you must specify the frame, and you should use ROWS.

Table 2 lists commonly used frames:

ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW: Start at row 1 of the partition and include rows up to the current row.
ROWS UNBOUNDED PRECEDING: Shorthand for the frame above; start at row 1 of the partition and include rows up to the current row.
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING: Start at the current row and include rows up to the end of the partition.
ROWS BETWEEN N PRECEDING AND CURRENT ROW: Start a specified number of rows before the current row and include rows up to the current row.
ROWS BETWEEN CURRENT ROW AND N FOLLOWING: Start at the current row and include rows up to a specified number of rows following the current row.
ROWS BETWEEN N PRECEDING AND N FOLLOWING: Start a specified number of rows before the current row and include rows up to a specified number of rows following it. Yes, the current row is also included!

Table 2: Commonly used framing syntax

Specifying the correct frame is really important to ensure that you achieve the expected results. There are also some important performance differences, which I'll talk about in my next SQL Authority post.

If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is the OVER Clause? – Notes from the Field #101

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains the T-SQL OVER clause. Kathi is an amazing instructor; she was one of the first SQL authors I read early in my career, and her writing instilled my love for this technology. Today she brings a relatively interesting topic for database experts. Read the experience of Kathi in her own words.


The OVER clause was added to SQL Server in 2005. Yes, it’s been around for a decade, but most SQL Server professionals know very little about all the ways it can be used. Whenever you add a T-SQL window (also called windowed or windowing) function such as ROW_NUMBER to your query, you must use the OVER clause. Getting the OVER clause right is critical to getting the correct results. Window functions require the OVER clause, but there is one other situation where the OVER clause is used: with the sequence object. In this article, I’m going to stick just with the ways the OVER clause is used with window functions.

The OVER clause has three components: partitioning, ordering, and framing. Partitioning is always supported, but support for ordering and framing depends on which type of window function you are using.

The partitioning component, expressed as PARTITION BY, is optional and is supported for all types of window functions. Partitioning divides up the rows. It's kind of like a physical window that is divided into panes: the entire window is a window, and each pane is also a window. Say that I partition by CustomerID. The partition for CustomerID 1000 is restricted to rows for CustomerID 1000. For the calculation on any given row, the window consists of rows from that partition only. This means that the ROW_NUMBER function restarts the row numbers for each partition, for example. You can specify more than one column for PARTITION BY as long as you separate the columns with commas. Partitioning is not the same as grouping with the GROUP BY clause. When you use the GROUP BY clause, only one row is returned for each group. Partitioning is just a way to divide the data in order to perform the calculations; the detail columns are still returned. Figure 1 shows an example.


Figure 1: The effects of PARTITION BY
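The GROUP BY versus PARTITION BY distinction described above can be sketched with a pair of queries against Sales.SalesOrderHeader:

```sql
-- GROUP BY: one summary row per customer
SELECT CustomerID, SUM(TotalDue) AS SubTotal
FROM Sales.SalesOrderHeader
GROUP BY CustomerID;

-- PARTITION BY: every order row is returned, each carrying its customer's
-- subtotal, with row numbers that restart for each customer
SELECT CustomerID, SalesOrderID, TotalDue,
       SUM(TotalDue) OVER(PARTITION BY CustomerID) AS SubTotal,
       ROW_NUMBER() OVER(PARTITION BY CustomerID
                         ORDER BY SalesOrderID) AS RowNum
FROM Sales.SalesOrderHeader;
```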

Ordering within the OVER clause is supported when the data must be sorted to perform the calculation. For example, ROW_NUMBER requires an ORDER BY expression within the OVER clause because the rows must be lined up. In fact, most of the window functions support ORDER BY. It is not used when calculating subtotals, for example; you don't need the data sorted to calculate a sum. Note that the ORDER BY within the OVER clause has nothing to do with an ORDER BY clause found in the query itself. Just like partitioning, you can specify a comma-delimited list of columns. Figure 2 shows the results of ORDER BY in the OVER clause.


Figure 2: Using ORDER BY

The third component of the OVER clause is called framing. Framing was introduced with SQL Server 2012 and is used in only a couple of situations: accumulating window aggregates (running totals, moving averages, etc.) and the functions FIRST_VALUE and LAST_VALUE. While partitioning is similar to window panes, framing is like a stained glass window. When performing the calculations, every row might see a different set of rows through the window. For example, when calculating a running total, row 1 sees row 1. Row 2 sees rows 1 and 2. Row 3 sees rows 1, 2, and 3. The syntax for framing is a bit complex, so I am going to save it for my next post. By default, the frame consists of the first row of the sorted partition and all subsequent rows up to the current row. Figure 3 is an example. The small arrows represent the current row where the calculation is being performed.


Figure 3: The results of framing

To see the OVER clause in action, run this query against the AdventureWorks database:

SELECT CustomerID, SalesOrderID, TotalDue,
       ROW_NUMBER() OVER(PARTITION BY CustomerID ORDER BY SalesOrderID) AS RowNum,
       SUM(TotalDue) OVER() AS GrandTotal,
       SUM(TotalDue) OVER(PARTITION BY CustomerID) AS SubTotal,
       SUM(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID) AS RunningTotal,
       LAG(SalesOrderID) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID) AS PrevOrder,
       FIRST_VALUE(TotalDue) OVER(PARTITION BY CustomerID ORDER BY SalesOrderID) AS FirstAmt
FROM Sales.SalesOrderHeader;

Next time, I'll provide a deep dive into framing. If you would like to learn more about T-SQL window functions, be sure to check out my latest book Expert T-SQL Window Functions in SQL Server or my Pluralsight course T-SQL Window Functions.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database Testing and Waitfor Delay – Notes from the Field #099

[Note from Pinal]: This is the 99th episode of the Notes from the Field series. In this episode we are going to learn something very simple but effective about database testing. Database testing is one of the most critical tasks for any developer or DBA. If you do not test your code, there is a good chance you will make mistakes. However, sometimes testing requires some advanced tricks.

In this episode of the Notes from the Field series, database expert Kevin Hazzard explains how to use the WAITFOR DELAY command to our advantage when doing database testing. Read the experience of Kevin in his own words.



Database testing has become something of an obsession for us at Linchpin People. To test ETL processes, I often write mocking procedures that can simulate data flowing in from clients. When I first began doing this years ago, I'd set up SQL Agent jobs to invoke the mocking procedures on a regular frequency. However, that's fairly cumbersome to set up, so I started writing scripts like this instead:

WAITFOR DELAY '00:00:03';
EXEC sp_InsertMockTransaction;
GO 100

This code uses the WAITFOR command which many developers already understand. If you’ve never seen or heard of WAITFOR before, it’s fairly straightforward. In the code shown above, the DELAY option is used to induce a delay of three seconds. Afterwards, the mocking procedure runs.

The last line of the script is not as familiar to developers, I’ve found. We’ve all seen and used the GO phrase but as it turns out, it’s not part of the T-SQL language. GO is a batch separator that tells SSMS or the SQLCMD processor to forward the current batch of commands to an instance of SQL Server. What many developers don’t realize is that the GO batch separator can accept a positive integer parameter that will loop over the batch that many times.

In SSMS, the code shown above will invoke the mocking procedure 100 times, pausing for three seconds in between. This is a simple but effective way to simulate the input I'll need for my tests. Of course, this pattern comes in handy for all sorts of uses, not just testing. Hopefully you'll find this simple trick as useful as I have.
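If you need the same pattern somewhere the GO batch separator isn't available, such as inside a stored procedure, a WHILE loop is a reasonable substitute; here is a sketch using the same mocking procedure:

```sql
-- GO is interpreted by the client (SSMS or SQLCMD), not by SQL Server,
-- so inside a procedure a WHILE loop provides the looping instead.
DECLARE @i INT = 0;
WHILE @i < 100
BEGIN
    WAITFOR DELAY '00:00:03';
    EXEC sp_InsertMockTransaction;
    SET @i += 1;
END;
```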

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)