SQL SERVER – The Easy Functions of SQL Server – Notes from the Field #062

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains some easy and amazing functions of SQL Server. Kathi is an amazing instructor; she was one of the first SQL authors I read early in my career, and her writing instilled my love for this technology. Today she brings a relatively unknown topic for database experts. Read the experience of Kathi in her own words.

Have you ever tried to format a datetime value in T-SQL? I have seen some pretty crazy code that looked something like this:

SELECT CAST(YEAR(@date) AS CHAR(4)) + '/' +
RIGHT('0' + CAST(MONTH(@date) AS VARCHAR(2)),2) + '/' +
RIGHT('0' + CAST(DAY(@date) AS VARCHAR(2)),2) + ' ' +
RIGHT('0' + CAST(DATEPART(HOUR,@date) AS VARCHAR(2)),2) + ':' +
RIGHT('0' + CAST(DATEPART(MINUTE,@date) AS VARCHAR(2)),2)

Starting with SQL Server 2012, there is a much easier method: the FORMAT function. The FORMAT function can do the same thing as the previous code, but it is much easier to write. FORMAT is one of five functions added in SQL Server 2012 that I like to call The Easy Functions. The Easy Functions are listed in the table below.

The Easy Functions
Category   Function   Definition
Logical    CHOOSE     Select one expression from a list
Logical    IIF        Inline IF: evaluate an expression and return one of two expressions for true or false
String     CONCAT     Add strings together
String     FORMAT     Format a string, including "C" for currency and "P" for percent
Date       EOMONTH    Returns the last day of the month
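For example, the entire datetime-formatting expression at the start of this post collapses to a single FORMAT call. A quick sketch (the sample date and the explicit en-US culture are mine; without the third argument, FORMAT uses the session language):

DECLARE @date DATETIME = '2015-02-14 13:05:00';
SELECT FORMAT(@date, 'yyyy/MM/dd HH:mm', 'en-US') AS FormattedDate;
-- Returns: 2015/02/14 13:05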

The following script demonstrates the logical functions.

USE AdventureWorks2014; --Or 2012
SELECT  CustomerID, COUNT(*) AS OrderCount,
CHOOSE(COUNT(*), 'Bronze','Silver','Gold','Platinum') AS MemberLevel,
AVG(TotalDue) AS AvgOrder,
IIF(AVG(TotalDue) > 1000, 'High','Low') AS Priority
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The query returns the count of orders for several customers. A MemberLevel is assigned based on the number of orders the customer has placed, and a Priority is assigned based on the average order. If the average order is over $1000, then the customer has a high priority.

Take a look at the CHOOSE expression. The first argument is an integer, the count of the orders. After the order count, there is a list of values. The list of values is actually a 1-based array, and the first argument is an index into the array.

The IIF function requires three arguments. The first argument is an expression to check, the average order. The second argument is a value to return if the expression is TRUE. The third argument is a value to return for FALSE.
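Both functions are easy to try standalone. Note that CHOOSE simply returns NULL when the index falls outside the list, so in the query above a customer with more than four orders would get a NULL MemberLevel:

SELECT CHOOSE(3, 'Bronze', 'Silver', 'Gold', 'Platinum');  -- Gold
SELECT CHOOSE(9, 'Bronze', 'Silver', 'Gold', 'Platinum');  -- NULL: index is past the end of the list
SELECT IIF(1500 > 1000, 'High', 'Low');                    -- High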

The next example demonstrates the string functions.

SELECT C.CustomerID, P.FirstName, P.MiddleName, P.LastName,
CONCAT(P.FirstName, ' ' + P.MiddleName, ' ', P.LastName) AS FullName,
FORMAT(SUM(TotalDue),'C') AS TotalSales,
FORMAT(SUM(TotalDue)/SUM(SUM(TotalDue)) OVER(),'P') AS PercentOfSales,
FORMAT(MIN(OrderDate),'yyyy.MM.dd') AS FirstOrderDate
FROM Sales.SalesOrderHeader AS SOH
JOIN Sales.Customer AS C ON SOH.CustomerID = C.CustomerID
JOIN Person.Person AS P ON P.BusinessEntityID = C.PersonID
WHERE C.CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY C.CustomerID, P.FirstName, P.MiddleName, P.LastName;

The query uses the CONCAT function to build a FullName column. The CONCAT function ignores NULL values. Notice that rows 3 and 4 returned the FullName even though the MiddleName values are NULL. To make sure an extra space doesn’t show up when the MiddleName is missing, the space is combined with MiddleName as one of the arguments. NULL added to a space is NULL, which will be ignored.
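A minimal illustration of that NULL handling:

SELECT CONCAT('Kathi', ' ' + NULL, ' ', 'Kellenberger') AS FullName;
-- Returns: Kathi Kellenberger
-- The expression ' ' + NULL evaluates to NULL, and CONCAT treats NULL arguments as empty strings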

There are three examples of FORMAT. The first formats TotalSales as currency, in this case USD. The next, PercentOfSales, formats a complex expression comparing each customer's sales to the overall total. It uses a window aggregate function to calculate the total sales for the query. (Window aggregates are a great topic for another post!) The final example formats the FirstOrderDate in a custom format.

Here is one last easy function: EOMONTH.

SELECT CustomerID, MIN(OrderDate) AS FirstOrderDate,
EOMONTH(MIN(OrderDate)) AS LastDayOfMonth,
EOMONTH(MIN(OrderDate),1) AS LastDayOfNextMonth
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The EOMONTH function returns the last day of the month for the supplied date argument. The second example, LastDayOfNextMonth, uses the optional parameter that adds another month.
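Standalone examples (the dates are mine; the second argument may also be negative to go back in time):

SELECT EOMONTH('2015-02-10') AS LastDayOfMonth;       -- 2015-02-28
SELECT EOMONTH('2015-02-10', 1) AS LastDayNextMonth;  -- 2015-03-31
SELECT EOMONTH('2016-02-10') AS LeapYearAware;        -- 2016-02-29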

I hope that using The Easy Functions will make your life easier!

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Increase Number of Errorlog Files

A while back I wrote a blog post – SQL SERVER – Recycle Error Log – Create New Log file without Server Restart. Recently one of my blog readers emailed me this question:

Hi Pinal,
In our SQL Server, we are logging successful logins to ERRORLOG and due to this the file size keeps on increasing. I have read your blog about recycling error log. I have put a job in SQL Agent to recycle the logs every midnight. No problem so far. In near future, we will have audit in our company and as per their requirement, they want to have Errorlog worth 30 days (to check login success/failures). Since I have put midnight job and we have six archive errorlogs (ERRORLOG.1 to ERRORLOG.6) and that would cover last 6 days only.

To solve the problem, I can put a copy step to move the file somewhere else before midnight recycle. But again I also need to put logic to delete 30 days old file. Is there any way to achieve this in SQL Server without much efforts?

<Name Hidden>

I thought I had a blog post covering this answer, but surprisingly I couldn't find anything on my blog. So, here are various ways to achieve it.

Using SSMS

Once we connect to SQL instance via SSMS, we can go to “Management” Node and right click on “SQL Server Logs” and choose “Configure” as shown below.

Once we click on Configure, the checkbox shown below is unchecked by default and the value shows as 6. That's the reason we have files up to ERRORLOG.6.

We can check the box and put the desired value in the field. Based on the daily errorlog recycle job my blog reader already had and the 30-day requirement, the value can be set to 30 to keep his auditors happy.

Using T-SQL

If you are not a fan of the UI, then below is the T-SQL that achieves the same change.

USE [master]
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 30

The statement above changes a registry value called NumErrorLogs for this instance. Notice that we invoke xp_instance_regwrite, which takes a path without any instance-specific details because it resolves the exact key internally. For my named instance "SQL2014", the exact key would be

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL2014\MSSQLServer as shown below
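To verify the change, the value can be read back with the companion procedure xp_instance_regread (like xp_instance_regwrite, it is undocumented, so use it with care):

EXEC master.sys.xp_instance_regread N'HKEY_LOCAL_MACHINE',
N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs'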

I hope this helps you in the real world to keep more errorlogs than the default, if that is a requirement from your DBA, network, or auditor teams.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Visibility into the Overall Health of Your SQL Server Environment – You Can’t Fix What You Can’t See

I am sure most of us have experienced driving a four wheeler. The first time I got behind the wheel, it was an amazing experience. My excitement was multiplied by the tension on my father's face. After a while, I could see a sigh of relief and confidence even in his eyes. This is a classic case of being pampered, guided, and always shadowed by our loved ones. I loved the learning experience then. The real challenge came much later. One day while I was driving to the office, the car stopped in the middle of the road. I was clueless about what the problem could be. The human instinct was to look at the dashboard for some signal. I checked the fuel level, the oil level indicator, and anything else that could guide me toward the problem. It was time to call the experts, I decided. A lot of times, we are faced with situations where the health of the components cannot be diagnosed easily – what can we do?

The above analogy is just an example of what happens inside SQL Server too. If we don't know or cannot see the problem at hand, how can we fix it – or even worse, how do we know a problem exists in the first place? In the recent past, I was quizzed with a similar problem and didn't know what course of action to take.

During one of my chat sessions with a DBA friend, he said – "most of the testing and integration environments are managed by junior DBAs, and seniors handle only the production environments." He explained there was a recent incident that caught his attention, and he wanted to know if there was a way to solve the problem at hand.

This got me curious about what had really happened. He said one of the test environments crashed just a couple of days before a major release cycle. The testing team creates databases for a release cycle and loads tons of relevant data into them. After the crash, the DBAs didn't have any log backups, because the log backup job had been failing and nobody was monitoring it. The DBA said the maintenance plans were in place, yet the backups had failed.

The junior DBA got curious and asked him about it. There must have been something fundamentally wrong here.

FULL recovery acts like SIMPLE

Sometimes we need to understand the basics first. Even though a database is set to the FULL recovery model, it behaves like the SIMPLE recovery model until the first full backup is taken. This is a well-known concept. To illustrate it, let me take a log backup of a newly created database. The T-SQL looks like this:

BACKUP LOG [Bigdata] TO DISK = N'C:\Backup\Bigdata.bak'
WITH NAME = N'Bigdata-Log Backup';

This will throw error 4214, shown below. It is the exact error they were getting too.

Msg 4214, Level 16, State 1, Line 1

BACKUP LOG cannot be performed because there is no current database backup.

Msg 3013, Level 16, State 1, Line 1

BACKUP LOG is terminating abnormally.

The error is self-explanatory that we don’t have a database backup yet and that will solve the problem. The GUI steps will yield the following error message dialog -

Now that my friend understood this concept, he wanted to be vigilant and proactive in solving this problem. There is no way he can manually track all the databases created in the test and integration environments. He wanted to know if there was a way to know this ahead of time.
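Before reaching for a tool, a quick T-SQL sketch can at least flag FULL-recovery databases that have never had a full backup (this assumes the msdb backup history has not been purged):

SELECT d.name AS database_name
FROM sys.databases AS d
WHERE d.recovery_model_desc = 'FULL'
AND NOT EXISTS (SELECT 1 FROM msdb.dbo.backupset AS b
                WHERE b.database_name = d.name AND b.type = 'D');

Any database this returns is still running in pseudo-SIMPLE mode, and its log backups will fail exactly as described above.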

Overall health using Spotlight for SQL Server

There is no way one can keep track of databases that are created randomly by the application / testing teams every single day. If you have a monitoring tool like Spotlight, this task becomes easy for sure. Yes, we can make it a deployment standard that everyone takes a full backup as soon as a database is created, but this requires constant monitoring and discipline from users / script writers.

In the example below, I have pointed Spotlight to a newly built server. As soon as this happens, the heatmap view shows us some critical data which needs attention and acknowledgement. Here you can see the tooltip suggests – "Full backups are missing for few databases". I felt this was a nifty suggestion that can save the day for many.

Double-clicking the same takes us to the server home page, where we can look at the databases node (marked in red) and get detailed information about which databases are pending backup.

The learnings here are twofold. First, a log backup cannot be taken until the first full backup exists – in other words, the FULL recovery model behaves like SIMPLE until that first full backup is taken. Second, to mitigate the risks in business-critical environments, it is important to have a monitoring capability like Spotlight for SQL Server.

Try Spotlight on SQL Server free for 30 days!

Spotlight on SQL Server continues to break new ground when it comes to providing DBAs with the ultimate in flexibility to suit their specific needs. Building on the success of the Wait Stat analysis workflow, Spotlight on SQL Server version 11.0 now includes multi-dimensional workload analysis to provide maximum insight into performance issues, an enhanced mobile capability to remotely diagnose issues directly from mobile phones and the introduction of SNMP traps. A fresh new look provides simple to use, powerful visualization to effectively monitor the health of your SQL Server environment.

If you’re looking for additional help on how to optimize your SQL Servers, Visit Dell Software’s SQL Server Resource Center for video, guides and other useful content.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Tables Created Last Week – DBA Tip Part II

When I wrote my previous blog post, SQL SERVER – Finding Tables Created Last Week – DBA Tip, little did I know that something as simple as this could become such an interesting topic of conversation. I have been overwhelmed by the number of emails I received in the past week wanting to know more about such real-life scenarios. Having said that, a number of you also asked whether there is an easier way to do the same inside SQL Server Management Studio.

As I was thinking about possible options, one thing that struck me is – I haven't shown you how this can be done using the UI inside SQL Server Management Studio.

SSMS is a powerful tool, and a number of options exist to address the same scenario. Let me talk about two easier ways to get answers to our question.

Object Explorer

When using Object Explorer, go to the Tables node -> right-click and select Filter -> Filter Settings. Once here, you will see a "Creation Date" property which can be used to filter. Select the operator and the value and we are all done.

This will filter the “Tables” node on Object Explorer with objects which were created based on the date condition we specified in the filter.

Object Explorer Details Pane

The next option is to use the Object Explorer Details pane (F7 shortcut). Here, on the header, select the "Create Date" column and then sort by it. Now the tables that were created most recently will bubble to the top.
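For completeness, the same filter the UI applies can be sketched in T-SQL against sys.tables:

SELECT name, create_date
FROM sys.tables
WHERE create_date >= DATEADD(DAY, -7, GETDATE())
ORDER BY create_date DESC;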

Awesome, right? These are a couple more ways to find the latest tables that were created in the database. They are quick and handy if you don't want to write any T-SQL. Do let me know if you have any other way to find the same. I look forward to learning from you folks too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #002 – Script to Find Byte Size of a Row for All the Tables in Database

In recent times I was part of an interview panel where I was stunned to see the following question asked of an interviewee. I personally believed this question was no longer relevant, as we are in an age where databases are capable of handling large amounts of data and row size is no longer as significant. However, the question was still asked, and the interviewee still had to answer it with regard to the latest version of SQL Server. Here it goes:

Question: How to find total row size in bytes for any table in SQL Server?

Answer: In SQL Server 2000 and earlier versions it was crucial to know the byte size of a row, as a row had to fit on a single 8,060-byte page. Later versions relax this limit with row-overflow storage for variable-length columns, so the question is far less relevant now.

Here is the script for recent versions of SQL Server. I have added a column which indicates whether a table contains any column with a MAX datatype.

SELECT CASE WHEN (GROUPING(sob.name) = 1) THEN 'All_Tables'
ELSE ISNULL(sob.name, 'unknown') END AS Table_name,
SUM(scol.max_length) AS Approx_Max_Length,
CASE WHEN MIN(scol.max_length) = -1 THEN 'Yes'
ELSE 'No' END AS Fieldwith_MAX_Datatype
FROM sys.objects sob
INNER JOIN sys.columns scol ON scol.object_id = sob.object_id
WHERE sob.type = 'U' -- user tables only
GROUP BY sob.name
WITH CUBE

If you are using SQL Server 2000, here is the blog post which contains the answer SQL SERVER – Query to Find ByteSize of All the Tables in Database.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is the query used in sp_cursorfetch and FETCH API_CURSOR?

In a recent debugging exercise for performance, one of my friend recently posted a question to me:

“When I view Activity Monitor, under the recent expensive queries, in the query column I see a “fetch api_cursor0000000000000003”. What is this? How can I know what was running behind this query?”

Obviously, the hint in that question was the keywords "FETCH" and "CURSOR", so I knew what the starting point would be. How many times have you been in a situation where you want to track a query, but when you capture a Profiler trace or use conventional methods like sys.sysprocesses or DBCC INPUTBUFFER, you see something like this:

The same thing in Profiler would look like the below:

EXEC sp_cursorfetch 180150003,16,8,1


FETCH API_CURSOR0000000000000001

Note: the numbers in the queries might differ in your environment. Here is VBScript code to simulate the problem.

Dim strConnection
Dim MyConnection
Dim MyRecordSet
Dim strSQL
strConnection = "Provider=SQLOLEDB;Data Source=.\SQL2014;Initial Catalog=Master;Integrated Security=SSPI;"
Set MyConnection = CreateObject("ADODB.Connection")
MyConnection.Open strConnection
Set MyRecordSet = CreateObject("ADODB.Recordset")
strSQL = "SELECT TOP 3 * FROM sysobjects"
MyRecordSet.CursorLocation = 2    ' 2 = adUseServer: forces a server-side cursor
MyRecordSet.Open strSQL, MyConnection, 3, 3
MsgBox MyRecordSet.Fields(0).Value    ' show the first object name; the cursor stays open until dismissed
MyRecordSet.Close
Set MyRecordSet = Nothing

To reproduce the issue, save the above code in a file with the .vbs extension and run it from a command prompt. You may need to change the Data Source value in the connection string to match your environment. Once we run the VBScript, we get a popup with the table name while the server-side cursor is held open. If a Profiler trace is captured from the beginning, we will see sp_cursoropen, which reveals the query. If the query is already running, then we can use the query below:

SELECT c.creation_time,
SUBSTRING(st.text, (c.statement_start_offset / 2) + 1,
((CASE c.statement_end_offset
WHEN -1 THEN DATALENGTH(st.text)
ELSE c.statement_end_offset
END - c.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_cursors(0) AS c
JOIN sys.dm_exec_sessions AS s
ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS st

Here is a sample execution; as we can see, we get the text of the query that opened the cursor.


Note: The cursor_id is the same as what we saw in sp_cursorfetch. Additionally, if we capture a complete Profiler trace, we can see the below output:

Moral of the story: if you see sp_cursorfetch or FETCH API_CURSOR0000000000000001, then either capture a complete Profiler trace from the start of the query, or use sys.dm_exec_cursors to get the exact query.

Do let me know if you have ever faced this problem in your environments. How did you debug it?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – What is is_not_trusted in sys.foreign_keys?

My greatest source of learning is my blog. Many of you take the time to ask your toughest questions, and sometimes basic questions that make me think. This interaction creates a source of exploration for me and ultimately translates into knowledge sharing. Not all questions asked by DBAs are straightforward, and sometimes I also have to do some research before providing an answer. Here is another interesting question that I received via email recently -

Hi Pinal, What is the meaning of is_not_trusted column in sys.foreign_keys?

I would have told him to check Books Online, but the Books Online entry for sys.foreign_keys merely says "FOREIGN KEY constraint has not been verified by the system." That is not very clear or easy to understand – no wonder he asked me this question. Let us learn this using an example:

For simplicity, let us assume we have three tables – EmpMaster, AddressMaster, and EmpAddress – using the schema below (the EmpMaster definition is reconstructed to match the INSERT further down):

CREATE TABLE EmpMaster(Emp_id INT PRIMARY KEY, FirstName VARCHAR(100), LastName VARCHAR(100))
CREATE TABLE AddressMaster(AddressId INT PRIMARY KEY, Address VARCHAR(100), Status CHAR(1))
CREATE TABLE EmpAddress(Emp_id INT, AddressId INT)

From the name you might have guessed that EmpAddress is the mapping table, which should contain only values that exist in the master tables. So, to enforce referential integrity, we create foreign keys to avoid violating the parent-child relationship:

ALTER TABLE dbo.EmpAddress ADD CONSTRAINT
FK_EmpLocation_EmpMaster FOREIGN KEY (Emp_id)
REFERENCES dbo.EmpMaster (Emp_id)

ALTER TABLE dbo.EmpAddress ADD CONSTRAINT
FK_EmpLocation_AddressMaster FOREIGN KEY (AddressId)
REFERENCES dbo.AddressMaster (AddressId)

We have set the basic structure to work. Now, let us insert one row in each master table as below.

INSERT INTO EmpMaster VALUES (1, 'Pinal','Dave')
INSERT INTO AddressMaster VALUES (1, 'Bangalore, India', 'A')

Now, if we try to insert the value (1, 2) into the child table, we get an error:

INSERT INTO EmpAddress (Emp_id, AddressId) VALUES (1, 2);

Msg 547, Level 16, State 0, Line 33

The INSERT statement conflicted with the FOREIGN KEY constraint “FK_EmpLocation_AddressMaster”. The conflict occurred in database “SQLAuthDB”, table “dbo.AddressMaster”, column ‘AddressId’.

The statement has been terminated.

This error is expected because "2" is not a valid AddressId value; it is not present in the parent table, AddressMaster.

Let's look at the metadata for the foreign keys in the catalog view sys.foreign_keys:

SELECT name, is_not_trusted FROM sys.foreign_keys

is_not_trusted is 0 by default, which means the constraint is trusted: SQL Server has verified that the child table holds no data missing from the parent. Now, let's use the statement below to disable the constraint.

ALTER TABLE [dbo].[EmpAddress] NOCHECK CONSTRAINT [FK_EmpLocation_AddressMaster]

And now insert the value.

INSERT INTO EmpAddress (Emp_id,AddressId) VALUES (1, 2)

This means we now have an inconsistent value in the child table. Executing the query on sys.foreign_keys again confirms this:

SELECT name, is_not_trusted FROM sys.foreign_keys

As highlighted above, the value 1 in the column means that the system has not verified the consistency of the data. Now, let's try to enable the constraint again.

ALTER TABLE [dbo].[EmpAddress] WITH CHECK CHECK CONSTRAINT [FK_EmpLocation_AddressMaster]

You might say the above looks incorrect, but "CHECK" twice is intended: WITH CHECK tells SQL Server to verify the existing data, and CHECK CONSTRAINT enables the constraint. That's the correct syntax to enable a constraint. Here is the error one receives while enabling it, because of the inconsistent row we just inserted:

Msg 547, Level 16, State 0, Line 59

The ALTER TABLE statement conflicted with the FOREIGN KEY constraint “FK_EmpLocation_AddressMaster”. The conflict occurred in database “SQLAuthDB”, table “dbo.AddressMaster”, column ‘AddressId’.

Modify the value as below.

UPDATE EmpAddress
SET AddressId = 1
WHERE AddressId = 2

And now we can re-execute the ALTER TABLE command and it should work.
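As a side note, here is a quick sketch to list every foreign key in the current database that is still untrusted – handy after bulk loads or after someone has used NOCHECK:

SELECT fk.name AS constraint_name,
OBJECT_NAME(fk.parent_object_id) AS child_table
FROM sys.foreign_keys AS fk
WHERE fk.is_not_trusted = 1;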

Once you are done with the above script, you can clean up the database by dropping the sample objects.
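A minimal cleanup sketch, assuming only the three sample tables were created (the child table must be dropped first so the foreign keys do not block the drops):

DROP TABLE dbo.EmpAddress;
DROP TABLE dbo.EmpMaster;
DROP TABLE dbo.AddressMaster;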


So this was a quick overview of the is_not_trusted column in sys.foreign_keys and how it works. Do let me know if you have ever used this feature anywhere.

Reference: Pinal Dave (http://blog.sqlauthority.com)