SQL SERVER – Backups are Non-negotiable Lifeline for DBAs

I have had my share of DBA friends who have told me a number of horror stories from their experiences. Thinking back, these look like moments to laugh away, but there are serious lessons in them. I am a firm believer in learning something from everything we do; that is one of the reasons I have been able to sustain active blogging every day for so many years. The inquisitiveness of a child, the urge to keep learning, should never stop in us if we are to survive in the IT industry.

If you were to ask me to pick the one laughing-and-learning moment that stands out among the rest, I don't think I could pick just one. Having said that, I do see one common theme when it comes to databases and backups. I am a regular visitor and contributor to the SQL Server Bangalore UG and I learn a lot from my friends there. A number of them have asked how to restore system databases once they are corrupted. Do you ever take system database backups proactively, as a just-in-case practice? How many of us do this on our testing, development and other integration servers today? I can bet you, not many of us.

In a recent user group meeting, someone asked me to explain in simple terms how a normal backup happens. For a second all the parameters on the UI screen came to my mind and I held my thought. I said, "Buddy, it is simpler than you think," and described the following flow:

This is the fundamental building block of how a backup happens. We can add spice to this workflow with options like compression, verify file, encryption and others, but the basic structure cannot be simplified more than this.
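For reference, the same basic flow can be expressed in native T-SQL. This is only a minimal sketch (the database name and path are placeholders), with a couple of those "spice" options added:

BACKUP DATABASE AdventureWorks2014
TO DISK = N'C:\Backup\AdventureWorks2014_Full.bak'
WITH COMPRESSION,   -- native backup compression
     CHECKSUM,      -- verify page checksums while reading the data
     STATS = 10;    -- progress message every 10 percent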

SQL Safe Backup

There is never just one way to do things. I had the opportunity to look at Idera's SQL Safe Backup, and some of the things that caught my attention are listed below. If you are working in an enterprise, you need a structured method to automate and build a process for your backups.

First impressions – Starting a backup

I started the backup wizard to see what it offers for a starter. The screenshot below shows the set of databases on a particular instance that can be configured for backups. The backup can be Full, Differential, Log or File, and all of these are out-of-the-box capabilities. What I noticed immediately was the information about when the last backup was taken; that is a nifty piece of information to have when taking manual backups.

Four settings that surprised me

The wizard has a number of settings that caught me off guard. For instance, the CopyOnly backup option in the General tab was a pleasant surprise. To add to it, on selection it shows important information about how a CopyOnly backup doesn't break the log chain. If you have a junior DBA in your organization, these tips will surely help.
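For readers who prefer native T-SQL, here is a hedged equivalent of that tip. A COPY_ONLY backup stands outside the normal backup sequence, so it neither resets the differential base nor breaks the log chain (the path is a placeholder):

BACKUP DATABASE AdventureWorks2014
TO DISK = N'C:\Backup\AdventureWorks2014_CopyOnly.bak'
WITH COPY_ONLY, CHECKSUM;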

The next two settings are great best practices in my book. "Enable Checksum" is something I advocate to everyone working with SQL Server 2005 and above, and it is great to see it as an option in the wizard.

The icing on the cake was "Network Resiliency." The attached screenshot shows how the retry logic works, and for how long, when the backup has to be written to a network share (UNC path).

Finally, if all of this were not enough, there is a way to script the whole thing and run it as a command-line option or a T-SQL script. This can later be integrated with custom applications or even with SQL Agent. A typical script for my system database backups above looks like this:

Command line script:

"C:\Program Files\Idera\SQLsafe\SQLsafeCmd.exe" Backup master msdb model "C:\Backup\%instance%_%database%_%backuptype%_%timestamp% (%ordinal% of %total%).safe" -CompressionLevel ispeed -RetryWrites 10 300 60 -Server sqldude

T-SQL construct

DECLARE @ResultCode INT;
EXEC @ResultCode = [master].[dbo].[xp_ss_backup]
    @database = N'master',
    @database = N'msdb',
    @database = N'model',
    @filename = N'C:\Backup\%instance%_%database%_%backuptype%_%timestamp% (%ordinal% of %total%).safe',
    @compressionlevel = N'ispeed',
    @retrywrites = N'10 300 60',
    @server = N'sqldude';
IF (@ResultCode != 0)
    RAISERROR('One or more operations failed to complete.', 16, 1);

Though all of this got me interested, I wish the tool had more options to integrate with native settings. Some of the options I would love to see include the native compression technique of SQL Server, along with the MaxTransferSize, BlockSize and BufferCount settings for backups, which can be really handy. Also, for databases in a SQL Server AlwaysOn configuration, it would be great to add options or settings to check for secondary servers in case we want to take the backup there. The restore process is simple and the wizard is really handy here too. Let me move to the next interesting part.
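For comparison, here is how those native knobs look in plain T-SQL. This is only a sketch; the values shown are illustrative and should be tested against your own hardware:

BACKUP DATABASE AdventureWorks2014
TO DISK = N'C:\Backup\AdventureWorks2014_Full.bak'
WITH COMPRESSION,               -- native backup compression
     MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer to the backup device
     BLOCKSIZE = 65536,         -- 64 KB physical block size
     BUFFERCOUNT = 50,          -- number of IO buffers used for the backup
     CHECKSUM;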

Defining Policies – Making a blueprint

Every organization has its own retention policies when it comes to backups. They build their backup strategy based on the RTO and RPO requirements of the business, and having a strategy to recover is the most important task. Here, Idera's SQL Safe Backup did give me some great ideas. The policy wizard can be used for backup, restore or log shipping, and the basic backup wizard allows us to select instances and databases and create a plan accordingly.

What I loved about the wizard is that I can create a single consolidated plan for Full, Transaction Log and Differential backups of my organization's mission-critical database in one go. The schedule above is a classic screenshot of how, on a single screen, you can define when the backups will be taken.

What I would love to see is a graphical representation, on a week's timeline, of how the backups are taken. That would help a DBA judge whether the strategy is meeting the SLA or not. I am so used to backup timelines in SSMS that I thought this would be a great addition. Apart from this, enterprises often need to define their own custom encryption with SQL Server using Backup Encryption, so a feature to import a custom certificate for encryption would help many users of the tool.
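Until then, native backup encryption (SQL Server 2014 and later) can already use a certificate you control. A minimal sketch, assuming a certificate named MyBackupCert already exists in master (the name and path are placeholders):

BACKUP DATABASE AdventureWorks2014
TO DISK = N'C:\Backup\AdventureWorks2014_Encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);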

Final note

All tools evolve, and their fundamental motive is to make people's lives easier. SQL Safe Backup is surely a tool geared in that direction. It uses the standard VDI interface to take backups. I wish that, as new versions are released, the tool also aligns with the capabilities of new functionality; some that I have pointed out include external encryption, native compression, AlwaysOn secondary backups, backup to URL and so on. A lot of today's deployments need tools like this to organize their backup strategy efficiently. I think of SQL Safe Backup as a Swiss army knife in the DBA's pocket.

You can download SQL Safe Backup from here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How DBAs Have to be Proactive in Identifying and Fixing Problems Before They Happen

In real life our dear ones always love surprises. If we do things even before they can think of them, we are in for some fun for the rest of the days or weeks. I vividly remember a conversation with my daughter in the recent past. I had made a deal to get her a bicycle if she secured top marks in her Math paper; I wanted to create curiosity and enough incentive for her to learn. Getting her a bicycle was on my shopping list for her birthday anyway, so I knew she was going to put in some extra effort this time. When the day of results came, I was mentally prepared to give her the gift ahead of time. More than being surprised herself, she surprised me with the extra effort she put into getting the top grade. When she returned home, I was ready with the surprise: even before she could ask for the gift, it was waiting for her in the parking lot. As a matter of fact, I went ahead and bought a bicycle for myself too, so that both of us could cycle in the evenings and spend some quality time together. Being proactive in our actions makes our dear ones happy, and the same holds good at work.

At work, I have always tried to finish things ahead of time, and I try to surprise my bosses from time to time. In my consulting days, challenging work was my forte and solving tough customer problems was something I cherished a lot. Most of my blogs are based on these learnings that I keep sharing every single day.

In one of my assignments, I had to help a DBA with his day-to-day activities. The problem statement was so simple that it made me think twice about a possible solution.

Pinal: Hi buddy!

DBA: I need some urgent help.

Pinal: Oh sure. Will be glad to help. Please let me know.

DBA: We are in the business of 24×7 and some of these applications are critical.

Pinal: Ok. Is that a problem?

DBA: No. Some of the application users are complaining of errors in the night. And since most of us are gone, it is very difficult for me to know what is going on.

Pinal: Well, do you know the exact error they are getting?

DBA: I know a few of them are getting deadlocks, because they said they were chosen as a deadlock victim and were asked to contact the administrator. I want to solve this one by one, but I need your guidance, Pinal.

Pinal: I am glad you got to the core of the problem. Let us see how I can help you. Have you enabled any Trace Flags to capture the Deadlocks?

DBA: No. We have a long approval cycle to enable anything on our Servers. So we have not done any to my knowledge on this new box.

Pinal: Do you have Profiler, Extended events or anything that captures deadlock errors?

DBA: Not that I am aware of. But my guess was confirmed by the error messages my application developers sent me. The message just states that the connection was terminated because it was chosen as a deadlock victim. I am not sure how I will debug this. They want help and are asking me for more details. I am stuck.

There are a number of ways to solve this problem. If nothing else, try to use some of the monitoring tools that are available in the market.
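Even without a third-party tool, SQL Server 2012 and later already capture deadlock reports in the default system_health Extended Events session, so no extra trace needs to be enabled. Here is a small sketch of how to pull them out; a dedicated monitoring tool just makes the whole thing far more approachable, which brings me to Spotlight.

SELECT CAST(xet.target_data AS XML).query(
           '//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlock_reports
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xes
     ON xes.address = xet.event_session_address
WHERE xes.name = 'system_health'
  AND xet.target_name = 'ring_buffer';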

Monitoring using Spotlight

Monitoring SQL Server is a proactive approach to analyzing problems on our servers. In this example, since they had Spotlight enabled, I got onto their diagnostics server to look at the SQL Server in question. I told my DBA friend to go to the Spotlight home page and look for "red" marks. He was quick to spot that the Blocked Process indicator was in red. I said the answer was right in front of his eyes.

Clicking on the Blocked Process brought up a pop-up with the deadlock occurrence date and time. It corresponded to the previous night, and he was surprised. The obvious next question was, "Thanks Pinal. This was a good start. Now how do I know what was deadlocked? What was the …". I interrupted and said, "Friend, this is just a start. Let us do our detective work next. We know it is a deadlock problem, so let us dig a little deeper."

My next step was to jump into the "Reports" section, where I quickly clicked on the "Deadlock List" report. The name is self-explanatory. We are presented with a viewer where a date range has to be selected before searching for results.

I knew this was an error from the previous night, so I selected the date range and pressed "View Report". As you can see below, we are presented with all the deadlocks that occurred on the server in that time range.

In our example, I knew the time when this occurred. The report has the complete stack with important information that can be useful to application developers: the connection details, the objects in question, which locks were taken, what the wait types were and much more. Below is a screenshot of information from the same report.

This is one of the many typical out-of-the-box reports available with Spotlight for SQL Server. My DBA friend was delighted with the level of information he now had and was all set to check the home page every day as soon as he reached the office. Finding a problem before users raise it is a critical and key competency of an experienced DBA. He also went ahead and sent this information to his application team for code review.

I gave my piece of advice to my DBA friend:

Deadlocks are not a problem with SQL Server itself; they are the way SQL Server makes sure there is no inconsistency in the data. Application developers must know the access pattern to the tables and must use the same pattern everywhere. The scenario above is a classic cyclic deadlock. Please tell your application developers to fix this in their code to avoid these errors.
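To make that advice concrete, here is a minimal sketch (TableA and TableB are hypothetical names). If every transaction in the application touches the tables in the same fixed order, two sessions cannot each end up holding a lock the other one needs, which is what produces a cyclic deadlock:

-- Every transaction follows the same order: TableA first, then TableB.
BEGIN TRAN;
    UPDATE dbo.TableA SET Qty = Qty - 1 WHERE Id = 42;
    UPDATE dbo.TableB SET Qty = Qty + 1 WHERE Id = 42;
COMMIT TRAN;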

Have you ever encountered such scenarios in your environment? Have you ever been in a situation where you identified a problem proactively? What did you do? Do share your stories and let me know your thoughts.

Try Spotlight on SQL Server free for 30 days!

Spotlight on SQL Server continues to break new ground when it comes to providing DBAs with the ultimate in flexibility to suit their specific needs. Building on the success of the Wait Stat analysis workflow, Spotlight on SQL Server version 11.0 now includes multi-dimensional workload analysis to provide maximum insight into performance issues, an enhanced mobile capability to remotely diagnose issues directly from mobile phones and the introduction of SNMP traps. A fresh new look provides simple to use, powerful visualization to effectively monitor the health of your SQL Server environment.

If you’re looking for additional help on how to optimize your SQL Servers, visit Dell Software's SQL Server Resource Center for videos, guides and other useful content.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Find Weak Passwords Using T-SQL?

Recently one of my blog readers emailed me the question below.

Hi Pinal,
Need your urgent help. In the recent past, we were attacked by a hacker who was able to get into our SQL Server via a sysadmin account and did big damage to our data. To make sure it doesn't happen again, I have taken up the task of finding out which SQL Server passwords are weak.

Do you have any suggestions for me?

This is one of the areas that always haunts SQL DBAs. The recommendation is to use Windows Authentication to connect to SQL Server, which would save us from all such problems, but it is not always feasible. If we decide to use SQL Authentication instead, there is a setting called "Enforce Password Policy" which ensures that you choose a strong password.
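For reference, that setting maps to the CHECK_POLICY and CHECK_EXPIRATION options when a login is created in T-SQL. A small sketch (the login name and password are placeholders):

CREATE LOGIN ReportingUser
WITH PASSWORD = 'Str0ng!Passw0rd#2015',
     CHECK_POLICY = ON,       -- enforce the Windows password complexity and lockout policy
     CHECK_EXPIRATION = ON;   -- enforce password expiration as well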

If the recommendations are not followed, you might end up in a situation where SQL logins have weak and basic passwords. SQL Server provides a function, PWDCOMPARE, which can be very useful for finding known passwords. Below are a few example uses of this out-of-the-box function:

SELECT name,
       name AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE(name, password_hash) = 1
UNION
SELECT name,
       '<blank>' AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE('', password_hash) = 1
UNION
SELECT name,
       'password123' AS 'password'
FROM   sys.sql_logins
WHERE  PWDCOMPARE('password123', password_hash) = 1

In the above query, we are trying to find:

  • Password same as user name – first query
  • Blank password – second query
  • Password = password123 – third query

These are some of the most common passwords used in the industry. I am sure you can extend this further by modifying the query and adding more weak passwords; one possible extension is sketched after the sample output below.

Here is the sample output for the above:
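One possible way to extend the check, as a sketch only, is to keep the candidate weak passwords in a single list and test every SQL login against each of them:

SELECT l.name, w.weak_password
FROM sys.sql_logins AS l
CROSS JOIN (VALUES (''), ('password'), ('password123'),
                   ('P@ssw0rd'), ('123456'), ('sa')) AS w(weak_password)
WHERE PWDCOMPARE(w.weak_password, l.password_hash) = 1
UNION ALL
SELECT name, name          -- password identical to the login name
FROM sys.sql_logins
WHERE PWDCOMPARE(name, password_hash) = 1;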

I hope this blog helps you find weak passwords and replace them with more complex ones. Have you ever come across such passwords in your environments? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #003 – How to Write Script for Database Cursor?

Just the other day I heard a senior DBA discussing the advantages and disadvantages of cursors. Personally, I am a firm believer in set-based operations; however, there are a few places where a cursor is the only solution. It is quite possible that different experts have different opinions about cursors; some love them and some hate them, but users definitely cannot ignore them.

If you are ever asked to write a cursor, you can base it on the following script.

USE AdventureWorks2014
GO
DECLARE @ProductID INT
DECLARE @getProductID CURSOR
SET @getProductID = CURSOR FOR
    SELECT ProductID
    FROM Production.Product
OPEN @getProductID
FETCH NEXT FROM @getProductID INTO @ProductID
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @ProductID
    FETCH NEXT FROM @getProductID INTO @ProductID
END
CLOSE @getProductID
DEALLOCATE @getProductID
GO

You can read more cursor examples here: Simple Example of Cursor – Sample Cursor Part 2.

Here is an additional, related example:

A cursor uses the WHILE keyword to loop. If you are working with local variables, you can also use a WHILE loop on its own. Here is an example of a WHILE loop where the variable is incremented by 1 at every iteration.

DECLARE @intFlag INT
SET @intFlag = 1
WHILE (@intFlag <= 5)
BEGIN
    PRINT @intFlag
    SET @intFlag = @intFlag + 1
END
GO

You can read more about this in Simple Example of WHILE Loop With CONTINUE and BREAK Keywords.

This is just a simple example of a cursor; in reality there are quite a few different cursor options as well. We will discuss them in future blog posts.
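As a small preview, here is a hedged sketch of one common combination of those options: a LOCAL, FAST_FORWARD cursor is forward-only and read-only, which is usually the cheapest choice when you only need to scroll through the rows once.

DECLARE @ProductID INT;
DECLARE ProductCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT ProductID FROM Production.Product;
OPEN ProductCursor;
FETCH NEXT FROM ProductCursor INTO @ProductID;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @ProductID;
    FETCH NEXT FROM ProductCursor INTO @ProductID;
END
CLOSE ProductCursor;
DEALLOCATE ProductCursor;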

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Watching Table Variable Data in TempDB

I cannot get enough of working with TempDB, and the learning never stops. Previously, when I wrote SQL SERVER – Is tempDB behaving like a Normal DB?, SQL SERVER – Inside Temp Table Object Creation and Difference TempTable and Table Variable – TempTable in Memory a Myth, many readers asked what the difference is when working with table variables. In one of the email interactions, a reader asked: "Is there a way to know when table variables are created? Are the objects created stored as part of TempDB? Are they written to the T-Log?"

I thought these questions were interesting. All of us work with table variables and assume they are purely in-memory and don't use anything from tempDB. Recently, during one of my post-session meetings, a DBA said their applications were built entirely on table variables and their production servers were experiencing huge usage of TempDB. These questions made me write this blog post, which I thought was worth sharing. Let us look at each step, one after another, as I explained it to my friend:

View Objects Creation and TLog

The first stage for me would be to demystify the object creation process. The idea that table variables are just memory-bound objects and that nothing gets written to the transaction log needs to be clarified first.

Here are the simple steps to clear the doubts: 1) clear the T-Log of TempDB; 2) create a table variable, then check the number of rows added to the T-Log of TempDB; 3) add rows next; 4) check the T-Log again and see if it is any different from just the table creation. This is my development box, and I can assure you that no other process was using tempdb on my system in the meantime.

USE tempdb
GO
CHECKPOINT
GO
-- Output of this is 3
SELECT COUNT(*) [RowNums]
FROM sys.fn_dblog(NULL, NULL)
GO

Next is to create our dummy table variable.

DECLARE @TblVariable TABLE(
    id INT NOT NULL IDENTITY (1,1),
    Name CHAR(30) NOT NULL,
    DOJ DATE NOT NULL,
    Company CHAR(400) NULL DEFAULT 'SQLAuth'
)
-- Output of this is 110+ in my machine
SELECT COUNT(*) [RowNums]
FROM sys.fn_dblog(NULL, NULL)
GO

This small experiment confirms that something is written to the T-Log when a table variable is created. I then cleared the log and ran a small experiment to see whether additional rows get added while I insert rows. The second script was changed as below.

DECLARE @TblVariable TABLE(
    id INT NOT NULL IDENTITY (1,1),
    Name CHAR(30) NOT NULL,
    DOJ DATE NOT NULL,
    Company CHAR(400) NULL DEFAULT 'SQLAuth'
)
INSERT @TblVariable (Name, DOJ)
SELECT 'SQLAuth', GETDATE()
-- Output of this is 130+ in my machine
SELECT COUNT(*) [RowNums]
FROM sys.fn_dblog(NULL, NULL)
GO

As you can see, additional rows get added to the T-Log, showing that there are writes here as well. Feel free to look at the description fields of fn_dblog and learn more about the fine print. My next experiment was to catch the table reference somehow using sysobjects.

SELECT * FROM sysobjects WHERE TYPE = 'U'

This showed no rows in my runs, no matter how quickly I executed it after creating the table variable. If you are wondering why, don't worry, the next steps will demystify it.

Seeing Table Variable data

I wanted to try something different so that I could capture the table variable data as it persists in the metadata tables. To mimic this, I used the script below. Simple, yet powerful.

DECLARE @TblVariable TABLE(
    id INT NOT NULL IDENTITY (1,1),
    Name CHAR(30) NOT NULL,
    DOJ DATE NOT NULL,
    Company CHAR(400) NULL DEFAULT 'SQLAuth'
)
INSERT @TblVariable (Name, DOJ)
SELECT 'SQLAuth', GETDATE()
SELECT * FROM @TblVariable
WAITFOR DELAY '00:02:00'

This script waits for 2 minutes. Since the batch doesn't finish during that time, the table variable is not destroyed immediately as it was in the previous case. While the script runs, we can query sysobjects:

SELECT * FROM sysobjects WHERE TYPE = 'U'

Now, here is our table variable; make sure to note its object ID. We will next grab the page which holds the data and examine it.

SELECT allocated_page_file_id, allocated_page_page_id, *
FROM sys.dm_db_database_page_allocations(2, NULL, 1, NULL, 'DETAILED') -- database_id 2 = tempdb
WHERE OBJECT_ID NOT IN (SELECT ID FROM sysobjects WHERE TYPE IN ('S','IT','SQ'))
AND page_type = 1
AND OBJECT_ID = -1098578155 -- substitute the object ID noted from sysobjects in your own run

The output of the above query shows that page 306 is allocated for the data, and the next step is to examine it. For this task we will use DBCC PAGE and print the output.

DBCC TRACEON(3604)   -- send DBCC PAGE output to the client session instead of the error log
GO
DBCC PAGE(2,1,306,3) -- database id 2 (tempdb), file 1, page 306, print option 3 (full detail)
GO

This confirms our understanding that table variables are also written to TempDB and can very much be viewed, for the duration of their existence, just like any normal table. When I showed this to my friend, he was pleasantly surprised. Do let me know if you have ever had a chance to view and debug table variables like this.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Configuration and Performance of SQL Server is Now Easy to Master

This is one of the toughest topics to address, and I am always looking at new tools that can help me learn and understand SQL Server better. In many cases, I fall back on MSDN documentation and official statements from the Microsoft site, because a lot of R&D has generally been done by them already. Though I read these recommendations, I make it a point to try them out within my limited capacity for testing. I am always of the opinion, "Accept, but still validate." This reinforces our understanding, and we never forget the concepts thereafter.

Coming back to the topic: what tool do you use to check the configuration settings of your SQL Server box? In the past I have talked about SQL Server Management Studio standard reports and a number of reports that can help. Today, let me tell you about a tool that has helped me learn something new every now and then; it is called Microsoft SQL Server 2012 Best Practices Analyzer. As I said before, since it comes directly from Microsoft, I use its recommendations and try to learn from the tool. The documentation is crisp and neat too.

As the name suggests, it is a guideline to best practices recommended by Microsoft. This tool can help analyze the configuration settings of your SQL Server box for:

  • Analysis Services
  • Database Engine
  • Integration Services
  • Replication Settings
  • Reporting Services
  • Setup

Let me talk about some of the nice rules available for the Database Engine. These are just a subset of the hundreds of rules available in the tool; I am calling out a few of them here for reference:

  • It is a security best practice to know if there is a DDOS attack on the server by auditing failed logins. Failed login auditing is enabled by default, but the rule makes sure there is a check in case it was disabled by an administrator.
  • I have never been a fan of auto-growth happening on the server. Auto-growth is a safety net, but administrators need to make sure the files are grown ahead of time during a maintenance window. The rule checks whether auto-growth failed for some reason or took longer than expected. One answer to this is enabling Instant File Initialization.
  • Placing data and log files on the same volume is never a great idea. It is recommended to place them on different physical drives.
  • If a database is not enabled with CHECKSUM, it will be difficult to identify IO-related inconsistencies. Read more in the KB. The rule makes sure this is enabled at the DB level; a quick way to spot-check it yourself is sketched after this list.
  • Having too many VLFs can be a problem. SQL Server 2012 does put some data into the error log, but this rule does the check and lets us know.
  • It is a security risk to grant more than read permissions on the BINN folder of SQL Server. We don't want the binaries to vanish or be corrupted by malicious users. The same is true of the DATA folder where our system database files are placed.
  • If password policies are not enforced in the system, the rule raises an alert. It is a good practice to have password expiration policies enabled on the server.
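Several of these rules can also be spot-checked with a quick query. The sketch below (not part of the BPA tool itself) lists each database's page verify option alongside the drive letter of every data and log file, covering the CHECKSUM rule and the data/log placement rule above:

SELECT d.name,
       d.page_verify_option_desc,           -- should be CHECKSUM
       mf.type_desc,                        -- ROWS (data) or LOG
       LEFT(mf.physical_name, 1) AS drive   -- data and log ideally live on different drives
FROM sys.databases AS d
JOIN sys.master_files AS mf
     ON mf.database_id = d.database_id
ORDER BY d.name, mf.type_desc;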

The list of rules could go on. As I discussed earlier, these are great learning resources for us. We can read up on each rule and then do our own R&D to learn more.

Monitoring performance using the Database Performance Analyzer

Moving now from standard configuration settings to the topic of performance. There is no single starting point when it comes to troubleshooting performance inside any database, and when it comes to performance tuning, you need to think differently. With complex systems built on top of databases, you need something that is built radically differently. SQL Server introduced a concept called Wait Stats, and it has been a popular mechanism for getting a macro, high-level understanding of the various waits inside SQL Server. In fact, I have written a book on Wait Stats based on the 28-day series on Wait Stats on my blog.

From what I have seen of Database Performance Analyzer from SolarWinds (you might also know it by its former name, Confio Ignite), it makes extensive use of these waits. Using waits as an indicator of server health is something unique I saw in this tool. If you are a DBA and want to tune your systems from the inside out, then it is worth taking note of this tool. Wait-based reports are embedded almost everywhere. Let me start by looking at the home dashboard. It is crisp and to the point for high-level information like CPU, memory, disk and waits. If there is something to be worried about, a warning symbol indicates the areas of concern. We will look at some of them as we explore the tool.

From the main dashboard, if we select the server we are monitoring, we are presented with the above screen. It is a great way to look at the trends of various statements and the waits happening inside the system. What surprised me is that waits are also tracked for Oracle, Sybase, Oracle RAC and DB2 servers from the same console. Today's enterprises are heterogeneous environments, and tools like these that give a single-pane view of waits are great to have.

The Trends graph shows the waits and the maximum time taken by batches of queries. The Advisor pane at the bottom was the icing on the cake: it shows the query that consumed the most CPU or memory on the server. Clicking on the "more…" button, we can view the query and take corrective measures to reduce its impact. Let us look at one such recommendation below:

In the details pane, we can see the query that was executed, and possible recommendations for a better plan are presented. In this case, we get information about possible missing indexes for the given query. As a DBA, it is almost impossible to go to each query and find out what the missing indexes are. Or worse, in SQL Server we do get a list of missing indexes, but it fails to tell us which queries will benefit. This screen is a classic example of bringing both together; a nifty addition for sure. The bottom pane shows the various waits that affect the query, or we can switch to a day view to see what the top waits are.
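For reference, the native missing-index DMVs look something like the sketch below; they list candidate indexes, but as noted above they do not tie the suggestions back to the individual queries that would benefit:

SELECT TOP (10)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
     ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
     ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;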

In the above view, I have taken a historical trend of a typical day to see what is causing possible waits inside our systems. The consistent "red" bar makes it quite evident that we have some sort of CPU contention.

Who doesn't like a little help from the system? Above is a typical example where we see high WRITELOG waits; a junior DBA looking at the system would not know where to start. On clicking each of the waits, we are presented with simple documentation on what can be done to mitigate them. Even though these recommendations are given, I highly encourage you to consult your senior DBA before taking any steps.

The storage trends were a nice visualization for me. The snapshot shows the top IO consumers and orders them accordingly. Here you can see that the tempDB ldf file seems to be under contention. The data shows a sparkline trend of read/write latency.

DPA also has information on standard performance counters, grouped by resource. This is great basic information, and for a seasoned DBA it can be an awesome single interface to view it all from.

These are standard performance counters presented from the comfort of a web browser, so a DBA can watch what is happening inside the server. One can set thresholds for the various groupings and also configure alerts. In my opinion, these significantly help make a DBA proactive rather than reactive when a problem occurs.

Conclusion

If you are looking for detailed performance analysis, but lack the time and/or knowledge to decipher Wait Stats and server resource metrics, this tool will not disappoint you. It gives you pointers to work with the essential performance characteristics. Do let me know your experience working with Database Performance Analyzer; I would surely like to learn your perspective on using this tool.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Easy Functions of SQL Server – Notes from the Field #062

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains some easy and amazing functions of SQL Server. Kathi is an amazing instructor; she was the SQL author I read early in my career, and one of the reasons I love SQL Server is that her writing instilled a love for this technology in me. Today she brings up a topic relatively unknown to database experts. Read the experience of Kathi in her own words.


Have you ever tried to format a datetime value in T-SQL? I have seen some pretty crazy code that looked something like this:

DECLARE @date DATETIME = GETDATE();
SELECT CAST(YEAR(@date) AS CHAR(4)) + '/' +
RIGHT('0' + CAST(MONTH(@date) AS VARCHAR(2)),2) + '/' +
RIGHT('0' + CAST(DAY(@date) AS VARCHAR(2)),2) + ' ' +
RIGHT('0' + CAST(DATEPART(HOUR,@date) AS VARCHAR(2)),2) + ':' +
RIGHT('0' + CAST(DATEPART(MINUTE,@date) AS VARCHAR(2)),2);

Starting with SQL Server 2012, there is a much easier method: the FORMAT function. FORMAT can do the same thing as the previous code, but it is much easier to write. It is one of five functions added in SQL Server 2012 that I like to call The Easy Functions. The Easy Functions are listed in the table below.

The Easy Functions

  Category | Function | Definition
  Logical  | CHOOSE   | Select one expression from a list
  Logical  | IIF      | Inline IF: evaluate an expression and substitute one of two expressions for true or false
  String   | CONCAT   | Add strings together
  String   | FORMAT   | Format a string, including "C" for currency and "P" for percent
  Date     | EOMONTH  | Returns the last day of the month
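As a quick comparison with the date-formatting code at the start of this post, FORMAT can produce the same result in a single call (a minimal sketch):

DECLARE @date DATETIME = GETDATE();
SELECT FORMAT(@date, 'yyyy/MM/dd HH:mm');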

The following script demonstrates the logical functions.

USE AdventureWorks2014; --Or 2012
GO
SELECT  CustomerID, COUNT(*) AS OrderCount,
CHOOSE(COUNT(*), 'Bronze','Silver','Gold','Platinum') AS MemberLevel,
AVG(TotalDue) AS AvgOrder,
IIF(AVG(TotalDue) > 1000, 'High','Low') AS Priority
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The query returns the count of orders for several customers. A MemberLevel is assigned based on the number of orders the customer has placed, and a Priority is assigned based on the average order. If the average order is over $1000, then the customer has a high priority.

Take a look at the CHOOSE expression. The first argument is an integer, the count of the orders. After the order count, there is a list of values. The list of values is actually a 1-based array, and the first argument is an index into the array.

The IIF function requires three arguments. The first argument is an expression to check, the average order. The second argument is a value to return if the expression is TRUE. The third argument is a value to return for FALSE.

The next example demonstrates the string functions.

SELECT C.CustomerID, P.FirstName, P.MiddleName, P.LastName,
CONCAT(P.FirstName, ' ' + P.MiddleName, ' ', P.LastName) AS FullName,
FORMAT(SUM(TotalDue),'C') AS TotalSales,
FORMAT(SUM(TotalDue)/SUM(SUM(TotalDue)) OVER(),'P') AS PercentOfSales,
FORMAT(MIN(OrderDate),'yyyy.MM.dd') AS FirstOrderDate
FROM Sales.SalesOrderHeader AS SOH
JOIN Sales.Customer AS C ON SOH.CustomerID = C.CustomerID
JOIN Person.Person AS P ON P.BusinessEntityID = C.PersonID
WHERE C.CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY C.CustomerID, P.FirstName, P.MiddleName, P.LastName;

The query uses the CONCAT function to build a FullName column. The CONCAT function ignores NULL values. Notice that rows 3 and 4 returned the FullName even though the MiddleName values are NULL. To make sure an extra space doesn’t show up when the MiddleName is missing, the space is combined with MiddleName as one of the arguments. NULL added to a space is NULL, which will be ignored.

There are three examples of FORMAT. The first example formats TotalSales as currency, in this case USD. The next example, PercentOfSales, formats a complex expression comparing each customer’s sales to the overall total. It uses a window aggregate function to calculate the total sales for the query. (Window aggregates is a great topic for another post!) The final example formats the FirstOrderDate in a custom format.

Here is one last easy function: EOMONTH.

SELECT CustomerID, MIN(OrderDate) AS FirstOrderDate,
EOMONTH(MIN(OrderDate)) AS LastDayOfMonth,
EOMONTH(MIN(OrderDate),1) AS LastDayOfNextMonth
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (11060, 11062, 11065, 11068, 11070, 12165)
GROUP BY CustomerID;

The EOMONTH function returns the last day of the month for the supplied date argument. The second example, LastDayOfNextMonth, uses the optional parameter that adds another month.

I hope that using The Easy Functions will make your life easier!

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)