SQL SERVER – Quiz on knowing DATEPART and DATENAME Behaviors

Playing around with date and time datatypes can be loads of fun, pun intended. Most developers working with date and time datatypes feel it is a pain. I understand their sentiments, but I would like to let you know that it is not as painful as you think. If we understand how date, time, and their respective functions work inside SQL Server, then most of the tasks we take on become a breeze. Trust me on this. I get stumped by people from time to time with these small tricks, and then I get into learning why that happened. It is easier than you think.

If you participated in my previous quiz last month (SQL SERVER – Trivia – Days in a Year), then this will be even easier than you think. You will know the answer in a jiffy.

Remember, the winners will get a free monthly subscription to Pluralsight. Winners will be randomly picked at midnight on March 10th.

Part 1: Understanding DATENAME

The first part of this quiz is about understanding what these functions do. Look at the statements below and guess what the answer would be. Then go ahead and try the same inside SQL Server Management Studio.

-- The reason is there is no "Day" specified in the source datetime.
SELECT DATENAME(DAYOFYEAR, '12:12:12.123') [DayOfYear]
SELECT DATENAME(WEEKDAY, '12:12:12.123') [WeekDay]

Should the above code work? Is the output an error or values? If you got an error, can you spot the reason? If you got values, can you tell why you got those values?

Part 2: Understanding DATEPART

If you got the above correct, then this should be a breeze for sure. You will get the answer easily. Let me know how many of you got it right just by looking at the statements.

-- What about now?
SELECT DATEPART(weekday, '12:12:12.123') [WeekDay]
SELECT DATEPART(MONTH, '12:12:12.123') [MONTH]

Should the above code work? Is the output an error or values? If you got an error, can you spot the reason? If you got values, can you tell why you got those values? Is the result different from your Part 1 guess with DATENAME, or the same? Can you tell why?
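If you want to check your reasoning in a sandbox before posting a guess, here is a small hint on how to investigate (it is not the answer itself): look at what the time-only string becomes once it is explicitly converted to a datetime.

-- See what the time-only literal turns into after a conversion to datetime
SELECT CAST('12:12:12.123' AS DATETIME) AS [ConvertedValue];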

If you got this far, then great. I think you got the basics covered.

Part 3: Bonus Question

I thought it would be interesting to add a twist to the tale and get your guess on what the output should be for the below queries.

-- First
SELECT DATENAME(dayofyear, '1') [DayOfYear]
-- Second
SELECT DATENAME(dayofyear, '1-1') [DayOfYear]
-- Third
SELECT DATENAME(dayofyear, '2:2') [DayOfYear]
-- Fourth
SELECT DATENAME(dayofyear, '23:25') [DayOfYear]
-- Fifth
SELECT DATENAME(dayofyear, '24:25') [DayOfYear]

Now which of the above queries will run without any error?

Let me know all your guesses and the reasons behind them via the comments. Based on the response, I will look at giving away a goodie to one lucky winner.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script to find SQL Startup Account using WMIC

Friends are the lifeline that keeps life interesting. I have various friends' circles that range from family, acquaintances, and work, and the most exciting circle is the technical one. Most of the time, the tech circle is what fuels the posts that land up here. I was talking to one of my friends and he made a passing statement without much detail. It stayed in my mind for a long time to research, and am I glad I finally figured it out. I learned a new way to find details about the SQL Server startup account – using WMIC. I did some more research on this and am sharing it with my blog readers.

What is WMIC? It stands for Windows Management Instrumentation Command-line. In simple words, WMIC is a utility which allows us to interact with WMI from a WMI-aware command-line shell. All WMI objects and their properties, including their methods, are accessible through the shell, which makes WMIC a very powerful console.

WMIC has an alias for service application management. To get help, we can use /? as shown below:

We can use a where clause and get to retrieve details about services related to SQL Server. Here is the naming convention, which has been consistent since the early days:

SQL Component | Display Name | Service Name
SQL Server – Default Instance | SQL Server (MSSQLSERVER) | MSSQLServer
SQL Server – Named Instance | SQL Server (NameOfInstance) | MSSQL$NameOfInstance
SQL Server Agent – Default Instance | SQL Server Agent (MSSQLSERVER) | SQLSERVERAGENT
SQL Server Agent – Named Instance | SQL Server Agent (NameOfInstance) | SQLAgent$NameOfInstance
SQL Server Browser | SQL Server Browser | SQLBrowser
SQL Server Full-text – Default Instance | SQL Full-text Filter Daemon Launcher (MSSQLSERVER) | MSSQLFDLauncher
SQL Server Full-text – Named Instance | SQL Full-text Filter Daemon Launcher (NameOfInstance) | MSSQLFDLauncher$NameOfInstance

You can add more to the list for Reporting Services, Analysis Services, etc.

Here is the WMIC query which lists the services using the LIKE operator. If you are familiar with T-SQL, you would know that % is the wildcard character.

wmic service where "name like '%MSSQL%' or name like '%SQL%Agent%'" get Name, StartName

and here is the output
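If you prefer to stay inside SQL Server rather than the command prompt, a similar result can be pulled with T-SQL. Here is a minimal sketch using the sys.dm_server_services DMV (available from SQL Server 2008 R2 SP1 onwards); the exact columns available can vary slightly between versions.

-- List SQL Server related services along with their startup accounts
SELECT servicename,        -- name of the service (engine, agent, full-text, etc.)
       service_account,    -- account the service starts under
       startup_type_desc,  -- Automatic / Manual / Disabled
       status_desc         -- Running / Stopped
FROM sys.dm_server_services;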

Did you find it interesting? Do let me know what you find in your environments. The learning never stops here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using MaxTransferSize parameter with SQL Server Backups

Of late I have been writing a lot around SQL Server backups and administration topics, and a lot of my readers have been enjoying some of the deep conversations I have with people in the industry. As I always say, most of these blog posts are inspired by someone asking me a question, me doing the research, and then reaching out to them with a solution. This blog post is no different in that respect.

Recently I was fortunate to be in Delhi for a session with one of our customers. As many have seen me around, one of the DBAs came over and asked me an interesting question. The organization had bought new EMC storage, and one of the recommendations was to use a transfer size of 256 KB. My first instinct was, "Is it? I didn't know that." I said I would get back to them after I reached the hotel and did my bit of research.

After reading a fair amount of documentation and recommendations on various sites, I got to know that the recommended value of the MaxTransferSize parameter for SQL Server backups on EMC storage systems is 256 KB, while by default SQL Server uses a value of 1,024 KB (1 MB). To promote storage array stability and performance, full database, differential, and transaction log backups should be performed with MAXTRANSFERSIZE specified as 262,144 bytes (256 KB).

Well, that was easier said than done. I now wanted to know how this could be done, because the DBA wanted to automate it through a script inside a SQL Server Agent job. I turned to MSDN for help, and as always, it did not disappoint.

Below are some sample T-SQL Backup commands that are optimized for this requirement:

/* Full database backup */
BACKUP DATABASE AdventureWorks TO DISK = 'C:\MS SQL\Backup Files\AdventureWorks.bak'
WITH INIT, MAXTRANSFERSIZE = 262144, STATS = 1;

/* Differential database backup */
BACKUP DATABASE AdventureWorks TO DISK = 'C:\MS SQL\Backup Files\AdventureWorks.diff'
WITH DIFFERENTIAL, INIT, MAXTRANSFERSIZE = 262144, STATS = 1;

/* Transaction log backup */
BACKUP LOG AdventureWorks TO DISK = 'C:\MS SQL\Backup Files\AdventureWorks.trn'
WITH INIT, MAXTRANSFERSIZE = 262144, STATS = 1;
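Since the DBA wanted to drop this into a SQL Server Agent job step, here is a minimal sketch of how the same command could be parameterized with a date-stamped file name. The database name and folder are placeholders taken from the examples above; adjust them for your environment.

/* A date-stamped full backup suitable for a SQL Server Agent job step.
   The path and database name are placeholders. */
DECLARE @BackupFile NVARCHAR(260);
SET @BackupFile = N'C:\MS SQL\Backup Files\AdventureWorks_'
    + CONVERT(NVARCHAR(8), GETDATE(), 112) + N'.bak';
BACKUP DATABASE AdventureWorks
TO DISK = @BackupFile
WITH INIT, MAXTRANSFERSIZE = 262144, STATS = 10;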

To my blog readers, I would like to know if you have ever used these settings in your environments. What type of storage are you using, and what values are you using for the MAXTRANSFERSIZE parameter? Have you seen any performance improvements while using it? Have you seen any difference in behavior while using these parameters? Do let me know via the comments section.

Finally, as I wrap up – I always talk about the power of using T-SQL over the UI, and when I write such blog posts, this just gets better. I need to learn the fine print of these T-SQL commands and all their parameters in the future. Rest assured, you are going to see more of these on this blog for sure.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #013 – Stored Procedure and Its Advantages – How to Create Stored Procedure

There are some questions which are so old that they should not be asked in interviews anymore. Here is one of the questions I have spotted so many times in interviews that, when a co-interviewer asks it to the candidate, I often feel bored (well, I was caught yawning last time). Here is the question:

Question: "How do you create a stored procedure, and what are the advantages of stored procedures?"

Answer: Well, as I said, I find this question age old, so if you search on the internet you will find many different answers. Just make sure you read the answers which refer to the latest version of SQL Server, as stored procedures have changed a bit since their inception.

Here are a few of the advantages of stored procedures:

  • Execution plan retention and reuse
  • Query auto-parameterization
  • Encapsulation of business rules and policies
  • Application modularization
  • Sharing of application logic between applications
  • Access to database objects that is both secure and uniform
  • Consistent, safe data modification
  • Network bandwidth conservation
  • Support for automatic execution at system start-up
  • Enhanced hardware and software capabilities
  • Improved security
  • Reduced development cost and increased reliability
  • Centralized security, administration, and maintenance for common routines

You can read the original article over here.

If you want to create a stored procedure, you can do so with the following simple script:

CREATE PROCEDURE MyFirstSP
AS
SELECT GETDATE();
GO

You can execute the above stored procedure with the following script:

EXEC MyFirstSP
GO

The above stored procedure will return the current date and time as its result.
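To illustrate a couple of the advantages listed above (parameterization and encapsulation of logic), here is a slightly extended sketch. The table and procedure names are made up purely for illustration and are not part of any sample database.

-- A hypothetical parameterized procedure; the table dbo.Orders is for illustration only
CREATE PROCEDURE GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
GO
-- Execute it with a parameter value
EXEC GetOrdersByCustomer @CustomerID = 42;
GO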

 

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – System procedures to know SQL Server Version

There are several ways to find out the version of SQL Server. But did you know that there are two system procedures through which you can get the version? They are sp_server_info and xp_msver.

EXEC sp_SERVER_INFO

If you execute the above, you get a result set with information about the server. The second row will show you the version number.

Result (excerpt):

attribute_id | attribute_name | attribute_value
1 | DBMS_NAME | Microsoft SQL Server
2 | DBMS_VER | Microsoft SQL Server 2012 – 11.0.3000.0
...
EXEC xp_MSVER

This is very similar to sp_server_info and will also show you the version number (second row in the result set).

Result (excerpt):

Index | Name | Internal_Value | Character_Value
1 | ProductName | NULL | Microsoft SQL Server
2 | ProductVersion | 720896 | 11.0.3153.0
...
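For completeness, the more common ways to check the version work alongside these procedures as well; here is a quick sketch for comparison.

-- The usual ways to check the version
SELECT @@VERSION;
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('Edition') AS Edition;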

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Find the Growth Size for All files in All Databases

I am a big-time advocate of some common best practices that I see in the industry. These have been ingrained into every session that I do, be it on SQL Server, MySQL, or any other database. Though I work extensively on SQL Server and from time to time on other databases, the concepts and basics have remained the same across all these variants. One of the practices that I always advocate is the use of the "Auto Growth" setting inside SQL Server. The setting is good, but I always say it is there as a safety net. In reality this setting should never be triggered, and as a DBA it is important that we know when the files are getting expanded.

As a DBA, you need to build capacity in your environment whenever these growth events hit the server. For example, a 10% growth does not look alarming at first, but it can be dangerous: 10% of a 1 GB database file is a 100 MB growth, while a 100 GB database file will grow by 10 GB. As DBAs, we are required to be cognizant of the setting and know when the growth happens.

Not very long ago, one of my blog readers sent me the below email:

Hi Pinal,
My apologies for sending you a direct email, but this was really urgent. Do you know a quick way to find the growth size set for all the databases on a SQL instance? I was able to figure out that I can use sp_helpfile. Unfortunately, it only gives details about the files of the database in which I'm running the stored procedure.

Any quick reply would be appreciated. Thanks in advance!

As always, I searched for it at search.sqlauthority.com and was surprised to see that there were no results. Here is the query which I came up with.

SELECT 'Database Name' = DB_NAME(database_id),
       'FileName' = name,
       file_id,
       'size' = CONVERT(NVARCHAR(15), CONVERT(BIGINT, size) * 8) + N' KB',
       'maxsize' = (CASE max_size
                         WHEN -1 THEN N'Unlimited'
                         ELSE CONVERT(NVARCHAR(15), CONVERT(BIGINT, max_size) * 8) + N' KB'
                    END),
       'growth' = (CASE is_percent_growth
                        WHEN 1 THEN CONVERT(NVARCHAR(15), growth) + N'%'
                        ELSE CONVERT(NVARCHAR(15), CONVERT(BIGINT, growth) * 8) + N' KB'
                   END),
       'type_desc' = type_desc
FROM sys.master_files
ORDER BY database_id
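Tying back to the 10% example above, here is a small variation of the same query that lists only the files still set to percentage-based autogrowth, which are usually the ones worth revisiting first. This is just a sketch built on the query above.

-- Files still configured with percentage-based autogrowth
SELECT DB_NAME(database_id) AS [Database Name],
       name AS [FileName],
       CONVERT(NVARCHAR(15), growth) + N'%' AS [growth],
       type_desc
FROM sys.master_files
WHERE is_percent_growth = 1
ORDER BY database_id;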

 

If you find a better way, please share it via the comments. The best way to learn is to learn from each other. Do let me know if you have ever encountered this situation yourself.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Mistake to Avoid: Duplicate and Overlapping Indexes – Notes from the Field #073

[Note from Pinal]: In this episode of the Notes from the Field series, database expert Kathi Kellenberger explains indexes and their impact. We often believe that indexes will improve the performance of a query, but that is not always true. There are cases when indexes can reduce performance as well. Read Kathi's experience in her own words.


Having the correct indexes in place can improve the performance of your queries immensely. Knowing just what the correct indexes are, however, is not an easy task. SQL Server provides tools to help you create indexes based on a specific query or the workload that has run since the last restart of the instance. While these are handy tools, you must take the guidance they give with the time-honored grain of salt.

Having done my share of index tuning over the past few years, I have seen the consequences of blindly creating indexes based on the suggestions of the Database Engine Tuning Advisor, the missing index DMV, or the missing index recommendation found in the execution plan. It’s easy to copy a slowly running query into the Query window of SSMS and view the execution plan only to find that an index can be created to drastically improve the query performance. The problem is that this recommendation does not consider indexes that are already in place. By creating every index that these tools suggest, you will end up with too many indexes and many that overlap. At first you may not think this is a problem: more is better, right?

Having too many nonclustered indexes can cause numerous problems. First, unneeded indexes take up space. This impacts storage costs, backup and recovery times, and index maintenance times. Indexes must be kept up to date whenever data changes, so the performance of inserts, updates, and deletes is impacted by nonclustered indexes. Have you ever heard of a SELECT query that runs more slowly because there are too many indexes on a table? I have seen it happen. When the optimizer comes up with a plan for a query, it must consider the available indexes, the three join types, the order of joins, and so on, and the number of plan choices increases exponentially. The optimizer won't take long to come up with a plan, however, and will sometimes stop with a "good enough" plan. It's possible that the optimizer didn't have enough time to figure out the best index because there were too many to consider.

To avoid creating unnecessary indexes, always take a look at the existing indexes on the table when you think you should add a new one. Instead of creating a brand new index, maybe you can just add a key column or included columns to an existing index. Following this practice will help keep the number of indexes from spiraling out of control. Another thing to watch out for is the cluster key. The cluster key is included in every nonclustered index. It's there to locate rows in the clustered index, but you won't see it in the nonclustered index definition. The index tools will often tell you to add the cluster key as an included column, but that is not necessary.

By following all of the index recommendations from the tools without considering other indexes, you will end up with tables that resemble the following:

CREATE TABLE TestIndexes(Col1 INT, col2 VARCHAR(10), Col3 DATE, Col4 BIT);
CREATE INDEX ix_TestIndexes_Col1 ON dbo.TestIndexes (col1);
CREATE INDEX ix_TestIndexes_Col1_Col2 ON dbo.TestIndexes (col1, col2);
CREATE INDEX ix_TestIndexes_Col1_Col2_Col3 ON dbo.TestIndexes (col1, col2, Col3);
CREATE INDEX ix_TestIndexes_Col2 ON dbo.TestIndexes (col2);
CREATE INDEX ix_TestIndexes_Col1_includes1 ON dbo.TestIndexes (col1) INCLUDE(Col4);
CREATE INDEX ix_TestIndexes_Col1_includes2 ON dbo.TestIndexes (col1) INCLUDE(Col2);

You may think that this is a contrived example, but I see this pattern all the time. How do you find the overlapping indexes that need to be cleaned up? There are many scripts available, but here is a simple script that just looks at the first two index keys for duplicates:

WITH IndexColumns AS (
SELECT '[' + s.Name + '].[' + T.Name + ']' AS TableName,
i.name AS IndexName,  C.name AS ColumnName, i.index_id,ic.index_column_id,
COUNT(*) OVER(PARTITION BY t.OBJECT_ID, i.index_id) AS ColCount
FROM sys.schemas AS s
JOIN sys.tables AS t ON t.schema_id = s.schema_id
JOIN sys.indexes AS i ON I.OBJECT_ID = T.OBJECT_ID
JOIN sys.index_columns AS IC ON  IC.OBJECT_ID = I.OBJECT_ID
AND IC.index_id = I.index_id
JOIN sys.columns AS C ON  C.OBJECT_ID = IC.OBJECT_ID
AND C.column_id = IC.column_id
WHERE IC.is_included_column = 0
)
SELECT DISTINCT a.TableName, a.IndexName AS Index1, b.IndexName AS Index2
FROM IndexColumns AS a
JOIN IndexColumns AS b ON b.TableName = a.TableName
AND b.IndexName <> a.IndexName
AND b.index_column_id = a.index_column_id
AND  b.ColumnName = a.ColumnName
AND a.index_column_id < 3
AND a.index_id < b.index_id
AND a.ColCount <= B.ColCount
ORDER BY a.TableName, a.IndexName;
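Before dropping or consolidating anything, it can also help to confirm how the candidate indexes are actually being used. Here is a hedged sketch using sys.dm_db_index_usage_stats; keep in mind that its counters reset when the instance restarts, so treat the numbers as indicative only.

-- How often each nonclustered index on TestIndexes is read versus written
SELECT i.name AS IndexName,
       us.user_seeks, us.user_scans, us.user_lookups,   -- read activity
       us.user_updates                                   -- write (maintenance) cost
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id = i.object_id
      AND us.index_id = i.index_id
      AND us.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.TestIndexes')
  AND i.type_desc = 'NONCLUSTERED';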

Once you find indexes that are subsets or even exact duplicates of other indexes, you should manually review each match and figure out what can be deleted or consolidated. Just be sure to test your changes before implementing them in production. After reviewing the information, I came up with the following changes:

DROP INDEX ix_TestIndexes_Col1 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_Col2 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_Col2_Col3 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_includes1 ON TestIndexes;
DROP INDEX ix_TestIndexes_Col1_includes2 ON TestIndexes;
CREATE INDEX ix_TestIndexes_Col1_Col2_Col3_includes ON TestIndexes
(Col1, Col2, Col3) INCLUDE (Col4);

Now there are two indexes, neither of them overlaps, and I haven't lost anything that was in place before.

I always say that index tuning is both a science and an art. Now when you use the science of missing index information you will know how to apply the art of avoiding duplicates.

If you want to get started with BIML with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Knowing Nested Transactions Behavior with SQL Server

SQL Server is an interesting database system, and almost every single day I get to learn something new about it. This blog has been my way of sharing these learnings. Every time I write something new, my blog readers are kind enough to push me with interesting comments from which I learn a lot more. I am a firm believer that learning gets enhanced every time I share something.

This blog is an extension of what I wrote a couple of weeks back in SQL SERVER – Error – Msg 6401, Level 16: Cannot roll back Transaction. One of my blog readers asked me: doesn't SQL Server create or start a new transaction whenever we use these nested transactions? This was a valid and understandable question. I was thinking of a way to show that SQL Server will not create any new transaction even when the transactions are nested. Finally, I came up with this script to show how it works.

We will create the script in the following sequence:

  • Create our database and a dummy table
  • Create a transaction and enter some value into the table
  • Create a nested transaction and enter some value into the table
  • Look at the Transaction Log and Transaction ID to see if they are any different
  • Create a ROLLBACK operation and show what is entered in TLogs

-- Create our database for testing
USE MASTER;
GO
CREATE DATABASE TransactionsDemo;
GO
USE TransactionsDemo;
GO
-- Create a table for testing
CREATE TABLE tbl_SQLAuth (Col1 INT);
GO
SET NOCOUNT ON;
GO
ALTER DATABASE [TransactionsDemo] SET RECOVERY SIMPLE WITH NO_WAIT
GO
CHECKPOINT

Let us now insert data into our table using transactions.

-- Create an explicit transaction and insert single row
BEGIN TRAN OuterTransaction;
GO
INSERT INTO tbl_SQLAuth VALUES (1);
GO
-- Create an explicit so called nested transaction and insert single row
BEGIN TRAN InnerTransaction;
GO
INSERT INTO tbl_SQLAuth VALUES (2);
GO
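As a quick aside (a small check I am adding here for illustration), you can also look at @@TRANCOUNT at this point. It reports 2 because the counter is incremented for every BEGIN TRAN, yet as the log records below will show, there is still only one real transaction.

-- The nesting level is 2, but this is only a counter,
-- not a second, independent transaction
SELECT @@TRANCOUNT AS [CurrentTranCount];
GO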

At this point, if you check the transaction log records, they show something interesting. The log can be queried using the undocumented function fn_dblog().

-- Look at the TLogs for entries now.
SELECT Operation, [Transaction ID], Description FROM fn_dblog(NULL, NULL)

If you check the Transaction ID column, you can see that the values for both records point to the same transaction ID. So no new transaction is created for the nested BEGIN TRAN.

To extend our understanding, let us try to roll back the transaction. This will roll back both rows, with compensating records written to the transaction log. So let us see what happens.

-- Rollback the OuterTransaction
ROLLBACK TRAN OuterTransaction;
GO

Let us look at the TLog entries now.

As you can see, both rows have been rolled back, with a COMPENSATION record written to the transaction log for each of the rows that were inserted. The final ABORT_XACT record confirms the completion of our transaction.

Have you ever explored transaction behavior like this? It is sometimes fun when someone pushes us to learn the basics the hard way. I hope you liked this blog and learned something new today. Let us learn together.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Whenever someone reports a weird error in my blog comments or sends an email about it, I always ask them to share the SQL Server ERRORLOG file. There have been many occasions where I needed to guide them to the location of the ERRORLOG file generated by SQL Server. Most DBAs are intelligent and know some of these methods, but this is my attempt to share my learnings.

I decided to write this blog so that I can reuse it rather than sending the steps every time. At this point I must point out that even though the name says ERRORLOG, it contains not only errors but informational messages as well. Here are various ways to find the location of the SQL Server ERRORLOG:

A) If SQL Server is running and we are able to connect to it, then we can simply connect and run xp_readerrorlog.

USE MASTER
GO
EXEC xp_readerrorlog 0, 1, N'Logging SQL Server messages in file'
GO

If you can't remember the above command, just run xp_readerrorlog and find the line which says "Logging SQL Server messages".

B) If we are not able to connect to SQL Server, then we should use SQL Server Configuration Manager. We need to find the startup parameter starting with -e. Below is the place in SQL Server Configuration Manager (SQL Server 2012 onwards) where we can see it.

C) If you don't want to use either of those ways, here is a little-known secret. The ERRORLOG location is one of the startup parameters, and its value is stored in a registry key; here is the key on my server. SQLArg1 shows the parameter starting with -e, which points to the ERRORLOG file.

Here is the key which I highlighted in image: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.SQL2014\MSSQLServer\Parameters\

Note that "MSSQL12.SQL2014" would vary based on the SQL Server version and instance name that is installed. Here is a quick table for version reference:

SQL Server Version | Key Name
SQL Server 2008 | MSSQL10
SQL Server 2008 R2 | MSSQL10_50
SQL Server 2012 | MSSQL11
SQL Server 2014 | MSSQL12

In SQL Server 2005, we would see a key name in the format MSSQL.n (like MSSQL.1), where the number n varies based on the instance ID.

Here is the key where we can get the mapping of instance ID to directory.

In the above image, you can see that this computer has a default instance (instance name MSSQLSERVER) of SQL Server 2012 and a named instance (instance name SQL2014) of SQL Server 2014.
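If the instance is up and you can connect, the same startup parameters can also be read through T-SQL. Here is a minimal sketch using the sys.dm_server_registry DMV (available from SQL Server 2008 R2 SP1 onwards); it typically needs VIEW SERVER STATE permission.

-- Read the startup parameters from the registry via T-SQL;
-- the -e entry points to the ERRORLOG file
SELECT value_name, value_data
FROM sys.dm_server_registry
WHERE value_name LIKE N'SQLArg%'
AND value_data LIKE N'-e%';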

In case you are contacting me about any error, get the ERRORLOG using the steps in this blog.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Trivia – Days in a Year

Every time I write a blog, I tend to get back to most of the older posts via search. Many of you ask me questions, and I do a few simple searches to make sure I am able to address them. Yesterday I was doing exactly that for an interesting question. Now back to the question that got me there: one of my friends said he was asked in an interview how to efficiently find the number of days in a given year.

The question was interesting, but what baffled me was: do people still ask such questions? It was a challenge that I wanted to share with you. I wanted to take the challenge, and I immediately got to my SQL Server machine to solve it. The instinctive approach was to use some basic methods involving DATEADD and DATEPART.

The first solution that I got was:

--This method will work on SQL Server 2005 and above.
DECLARE @year AS INT
SET @year = 2012
SELECT DATEPART(dy, DATEADD(YEAR, @year - 1899, 0) - 1)
AS [TOTAL NO OF DAYS]
GO

Immediately I sent the above option. Then he quizzed me, asking whether I could use some of the new date and time functions that were introduced with SQL Server 2012. Now, I was not prepared for the googly that came my way, but after getting my head around it, I came up with the below query.

--This technique will work in SQL Server 2012 and above.
DECLARE @year AS INT
SET @year = 2012
SELECT DATEPART(dy, DATEFROMPARTS(@year, 12, 31))
AS [TOTAL NO OF DAYS]

Woot!!! That was not as simple as it looked, as I had to search my blog for ideas.
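Purely as a point of comparison before the quiz below (a sketch, and not necessarily the one with the fewest functions), the same number can also be derived by taking the difference between the first day of the year and the first day of the following year. Because of DATEFROMPARTS, this also needs SQL Server 2012 or later.

-- Days in a year as the gap between 1 January of @year and 1 January of @year + 1
DECLARE @year AS INT
SET @year = 2012
SELECT DATEDIFF(DAY, DATEFROMPARTS(@year, 1, 1), DATEFROMPARTS(@year + 1, 1, 1))
AS [TOTAL NO OF DAYS]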

Quiz: Can you come up with a solution that involves fewer functions? Can you use some of the new SQL Server 2012 date functions in your solution? Let me know via comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)