Interview Question of the Week #016 – How to Take Database Offline

Here is a question I received at a recent conference where I was presenting on database technology.

Question: What does it mean to take a database offline, and how do you do it?

Answer: Taking a database offline means it will no longer be available for database operations. Here is how you can take your database offline.

-- Take the Database Offline
ALTER DATABASE [myDB] SET OFFLINE WITH ROLLBACK IMMEDIATE
GO

If you have previously taken a database offline, you can bring it back online by running the following command:

-- Take the Database Online
ALTER DATABASE [myDB] SET ONLINE
GO
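
To verify the state either way, you can query the sys.databases catalog view; here is a minimal check using the same example database name as above:

-- Check the current state of the database (ONLINE, OFFLINE, RESTORING, etc.)
SELECT name, state_desc
FROM sys.databases
WHERE name = 'myDB'
GO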

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Create Login with SID – Way to Synchronize Logins on Secondary Server

How many times have you had a problem where you deployed a high availability solution, but after a failover the application is not able to use the database and gets "login failed for user"? Note that this issue happens only with SQL logins, not Windows logins. You will understand the reason shortly. Here are the situations where you might face the login failure:

  1. Log Shipping (reading from a stand-by secondary database)
  2. AlwaysOn Availability Groups (reading from a read-only secondary replica)
  3. Database Mirroring (after failover)

Let’s assume that we have SQLServerA and SQLServerB, each hosting one database, AppDB. As per the architecture of users and logins in SQL Server, a user in the database is mapped to a login. For simplicity, let’s say both the login and the user are named AppUser, and the user is available in AppDB.

USE AppDB
GO
SELECT name, sid FROM sys.sysusers WHERE name = 'AppUser'
GO
USE master
GO
SELECT name, sid FROM sys.sql_logins WHERE name = 'AppUser'
GO

As we can see, the SIDs match, which is why the user is mapped to the login.

Now, if we create an AlwaysOn Availability Group or configure database mirroring or log shipping, we would not be able to map the user using sp_change_users_login because the secondary database is not writeable; it is in read-only mode.

Here is what we would see on the secondary server if the login is created using the UI.

The solution is to drop and recreate the login with the known SID, which is stored in sys.sysusers in the database.

CREATE LOGIN AppUser WITH PASSWORD = 'password@123', SID = 0x59B662112A43D24585BFE2BF80D9BE19

Once this is done, the application can connect to the secondary database. Hopefully this is something which will help you in fixing the issue when there are orphaned users in a database.
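
A quick way to spot such orphaned users up front is the classic (now deprecated, but still handy) sp_change_users_login procedure; here is a minimal sketch, run in the affected database:

USE AppDB
GO
-- Lists database users whose SIDs have no matching server login (orphaned users)
EXEC sp_change_users_login 'Report'
GO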

Have you ever used this syntax? Leave a comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Presenting 3 Technology Session at GIDS 2015

The Great Indian Developer Summit is my favorite technology event and I have been extremely happy to present technology sessions there for over 5 years. Just like every year, this year I will be presenting three technology sessions on SQL Server 2014. This time the event is at two locations: the first one is Bangalore and the second one is Hyderabad.

If you are able to attend this event in person, do show up at my sessions as I will have some goodies to share. Here is the link to the website of the event. If you are not going to be at the event, I suggest you sign up for my newsletter, as I will be sending out all the scripts and demos for this event over email. The event organizer is not planning to record the sessions.

Performance in 60 Seconds – SQL Tricks Everybody MUST Know

Date and Time: APRIL 21, 2015 14:00-15:00
Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Sir C.V.Raman Avenue, Bangalore, India

Data and databases are a very important aspect of application development for businesses. Developers often come across situations where they face a slow server response, even though their hardware specifications are above par. This session is for all the developers who want their server to perform at blazing fast speed, but want to invest very little time to make it happen. We will go over various database tricks which require absolutely no time to master and practically no SQL coding at all. After attending this session, developers will only need 60 seconds to improve the performance of their database server in their implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session.

Troubleshooting CPU Performance Issue for SQL Developers

Date and Time: APRIL 21, 2015 17:30-18:30
Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Sir C.V.Raman Avenue, Bangalore, India

Developers are in the most challenging situations when they see the CPU running at 100%. There are many solutions to this situation, but there is very little time to implement them. In this critical situation developers need a sure solution which gives stability to their system and buys more time to troubleshoot the problem. Many believe performance tuning and troubleshooting is an art which has been lost in time. The truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources which, when bottlenecked, create performance problems: CPU, IO, and memory. In this session we will focus on some of the common performance issues and their resolution. If time permits we will cover other performance related tips and tricks. At the end of the session attendees will have clear ideas and action items regarding what to do when facing any of the above resource-intensive scenarios.

SQL Server 2014 – A 60 Minutes Crash Course

Date and Time: APRIL 25, 2015 09:55-10:55
Location: Hyderabad Marriott Hotel & Convention Centre, Tank Bund Road, Opposite Hussain Sagar Lake, Hyderabad, India

Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2014 has introduced many new features. In this 60 minute session we will be learning quite a few of them. Here is a glimpse of the features we will cover:

  • Live plans for long running queries
  • Transaction durability and its impact on queries
  • The new cardinality estimator for optimal performance
  • In-Memory OLTP optimizations for superior query performance
  • Resource Governor and IO enhancements
  • Columnstore indexes and performance tuning
  • And many more tricks and tips

Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session. This will be the most productive hour for any developer who wants to quickly jump start with SQL Server 2014 and its new features.

Remember, do not worry if you can’t attend the event. Just subscribe to the newsletter and I will share all the scripts and slides over email right after the event.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Walking the Table Hierarchy in Microsoft SQL Server Database – Notes from the Field #076

[Note from Pinal]: This is the 76th episode of the Notes from the Field series. Hierarchy is one of the most important concepts in SQL Server, but there are not many clear tutorials for it. I have often observed that this simple concept is ignored or poorly handled due to a lack of understanding.

In this episode of the Notes from the Field series, database expert Kevin Hazzard explains walking the table hierarchy in a Microsoft SQL Server database. Read the experience of Kevin in his own words.


When you need to manage a set of tables in Microsoft SQL Server, it’s good to know the required order of operations. The order could be hard-coded into the process but such approaches tend to fail when the database schema evolves. Instead, I prefer to use the catalog view named [sys].[foreign_keys] to discover the relationships between tables dynamically. Long ago, I wrote a function called [LoadLevels] that I’ve used in hundreds of processes to make them reusable and more resilient. The code for that function is shown in Listing 1:

-- ==========================================================================
-- Description: Get the load levels by tracing foreign keys in the database.
-- License:     Creative Commons (Free / Public Domain)
-- Rights:      This work (Linchpin People LLC Database Load Levels Function,
--              by W. Kevin Hazzard), identified by Linchpin People LLC, is
--              free of known copyright restrictions.
-- Warranties:  This code comes with no implicit or explicit warranties.
--              Linchpin People LLC and W. Kevin Hazzard are not responsible
--              for the use of this work or its derivatives.
-- ==========================================================================
CREATE FUNCTION [dbo].[LoadLevels]()
RETURNS @results TABLE
(
    [SchemaName] SYSNAME,
    [TableName] SYSNAME,
    [LoadLevel] INT
)
AS
BEGIN
    WITH
    [key_info] AS
    (
        SELECT
            [parent_object_id] AS [from_table_id],
            [referenced_object_id] AS [to_table_id]
        FROM [sys].[foreign_keys]
        WHERE
            [parent_object_id] <> [referenced_object_id]
            AND [is_disabled] = 0
    ),
    [level_info] AS
    (
        SELECT -- anchor part
            [st].[object_id] AS [to_table_id],
            0 AS [LoadLevel]
        FROM [sys].[tables] AS [st]
        LEFT OUTER JOIN [key_info] AS [ki] ON
            [st].[object_id] = [ki].[from_table_id]
        WHERE [ki].[from_table_id] IS NULL
        UNION ALL
        SELECT -- recursive part
            [ki].[from_table_id],
            [li].[LoadLevel] + 1
        FROM [key_info] AS [ki]
        INNER JOIN [level_info] AS [li] ON
            [ki].[to_table_id] = [li].[to_table_id]
    )
    INSERT @results
    SELECT
        OBJECT_SCHEMA_NAME([to_table_id]) AS [SchemaName],
        OBJECT_NAME([to_table_id]) AS [TableName],
        MAX([LoadLevel]) AS [LoadLevel]
    FROM [level_info]
    GROUP BY [to_table_id];
    RETURN
END

The [LoadLevels] function walks through the table relationships in the database to discover how they’re connected to one another. As the function moves from one relationship to the next, it records the levels where they exist in the hierarchy. A partial output of the function as executed against Microsoft’s AdventureWorks2014 sample database is shown in Figure 1.
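
If you want to reproduce that output yourself, a quick illustrative query over the function is all it takes:

-- Show the most dependent tables first
SELECT [SchemaName], [TableName], [LoadLevel]
FROM [dbo].[LoadLevels]()
ORDER BY [LoadLevel] DESC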

Ordering to show the highest load levels first, notice that the most dependent table in the AdventureWorks2014 database is [Sales].[SalesOrderDetail]. Since the load levels are zero-based in the function output, that table is eight levels high in the hierarchy. In other words, if I were developing an Extract, Transform & Load (ETL) system for [Sales].[SalesOrderDetail], there are at least seven other tables that need to be loaded before it. For all 71 tables in the AdventureWorks2014 database, the function reveals some interesting facts about the load order:

  • Level 0 – 25 tables, these can be loaded first
  • Level 1 – 8 tables, these can be loaded after level 0
  • Level 2 – 8 tables, these can be loaded after level 1
  • Level 3 – 19 tables, …
  • Level 4 – 7 tables
  • Level 5 – 1 table
  • Level 6 – 1 table
  • Level 7 – 2 tables, these must be loaded last

The [LoadLevels] function uses two Common Table Expressions (CTEs) to do its work. The first one, called [key_info], is non-recursive and gathers just the foreign keys in the database that aren’t self-referencing and aren’t disabled. The second CTE, called [level_info], is recursive. It starts by left joining the tables in the database to the foreign keys from the first CTE, picking out just those tables that have no dependencies. For the AdventureWorks2014 database, for example, these would be the 25 tables at level zero (0).

Then the recursion begins by joining the output from the previous iteration back to the key information. This time however, the focus is on the target of each foreign key. Whenever matches are found, the reference level is incremented by one to indicate the layer of recursion where the relationship was discovered. Finally, the results are harvested from the [level_info] CTE by grouping the table object identifiers, resolving the schema and table names, and picking off the maximum load level discovered for each entity.

The reason for grouping and selecting the maximum load level for any table becomes clear if you remove the GROUP BY clause and the MAX() operator from the code. Doing that reveals every foreign key relationship in the database. So for example, in the AdventureWorks2014 database, the [Sales].[SalesOrderDetail] table appears in 22 different relationships, ranging from three levels high in the hierarchy to eight levels high, output as [LoadLevel] 7 in Figure 1. By grouping and selecting the maximum level for any table, I’m certain to avoid loading tables too early in my dynamic ETL processes.

In summary, you can use the [LoadLevels] function to identify dependencies enforced by foreign key constraints between tables in your database. This information is very useful when developing a process to copy this data to another database while preserving the referential integrity of the source data.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding What Policies Are Enabled on Our Databases

When I wrote about Policy Management last week (SQL SERVER – Introduction to Policy Management), little did I know I would get queries from blog readers on the basics. As I have always said, Policy Management is an underappreciated capability inside SQL Server. In one of the mails, I was asked: “How do I know which policy is enabled on my server?” Before answering with a simple query against the metadata, let me walk through in words what happens.

When the SQL Server policy engine executes a policy, it determines which objects to evaluate against the policy’s condition. It uses the target set and target set level data to determine this. The information can be viewed in the syspolicy_target_sets and syspolicy_target_set_levels views. A target set has data for the target type (for example, [Database]) and the target skeleton (for example, [Server/Database]). This target skeleton includes all databases on the server. The target set can be further narrowed with a filter, and this filter can be another condition. This is the data stored in the target set level. Below is a query to help see this metadata.

/* Query the metadata to find the Policy, Condition, and Target Set information for our policies */
SELECT p.policy_id,
p.is_enabled,
p.name AS 'policy_name',
c.condition_id,
c.name AS 'condition_name',
c.expression AS 'condition_expression',
ts.target_set_id,
ts.type,
ts.type_skeleton,
tsl.condition_id AS 'target_set_condition_id'
FROM msdb.dbo.syspolicy_policies p
INNER JOIN msdb.dbo.syspolicy_conditions c
ON p.condition_id = c.condition_id
INNER JOIN msdb.dbo.syspolicy_target_sets ts
ON ts.object_set_id = p.object_set_id
INNER JOIN msdb.dbo.syspolicy_target_set_levels tsl
ON ts.target_set_id = tsl.target_set_id
-- WHERE p.is_enabled <> 0 -- Use this to get only enabled Policies on the DB

If you run this on a SQL Server 2012+ box, you are likely to see the AlwaysOn related policies in the output. Go ahead and uncomment the last line to see only the policies that are enabled on your SQL Server box. I would be curious to know how many of you actively use this capability and for what reasons. Do let me know via your comments.
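
If all you need is a quick answer to the original question, a minimal sketch against the same metadata (just the policies view, without the joins above) does the job:

-- List each policy and whether it is enabled
SELECT name, is_enabled
FROM msdb.dbo.syspolicy_policies
ORDER BY name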

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script – Removing Multiple Databases from Log Shipping

Have you ever come across a situation where you have a large number of databases in log shipping and you have to remove all of them? If you use SQL Server Management Studio, it would take a long time because you have to right-click each database, go to Properties, choose the Transaction Log Shipping tab, choose Remove for the secondary, uncheck the box and then hit OK. Though monotonous, these steps are painful when the number of databases is really huge.

In the background, SSMS executes stored procedures to remove the metadata from the log shipping related tables in the MSDB database.
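
If you want to see what metadata is there to clean up, here is a quick look (on the primary) at the same tables which the generator script below reads from:

-- Log shipping pairs currently registered on the primary
SELECT pd.primary_database, ps.secondary_server, ps.secondary_database
FROM msdb.dbo.log_shipping_primary_databases pd
INNER JOIN msdb.dbo.log_shipping_primary_secondaries ps
ON pd.primary_id = ps.primary_id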

Below is a sample which runs on the primary. I have changed the input parameters.

-- primary
EXEC MASTER.dbo.sp_delete_log_shipping_primary_secondary
@primary_database = N'Primary_Database_Name'
,@secondary_server = N'Secondary_Server_Name'
,@secondary_database = N'Secondary_Database_Name'
GO
EXEC MASTER.dbo.sp_delete_log_shipping_primary_database @database = N'Primary_Database_Name'
GO

And below is what runs on the secondary (here too I have changed the input parameter):

-- secondary
EXEC MASTER.dbo.sp_delete_log_shipping_secondary_database
@secondary_database = N'Secondary_Database_Name'

Essentially, if we want to remove log shipping we need the primary database name, the secondary database name and the secondary server name. I have used the metadata tables to find those details.

-- Script for removal of Log Shipping from primary
SET NOCOUNT ON
GO
-- A plain SELECT (rather than a variable) returns one EXEC statement per database
SELECT 'EXEC master.dbo.sp_delete_log_shipping_primary_secondary
@primary_database = N''' + pd.primary_database + '''
,@secondary_server = N''' + ps.secondary_server + '''
,@secondary_database = N''' + ps.secondary_database + '''
go'
FROM msdb.dbo.log_shipping_primary_secondaries ps
INNER JOIN msdb.dbo.log_shipping_primary_databases pd
ON ps.primary_id = pd.primary_id
GO
SELECT 'EXEC master.dbo.sp_delete_log_shipping_primary_database @database = N''' + primary_database + '''
go'
FROM msdb.dbo.log_shipping_primary_databases
GO

Once you run the script, you will get output containing the execute statements; just copy and paste them into a new query window and run them.

Here is a similar script for the secondary server.

-- Script for removal of LS from Secondary
SELECT 'EXEC master.dbo.sp_delete_log_shipping_secondary_database @secondary_database = N''' + secondary_database + '''
go'
FROM msdb.dbo.log_shipping_secondary_databases
GO

Note: The above scripts only generate statements, which you need to copy and paste into a new query window on the respective servers. Please verify the output before running it on a production server.

Trying to automate and use the power of T-SQL is one of the best things, I have always felt. Have you done these sorts of things in your environments? Can you share some of your experiences?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Is XP_CMDSHELL Enabled on the Server?

I am not a big fan of using command line utilities in general, but from time to time I do explore and play around with command line tools that make my life easier. Having said that, when working with SQL Server, I often give out the recommendation of not using the native xp_cmdshell command with SQL Server. Even the SQL_Server_2012_Security_Best_Practice_Whitepaper talks about xp_cmdshell and recommends avoiding it if possible. These are basic building blocks of security that I highly recommend to my customers from time to time.

In this blog, let me show you two ways of checking and controlling this setting.

TSQL Way

The easier way, using standard T-SQL commands, is shown below; this example disables the option:

-- To allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO
-- To update the currently configured value for advanced options.
RECONFIGURE
GO
-- Disable xp_cmdshell option now
EXEC sp_configure 'xp_cmdshell', 0
GO
RECONFIGURE
GO
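
And to simply answer the question in the title without changing anything, a minimal check against sys.configurations works (value_in_use reflects the currently running value):

-- Is xp_cmdshell currently enabled? (1 = enabled, 0 = disabled)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO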

This works great when you have direct access to the SQL Server box and you want to just do it on the box in question.

PowerShell Way

For the world of administrators who manage literally tens or hundreds of servers in a datacenter, running a T-SQL script server-by-server doesn’t work; they want something scriptable. Below is a PowerShell script that will find out whether the xp_cmdshell option is enabled on a server or not.

[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null;
# Use "." for the local default instance, or "ServerName\InstanceName" for a named instance
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server(".")
if ($srv.Configuration.XPCmdShellEnabled.RunValue -eq 1)
{
    Write-Host "xp_cmdshell is enabled in instance" $srv.Name
}
else
{
    Write-Host "xp_cmdshell is disabled in instance" $srv.Name
}

I am sure this can be run in a loop for all your servers. Here the local default instance is being used, and I am sure you will be able to run the same script in PowerShell ISE to know your instance’s status with respect to xp_cmdshell. I would highly recommend you share your experience and how you have used such scripts in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #015 – How to Move TempDB to Different Drive

Here is one of the most popular questions I often come across:

Question: How do you move TempDB to a different drive when the log files are full?

Answer: In most of the cases which I have observed, one has to move TempDB to a different drive either when the TempDB log file has filled up the drive, or in the belief that moving it to a different drive will allow the file to grow. Sometimes users also move it for performance reasons, as keeping TempDB on a different drive from your main databases helps.

Here is the error users usually encounter when they come across TempDB log file growth:

The log file for database ‘tempdb’ is full. Back up the transaction log for the database to free up some log space.

Here is my earlier blog post where I have described how one can change the TempDB location to another drive: SQL SERVER – TempDB is Full. Move TempDB from one drive to another drive.
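
For reference, the core of the move is just two ALTER DATABASE statements; here is a minimal sketch (the T:\ paths are placeholders for your new drive, and the change takes effect only after the SQL Server service restarts):

-- Point tempdb's data and log files at the new drive (placeholder paths)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf')
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf')
GO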

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – CONCAT function and NULL values

CONCAT is a new T-SQL function introduced in SQL Server 2012. It is used to concatenate values: it accepts two or more parameter values separated by commas, and all parameter values are concatenated into a single string.

A simple example is

SELECT CONCAT('SQL Server',' 2012')

which results in SQL Server 2012

The same can be done using + in earlier versions:

SELECT 'SQL Server'+' 2012'

which results in SQL Server 2012

But did you know the advantage of the CONCAT function over +?

SELECT 'SQL Server'+' 2012'+NULL

When you execute the above, the result is NULL, because adding NULL to a string with + makes the entire expression NULL.

But the CONCAT function will simply ignore NULL values

SELECT CONCAT('SQL Server',' 2012',NULL)

The result is SQL Server 2012

So by using the CONCAT function, you do not need to worry about handling NULL values.
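
This matters most when concatenating columns that allow NULLs; here is a small self-contained sketch with made-up sample values:

-- MiddleName is NULL in the first row; CONCAT treats it as an empty string
SELECT CONCAT(FirstName, ' ', MiddleName, ' ', LastName) AS FullName
FROM (VALUES ('Pinal', NULL, 'Dave'),
             ('W', 'Kevin', 'Hazzard')) AS t (FirstName, MiddleName, LastName)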

How many of you know this?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Login Failed For User – Reason Server is in Script Upgrade Mode

I just can’t get enough of the error messages that land in my inbox almost every single day. I do my due diligence of searching through my blogs, and I try to hang around in the forums from time to time to learn what errors people are getting. While browsing through the forums, I found that this error has been faced by many users. It is one of the most common errors that seems to land there; let me know if you have ever faced the following error in your environments:

Login failed for user ‘MyDomain\Username’. Reason: Server is in script upgrade mode. Only administrator can connect at this time. (.Net SqlClient Data Provider)

I did some more research and found the following facts:

  • Whenever any SQL Server patch is applied, setup patches the binaries first.
  • During the restart of the instance, SQL Server startup goes through “script upgrade mode” during the recovery phase.
  • Script upgrade mode is the phase where objects inside the databases are upgraded based on the recently applied patch.
  • Depending on the features installed and the number of databases available, it takes a varying amount of time.

The best way to track the upgrade script’s status is to keep observing the ERRORLOG continuously. Since we would not be able to connect to SQL Server via SSMS or client tools, we need to open the ERRORLOG via Windows Explorer by going to its physical location. Once the “Recovery is complete” message is printed in the ERRORLOG, we should not see the login error any further.
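
Once connections are allowed again, the same check can also be done from a query window; here is a minimal sketch using the (undocumented but widely used) search parameters of xp_readerrorlog:

-- Search the current ERRORLOG (file 0, log type 1 = SQL Server) for the recovery message
EXEC xp_readerrorlog 0, 1, N'Recovery is complete'
GO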

Reference: Pinal Dave (http://blog.sqlauthority.com)