SQL SERVER – How to Connect Using NT AUTHORITY\SYSTEM Account?

Sometimes you need to connect to SQL Server using the System account. Don't ask me whether this is a valid scenario in the first place; someone pinged me to say they wanted to do it. I was personally not sure why, but they had their own internal requirement for it.

It is not possible to type a Windows credential in SSMS; the user name and password boxes are always grayed out for Windows Authentication. My instinct is always to keep looking for a solution to such unusual use cases, and I found that PsExec can be used to achieve this. I am a big fan of the Sysinternals tools; they are handy and have a very small footprint on the servers. One tool I use day in and day out is ZoomIt while doing presentations.

Step 1: Download PSTools from https://technet.microsoft.com/en-us/sysinternals/psexec.aspx

Step 2: Extract and open command prompt to the extracted location.

Step 3: Run the below command:

psexec -i -s ssms.exe

The -i parameter allows the program to run so that it interacts with the desktop of the specified session on the remote system.

The -s parameter launches the process under the SYSTEM account.

Here is what we would see on the command prompt.


And it launches the SSMS program with the User name populated as “NT AUTHORITY\SYSTEM”.


When I disabled that account, I got the below error in the ERRORLOG:

2016-02-03 15:31:07.560 Logon        Error: 18470, Severity: 14, State: 1.
2016-02-03 15:31:07.560 Logon        Login failed for user ‘WORKGROUP\SQLSERVER2016$’. Reason: The account is disabled. [CLIENT: <local machine>]

Notice that we do not see NT AUTHORITY\SYSTEM in the ERRORLOG; we see the machine name followed by a dollar symbol. Since my machine is in a workgroup called “WORKGROUP” and the machine name is SQLSERVER2016, we see WORKGROUP\SQLSERVER2016$.
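If you want to double-check which security context SSMS ended up with, here is a minimal sketch (a generic sanity check, nothing specific to this setup) you can run from a new query window:

SELECT SUSER_SNAME() AS LoginName -- who the connection is authenticated as
,IS_SRVROLEMEMBER('sysadmin') AS IsSysAdmin -- 1 if the login is a sysadmin
GO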

Have you ever used such tools to troubleshoot in your environment? What use cases have you used them for in the past? Do let me know via the comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Someone was trying to hack my SQL Server Logins in Azure!

When I started to work with SQL Server on Azure VMs, I was excited because the need to build local VMs went away. If I need something for testing, I can get a SQL Server instance in a jiffy, and it can be brought down as soon as my work is done. The more we work with software in the cloud, the more important it is to understand the security nuances we need to be aware of. I recently looked at the ERRORLOG on my virtual machine in Azure and was surprised to see the below errors multiple times every minute.

Error: 18456, Severity: 14, State: 5.
Login failed for user ‘KISAdmin’. Reason: Could not find a login matching the name provided. [CLIENT: xx.174.233.xx]
Error: 18456, Severity: 14, State: 7.
Login failed for user ‘sa’. Reason: An error occurred while evaluating the password. [CLIENT: xx.174.233.xx]
Error: 18456, Severity: 14, State: 5.
Login failed for user ‘KHB’. Reason: Could not find a login matching the name provided. [CLIENT: xx.174.233.xx]
Error: 18456, Severity: 14, State: 5.
Login failed for user ‘Chred1433’. Reason: Could not find a login matching the name provided. [CLIENT: xx.174.233.xx]

(I have masked the IPs in error messages above)

I realized that

  1. The IP address listed was not an IP address I recognized.
  2. The accounts shown in the login failed messages don't exist on my SQL Server instance.

Since this is a virtual machine in Microsoft Azure, I checked the endpoints for this VM, as they control any entry to the server from the outside world. Here is what I saw.


As we can see above, I had created an endpoint for SQL Server on port 1433, the very well-known port used by SQL Server. My theory about hacking became firmer when I attempted a remote desktop connection to the IP given in the error message.


Here are my suggestions:

  1. If the login failed message says <local machine>, it is less likely to be a hacker and more likely some process somewhere on the machine.
  2. The above is also true if the IP address is within your organization.
  3. If the IP address is outside your organization, then the answer is simple: don't expose your SQL Server to the Internet. Otherwise, you may want to take the steps below:
    1. Change the SA password and make sure it is complex.
    2. If you don't need SQL Authentication, don't use it. Change the authentication mode to Windows only.

In my case, since it is a VM in Azure and I have to stay with SQL Authentication, I changed the public port to something different, and there were no more login failed attempts from that machine. When I searched, I also found the below article from Microsoft:

https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-security-considerations/

Interestingly, my simple idea of using a non-default endpoint port is not listed there, but they do recommend using ACLs on endpoints for better security of SQL Server running on an Azure virtual machine.

Do you check SQL ERRORLOG regularly to monitor the server’s security?
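If you do not have a monitoring tool in place, here is a minimal sketch for scanning the current error log for such attempts yourself. It uses xp_readerrorlog (widely used, though technically undocumented); the parameters below are the log number (0 = current), the log type (1 = SQL Server error log) and a search string:

-- List every 'Login failed' entry in the current SQL Server error log
EXEC sys.xp_readerrorlog 0, 1, N'Login failed';
GO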

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER 2016 – New T-SQL Functions – COMPRESS and DECOMPRESS

With every release of SQL Server, Microsoft has been adding enhancements to the product. Earlier, I wrote the below blogs talking about new features and enhancements.

SQL Server – 2016 – New Feature: Dynamic Data Masking

SQL Server – 2016 – T-SQL Enhancement “Drop if Exists” clause

SQL SERVER 2016 – Comparing Execution Plans

SQL SERVER – 2016 – Opening JSON with OPENJSON()

In this blog, I explore the new string functions COMPRESS and DECOMPRESS, which are available in SQL Server 2016 (CTP 3.1 onwards).

If you try them in an earlier version, you will be welcomed with an error message.

Msg 195, Level 15, State 10, Line <>
‘COMPRESS’ is not a recognized built-in function name.
Msg 195, Level 15, State 10, Line <>
‘DECOMPRESS’ is not a recognized built-in function name.

Here is the version where it works. Any build of 801 or higher should work.

Microsoft SQL Server 2016 (CTP3.1) – 13.0.801.12 (X64)
Dec  1 2015 15:41:43
Copyright (c) Microsoft Corporation
Enterprise Evaluation Edition (64-bit) on Windows Server 2012 R2 Datacenter 6.3 <X64> (Build 9600: ) (Hypervisor)
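To check the build of your own instance without parsing the full version banner, a quick query such as this should do:

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion -- e.g. 13.0.801.12
,SERVERPROPERTY('ProductLevel') AS ProductLevel -- RTM, SPn or CTP level
GO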

Now, let’s understand the usage of COMPRESS and DECOMPRESS functions. The compression techniques which were available with earlier version of SQL Server was page level and row level compression. With the new function, we can specify string which needs to be compressed and insert directly. We need to remember that COMPRESS function gives output as byte array of VARBINARY(MAX) type. The algorithm used by these function is GZip.  https://en.wikipedia.org/wiki/Gzip So, an application can compress the data using standard Gzip algorithm and send it to SQL Server. Or Select compress data and decompress in the application.

Here is a quick example.

SET NOCOUNT ON
GO
USE tempdb
GO
DROP TABLE IF EXISTS Team_SQLAuthority;
GO
CREATE TABLE Team_SQLAuthority (
id INT PRIMARY KEY IDENTITY
,name NVARCHAR(MAX)
,surname NVARCHAR(MAX)
,info VARBINARY(MAX)
)
GO
INSERT INTO Team_SQLAuthority (
name
,surname
,info
)
VALUES (
'Pinal'
,'Dave'
,COMPRESS('I love SQL Server')
)
GO
SELECT id
,info AS 'COMPRESS-ed'
,CAST(DECOMPRESS(info) AS VARCHAR(MAX)) AS 'DECOMPRESS-ed'
FROM Team_SQLAuthority
GO


In the above example, we are inserting compressed data into SQL Server. We can also select normal data, compress it on the fly in the SELECT statement, and let the client or application decompress it using the standard GZip algorithm. This reduces network usage.
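For example, here is a minimal sketch (reusing the Team_SQLAuthority table created above) of compressing on the fly in the SELECT itself, so only the GZip-compressed bytes travel over the network and the client decompresses them with any standard GZip library:

SELECT id
,name
,COMPRESS(name) AS name_gzip -- VARBINARY(MAX) holding a GZip stream
FROM Team_SQLAuthority
GO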

The amount of compression depends on the type of data. XML or JSON data, for example, tends to compress well.

DECLARE @STR1 VARCHAR(MAX)
DECLARE @STR2 VARCHAR(MAX)
SELECT @STR1 = 'I LOVE SQL SERVER'
SELECT @STR2 = 'I LOVE SQL SERVER AND ALL OTHER DATABASE PRODUCTS. LET US ADD MORE DATA TO SHOW THAT MORE
LENGTH OF THE STRING CAN SHOW BETTER COMPRESSION BECAUSE THERE IS AN OVERHEAD DUE TO COMPRESSION ITSELF'
SELECT DATALENGTH(@STR1) 'Original-1',
DATALENGTH(COMPRESS(@STR1)) 'Compressed-1',
DATALENGTH(@STR2) 'Original-2',
DATALENGTH(COMPRESS(@STR2)) 'Compressed-2'

Here is the output.


As we can see, the longer string compresses better.

Do you think this feature would help you?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Creating a Database Administrator Group Using User-Defined Server Roles

When it comes to security-related topics, I always read and learn something new from my friend Vinod. He always comes up with something interesting. When he was at the GIDS event, one of the attendees walked up and asked a security question: he wanted to know how someone can have close to DBA privileges while sensitive data is still masked from that DBA. Without an iota of doubt, I just pointed him to Vinod to take the conversation forward. Curious enough, I made it a point to join the discussion immediately. Who would want to miss an opportunity to learn something new anyway?

By the time I reached them, the explanation had already started, and here is what I was hearing. Since the learning was, in my opinion, worth sharing more broadly, I had to try out what he said. Let me do a recap of the same:

The sysadmin role has access to everything inside SQL Server. It is not possible to restrict access for logins that are members of the sysadmin role. In some cases, you may want users who are DBAs but whom you do not want accessing sensitive tables or changing certain database schemas. In SQL Server 2012 you can leverage a new feature called user-defined server roles to provide a solution to this scenario.

In this exercise, you have a Sales database that contains two users, “PinalUser” and “SQLAuthUser”. SQLAuthUser owns a schema called “CRM”. In this schema there is an Employees table which contains sensitive information. PinalUser is a DBA, and our task is to create a server-scoped role called “DBA Role” of which PinalLogin is a member. Note: PinalLogin is mapped to PinalUser. By the end of this exercise you will see that even though PinalLogin is a member of DBA Role and that role has CONTROL SERVER permission, he can't SELECT the sensitive information from the CRM schema.

To simulate and showcase the same, below are the 6 steps to be taken:

Create Sales database, logins and users

Open SQL Server Management Studio, open a new Query Editor window, enter the following code, and press F5 (or click the “Execute” button on the toolbar):

USE MASTER
GO
CREATE DATABASE Sales
GO
CREATE LOGIN PinalLogin WITH PASSWORD='pass@word1'
GO
CREATE LOGIN SQLAuthLogin WITH PASSWORD='pass@word1'
GO
USE Sales
GO
CREATE USER PinalUser FOR LOGIN PinalLogin
GO
CREATE USER SQLAuthUser FOR LOGIN SQLAuthLogin
GO
CREATE SCHEMA CRM AUTHORIZATION SQLAuthUser
GO
CREATE TABLE CRM.Employees
(Employee_name VARCHAR(50) NOT NULL,
Employee_SS CHAR(12) NOT NULL,
Employee_DOB DATE NOT NULL)
GO
INSERT INTO CRM.Employees VALUES('Vinod','123-22-3456','12/1/1980'),('Balu','345-55-1234','4/3/1985')
GO

Connect to SQL Server as PinalLogin and see if he has access to the Employees table

In Object Explorer, click the Connect button and select “Database Engine…”. Connect as PinalLogin. If the connection is successful, you will see another node in Object Explorer for the connection under the PinalLogin context. You can tell the connection context by looking at the last part of the name of the server node. Navigate down PinalLogin's connection to the Databases node, then to the Sales node, then to the Tables node.

Note: The CRM.Employees table does not show up, as Pinal doesn't have access to the CRM schema.

As the sysadmin, create a server level role, DBA Role and add Pinal to it

In a query editor, please type the below commands:

USE MASTER
GO
CREATE SERVER ROLE [DBA Role]
GO
GRANT CONTROL SERVER TO [DBA Role]
GO
DENY ALTER ANY SERVER AUDIT TO [DBA Role]
GO
DENY ALTER ANY LOGIN TO [DBA Role]
GO
DENY ALTER ANY SERVER ROLE TO [DBA Role]
GO
ALTER SERVER ROLE [DBA Role] ADD MEMBER PinalLogin
GO

Note: If these commands fail, make sure your query editor window is running under the administrator's context. You can tell by looking at the connection Properties pane (hit F4); the Login name shows the login context of the current query editor window.

Now that PinalLogin is a member of the DBA Role and this role has CONTROL SERVER permission, let's see if PinalLogin can access the table.

As PinalLogin, query the CRM.Employees table

  1. In Object Explorer, select the server node that is connected as “PinalLogin”
  2. Right click and select, “Refresh” from the context menu
  3. Navigate down to, “Databases” then “Sales” then “Tables”
  4. Click on Tables node and select the “CRM.Employees” table
  5. Right click and select, “Select Top 1000 Rows” from the context menu

A new query editor window should open showing the contents of the CRM.Employees table. This is because PinalLogin has CONTROL SERVER permission, which is almost equivalent to sysadmin. Close the document window showing the CRM.Employees result set by clicking the “X” or by selecting “Close” from the “File” menu.

As an administrator, deny select to PinalUser

PinalUser is the database user that is mapped to the PinalLogin server principal. To ensure Pinal can't see our sensitive employee table, we simply issue a DENY statement on PinalUser. In the query editor window enter the following code:

USE SALES
GO
DENY SELECT ON SCHEMA::CRM TO PinalUser
GO

Note: If these commands fail, make sure your query editor window is running under the administrator's context. You can tell by looking at the connection Properties pane (hit F4); the Login name shows the login context of the current query editor window.

The big difference between sysadmin and someone with CONTROL SERVER permission is that SQL Server will respect a DENY on an object even when the user has CONTROL SERVER. In our example, even though PinalLogin has CONTROL SERVER, since the database user PinalUser has a DENY on the CRM schema, he can't see the data.
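If you prefer to verify this without switching connections in Object Explorer, here is a small sketch using impersonation (run it from the sysadmin connection):

USE Sales
GO
EXECUTE AS LOGIN = 'PinalLogin'; -- impersonate the DBA login
SELECT * FROM CRM.Employees; -- expected to fail with Msg 229 because of the DENY
REVERT; -- switch back to the sysadmin context
GO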

As PinalLogin, try to query the CRM.Employees table again

Follow these steps:

  1. In Object Explorer, select the server node that is connected as “PinalLogin”
  2. Right click and select, “Refresh” from the context menu
  3. Navigate down to, “Databases” then “Sales” then “Tables”
  4. Click on Tables node and select the “CRM.Employees” table
  5. Right click and select, “Select Top 1000 Rows” from the context menu

Expected Outcome: 

Msg 229, Level 14, State 5, Line 2
The SELECT permission was denied on the object ‘Employees’, database ‘Sales’, schema ‘CRM’.

This completes our steps for restricting a DBA user from accessing sensitive data. I am curious to know: have you ever done this before? I would love to hear from you and your learnings on this subject.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Three Simple Guidelines for System Maintenance – Notes from the Field #098

[Note from Pinal]: This is the 98th episode of the Notes from the Field series. Maintenance of the database is a very critical activity, and I have always seen DBAs take it very seriously. There is only one problem: there is no single solution or guidance for how to maintain the database. Everybody has their own opinion and their own way of doing different tasks. System maintenance sounds like a very straightforward task, but I have quite often seen even experts get confused by it. Index maintenance, statistics maintenance and a few other tasks are common, but the real philosophy of system maintenance goes beyond indexes. When I asked my good friend Stuart about system maintenance, he came up with very interesting guidelines.

In this episode of the Notes from the Field series database expert Stuart Ainsworth explains about three simple guidelines for system maintenance.



This week, I was helping a client review their backup and disaster recovery strategy when they asked me how to handle an ongoing maintenance issue; it wasn’t really a SQL issue, but they were concerned about what to do for a virtualized server running an older, critical website. They’re in the process of replacing the site, but they have to maintain the machine for at least another 6 months.

A recent patch had crippled the machine, and they had to restore the VM from backup, which took the site down for a few hours. Thankfully, their backups worked, but a retrospective look at what happened showed some flaws in their process:

  1. They knew that the server was critical AND having issues before they applied the patch, and
  2. Their system admin was inexperienced with this particular type of restore, and that cost them some time during their recovery.

At Linchpin People, we usually get asked questions like this as part of the WellDBA exam. Here’s my short list of guidelines for system maintenance; they can be applied to more than just backups.

Automate what you can

Most modern system tools will allow you to do some form of automatic maintenance; SQL Server runs jobs on SQL Agent, Windows Server has Task Scheduler, and Linux has cron. Lots of third-party tools offer not only backup capabilities, but also restore verification capabilities and reporting. The security and time saved by using a tool are usually worth the investment; it frees your biggest expense (your people) up in order to focus on other things.

If you must have a manual process, make it simple, and document it

Sometimes you can’t escape manual processes. For this client, the critical nature of this web site meant that they didn’t want to have an automatic reboot cycle, so while patches got pushed automatically, they rebooted manually. Given the fragility of their recent experience, they’ve decided to stop automatically patching, and doing the following steps each month:

  • Snapshot the server (for quicker restores).
  • Patch and reboot.
  • If patch is unsuccessful, restore from snapshot.
  • If patch is successful, delete the snapshot.

Simple enough. However, my suggestion was that they write those steps down. Operating from a checklist ensures that procedures are followed consistently, elevating the process to near-automatic status. It also gives you an opportunity to periodically review manual processes and automate pieces as situations and technology change.

Validate, validate, validate…

So, if you’re automating your processes as much as you can, and simplifying your manual processes, what are your engineers spending their time on? They should be periodically validating that the maintenance plans you implemented are successfully working, including routinely doing recovery drills. Just like a fire drill, IT professionals need to practice what they would do in case of a recovery scenario in order to minimize the downtime associated with an actual recovery.

This is one of those activities that is well received in theory, but rarely done in reality. Support queues are always full, new projects and feature requests are always pressing, and maintenance issues are usually at the bottom of the pile. However, in the event of an outage, routines that have been well practiced are the ones that are the fastest to recover; a crisis situation is the absolute worst time for an engineer to feel like they don’t understand what to do.

Summary

Routine maintenance isn’t difficult, but it can be time consuming. To minimize time, automate what you can, and simplify your manual processes. Then, invest your time in preparing for the inevitable. Technology breaks; don’t let it break you.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Security Considerations for Contained Databases

I have written a number of blogs about contained databases in the past, which you can search for. In today's blog we will talk about security considerations when working with contained databases. With contained authentication, the database can contain the authentication information for its database users, which makes it considerably easier to move databases between servers. Some of the other blogs for reference are:

Importance of User Without Login – T-SQL Demo Script

How to Migrate Existing Database to Contained Databases

SQL Authentication process

For SQL Server Authentication against a contained database, the connection attempt must specify an initial catalog, which should be the contained database. If so, authentication is first attempted against contained users. If no such user exists, SQL Server falls back to server-level authentication.

If the user exists and the password does not match, this is a typical authentication failure; no second chances are provided, and SQL Server does not attempt server-level authentication in this situation. Therefore, if a contained user and a SQL login in master share the same username, connection attempts can fail.
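For reference, here is a minimal sketch of setting up contained SQL authentication (the database name SalesApp and the user AppReader are just placeholders for this illustration):

-- Allow contained database authentication at the instance level
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
-- Mark the database as partially contained
ALTER DATABASE SalesApp SET CONTAINMENT = PARTIAL;
GO
USE SalesApp
GO
-- Contained user with its own password; no server-level login required
CREATE USER AppReader WITH PASSWORD = 'pass@word1';
GO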

Windows Authentication process

For Windows authentication against a contained database, an initial catalog must be specified, and it should be a contained database. If so, authentication is first attempted at the server level. If no matching login or group exists in master, SQL Server falls back to database-level authentication.

If a contained user with a matching Windows account or group name does not exist in the database either, it is an authentication failure.

Note: The two possible levels of authentication are database level and server level, and the order is opposite for SQL Server authentication and Windows authentication. This is important to know while troubleshooting login failures.

Contained authentication brings with it some additional security caveats about which a database administrator has to be aware. Here are some of the important considerations:

  • Delegation of access control – Database containment delinks server administration from database maintenance to a certain extent. Administrators need to be aware that contained users with the ALTER ANY USER privilege can add other users, so this privilege should be carefully delegated, and the users in contained databases should be periodically audited.
  • Guest account can allow access to other databases – Contained users can access other databases where the guest account is enabled. To avoid this, ensure the guest account is disabled in all user databases (see the sketch after this list).
  • Duplicate logins – Where SQL authentication is used, a contained user with the same name as a server login but a different password can intentionally or accidentally cause a denial of service for that login. Windows authentication is attempted first at the server level, so it is not as severely affected.
  • Users with password cannot take advantage of password policies – This makes it harder to enforce password lifetimes and history requirements.
  • Contained databases should not have AUTO_CLOSE set – Contained databases marked for AUTO_CLOSE can significantly increase the cost of authentication, possibly making denial-of-service attacks easier.
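To act on the guest account point above, here is a small sketch (again using the placeholder SalesApp database) that checks whether guest has CONNECT permission and revokes it if required. Run it per user database; do not do this in master, tempdb or msdb, where guest must remain enabled:

USE SalesApp
GO
-- Does guest currently have CONNECT permission in this database?
SELECT dp.name, p.permission_name, p.state_desc
FROM sys.database_principals dp
JOIN sys.database_permissions p ON p.grantee_principal_id = dp.principal_id
WHERE dp.name = 'guest' AND p.permission_name = 'CONNECT';
GO
-- Disable guest access in this user database
REVOKE CONNECT FROM GUEST;
GO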

As I conclude this blog, I wanted to bring out some of these nuances to readers, as they are not very well documented or widely known when working with contained databases.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Who Dropped Table or Database?

I used to run a series on heroes, and Sherlock Holmes is one of my favorites. There is always a need to get to the bottom of a problem statement, and whether you are a developer or a DBA, at least once you have probably been asked one of the following “Who” questions after a disaster.

  • Who dropped a table in the database? From which application? When?
  • Who dropped a database? What was the date and time?
  • Who created a database on the production server?
  • Who altered the database?
  • Who dropped the schema?
  • Who altered the schema?

And there are many other similar questions. Generally, well-funded companies have auditing enabled and tools which capture all DDL, either via Auditing or via 3rd-party tools. Many DBAs and developers don't know that there is a default trace running in the background of every SQL Server installation.

Here are a few uses of the default trace, via SSMS reports:

SQL SERVER – SSMS: Configuration Changes History

SQL SERVER – SSMS: Schema Change History Report

Let’s look at the events captured by the default trace.

SELECT DISTINCT Trace.EventID, TraceEvents.NAME AS Event_Desc
FROM   ::fn_trace_geteventinfo(1) Trace
,sys.trace_events TraceEvents
WHERE Trace.eventID = TraceEvents.trace_event_id

Here is what we would get in SQL Server 2014

Event_ID Event_Desc
18 Audit Server Starts And Stops
20 Audit Login Failed
22 ErrorLog
46 Object:Created
47 Object:Deleted
55 Hash Warning
69 Sort Warnings
79 Missing Column Statistics
80 Missing Join Predicate
81 Server Memory Change
92 Data File Auto Grow
93 Log File Auto Grow
94 Data File Auto Shrink
95 Log File Auto Shrink
102 Audit Database Scope GDR Event
103 Audit Schema Object GDR Event
104 Audit Addlogin Event
105 Audit Login GDR Event
106 Audit Login Change Property Event
108 Audit Add Login to Server Role Event
109 Audit Add DB User Event
110 Audit Add Member to DB Role Event
111 Audit Add Role Event
115 Audit Backup/Restore Event
116 Audit DBCC Event
117 Audit Change Audit Event
152 Audit Change Database Owner
153 Audit Schema Object Take Ownership Event
155 FT:Crawl Started
156 FT:Crawl Stopped
164 Object:Altered
167 Database Mirroring State Change
175 Audit Server Alter Trace Event
218 Plan Guide Unsuccessful

As we can see, there are various interesting events. Object:Altered, Object:Created and Object:Deleted can help us identify who dropped, altered or created any object. Once we get the event class ID for the event of interest, we can use the below query and replace the event class ID as needed.

-- read all available traces.
DECLARE @current VARCHAR(500);
DECLARE @start VARCHAR(500);
DECLARE @indx INT;
SELECT @current = path
FROM sys.traces
WHERE is_default = 1;
SET @current = REVERSE(@current)
SELECT @indx = PATINDEX('%\%', @current)
SET @current = REVERSE(@current)
SET @start = LEFT(@current, LEN(@current) - @indx) + '\log.trc';
-- CHANGE FILTER AS NEEDED
SELECT *
FROM::fn_trace_gettable(@start, DEFAULT)
WHERE EventClass IN (92,93) -- growth event
ORDER BY StartTime DESC

Above, I have used EventClass 92 and 93 to track database auto-growth events. Here is the query to find who dropped, created or altered an object in a database, or the database itself.

-- read all available traces.
DECLARE @current VARCHAR(500);
DECLARE @start VARCHAR(500);
DECLARE @indx INT;
SELECT @current = path
FROM sys.traces
WHERE is_default = 1;
SET @current = REVERSE(@current)
SELECT @indx = PATINDEX('%\%', @current)
SET @current = REVERSE(@current)
SET @start = LEFT(@current, LEN(@current) - @indx) + '\log.trc';
-- CHANGE FILTER AS NEEDED
SELECT CASE EventClass
WHEN 46 THEN 'Object:Created'
WHEN 47 THEN 'Object:Deleted'
WHEN 164 THEN 'Object:Altered'
END, DatabaseName, ObjectName, HostName, ApplicationName, LoginName, StartTime
FROM::fn_trace_gettable
(@start, DEFAULT)
WHERE EventClass IN (46,47,164) AND EventSubclass = 0 AND DatabaseID <> 2
ORDER BY StartTime DESC

Have you ever heard of someone being fired based on such auditing?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Transparent Data Encryption and Frequently Asked Questions

In the recent past, I have been traveling to Delhi to meet customers and also attend a few conferences which my company has been sponsoring. These travel experiences give me an opportunity to meet folks who read my blog regularly, face to face. Most of these interactions mean I get to spend time with really smart people who quiz me with questions for which I go back to my blog for answers. Last week, when I was at a conference, one DBA walked up to me and said – “Hey Pinal, I am a DBA from a reputed banking company. We are in the process of deploying TDE for one of our databases, which is on SQL Server 2008 R2. I have a few doubts, can I ask you?” As simple as this interaction started, the whole conversation spanned close to 30 minutes, and I am summarizing it in this blog for your reference.

When does TDE encrypt the DB?

Once Transparent Data Encryption is enabled by issuing the ALTER DATABASE command, SQL Server performs basic checks such as the edition check, read-only filegroups, and the presence of a DEK. Once the checks are complete, the command returns immediately with success. This does not mean that the database encryption is complete. Since encryption is done in the I/O path, all the data pages that were already written to disk before encryption was enabled have to be read into memory and then written back to disk after encrypting each page. This process is referred to as the “encryption scan” and is carried out by background processes (system SPIDs). The encryption scan, which runs asynchronously to the DDL, takes a shared lock on the database, so all normal operations that do not conflict with these locks can proceed without being blocked.
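For reference, here is a hedged sketch of what enabling TDE and watching the encryption scan looks like (SalesDB and TDECert are placeholder names; in the DMV below, encryption_state 2 means the scan is in progress and 3 means the database is fully encrypted):

USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'pass@word1';
GO
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE SalesDB
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE SalesDB SET ENCRYPTION ON;
GO
-- Track the background encryption scan
SELECT DB_NAME(database_id) AS database_name
,encryption_state
,percent_complete
FROM sys.dm_database_encryption_keys;
GO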

You mentioned READONLY DB, tell me more?

Transparent Data Encryption does not work on a database that contains any filegroups marked read-only or any files marked read-only. ALTER DATABASE fails with an error message (33118) clearly indicating the reason for the failure. Users can enable the read-only property on the filegroups once the encryption scan is complete; however, no TDE-related DDL (such as a DEK change) can be run on the database until the read-only property is removed.

What happens to TLog files?

Encryption works differently on the transaction log, and it is more complicated. Since the transaction log is designed to be write-once and fail-safe, TDE does not attempt to encrypt the contents of the log file that were already written to disk. Similarly, the log header cannot be rewritten because of this write-once principle, so there is no guarantee that log records written even after TDE is enabled will be encrypted. The smallest unit of encryption for log files is a virtual log file (VLF): either an entire VLF is encrypted or it is not, and the entire VLF is encrypted with the same key. When encryption is turned on for a database, the next time the log manager moves to a new VLF, that new VLF will be encrypted. So there is no deterministic order between when data pages are encrypted by the scan and when the log is encrypted.

I thought these were some of the more interesting conversations I have had in the recent past and worth sharing. If you have used TDE in your environment, do let me know; I would love to hear about your experience working with it. Is there any catch that you want me to explain next time? Write it in the comments and I would love to explore more.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SPID is KILLED/ROLLBACK state. What to Do Next?

I always recall my fun days after talking to my friends and blog readers. Recently, the killed/rollback discussion came back when a close DBA friend called me for help. Our discussion was worth blogging about.

If you are a SQL DBA or developer, I am sure you have seen a SPID stuck in the KILLED/ROLLBACK state at some point.


The first question that comes to mind is “what is the killed/rollback state?” The answer is simple: sometimes DBAs/developers use the KILL command to end processes that appear to be in a hung state. Once that happens, if SQL Server knows how to roll back that work, it starts a “real” rollback process and undoes the work done. There are situations where SQL Server does not know about, and has no control over, the work done, and therefore cannot roll it back. These are the cases where the SPID stays in the KILLED/ROLLBACK state forever.

How do you decide which action to take? It completely depends on what the killed SPID was doing. There are only a few things you can do:

  1. Have patience and wait for the rollback to finish. Some large operations may take a long time to roll back. Such a SPID should show changing values in the CPU, Memory and IO columns of the sys.sysprocesses DMV (see the query in the next step).
  2. If the SPID is not showing any change in the CPU, Memory and IO columns, then it is doing nothing. Here is the query which can tell us more about the SPID and where the connection might have originated from.

SELECT spid
,kpid
,login_time
,last_batch
,status
,hostname
,nt_username
,loginame
,hostprocess
,cpu
,memusage
,physical_io
FROM sys.sysprocesses
WHERE cmd = 'KILLED/ROLLBACK'

If the process originated from a different machine, then using Task Manager and ending the process on that machine might help. This decision has to be taken very carefully, as it might be a process of a 3rd-party application.

After figuring out the SPID, we can run DBCC INPUTBUFFER (<SPID>) to see which batch is running under this session.
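Along the same lines, KILL with the STATUSONLY option reports how far a genuine rollback has progressed; if the estimate stays at 0% for a long time, the SPID most likely falls into the “doing nothing” category (62 below is just a placeholder SPID):

DBCC INPUTBUFFER (62); -- which batch the session was running
KILL 62 WITH STATUSONLY; -- estimated rollback completion percentage and time remaining
GO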

  3. Restart the SQL Server service if the SPID was killed and is doing nothing.

Can you reproduce it? The simplest way is to use xp_cmdshell to open notepad.exe using the below command. DON'T DO THIS ON PRODUCTION.

xp_cmdshell 'notepad.exe'

Now, if you kill this SPID, it goes into the KILLED/ROLLBACK state.

In summary, it is important to understand the cause. Killed/Rollback SPIDs are waiting for an event that will never happen. It could be a real “rollback” situation where SPID is rolling back and trying to get the database into a consistent state. By restarting the SQL Server service, you are postponing that rollback (or roll forward) work to the recovery step that runs before the database becomes available.

Have you ever come across such a situation?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Server principal ‘Login Name’ has granted one or more permission(s). Revoke the permission(s) before dropping the server principal

There are parts of SQL Server where I always like to take expert advice; sometimes a quick call to a friend can save you tons of time. When I see topics around AlwaysOn, I know the basic steps of building and working with it, but hardcore troubleshooting is not my forte per se. Still, when I encounter questions around it, I put on my learning hat to explore the possible reasons. There was a mail from one of the readers that had been in my Inbox for a couple of weeks, and I wanted to find the answer. His question goes like this:

While working with an AlwaysOn availability group and cleaning it up, I encountered the below error while dropping a login.

DROP LOGIN Pinal
GO

Msg 15173, Level 16, State 1, Line 1
Server principal ‘Pinal’ has granted one or more permission(s). Revoke the permission(s) before dropping the server principal.


Trying from the UI also gives the same error:


The next task was to find what was causing the error, so I wrote the below queries, which show the exact permissions and principals that depend on the login and make our DROP command fail.

SELECT class_desc,*
FROM sys.server_permissions
WHERE grantor_principal_id = (
SELECT principal_id
FROM sys.server_principals
WHERE NAME = N'Pinal')

SELECT NAME
,type_desc
FROM sys.server_principals
WHERE principal_id IN (
SELECT grantee_principal_id
FROM sys.server_permissions
WHERE grantor_principal_id = (
SELECT principal_id
FROM sys.server_principals
WHERE NAME = N'Pinal'))


Based on the output, there is an endpoint on which Pinal has granted a permission. The second result set shows that I had used my account, i.e. Pinal, to create the endpoint and grant the permission to the AppUser account, using the below command:

CREATE ENDPOINT [hadr_endpoint]
STATE = STARTED
AS TCP (LISTENER_PORT = 5023, LISTENER_IP = ALL)
FOR DATA_MIRRORING (ROLE = ALL, AUTHENTICATION = WINDOWS NEGOTIATE
, ENCRYPTION = REQUIRED ALGORITHM AES)
GO
GRANT CONNECT ON ENDPOINT::[hadr_endpoint] TO AppUser
GO

As the error message suggested, I revoked permission using the below command:

REVOKE CONNECT ON ENDPOINT::[hadr_endpoint] TO AppUser
GO

After the above step, I was able to drop the login. Yes, this was a great learning for me today. Do let me know if you have ever encountered similar errors in your environment.

Reference: Pinal Dave (http://blog.sqlauthority.com)