Interview Question of the Week #027 – Move TempDB from One Drive to Another Drive When Low Disk Space

I am often invited to attend various interviews, and once in a while I see practical questions discussed. Here is one of the important interview questions, and it is related to TempDB.

Question – “What will you do when your log file of TempDB is full?”

Answer – The best solution when the log file of TempDB is full is to move it to a new drive. Here are more details about the same.

Make sure that TempDB is set to autogrow and do not set a maximum size for TempDB. If the current drive is too full to allow autogrow events, then arrange a bigger drive, or add files to TempDB on another device (using ALTER DATABASE as described below) and allow those files to autogrow.
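For example, here is a hedged sketch of adding a second data file to TempDB on another drive (the drive letter, folder, file name, and sizes are illustrative assumptions, not values from the original post):

USE master
GO
-- The e: drive path and the logical name tempdev2 are hypothetical examples
ALTER DATABASE TempDB ADD FILE
(NAME = tempdev2, FILENAME = 'e:\tempdbdata\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)
GO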

Move TempDB from one drive to another drive. There are two major reasons why TempDB may need to move from one drive to another:
1) TempDB grows bigger and the existing drive does not have enough space.
2) Moving TempDB to a file group on a different physical drive helps to improve database disk reads, as the drives can be read simultaneously.

Follow the directions below exactly to move the database and log from one drive (C:) to other drives (D: and E:).

Open Query Analyzer and connect to your server. Run this script to get the names of the files used for TempDB.

USE TempDB
GO
EXEC sp_helpfile
GO

Results will be something like:
name    fileid filename                                                     filegroup size
------- ------ ------------------------------------------------------------ --------- --------
tempdev 1      C:\Program Files\Microsoft SQL Server\MSSQL\data\tempdb.mdf  PRIMARY   16000 KB
templog 2      C:\Program Files\Microsoft SQL Server\MSSQL\data\templog.ldf NULL      1024 KB

Along with other information related to the database, the names of the files are returned; they are usually tempdev and templog by default. These names will be used in the next statement. Run the following code to move the mdf and ldf files.

USE MASTER
GO
ALTER DATABASE TempDB MODIFY FILE
(NAME = tempdev, FILENAME = 'd:\data\tempdb.mdf')
GO
ALTER DATABASE TempDB MODIFY FILE
(NAME = templog, FILENAME = 'e:\data\templog.ldf')
GO

The definition of TempDB is now changed. However, no changes are made to TempDB until SQL Server restarts. Please stop and restart SQL Server; it will create the TempDB files in the new locations.
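After the restart, a quick way to confirm the new locations is to query the file metadata (a small sketch; sp_helpfile as shown earlier works too):

USE master
GO
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('TempDB')
GO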


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Secure by Default – The Mantra

Generally, the talks I give at conferences revolve around performance and tuning. These are always the most sought-after topics, and I don’t disappoint the requests because they give me an opportunity to learn something new too. Last month, when I visited Sri Lanka for the first ever SQLSaturday event there, the response was the same. We had a fantastic Q&A discussion which involved some random questions from attendees on a variety of topics. I think every conference should have something like this – an “Ask the Experts” concept. It gets everyone together, and we have a combined fun time, to say the least.

One of the topics an attendee asked about was security: “Hey Pinal, can you recollect some top 10 things we need to take care of as security practices when working with SQL Server?” It was surely a loaded question, and I was not prepared, to say the least. I said, “Give me a breather, buddy. I can actually do a complete end-to-end session when I come next time if you are interested. (This gave me a few seconds to gather my thoughts.) Now that you asked, let me throw some of them out for your reference.”

The session got over, and the gentleman who had asked the question sent me a great summary of the conversation from the Q&A session. I thought to post it as a blog post so that we can elaborate on these points in the future. These are some great starters for reference.

  • Application Layer Security: SQL Injection is a powerful method to damage or compromise a SQL Server. SQL Injection is the result of failing to validate user input at the application layer. So please validate all the user inputs every single time.
  • Be strict about what represents valid input and reject everything else.
  • Use parameterized queries and avoid string concatenation to build queries (see the sketch after this list).
  • Do not divulge too much information to the end user. The end user may be an attacker. Let your error message not reveal the exact problem.
  • Connect to the server using a least privilege account. Do not connect with System Admin privileges. I know many of you are doing this today, but please refrain henceforth.
  • Secure the communication channel between the client and SQL Server. SSL encryption should be mandated for client-server communication if this is critical. SSL is provided through the use of certificates issued by a Certificate Authority (CA). SSL encryption ensures the packets between the client and server are encrypted and mitigates the detection of user credentials or data by a network sniffer.
  • Protect the SQL Server files on the physical level. The directory structure for SQL Server should be restricted to the SQL Server service and relevant accounts.
  • Protect the permissions for the registry entries for SQL Server. They should be restricted to Administrators and the SQL Server account.
  • Audit Logins by default. Logins should be audited at the Windows and SQL Server level. At the least, failed login attempts should be logged as a flag to possible intrusion attempts.
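To illustrate the parameterized query tip above, here is a minimal T-SQL sketch using sp_executesql; the dbo.Customers table and its CustomerName column are hypothetical names, used only for illustration:

-- Bad: dynamic SQL built by string concatenation lets hostile input become SQL code.
-- Good: sp_executesql binds the input as a typed parameter, never as SQL text.
DECLARE @input NVARCHAR(100) = N'O''Brien';   -- imagine this value came from the application
EXEC sp_executesql
    N'SELECT * FROM dbo.Customers WHERE CustomerName = @name',
    N'@name NVARCHAR(100)',
    @name = @input;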

Tips shared about SQL Account Management

  • Make the SQL Server accounts least privileged.
  • Delete or disable any unused user accounts.
  • Disable the Windows guest account (this should be automatic for a server).
  • Enforce strong password policies. This is normally controlled via Group Policy.
  • Restrict remote logins to the server.
  • Rename the administrator account to deter brute-force attacks. Accounts like “sa” must be disabled or renamed at a minimum (a small sketch follows this list).
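As a minimal sketch of that last point (standard T-SQL; the new login name is a made-up example):

-- Disable the well-known sa login, or rename it if it must remain enabled
ALTER LOGIN sa DISABLE;
ALTER LOGIN sa WITH NAME = [sa_renamed_example];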

Seeing the mail, I thought this was a great starting point for talking about security and SQL Server. This blog will be one of many to follow on this topic, because I am sure a number of you are also interested in it. Do let me know some of the things you do in your environments so that we can share and learn from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Difference Between Read Committed Snapshot and Snapshot Isolation Level

It is quite common that when a new feature gets introduced, there is a lot of talk about it. Sometimes even older capabilities are less well known because the terms get used interchangeably in conversations. I have seen many DBAs get confused between these two terms because both of them have the word snapshot in them.

  1. Read Committed Snapshot
  2. Snapshot Isolation level

I must point out that the first one is NOT an isolation level. It is a behavior of the read committed isolation level, which gets activated only if we turn on a database-level property. Here are the commands to change them.

  1. ALTER DATABASE SQLAuthority SET READ_COMMITTED_SNAPSHOT ON
  2. ALTER DATABASE SQLAuthority SET ALLOW_SNAPSHOT_ISOLATION ON

If you are having problems with blocking between readers (SELECT) and writers (INSERT/UPDATE/DELETE), then you can enable the first property without changing anything in the application. This means the application would still run under the read committed isolation level and would still read only committed data.
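A quick way to see which of the two is enabled for a database is to check sys.databases (a small sketch):

SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'SQLAuthority'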

Let us have a look at this with an example:

Data Preparation

SET NOCOUNT ON
GO
USE master
GO
IF DB_ID('SQLAuthority') IS NOT NULL
BEGIN
    ALTER DATABASE SQLAuthority SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE SQLAuthority
END
GO
CREATE DATABASE SQLAuthority
GO
USE SQLAuthority
GO
CREATE TABLE DemoTable
(
    i INT,
    j VARCHAR(100)
)
GO
INSERT INTO DemoTable VALUES (1, 'ONE')
GO
INSERT INTO DemoTable VALUES (2, 'TWO')
GO
INSERT INTO DemoTable VALUES (3, 'THREE')
GO

Read Committed (Default Behavior)

Session 1

USE SQLAuthority
GO
BEGIN TRAN
UPDATE DemoTable
SET i = 4
WHERE i = 1

Session 2

USE SQLAuthority
GO
BEGIN TRAN
SELECT *
FROM DemoTable
WHERE i = 1

Result – Query in Session 2 is blocked.

Session 1

ROLLBACK

Result – The query in Session 2 is unblocked and shows the result. It shows the last committed data; since we did a rollback in Session 1, we see the original values (1, ONE).

Read Committed Snapshot

Change the database property as below

ALTER DATABASE SQLAuthority
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE
GO

Session 1

USE SQLAuthority
GO
BEGIN TRAN
UPDATE DemoTable
SET i = 4
WHERE i = 1

Session 2

USE SQLAuthority
GO
BEGIN TRAN
SELECT *
FROM DemoTable
WHERE i = 1

Result – The query in Session 2 shows the old value (1, ONE) because the current transaction is NOT committed. This is the way to avoid blocking and still read committed data.

Session 1

COMMIT

Session 2

USE SQLAuthority
GO
SELECT *
FROM   DemoTable
WHERE i = 1

Result – The query in Session 2 shows no rows because the row was updated in Session 1. So again, we are seeing committed data.

Snapshot Isolation Level

This is a new isolation level, available from SQL Server 2005 onwards. For this feature, a change is needed in the application, as it has to explicitly request the new isolation level.

Change the database setting using the command below. We need to make sure that there are no open transactions in the database.

ALTER DATABASE SQLAuthority SET ALLOW_SNAPSHOT_ISOLATION ON

Now, we also need to change the isolation level of the connection, which is done in Session 2 below.

Session 1

USE SQLAuthority
GO
BEGIN TRAN
UPDATE DemoTable
SET i = 10
WHERE i = 2

Session 2

SET TRANSACTION ISOLATION LEVEL SNAPSHOT
GO
USE SQLAuthority
GO
BEGIN TRAN
SELECT *
FROM DemoTable
WHERE i = 2

Result – Even though we have changed the value to 10, we still see the old record in Session 2 (2, TWO).

Now, let’s commit the transaction in Session 1.

Session 1

COMMIT

Let’s come back to session 2 and run select again.

Session 2

SELECT *
FROM   DemoTable
WHERE i = 2

We will still see the old record because Session 2 started its transaction under snapshot isolation. Unless we complete the transaction, we will not see the latest record.

Session 2

COMMIT
SELECT *
FROM DemoTable
WHERE i = 2

Now, we should not see the row, as it has already been updated.

In summary, from SQL Server 2005 onwards, the SQL engine provides one new isolation level AND an optimistic implementation of READ COMMITTED. SNAPSHOT is a new isolation level, while READ COMMITTED SNAPSHOT is the same isolation level as READ COMMITTED, just an optimistic implementation of it.
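Both features keep row versions in the tempdb version store, so it is worth keeping an eye on that store when you enable either option. A minimal sketch using the version store DMV:

-- Each row returned is one versioned record currently held in tempdb
SELECT COUNT(*) AS version_store_records
FROM sys.dm_tran_version_store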

I would be glad to hear about the scenarios in which you have played around with snapshot isolation on your servers. Do let me know via comments below.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Why Haven’t You Disabled CPU Power Saving Yet?! – Notes from the Field #087

[Note from Pinal]: This is the 87th episode of the Notes from the Field series. Every week, I personally wait for the notes from the field from Mike because they contain life lessons which directly impact DBAs’ and developers’ lives positively. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human. He is always looking for an opportunity to improve the life of DBAs and developers.

Mike normally talks about something that impacts a client, impacts DBAs, or a professional development topic. However, in today’s blog post Mike has something very interesting and very helpful for every DBA in the world.

Read the entire story in his own words.


Today’s post is going to be a very short public service announcement that I shouldn’t have to make, but part of my job as a consultant at Linchpin People is going to customers and performing WellDBA exams. A WellDBA exam is basically an extensive health check of a SQL Server environment. We look for small and simple changes to help performance, reliability, or manageability. We look for deeper changes that could be made to help in these areas, and we help clients see what sort of architecture and foundational decisions should be addressed to scale into the future.

We have a free checklist you can use to check your own servers also. It describes some of our most common findings and helps you get on the road to having a healthy and a scalable SQL Server environment. You can download it and review your own environment to make sure you are following best practices.

Power Saving?! Hasn’t the SQL Family Talked About That A Lot?!?

Yes! There are posts everywhere about it and why it isn’t great for SQL Servers. Even still – it is something we find on at least 70% of the WellDBA exams we provide to customers. These are customers of all sorts and sizes with all manner of DBA teams or accidental DBAs at the helm.

So… Yes. A reminder about this setting.

What’s This All About?

Processors today have a Power Saving setting on them. They allow themselves to basically be told to operate at a lower performance profile to cut down on power costs. When the CPU gets really busy – the performance may be ramped up – then it ramps back down.

This sounds great in theory – but SQL Server isn’t normally just pegging the CPU – so the CPU performance may never go all the way up – and when it does, it often will go up just as the demand period is over and the boost isn’t needed.

The default on most Windows Server installations is “Balanced” mode – this means that out of the box you are allowing Windows to underclock your CPUs and only increase their speed as/when/if needed.

Another way of saying it – you’ve taken your SQL Server instance – which you’ve licensed by CPU core nowadays – and forced it to run on lower-powered CPUs than you paid for when buying the server and when licensing SQL Server.

What can you do?

Four simple steps really:

  1. See if you are running in Balanced mode – go to Start > Run and type powercfg.cpl (or browse to Control Panel and then Power Options), and check whether Balanced or High Performance is selected.
  2. If running in Balanced – change it to High Performance (a quick command-line sketch follows this list).
  3. Consider making a group policy for your SQL Server servers and make sure they always run in High Performance mode.
  4. Celebrate that you are getting the performance you expected and paid for.
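For steps 1 and 2, here is a quick hedged sketch from an elevated PowerShell prompt (SCHEME_MIN is the built-in powercfg alias for the High Performance plan):

powercfg /getactivescheme       # shows the currently active power plan
powercfg /setactive SCHEME_MIN  # switch to High Performance ("minimum power saving")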

That’s it. I could pile on more. Over at the Linchpin blog, I put up a blog post a year ago that shows the impact of this setting and how to check it. There are more details and words there – but the important truth is found in this article that you are reading.

My plea is that you will just check this setting yourself and fix it where needed. Your queries won’t magically run in a fraction of the time, but you could get a boost in performance to some degree, sometimes even a 10-30% increase.

If you want to get started with performance analytics and Database Healthcheck of SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Steps to Generate Windows Cluster Log?

Every now and then I sit next to a DBA to learn some of their tricks. One of these recent learnings is what I plan to share here. It is tough to learn everything all by ourselves, so learning is best done by sharing. There are many incidents with a SQL Server cluster where we need to know what is happening with the cluster, and the best way to know the history of cluster-related events is to generate a cluster.log file. The event log has events related to failover clustering, but it is not very verbose. Typically, when a SQL resource doesn’t come online, I look at the cluster log.

The command to generate the cluster log depends on the operating system. I am not an expert on clusters, but I generally use these commands to generate the log and see if there is any glaring error message. In Windows Server 2003, the cluster log was written automatically, but from Windows Server 2008 onwards it needs to be generated on demand. I think it was a good decision by Microsoft not to write this log all the time. Generation of the cluster log is done using the cluster.exe command. Here are the steps:

  • From one of the nodes of the cluster, open a Command Prompt with Administrator rights (Right click > Run As Administrator)
  • The easiest command to generate the log is to type cluster log /g and hit enter. Parameter g stands for generate.
  • A file with name cluster.log will be generated and stored in the %windir%\Cluster\Reports directory on each node of the cluster.

On my lab machine, which has a 4-node cluster, the command generates a log from each node; if any of the nodes is not working, it continues with the other nodes.


The generated files can be found in C:\Windows\Cluster\Reports.


In later versions, cluster.exe is not available (unless you install the legacy components), and you have to use the PowerShell method of generating the cluster log. The cmdlet required is Get-ClusterLog.

PS C:\Windows\system32> Get-ClusterLog
The term 'Get-ClusterLog' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:15
+ get-clusterlog <<<<
    + CategoryInfo          : ObjectNotFound: (get-clusterlog:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException


If this is the error, then you need to import the failover cluster module using the “Import-Module FailoverClusters” command. Since we have a 4-node cluster, the cmdlet generates cluster.log four times, once per node.


Here is the syntax of the command, from Get-Help Get-ClusterLog:

SYNTAX
Get-ClusterLog [-InputObject <psobject>] [[-Node] <StringCollection>] [-Cluster <string>] [-Destination <string>] [-TimeSpan <UInt32>] [<CommonParameters>]

You can customize the file based on your needs but, to be very honest, I have never needed the additional parameters.
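Still, for completeness, here is a hedged sketch using two of the parameters from the syntax above – writing all node logs to one folder and limiting the log to the last 15 minutes:

Import-Module FailoverClusters
Get-ClusterLog -Destination C:\Temp -TimeSpan 15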

Have you come across a situation where the cluster log has helped you? I am sure more of these tips will get added in the future, and your experience in using them would be of great help to others too. So do let us know if you have done something similar on your servers.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Knowing the Source Application Using APP_NAME() Function

The APP_NAME() function in SQL Server returns the application name of the current session in which it is called.

When you execute the following code

SELECT APP_NAME() AS application_name

You get the following result


application_name
———————————————-
Microsoft SQL Server Management Studio – Query

This function can be effectively used to audit where the data comes from. If you call this function from a .NET application, it returns whatever application name the connection string specifies (for .NET clients the default is “.Net SqlClient Data Provider” unless the Application Name property is set).

If you want to perform some operation based on where the call originated, you can make use of this function. If you want to audit the source of your data, you can create a table with a column whose default value is APP_NAME(); it will automatically store the name of the application the data came from.
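Here is a minimal sketch of that idea; the table and column names are made up for illustration:

CREATE TABLE dbo.DataAudit
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    SomeValue VARCHAR(100) NOT NULL,
    SourceApp NVARCHAR(128) NOT NULL DEFAULT (APP_NAME())  -- captured automatically on insert
);
INSERT INTO dbo.DataAudit (SomeValue) VALUES ('test row');
SELECT ID, SomeValue, SourceApp FROM dbo.DataAudit;  -- SourceApp shows the calling application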

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Getting started and creating JSON using SQL Server 2016

With the SQL Server 2016 CTP out, I know many have started to play around with it and explore some of the new capabilities. One of the new capabilities that has caught my attention is the introduction of JSON support. In this blog, we will get started by creating a JSON document using our SQL Server instance. Let me first introduce what JSON is, and then we will move on.

SQL Server 2016 will support JSON (JavaScript Object Notation). JSON is an open, text-based exchange format based on JavaScript’s object literal notation. JSON is a popular data-interchange format used in modern web and mobile applications, as well as for storing unstructured data. It is supported in several DB/NoSQL engines, and this improvement will enable developers to put some JSON processing logic in SQL Server that will let them parse, query, analyze, and update JSON data.

In this blog we will query a standard table and then convert it into JSON format. For this example, we are querying the standard AdventureWorks database:

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]


The sample output for these rows inside SQL Server Management Studio is an ordinary relational rowset. Now we will add the FOR JSON AUTO clause to get the same output in JSON format.

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON AUTO

The output looks like below:

[
{
"PersonType":"EM",
"FirstName":"Ken",
"MiddleName":"J",
"LastName":"Sánchez",
"EA":[{"EmailAddress":"ken0@adventure-works.com"}]
},
{"PersonType":"EM",
"FirstName":"Terri",
"MiddleName":"Lee",
"LastName":"Duffy",
"EA":[{"EmailAddress":"terri0@adventure-works.com"}]
}
]

I will save nested queries for a different time, but we can also generate JSON using the PATH option:

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON PATH

The output for this query looks a little different:

[
{"PersonType":"EM","FirstName":"Ken","MiddleName":"J","LastName":"Sánchez","EmailAddress":"ken0@adventure-works.com"},
{"PersonType":"EM","FirstName":"Terri","MiddleName":"Lee","LastName":"Duffy","EmailAddress":"terri0@adventure-works.com"}
]
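FOR JSON also accepts a few modifiers. For example, here is a sketch that wraps the array under a named root and keeps NULL properties (the root name 'people' is an arbitrary choice):

SELECT TOP(2) PP.[PersonType], PP.[FirstName], PP.[MiddleName], PP.[LastName], EA.[EmailAddress]
FROM Person.Person PP
INNER JOIN [Person].[EmailAddress] EA
ON EA.[BusinessEntityID] = PP.[BusinessEntityID]
FOR JSON PATH, ROOT('people'), INCLUDE_NULL_VALUES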

Now that we are able to generate a few simple outputs using the FOR JSON clause, we will cover the other options in future blogs.

I would be interested to know whether you will ever use JSON inside SQL Server in your environment. What would be your use case? Let me know, and I would be more than happy to blog about some of them here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #026 – 64 Bit Vs 32 Bit Confusion

I often see people getting lost in conversations about 64-bit and 32-bit operating systems and applications. There are plenty of individuals, still today, who are not sure what the difference between them is, or what runs on which platform. Here are a few very simple questions related to the Windows OS and SQL Server.


Question: Can you run SQL SERVER 32 bit on Windows 32 bit OS?

Answer: Yes

Question: Can you run SQL SERVER 64 bit on Windows 32 bit OS?

Answer: No

Question: Can you run SQL SERVER 32 bit on Windows 64 bit OS?

Answer: Yes

Question: Can you run SQL SERVER 64 bit on Windows 64 bit OS?

Answer: Yes

I guess there you go. In simple words, you can run pretty much everything on a 64-bit Windows OS. However, a 32-bit Windows OS can’t run 64-bit SQL Server.
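If you are ever unsure which flavor a given instance is, here is a quick sketch to check from T-SQL:

SELECT @@VERSION;                  -- the version string contains (X64) or (X86)
SELECT SERVERPROPERTY('Edition');  -- e.g. 'Developer Edition (64-bit)'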

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Server – 2016 – New Feature: Dynamic Data Masking

As most of you know, the latest update from Microsoft for SQL Server 2016 is the CTP 2.1 release. There is a long list of new features being introduced in SQL Server 2016. In this blog, I will talk about one of the security features, called Dynamic Data Masking.

Whenever you access your account on your bank’s site, would you be comfortable seeing your credit card or bank account number in clear text on the web page? There are multiple ways to mask this at the application level but, human nature being what it is, that leaves room for error. One small mistake from a developer can leak sensitive data and cause a huge loss. Wouldn’t it be great if a credit card number were returned with only its last 4 digits visible – XXXX-XXXX-XXXX-1234 – with no additional coding? Sounds interesting; read on!

Before experimenting with this feature, please remember that if you are using CTP 2.0, you need to turn on trace flags using the command below.

DBCC TRACEON(209,219,-1)

If you don’t enable them, here is the error which you would receive while trying my sample script, given later.

Msg 102, Level 15, State 1, Line 14
Incorrect syntax near ‘masked’.

Not to forget, this is a SQL Server 2016 feature. Running the script on an earlier version of SQL Server would cause the errors below.

Msg 102, Level 15, State 1, Line 14
Incorrect syntax near ‘MASKED’.
Msg 319, Level 15, State 1, Line 14
Incorrect syntax near the keyword ‘with’. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

Here is the script to create objects (database, table, masked column)

SET NOCOUNT ON
GO
-- drop database MaskingDemo - if it already exists
USE [master]
GO
IF DB_ID('MaskingDemo') IS NOT NULL
BEGIN
    ALTER DATABASE [MaskingDemo] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE [MaskingDemo]
END
-- create new database called MaskingDemo
CREATE DATABASE MaskingDemo
GO
USE MaskingDemo
GO
-- create table with columns of different data types
CREATE TABLE MyContacts
(
    ID INT IDENTITY(1, 1) PRIMARY KEY,
    fName NVARCHAR(30) NOT NULL,
    lName NVARCHAR(30) NOT NULL,
    CreditCard VARCHAR(20) NULL,
    SalaryINR INT NULL,
    OfficeEmail NVARCHAR(60) NULL,
    PersonalEmail NVARCHAR(60) NULL,
    SomeDate DATETIME NULL
)
-- insert a row
INSERT INTO [dbo].[MyContacts]
([fName], [lName], [CreditCard], [SalaryINR], [OfficeEmail], [PersonalEmail], [SomeDate])
VALUES ('Pinal', 'Dave', '1234-5678-1234-5678', 999999, 'pinal@SQLAuthority.com', 'pinal@foobar.in', '31-March-2013')
GO
-- apply masking
ALTER TABLE MyContacts
ALTER COLUMN CreditCard ADD MASKED
WITH (FUNCTION = 'partial(2,"XX-XXXX-XXXX-XX",2)')
ALTER TABLE MyContacts
ALTER COLUMN SalaryINR ADD MASKED
WITH (FUNCTION = 'default()')      -- default on int
ALTER TABLE MyContacts
ALTER COLUMN SomeDate ADD MASKED
WITH (FUNCTION = 'default()')      -- default on date
ALTER TABLE MyContacts
ALTER COLUMN fName ADD MASKED
WITH (FUNCTION = 'default()')      -- default on varchar
ALTER TABLE MyContacts
ALTER COLUMN OfficeEmail ADD MASKED
WITH (FUNCTION = 'email()')
GO
-- create a new user and grant select permissions
USE MaskingDemo
GO
CREATE USER WhoAmI WITHOUT LOGIN;
GRANT SELECT ON MyContacts TO WhoAmI;
I have added comments in the script so that it’s easy to understand. The script below shows the power of masking.

USE MaskingDemo
GO
SELECT * FROM MyContacts; -- this would show clear data
GO
EXECUTE AS USER = 'WhoAmI';
SELECT * FROM MyContacts;     -- this should show masked data
REVERT;
GO
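If a particular user legitimately needs to see the clear text, the UNMASK permission controls this – a small sketch (note that the availability of GRANT UNMASK may vary across CTP builds):

GRANT UNMASK TO WhoAmI;
EXECUTE AS USER = 'WhoAmI';
SELECT * FROM MyContacts;     -- now shows clear data again
REVERT;
REVOKE UNMASK FROM WhoAmI;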


When the SELECT runs as the WhoAmI user, the masked columns return obfuscated data based on the masking rules, while the sysadmin’s SELECT shows the data in clear text.
Note: In versions after the CTP2 release, the trace flags are no longer needed. If you add the trace flags there, you will start getting the “Incorrect syntax” error instead.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Tips working with Files inside SQL Server

When two SQL enthusiasts meet, the conversation often drifts to questioning each other’s understanding and implementation. When two DBAs meet, it is always great to see how the conversation goes, because each carries a ton of experience which can sometimes be contradictory too. One of the common rules of thumb has been placing log and data files on different drives.

Recently, a friend said there is not much to do apart from the placement of the drives. In one such conversation, I said – “Hey, it is not that simple. Isn’t there more that you take care of as a DBA when working with your database files (data / log)?” I got a blank look and was asked, “Pinal, I don’t see that you have ever written anything like this on the blog.” Though these points are spread across multiple blog posts, I thought this blog would aggregate some of what I shared in the conversation.

#1 Tip

Physically isolate Data files from Log files. Combining heterogeneous workloads (workloads with very different I/O and latency requirements) can have negative effects on overall performance. Here is a KB article that talks about it in detail: Separate database and transaction log files on different drives for optimal performance and disaster recovery

#2 Tip

Secondly, it is highly recommended to place the database files and backups on separate physical devices. Otherwise, if the device that contains the database fails, your backups will be unavailable as well. Furthermore, placing the data (meaning Data and Log files) and backups on isolated devices optimizes the I/O performance for both production use of the database and writing backups.

#3 Tip

Similarly, it is strongly recommended to place the database files and FILESTREAM data on separate devices. Placing the data (meaning data and log files) and FILESTREAM data on isolated devices optimizes the I/O performance and provides better management of FILESTREAM data, namely regularly defragmenting such volumes. Just in case you want to learn more about FILESTREAM, I recommend you read these older posts.

#4 Tip

Data files should be of equal size within each filegroup, as SQL Server uses a proportional fill algorithm that favors allocations in files with more free space. You may recognize this recommendation as directed at TempDB, but it applies to all user databases as well. Read more about this on MSDN under the File and Filegroup Fill Strategy section.
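As a sketch of this tip, here is how a filegroup with two equally sized files on different drives might be added (the database name, paths, and sizes are illustrative assumptions):

ALTER DATABASE SalesDemo ADD FILEGROUP Data2
GO
ALTER DATABASE SalesDemo ADD FILE
(NAME = Sales2a, FILENAME = 'D:\data\Sales2a.ndf', SIZE = 4096MB, FILEGROWTH = 512MB),
(NAME = Sales2b, FILENAME = 'E:\data\Sales2b.ndf', SIZE = 4096MB, FILEGROWTH = 512MB)
TO FILEGROUP Data2
GO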

#5 Tip

Do not change the default file extensions (mdf, ldf and ndf) for data and log files. When talking about this, I would like you to take a look at the post: SQL SERVER – Antivirus Exclusions Best Practices With SQL Server, because if any of these are changed, we need to adjust the antivirus exclusions accordingly. I am generally of the opinion not to disturb defaults unless we have a valid reason to do so.

#6 Tip

Use the Instant File Initialization feature, as it gives a boost in performance: when extending or creating a data file (it does not apply to log files), the file is not zeroed right away, saving time and I/O cycles. This is accomplished by granting the “Perform volume maintenance tasks” permission to the SQL Server account, which is set in the security policy console (secpol.msc). In this context I am reminded of a post by my good friend Tim Radney, which I attach here for reference.

This permission does come with a small security risk: because existing space is not zeroed out, when data is deleted, for example, there is a possibility that it could still be read, even though it has been “deleted”, until other data writes over that specific area of the data file. However, the performance benefit outweighs the security risk, and hence the recommendation. A small sketch to verify the setting follows.
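One common way to verify whether Instant File Initialization is actually in effect is the trace flag trick below (a sketch; the test database name is arbitrary). With trace flags 3004 and 3605 on, file-zeroing operations are written to the SQL Server error log, so a data file that is instantly initialized produces no “Zeroing” message, while the log file always does:

DBCC TRACEON(3004, 3605, -1);
CREATE DATABASE IFITest;                -- throwaway test database
EXEC sp_readerrorlog 0, 1, N'zeroing';  -- only the .ldf should show up if IFI is working
DROP DATABASE IFITest;
DBCC TRACEOFF(3004, 3605, -1);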

As I wrap up, I think I have nicely tabulated some of the conversation points in this blog. I would love to hear about recommendations you have implemented in your environments that could be a great addition to this blog, via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)