Interview Question of the Week #008 – Is GO a T-SQL Statement?

I often get this question in email – more frequently than I would like.

Question: “Is GO a T-SQL statement?”

Answer: “No, GO is not a T-SQL statement; it is a command recognized by SSMS and sqlcmd as a signal to send the current batch to the SQL Server engine.”

Well, here are two articles I recommend reading for more information – MSDN and Explanation SQL Command GO.

Here are a few things I would like to add related to the GO command.

  • I recommend using a semicolon as a statement terminator after each T-SQL statement; however, do not use a semicolon after GO or it will raise an error.
  • The ODBC and OLE DB APIs do not recognize GO; using it there results in a syntax error.
  • GO must be placed on its own line after the T-SQL statements it terminates. If you provide a positive integer after GO, the preceding batch is executed that many times (see the sketch after this list).
  • Only comments may appear on the same line as GO; any other statement on that line will raise an error.
  • You can change the batch separator from GO to any other value in SSMS under Tools > Options.
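
As a quick illustration of the count form, here is a minimal sketch you can run in SSMS or sqlcmd (the batch content is arbitrary):

-- GO is a batch separator, not T-SQL; the integer repeats the preceding batch
SELECT GETDATE() AS CurrentTime
GO 5 -- the client sends this one-statement batch to the engine five times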

Let me know your thoughts about the GO command.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX: SQL Server Not Starting – Msg 864, Level 16, State 1 – Buffer Pool Extension Feature

Recently I got an email from one of the students who attended my SQL Server 2014 Pluralsight course last year. He was trying to use the Buffer Pool Extension feature and ran into some unique trouble. Here is the email from him:

Hi Pinal,
I watched your course about SQL Server 2014 administration new features on Pluralsight. It is really a great course for someone like me who is trying to ramp up quickly on SQL Server 2014.

I was trying to experiment with the buffer pool extension feature. I enabled it on SQL Server Standard Edition and now I am not able to start my SQL Server service. Could you please help me out? This is not a business-down situation, but I want to understand what went wrong.

Regards,

(Name Hidden)

The very first thing I asked him for was the error log information. Whenever there is a problem with SQL Server startup, I always start by looking at the ERRORLOG. Here is what I saw in the SQL Server Error Log:

2015-02-28 00:39:11.63 spid8s      Starting up database 'master'.

2015-02-28 00:39:11.84 spid8s      Attempting to allocate 6560086 BUF for buffer pool extension for a maximum of 6560085 page descriptors.
2015-02-28 00:39:11.84 spid8s      Error: 864, Severity: 16, State: 1.
2015-02-28 00:39:11.84 spid8s      Attempting to allocate 6560086 BUF for buffer pool extension for a maximum of 6560085 page descriptors.

The above ERRORLOG explains that SQL Server is not able to allocate the required buffers for the buffer pool extension file. If you look closely at the two numbers, they differ by exactly one buffer. This means the size of the buffer pool extension was larger than the maximum allowed size. I looked into the documentation about the size limitation and found the below link on MSDN.

https://msdn.microsoft.com/en-us/library/dn133176.aspx

(The buffer pool extension size can be up to 32 times the value of max_server_memory for Enterprise editions, and up to 4 times for Standard edition)
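
Based on that documented limit, here is a quick sanity check you can run before sizing a BPE file – a minimal sketch assuming the 4x multiplier that applies to Standard Edition:

-- Compare configured max server memory against the Standard Edition BPE ceiling
SELECT value_in_use AS max_server_memory_mb,
       CAST(value_in_use AS BIGINT) * 4 AS max_bpe_size_mb_standard
FROM sys.configurations
WHERE name = 'max server memory (MB)';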

I went back to the error log and found the following at the top:

Microsoft SQL Server 2014 - 12.0.2430.0 (X64)
Oct 15 2014 16:05:37
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

So it means that the size given for the BPE file went beyond the allowed limit of 4 times max server memory, because this is Standard Edition.

So now we know why the error is occurring and why we are unable to start SQL Server. To fix the error, we need to start SQL Server with minimal configuration so that the BPE is not initialized. We can use the startup parameter /f. Here are the detailed steps.

  1. Open a command prompt. Use “Run As Administrator”, if applicable.
  2. Type net start MSSQLServer /f /mSQLCMD

Since my machine has a default instance, I am using MSSQLServer. For a named instance it would be MSSQL$<InstanceName>.
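
For example, for a hypothetical named instance called SQL2014, the command would be:

net start MSSQL$SQL2014 /f /mSQLCMD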

We have also given an additional parameter, /m, and passed SQLCMD to let SQL Server know that only SQLCMD connections are allowed.

  3. Once SQL Server is started, connect to it via SQLCMD by running the below at the command prompt.
SQLCMD -S(local) -E

Note that you need to provide your own server name. In the above example I am connecting to the default instance of SQL Server.

  4. Once the connection is made, we need to disable BPE using the below command:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF

  5. Once it’s turned off, we can stop SQL Server with the net stop command as below:
Net Stop MSSQLServer

  6. Now we can start SQL Server normally and it should work (unless we have some other problem).

To dig further, I provisioned a virtual machine running SQL Server Standard Edition in Microsoft Azure. My machine had the below configuration:

RAM = 14 GB
Max Server Memory = 12 GB
The D drive is an SSD used for the BPE file. On it I created the folder SQLAuthority_BPE.

Here are the steps to reproduce the error in SQL Server Standard Edition.

-- 'max server memory' is an advanced option, so enable advanced options first
SP_CONFIGURE 'SHOW ADVANCED OPTIONS', 1
GO
RECONFIGURE WITH OVERRIDE
GO
-- Cap max server memory at roughly 12 GB
SP_CONFIGURE 'MAX SERVER MEMORY', 12000
GO
RECONFIGURE WITH OVERRIDE
GO
-- 50 GB exceeds 4 x 12000 MB, so this BPE size breaks the Standard Edition limit on restart
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = 'D:\SQLAuthority_BPE\BUFFERPOOLEXT.BPE', SIZE = 50 GB)
GO
SELECT * FROM SYS.DM_OS_BUFFER_POOL_EXTENSION_CONFIGURATION
GO
SHUTDOWN WITH NOWAIT

Important: the above script will also shut down SQL Server, because I have added the SHUTDOWN T-SQL command at the end.

Once that was done, I was not able to start SQL Server without following the steps I mentioned earlier. After fixing the problem, I configured BPE with a size of 48 GB and it worked fine. Here is the message we can see in the ERRORLOG:

2015-02-28 09:57:42.110 spid8s       Buffer pool extension "D:\BPE\BUFFERPOOLEXT.BPE" has been initialized successfully with size is 49152 MB.

I hope this helps in understanding the limitation and in fixing an error which I had not seen before.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ReadOnly Databases and Notes Using Snapshots

This is the third week that I am writing about ReadOnly databases; this blog continues last week’s notes. After I published those notes, one of my blog readers asked whether it is possible to get the ReadOnly error without actually making the database a ReadOnly database. This got me thinking and I couldn’t sleep that day. A quick call to my friend to get some ideas yielded a simple answer – using snapshots.

I couldn’t control my excitement, and this prompted me to write up this special-case scenario. As we execute the script, we will learn some of the behavior.

Let us start by creating our database which will be used for testing.

CREATE DATABASE [ReadOnlyDB]
CONTAINMENT
= NONE
ON  PRIMARY
( NAME = N'ReadOnlyDB', FILENAME = N'C:\Temp\ReadOnlyDB.mdf' , SIZE = 4024KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'ReadOnlyDB_log', FILENAME = N'C:\Temp\ReadOnlyDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 10%)
GO
USE ReadOnlyDB
GO
-- Creating our table
CREATE TABLE tbl_SQLAuth (id INT, Longname CHAR(8000))
GO

Next let us go ahead and create a snapshot database out of our ReadOnlyDB.

-- Create a snapshot on our DB
CREATE DATABASE ReadOnlyDB_SS ON
( NAME = ReadOnlyDB, FILENAME =
'C:\Temp\ReadOnlyDB_data_1800.ss' )
AS SNAPSHOT OF ReadOnlyDB;
GO

Now that our snapshot database (ReadOnlyDB_SS) is created, it is important to know that snapshot databases are always created as ReadOnly databases; the behavior is the same as that of an explicitly ReadOnly database. To test this, let us try to insert some data into our snapshot DB.

USE ReadOnlyDB_SS
GO
INSERT INTO tbl_SQLAuth VALUES (1, 'SQLAuth');
GO

Now we will be presented with the same familiar error (3906):

Msg 3906, Level 16, State 1, Line 25

Failed to update database “ReadOnlyDB_SS” because the database is read-only.
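
As an aside, while writes fail, reads from the snapshot work just fine – a quick sketch using the table we created above:

USE ReadOnlyDB_SS
GO
-- Reads succeed; the snapshot serves data as of the moment it was created
SELECT COUNT(*) AS RowsVisible FROM tbl_SQLAuth
GO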

Coming back to the error: even though we have not explicitly marked the snapshot database as ReadOnly, we get error 3906. I was curious to know what would happen if we tried to mark the snapshot database as ReadOnly explicitly. The error is self-explanatory.

-- Let us set the Snapshot DB as READ_ONLY
USE MASTER
GO
ALTER DATABASE [ReadOnlyDB_SS] SET READ_ONLY
GO

Msg 5093, Level 16, State 1, Line 31

The operation cannot be performed on a database snapshot.

Msg 5069, Level 16, State 1, Line 31

ALTER DATABASE statement failed.

If that was easy, what will happen if we try to set the snapshot database to READ_WRITE? The error is self-explanatory again:

USE [master]
GO
ALTER DATABASE [ReadOnlyDB_SS] SET  READ_WRITE WITH NO_WAIT
GO

Msg 5093, Level 16, State 1, Line 37

The operation cannot be performed on a database snapshot.

Msg 5069, Level 16, State 1, Line 37

ALTER DATABASE statement failed.

That gives us two learnings: we cannot mark a snapshot database as ReadOnly, nor can we mark a snapshot database as Read_Write.
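
By the way, if you need to tell a snapshot apart from a database that was explicitly marked ReadOnly, sys.databases can help – a minimal sketch (source_database_id is non-NULL only for snapshots):

SELECT name, is_read_only, source_database_id
FROM sys.databases
WHERE name IN ('ReadOnlyDB', 'ReadOnlyDB_SS')
GO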

With these learnings in place, let us do the cleanup. Make sure to drop the snapshot database before deleting the main database. Our cleanup script for this blog looks like this:

-- Clean up time
USE MASTER
GO
DROP DATABASE ReadOnlyDB_SS
GO
DROP DATABASE ReadOnlyDB
GO

Reference: Pinal Dave (http://blog.sqlauthority.com)

Hey DBA – Protecting a Proactive Attitude – Notes from the Field #069

[Note from Pinal]: This is the 69th episode of the Notes from the Field series. Mike Walsh is an amazing person to meet and know. I am fortunate to be his friend. Every single time I meet him, I learn something as a human being. He is always looking for an opportunity to improve the lives of DBAs and Developers.

Being proactive is a good thing. There are few things that can never go wrong, and being proactive is one of them. Being proactive is not all about SQL scripts and deployments; it is also about attitude and mindset. If you know Mike, you are aware that he understands human psychology well. He knows how DBAs and Developers think and what their attitudes toward technology are. I asked him one question – “How do you protect a proactive attitude?” – and database expert Mike Walsh decided to guide me with the answer.

Read the entire story in his own words.


My posts here lately have been talking about being proactive. I am going to continue on that theme today, because it is that important. We perform a lot of WellDBA Exams™, where we go into a client shop, review the existing setup, look at performance and write up our findings.

Each time we do one of these, we find the same types of findings almost everywhere. In fact, we have a free resource available that guides you through a checklist, so you can evaluate your own environment against the 7 most common findings we encounter and see detailed information on what to do about them.

You can grab that free resource here.

In this post I want to zoom in on a proactive attitude again, and some ways to protect your proactive time. Then I will suggest a plan using that checklist to go through your environment and ensure the big things are taken care of.

Protecting a Proactive Attitude

I know I talk about this in just about every post I share here, but I am trying to cement this thought – a good DBA is a proactive DBA. You cannot just wait for a problem to arise, fight the problem, and then sit back and wait for the next problem.

I know you don’t do that. I know most DBAs don’t do that, at least not on purpose. But in this interrupt-driven world, users and other IT teams are constantly clawing at their DBA with requests. These requests have to be met, because customers expect them to be done. But I would challenge you: many requests don’t need to be done as fast as most DBAs do them.

You should develop a plan to protect your proactive time.

A few tips on how to preserve that time:

  • Schedule it! Make a block of time on your calendar each day, each week, whatever you can do. Guard that time, shut down your e-mail and IM clients (if you are allowed) and focus on doing proactive tasks to make your life easier.
  • Talk to management. When you are reactive you spend more time, you are less effective and mistakes happen. If you get management to guard your time and people’s access to you for a bit of proactive time, it will make you more efficient.
  • Work from home if you can. A lot of times when I was a full-time DBA, I could be more proactive when working from home, as the “drop by” visits can’t happen as easily that way.
  • Consider a WellDBA Exam from Linchpin People, or a health check from another firm. Sometimes having an outside firm come in, audit your environment and tell you about your misses is the best way to get proactive time prioritized. Plus, that approach helps you know where to focus your proactive time.
  • Just do it – sounds harsh, but sometimes you just have to do it. If you don’t start trying to be more proactive, you’ll always be reactive, and that is when mistakes happen. This is when we can come in, do an Exam and uncover the findings to which the DBAs say, “Yeah, I know… but I didn’t have time.”

A Proactive Plan

Download our checklist, or find other SQL Server diagnostic scripts and approaches out there. There are several. Our checklist is less focused on scripts and more focused on the how, what and why of these 7 areas, and then suggests an approach to resolve the findings.

Here is the link again, it really is free and not some gimmick to get you on a “to be harassed” list.

Then start going through the checklist. We’ve ordered it in terms of our typical priorities. For example, if you aren’t able to recover your environment, the other 6 points don’t matter as much. Make sure you are really solid in each area, and make sure you really understand the how and why of each area.

Move on to the next question and repeat the process. It is that simple. Some of the tasks will take more time than others to investigate. Some will take more time than others to fix. But this is a place to get started. And sometimes a place to start is all that is needed.

So, get some time set aside for being proactive, review your environment and get to the point where at least those 7 critical items are being taken care of! All the best on your journey!

If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Storing Data in Hindi, Chinese, Gujarati and Tamil (or any other) Language in the Same Table

A long time ago, I wrote a blog which talks about the multi-language support available in SQL Server.

SQL SERVER – 2005 – Multiple Language Support

I have been getting the below questions on a regular basis, and it still baffles me:

  1. How do I pass Hindi parameters in SQL Server procedure as I want to insert a row with Hindi text?
  2. What is the collation I need to use if I have to store Tamil in SQL Server tables?
  3. I am inserting the data in Gujarati but while selecting I am getting “???”

Here are some basic rules you may want to keep in mind. When storing Unicode data, the column must be of a Unicode data type (nchar, nvarchar, ntext). Another rule is that the value must be prefixed with N at insertion time.

If you are using a stored procedure, then while passing the parameter to the stored procedure, make sure that N is used before the string. Also make sure the parameters declared within the stored procedure that carry those values are defined as Unicode data types as well; a minimal sketch follows.
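
Here is a small illustration of that rule; the table and procedure names are hypothetical:

-- Unicode column and Unicode parameter
Create Table NamesDemo (i int, FullName nvarchar(100))
go
Create Procedure usp_AddName @FullName nvarchar(100)
As
Insert into NamesDemo values (1, @FullName)
go
-- The caller must still prefix the literal with N
Exec usp_AddName @FullName = N'मेरा नाम Pinal है'
go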

Here is a sample script and its output, which should make these points clear. I used Google Translate to do the translation, and I am not sure how accurate it is in Kannada, Tamil and Telugu.

use master
go
If db_id('SQLAuthority') is not null
drop database SQLAuthority
go
set nocount on
go
Create database SQLAuthority
go
use SQLAuthority
go
Create Table MultiLanguage_Will_Not_Work (i int, j varchar(10), k varchar(100))
go
-- Hindi
Insert into MultiLanguage_Will_Not_Work values (1, 'with N', N'मेरा नाम Pinal है')
go
-- Tamil
Insert into MultiLanguage_Will_Not_Work values (2, 'without N', 'என் பெயர் PINAL ஆகிறது')
go
Select * from MultiLanguage_Will_Not_Work
go
Create Table MultiLanguage (i int,j varchar(10), k nvarchar(100))
go
-- Gujarati
Insert into MultiLanguage values (1, 'with N', N'મારું નામ પિનલ છે')
go
-- Telugu
Insert into MultiLanguage values (2, 'without N', 'నా పేరు PINAL ఉంది')
go
-- Kannada
Insert into MultiLanguage values (3, 'with N', N'ನನ್ನ ಹೆಸರು ಪಿನಾಲ್ ಆಗಿದೆ')
go
Select * from MultiLanguage
go
use master
go
drop database SQLAuthority
go

Here is the output

The same would work for any other language as well. It is important for us to know the basics, and this blog post is one of many basics posts that I plan to write. Do let me know which topics you would like me to revisit as part of this back-to-basics series. The more ideas you give me, the more interesting the series can be.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Configure, Monitor and Restore Backups from Mobile & Web browser

So tell me if this is a fantasy or a reality: You are on a beach, sipping on your cocktail and looking at the sea when you get a frantic call from your boss, “One of the junior DBAs has just deleted all of the data from a critical database”. You say “Hold on”, switch to a browser on your smartphone, go to your dashboard on SqlBak.com and click a Restore button on your latest transaction log backup. In a few seconds you get back to your boss and say: “I have fixed it”. Your boss is stunned thinking you are a magician and makes a note to give you a raise and a bonus while you continue enjoying the beach…

Well, the beach and the pay raise are a fantasy, but restoring your database in a couple of clicks through just a browser on your smartphone is a service you can use today – it is called SqlBak.

This is how it looks – you select a backup job, click Restore and the latest backup will be restored to your SQL Server:

Let’s see how to make it work.

First, the boring stuff: go through a 10-second signup with your Facebook, Twitter or Google account. Then (and here’s the secret to the magic of SqlBak working through a browser) download and install a small service program to run on your SQL Server computer – the SqlBak Client.

To connect this program to your SqlBak account, enter your “Secret Key” from the SqlBak Dashboard into the SqlBak Client (it will prompt you for it):

The service program (SqlBak Client) has practically no interface and after connecting it to your SqlBak account you can forget about it (it will also auto-update). All of the configuration, monitoring and restore interface will be online through SqlBak.com, not through the program directly.

Now the fun part begins.

Go to the SqlBak Dashboard and click the “Add New Job” button. Select the computer where you previously installed and connected the SqlBak Client, and authenticate with the SQL Server – exactly as you would in SSMS.

Press Continue and you get to the Job Settings page. Select the databases you want to back up:

Then press “Add Backup Destinations” to select where you want to store the backups – you have the option of sending them to a local/network folder, FTP, Amazon S3, Dropbox, Google Drive, MS OneDrive or Azure Storage:

I’ve selected to send backups to Amazon S3 and a Network Folder:

Let’s schedule the full backup to run every 6 hours and the differential backup to run every hour:

By default it takes standard backup files and compresses them to zip – let’s keep it like that:

You would probably want to receive a confirmation email, at least on any failure – so fill out that field:

Now save the job and press Run Now to see if everything runs correctly:

You would see a log like this:

Now we can forget about it – it will email us if anything goes wrong. If we come back to it in a few days, here’s the kind of backup history we would see:

For each successful backup you have Download and Restore buttons. Let’s click the Restore button to restore one of the differential backups. Select the destination to restore from (Amazon S3 or a network folder in our case):

You will see a restore progress bar:

In a few minutes your database is restored.

Absolutely brilliant! I know of no other product that would allow me to do the same from just a browser. It took quite a few pictures to describe the process, but in reality it takes just a few minutes and is pretty self-explanatory.

In addition to backups, this service monitors that your SQL Server is alive and emails you if it goes offline – very useful.

Companies that want to offer private-label SQL Server backups themselves can have the service branded for their company.

Some companies with strict security requirements may be hesitant to store SQL Server login credentials with SqlBak. I’ve contacted the developers, and they told me they are working on a version that would give users a choice of entering login credentials on the client – that way, the credentials would never leave the SQL Server.

Conclusion: I have seen many SQL backup programs before, but I am seriously impressed with the ease of using SqlBak – the only SQL backup software where you can configure, monitor and restore backups right from the browser.

If you enter the promo code Pinal2015 on the Sign Up page, you will receive a $20 credit to your account.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – COLUMNSTORE and COLUMNSTORE_ARCHIVE data compression

In earlier blogs on the “Fundamentals of Columnstore Index,” I took a simple tour of how to create a basic columnstore index. Having said that, SQL Server 2014 supports columnstore and columnstore archival compression for columnstore tables and indexes. Columnstore tables and indexes are always stored with columnstore compression. Most data warehouses have some data that is frequently accessed and some that is accessed more infrequently.

You might be wondering about the use-case scenario for these commands. In reality, data is often partitioned by date, and the most recent data is accessed much more frequently than older data. In such cases the older data can benefit from additional compression at the cost of slower query performance. I talk to a lot of customers about this special feature:

  • To enable this scenario, SQL Server 2014 added support for archival compression of SQL Server columnstores.
  • The archival compression option is enabled on a per-table or per-partition (object) basis; a columnstore can contain objects with archival compression applied alongside objects without it. To allow easy extensibility of the existing on-disk structures, archival compression is implemented as an extra stream-compression layer on top of the columnstore index implementation.
  • This layer transparently compresses the bytes being written to disk during the columnstore serialization process and transparently decompresses them during deserialization. Stream decompression is always applied when data is read from disk; data is not cached in memory in its stream-compressed form.

The further reduction obtained by archival compression is substantial, ranging anywhere from 25% to 60%+ depending on the data. These are rough numbers based on what I have seen in customer interactions in the past.

  • Use COLUMNSTORE_ARCHIVE data compression to compress columnstore data with archival compression.

Let us look at how this can be achieved using TSQL commands.

To Apply Compression:

Here are the simple TSQL commands to play with the COLUMNSTORE and COLUMNSTORE_ARCHIVE bits in a single ALTER script:

-- Adding the archive bit for a single partition
ALTER TABLE tbl_myColumnStore
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
-- Adding the archive bit for all partitions
ALTER TABLE tbl_myColumnStore
REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
-- Columnstore for all partitions, with the archive bit on only a few partitions
ALTER TABLE tbl_myColumnStore
REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE ON PARTITIONS (1,3));

Use COLUMNSTORE data compression to remove the archival compression. The resulting data will continue to be compressed with regular columnstore compression.

To Remove the Archive Compression

Similar to the above commands, we can also remove archival compression from specific partitions or objects. A representative command looks like this:

-- Turning the archive bit off, back to Columnstore for all partitions
ALTER TABLE tbl_myColumnStore
REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = COLUMNSTORE);
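
After applying or removing the archive bit, you can verify which setting each partition ended up with – a minimal sketch querying sys.partitions for the same table:

-- data_compression_desc reports COLUMNSTORE or COLUMNSTORE_ARCHIVE per partition
SELECT partition_number, data_compression_desc
FROM sys.partitions
WHERE object_id = OBJECT_ID('tbl_myColumnStore');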

As I wrap up this blog, I would like to know how many of you out there are using archival columnstore compression in your environments. How much compression are you getting, and what are the space savings because of it? I would love to hear about your experience as we learn together.

Reference: Pinal Dave (http://blog.sqlauthority.com)