SQL SERVER – Maintenance Plan – Maintenance Cleanup Task not Deleting Files

I am always notified by my blog readers whenever they see something unusual, and I try to help them so that I can learn something new. Recently, one of my blog readers told me that his drive was getting filled up by the regular backups taken via SQL Server Maintenance Plans.

My simple answer was – you should delete or move older backups, and the Maintenance Cleanup Task, available in the Toolbox, does exactly that. And I was shocked when he said that he already had that task in place, but it was not working as expected. It was supposed to delete files with the extension “bak” which were 7 days old, but it was not deleting any of those files. My next question was – is there any error message? The answer was – “When I run the job, it gives me a success message, but the older .bak files are still present.”

I asked him to look into the log files generated by the maintenance plan. They are generated under the LOG folder (the same folder which contains the SQL Server ERRORLOG files). Here is what was shared.

Microsoft(R) Server Maintenance Utility (Unicode) Version 12.0.2381
Report was generated on "local".
Maintenance Plan: MaintenancePlan
Duration: 00:00:00
Status: Succeeded.
Details:
Maintenance Cleanup Task (local)
Maintenance Cleanup on Local server connection
Cleanup Database Backup files
Age: Older than 4 Weeks
Task start: 2014-10-28T10:14:59.
Task end: 2014-10-28T10:14:59.
Success
Command:EXECUTE master.dbo.xp_delete_file 0,N''E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\Backup'',N''.bak'',N''2014-09-30T10:14:59''
GO

When I configured the same plan on my machine and looked at the log, I got similar output, with one little difference: in his log, the extension of the file was given as .bak (notice the dot before bak). I have highlighted it in the output.

Then I asked my friend to share a screenshot of the cleanup task, and here is what he shared.


And that was the problem. I asked him to keep just bak as the extension (removing the dot before it) and run the below command.

EXECUTE MASTER.dbo.xp_delete_file 0,N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\Backup',N'bak',N'2014-09-30T10:14:59'
GO

The command is the same as the one shown in the log file; I have just replaced each pair of single quotes with one single quote. As soon as it was run, all BAK files older than 2014-09-30T10:14:59 were deleted successfully.
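
Since xp_delete_file is an undocumented extended procedure, here is a minimal annotated sketch of how its arguments are commonly understood; the folder path below is hypothetical, and the parameter meanings are observed behavior rather than official documentation.

-- Arguments, as commonly understood (undocumented, so subject to change):
-- 1st: file type (0 = backup files, 1 = report files)
-- 2nd: folder to scan (hypothetical path below)
-- 3rd: file extension, WITHOUT the leading dot
-- 4th: delete files older than this date
-- 5th: optional; 1 = include first-level subfolders, 0 = top folder only
EXECUTE master.dbo.xp_delete_file 0, N'E:\Backup', N'bak', N'2014-09-30T10:14:59', 1
GO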

Moral of the story: don’t use a dot before the file extension in the UI.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – The database ‘model’ is marked RESTORING and is in a state that does not allow recovery to be run

The best way one can learn SQL Server is by trying out things on one’s own, and I am no different. I am constantly trying to explore the various options one can use when working with SQL Server. In that context, while I was playing around with backup and restore commands, I made a mistake and unfortunately restarted SQL Server. After that I was unable to start the SQL Server service. When I started the service, it didn’t give any error, but it stopped automatically.

Whenever I have any weird problem with SQL Server, I always look at the ERRORLOG files for that instance. If you don’t know the location of the ERRORLOG, you should refer to Balmukund’s blog (Help : Where is SQL Server ErrorLog?).

This is what I found in the ERRORLOG just before the stop.

2014-10-28 00:20:39.02 spid9s      Starting up database 'model'.
2014-10-28 00:20:40.01 spid9s      The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run.
2014-10-28 00:20:40.04 spid9s      Error: 927, Severity: 14, State: 2.
2014-10-28 00:20:40.04 spid9s      Database 'model' cannot be opened. It is in the middle of a restore.

The error and the behavior I was seeing make sense, because to start SQL Server we need the master, model and tempdb databases. You might think that msdb is also a system database and would be needed by the SQL Server engine? Well, you might have been tricked: msdb is needed by the SQL Server Agent service, not by the SQL Server service. So, my master was fine, but model had a problem. Every new database is created from model, including tempdb. Since the model database was not recovered successfully, SQL Server could not create tempdb, and understandably the instance of SQL Server did not start.

So I called up Balmukund – these are the perks of having a good friend to rely on. He never says “no”, but he also doesn’t give the complete solution to a problem; he gives a hint and asks me to research further. This time, too, the magical words were – “use trace flag 3608 and restore model with recovery”.

I followed his advice and performed the steps below.

1. Start SQL Server with trace flag 3608 using the net start command.

Net Start MSSQL$SQL2014 /T3608

In my case, SQL2014 is the name of the instance. If you have a default instance, the service name would be MSSQLServer; for a named instance, it is MSSQL$InstanceNameHere.
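
For example, for a default instance the same step would be:

Net Start MSSQLServer /T3608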

2. After starting with trace flag 3608, I verified it from the ERRORLOG as well.

Further, I also found the below message in the ERRORLOG.

Recovering only master database because traceflag 3608 was specified. This is an informational message only. No user action is required.

3. Connected to the SQL instance using SQLCMD with the below command.

SQLCMD -S .\SQL2014 -E

You can read about the parameters of SQLCMD in Books Online.

The “1>” prompt means we are connected to the SQL instance. I then executed the below command (hit Enter at the end of each line):

RESTORE DATABASE Model WITH RECOVERY
GO

4. Once the command is executed successfully, we come back to the “1>” prompt again. We can type exit to come out of SQLCMD.
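
Before exiting, we can optionally confirm that model is back online; this is just a sanity check I like to run (state_desc should now read ONLINE):

SELECT name, state_desc FROM sys.databases WHERE name = 'model'
GO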

5. Now stop the SQL Server service.

Net Stop MSSQL$SQL2014

6. And start it again without the trace flag.

Net Start MSSQL$SQL2014

Now my SQL instance came up happily and I was unblocked. After some time I got a call from Balmukund asking if SQL Server had started, and I told him that my next blog on this very topic was already ready. He finally asked – how did that happen in the first place? And my answer was – I ran the wrong command. My command was:

BACKUP DATABASE model TO DISK = 'Full.bak'
GO
BACKUP LOG model TO DISK = 'Log.trn' WITH NORECOVERY
GO

My request to my readers is – please DO NOT run the above command on your SQL Server instance and restart SQL Server, or you will need to follow the above steps on a production server. The BACKUP LOG … WITH NORECOVERY form is meant for a tail-log backup taken just before a restore; it deliberately leaves the database in the RESTORING state, which is exactly why model could not be recovered at startup. Learning never stops when working with SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS Trick – Generating CSV file using Management Studio

In my previous blog I talked about using SQLCMD to generate a comma-separated file for table data.

Some blog readers asked why we should use SQLCMD or BCP to generate the file when the same can be achieved using SQL Server Management Studio itself. Well, that was a strong statement, and I liked the simplicity of the solution. A lot of times, it is our inability to learn some of the simple techniques present in the tools we use that keeps us from being more productive.

Here are the steps to achieve the same. Go to SQL Server Management Studio > choose Tools from the menu bar > click on Options. Then go to “Query Results” > “SQL Server” > “Results to Text”.

Once we choose “Comma delimited” and hit OK, we will get the below kind of output when we execute a query (note that we need to open a new Query Window so that the settings take effect).


Notice that the first row contains the column names. If we want to avoid that and want ONLY the data, we can customize that in the same place where we modified the separator setting. If we uncheck “Include column headers in the result set”, we will get only the data rows as output.

We can also choose a custom delimiter. I have set the delimiter as pipe (|), unchecked the box mentioned earlier, and below is the output for the same query.

As we can see, the output is pipe-separated and we have just the data, with no column headings. This file can be saved and imported into some other server using the bcp command, the BULK INSERT command or the Import/Export Wizard.
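
As an illustration, a pipe-delimited file saved this way could be loaded back with BULK INSERT. This is a minimal sketch with a hypothetical table and file path, and it assumes the file contains only data rows (headers unchecked as above):

-- Minimal sketch: load a pipe-delimited file (table and path are hypothetical)
BULK INSERT dbo.TargetTable
FROM 'C:\Temp\Export.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n')
GO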


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – The term ‘Invoke-Sqlcmd’ is not recognized as the name of a cmdlet

It’s always fun to learn and share new things I discover about any technology. I decided to spend my weekend at home quietly, so I could learn some scripting using PowerShell and SQL Server. My laptop has tons of software installed, so I always use a virtual machine to learn new things, because I want to see how things work for someone starting out as a learner. My day didn’t start well, as I was hit by errors as soon as I ran Invoke-SQLCMD. This cmdlet is part of the SQL Server PowerShell support and is worth a look.

This is the first error I got as soon as I tried to run Invoke-SQLCMD.

Error # 1: The term 'Invoke-Sqlcmd' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

The error was due to the fact that I had installed only the SQL Server engine on the machine; I didn’t install any client components. Essentially, the error appears because the Invoke-Sqlcmd cmdlet is not included as part of Windows PowerShell, but is instead part of sqlps (SQL Server 2008 PowerShell Host). So, before using Invoke-Sqlcmd we should install SSMS or the SQL Server Feature Pack (latest is SQL 2014, which can be downloaded from here).

Once installation was done, I ran the command and I got a new error.

Error # 2: The 'Invoke-Sqlcmd' command was found in the module 'SQLPS', but the module could not be loaded. For more information, run 'Import-Module SQLPS'.

Here is the complete error message

As the message says, we can run Import-Module SQLPS to learn more. Here is the output of that command.

Error # 3: Import-Module : File E:\Program Files (x86)\Microsoft SQL Server\120\Tools\PowerShell\Modules\SQLPS\Sqlps.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.

Here is the complete error message

Why this error? If you know the basics of PowerShell, every command is called a cmdlet (pronounced “command-let”). Invoke-Sqlcmd is also a cmdlet, provided either by the snap-in SqlServerCmdletSnapin100 (prior to SQL Server 2012) or by the module SQLPS (SQL Server 2012 and later). So, we need one of these loaded into PowerShell (for example, at the beginning of the script) before we can call the cmdlet.
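
As an aside, Error # 3 is about the PowerShell script execution policy rather than SQL Server itself. One way to allow the SQLPS scripts to load – assuming relaxing the policy is acceptable in your environment – is to run this from an elevated PowerShell prompt:

Set-ExecutionPolicy RemoteSigned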

Assuming PowerShell is installed on the SQL Server machine, you can open the Windows PowerShell command prompt and load the snap-ins as below to get into the SQL Server PowerShell environment.

Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100

Once I followed the above, I was able to use Invoke-SQLCMD as below.
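
On SQL Server 2012 and later, the module route works as well. Here is a minimal sketch of loading the module and running a test query; the instance name .\SQL2014 matches the one used earlier in this post:

Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance ".\SQL2014" -Query "SELECT @@VERSION AS SQLVersion"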

I hope this helps others find solutions to these errors. How many of you here are frequent users of PowerShell to administer and automate SQL Server tasks?
Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to use Procedure sp_user_counter1 to sp_user_counter10

There are many performance counters available in SQL Server which can be used to monitor various parameters of the SQL Server engine. Have you ever been in a situation where you wanted to see, in a performance counter, a value produced by a query you have written? Imagine a situation where you want to track the fragmentation or the number of rows in a table over a period of time, and you want to plot those values against the standard performance counters available.

SQL Server has had this feature for a long time. I noticed it recently while working with one of my friends. He was showing me the impact of the number of databases versus the number of threads in an AlwaysOn availability group. He showed me a graph, and my first question was – how did you plot the number of databases with that counter? The answer was – using a user counter. I did further research, learned something new, and am sharing it with you in the form of this blog.

There are ten system procedures available in SQL Server which allow users to set specific counter values. They are sp_user_counter1, sp_user_counter2, … through sp_user_counter10. That’s why they are called user settable counters. Below is the screenshot of Performance Monitor (Start > Run > PerfMon).

These counters are populated using the stored procedures; the counter names and the stored procedure names match. This means the values for “User counter 1” can be set using sp_user_counter1, the values for “User counter 2” using sp_user_counter2, and so on.

Here is a quick example to demonstrate the usage.
-- Drop the database, if already present
USE master
GO
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'SQLAuthority')
BEGIN
    ALTER DATABASE SQLAuthority SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE SQLAuthority;
END
GO
-- Create a new database
CREATE DATABASE SQLAuthority
GO
-- Create the objects needed for the demo
USE SQLAuthority
GO
CREATE TABLE TrackMe (i INT)
GO
-- WHILE loop to populate the data and the counters
SET NOCOUNT ON
DECLARE @NumberOfRows INT = 0, @NumberOfRows_2 INT = 0
WHILE (@NumberOfRows < 25)
BEGIN
    INSERT INTO TrackMe VALUES (1), (2), (3)
    SELECT @NumberOfRows = COUNT(*) FROM TrackMe
    EXECUTE sp_user_counter1 @NumberOfRows      -- 3, 6, 9, 12...
    WAITFOR DELAY '00:00:02'
    SELECT @NumberOfRows_2 = @NumberOfRows * 2
    EXECUTE sp_user_counter2 @NumberOfRows_2    -- 6, 12, 18, 24...
    WAITFOR DELAY '00:00:02'
END
-- Loop finished; reset the counters
EXECUTE sp_user_counter1 0
EXECUTE sp_user_counter2 0

Here is the Performance counter graph where I have captured the counters.

Let me explain the logic of the script a little. The script plots the number of rows in the table as counter 1, and double that value as counter 2. There is a delay of 2 seconds between the counter updates so that we can see steps in the graph. Once the number of rows in the table crosses the loop limit of 25, the loop ends and the counters are reset to zero; that is the drop you see at the end.

If you look at the procedure itself, it is straightforward. Here is its code.

CREATE PROCEDURE sys.sp_user_counter1 @newvalue INT
AS
DBCC SETINSTANCE ('SQLServer:User Settable', 'Query', 'User counter 1', @newvalue)

This means we can also call the undocumented DBCC SETINSTANCE command directly to set values for the counters.
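
For example, the sketch below sets “User counter 3” without going through the wrapper procedure, mirroring the procedure body shown above. Since DBCC SETINSTANCE is undocumented, the supported route remains the sp_user_counterN procedures:

-- Set "User counter 3" directly (undocumented command; use with care)
DBCC SETINSTANCE ('SQLServer:User Settable', 'Query', 'User counter 3', 42)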

Can you think of a more innovative use of plotting your own values in Performance Monitor?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix : Msg 230, Level 14, State 1 – The SELECT permission was denied on the object, database, schema – Part II

A couple of days back I wrote a blog on Msg 230 and thought it covered one of the simplest implementations of access control in the system. To continue with the code in this blog, please make sure you read that one first before getting into the examples here.

In that blog, I wrote a statement where I said that DENY takes precedence over GRANT. After that, my good friend Vinod Kumar pinged me to say that this might not always be the case. That statement got me thinking harder, and I was quick to ask him to prove me wrong. More than a challenge, it was a great opportunity to learn some of the finer details that let us understand SQL Server better. I have always been of the opinion that the best way to learn something is by sharing. So here is the repro of the exception, where a DENY at a higher level is overridden by a GRANT at a more granular level.

Similar to the previous post, let us go ahead and create our sample database, a user inside this database, and our secure table with some data.

CREATE DATABASE PermissionsDB
GO
USE PermissionsDB
GO
CREATE USER Pinal WITHOUT LOGIN;
GO
CREATE TABLE SecureTbl (ID INT, Name VARCHAR(50), SSN VARCHAR(20))
GO
INSERT INTO SecureTbl VALUES (1, 'Pinal', '111-22-3333')
GO

If we try to access the table as the user “Pinal” at this point, without any explicit permission given to the user, we will get the following error:

Msg 229, Level 14, State 5, Line 38
The SELECT permission was denied on the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

Now that we have our secure table ready, let us go ahead and DENY explicit rights on the table to the user ‘Pinal’, and then GRANT at a specific column level, as in the command below. In this example, we give rights on the column “ID” alone.

-- Does a GRANT at column-level override a DENY at the object level
DENY SELECT ON SecureTbl TO Pinal
GRANT SELECT (ID) ON SecureTbl TO Pinal
GO

Let us next change the user context to “Pinal” and execute the SELECT commands.

EXECUTE AS USER='Pinal'
SELECT * FROM SecureTbl
SELECT ID FROM SecureTbl
REVERT

Though we have a DENY at the object level (the table, in this instance), the GRANT at the specific column level still works: the first SELECT (*) fails with the errors below, while the second one (ID only) returns data.

Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'Name' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.
Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'SSN' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.
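
If you want to verify the effective permissions the user ends up with, sys.fn_my_permissions is handy. Here is a quick sketch, run in the PermissionsDB database; the column-level grant shows up with the column name in subentity_name:

EXECUTE AS USER = 'Pinal'
SELECT entity_name, subentity_name, permission_name
FROM fn_my_permissions('dbo.SecureTbl', 'OBJECT')
REVERT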

If we want to GRANT access to multiple columns, we can do it using the command shown below.

GRANT SELECT (ID, Name) ON SecureTbl TO Pinal
GO

I am always of the opinion that it is sometimes better to DENY across the board and then give specific access. If you notice, compared to the previous article, we have NOT granted SELECT at the object level to everyone here; we were explicit with the DENY logic instead. In the previous article, the object-level grant looked like this:

-- Explicitly grant at object level
GRANT SELECT ON SecureTbl TO PUBLIC

Let us make sure there are no unnecessary databases in our server. Here is the cleanup script.

-- Clean up
USE MASTER
GO
DROP DATABASE PermissionsDB

I hope you also learned something new today. This is a great way to learn the fine print when it comes to working with databases and permissions. Please let me know if you have ever used this technique in your environment, and what the actual scenario was.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Common Sense Data Security – Notes from the Field #055

[Note from Pinal]: This is the 55th episode of the Notes from the Field series. Common sense is not as common as we think; I am sure you agree with that from your real-world experience. However, when it is about data and its security, there have to be some rules along with the policy, but common sense is extremely critical. When I read this article, I found it humorous at points, and some of the examples also reminded me of my past experience. If you are in data security, you will have a great time reading these notes; but even if you are not, you will still love it.

In this episode of the Notes from the Field series, database expert Kevin Hazzard explains common sense data security and how we can apply it in daily life in the real world. Read the experience of Kevin in his own words.


There are many excellent books and articles that address the correct ways to store sensitive user information. Yet, many in IT are still failing to protect customers from loss due to data breaches. Every day, it seems that there’s another retailer or commercial web site in the news for losing passwords or credit card numbers to hackers. As an industry, why are we struggling to secure this type of information when there’s so much good intelligence and so many great tools for getting the job done? It’s a complicated subject, so perhaps it’s time to step back and use a bit of common sense to analyze the problem.

No matter the industry, using the right tool for the job is rule number one. Line-of-business databases are all about organizing information and getting it into the hands of people who perform transactions and make decisions with it. As a result, these databases become naturally permissive, especially as they evolve to meet the demands of growing businesses. There are good access controls in modern databases, but when it comes to managing ultra-secure bits of data, traditional relational databases may not be the best fit for the job.

Lightweight Directory Access Protocol (LDAP) servers like ApacheDS, OpenLDAP and Microsoft Active Directory do a much better job of handling sensitive data with less trouble than any custom coding we might do on our own. Moreover, the built-in authentication functions of LDAP are mature and standards-based, making them safe and reusable from many different applications without custom interface development. It’s our duty as technologists and as business people to highlight the high cost of custom security solutions and the huge potential risks to our managers. In particular, when it comes to storing passwords in our line-of-business databases, just say no.

If we must manage financial instruments or personally identifying information in a database like SQL Server, there are three classes of problems to solve:

  1. Keeping the hackers from stealing our stuff,
  2. Detecting when breach attempts occur, and
  3. If data is unfortunately lost, making the information useless.

Let’s think about these efforts from a common sense perspective. Problem one is all about access control. The problem with permissions in any complex system is that they are difficult to maintain over time. Even if the initial configuration and policies safeguard the sensitive data, some future administrator may fail to understand or enforce the rules correctly. We could make those future administrators’ jobs much easier if we followed one simple rule: never mix highly-sensitive data in tables containing non-privileged data.

It’s deceptively simple-sounding but in practice, if sensitive data is always segregated into encrypted tables (http://blog.sqlauthority.com/2009/04/28/sql-server-introduction-to-sql-server-encryption-and-symmetric-key-encryption-tutorial-with-script/) and placed into a separate, secure schema requiring elevated access privileges, mistakes concerning permissions will become less likely over time. Moreover, by denying SELECT, INSERT, UPDATE and DELETE privileges on the secured tables, every query can be routed through stored procedures where problems two and three can be addressed with auditing and data obfuscation controls. Lastly, to ensure that lost data is useless, use the new Backup Encryption feature of SQL Server 2014 or invest in a third-party tool that does the same.
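
To make that pattern concrete, here is a minimal sketch in T-SQL – all object names are hypothetical. Sensitive data sits in its own schema, direct access is denied, and reads are routed through a stored procedure where the auditing and obfuscation logic can live. Because both objects share an owner, ownership chaining means callers need only EXECUTE rights on the procedure; the table-level DENY is never even evaluated for them.

-- Minimal sketch; all names are hypothetical
CREATE SCHEMA Secure AUTHORIZATION dbo
GO
CREATE TABLE Secure.CardData (CustomerID INT, CardLastFour CHAR(4))
GO
-- Block all direct access to the sensitive table
DENY SELECT, INSERT, UPDATE, DELETE ON Secure.CardData TO PUBLIC
GO
-- Route reads through a procedure; auditing/obfuscation can be added here
CREATE PROCEDURE dbo.GetCardLastFour @CustomerID INT
AS
BEGIN
    SELECT CustomerID, CardLastFour
    FROM Secure.CardData
    WHERE CustomerID = @CustomerID;
END
GO
GRANT EXECUTE ON dbo.GetCardLastFour TO PUBLIC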

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)