SQL SERVER – FIX – The term ‘Invoke-Sqlcmd’ is not recognized as the name of a cmdlet

It’s always fun to learn and share something new about a technology. I decided to spend my weekend quietly at home so I could learn some scripting with PowerShell and SQL Server. My laptop has tons of software installed, so I always use a virtual machine to learn new things, because I want to see how things behave for someone starting from scratch. My day didn’t start well, as I was hit by errors as soon as I started with Invoke-Sqlcmd. This PowerShell cmdlet is worth a look.

This is the first error I got as soon as I tried to run Invoke-Sqlcmd:

Error # 1 The term ‘Invoke-Sqlcmd’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

The error occurred because I had installed only the SQL Server engine on the machine, without any client components. Essentially, the error appears because the Invoke-Sqlcmd cmdlet is not included as part of Windows PowerShell itself; instead, it ships with sqlps (the SQL Server PowerShell host). So before using Invoke-Sqlcmd, we should install SSMS or the SQL Server Feature Pack (the latest is SQL Server 2014, which can be downloaded from here).

Once the installation was done, I ran the command and got a new error.

Error # 2 The ‘invoke-sqlcmd’ command was found in the module ‘SQLPS’, but the module could not be loaded. For more information, run ‘Import-Module SQLPS’.

Here is the complete error message

As the message suggests, I can run Import-Module SQLPS to learn more. Here is the output of that command:

Error # 3 Import-Module : File E:\Program Files (x86)\Microsoft SQL Server\120\Tools\PowerShell\Modules\SQLPS\Sqlps.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.

Here is the complete error message

Why this error? If you know the basics of PowerShell, every command is a cmdlet (pronounced “command let”). Invoke-Sqlcmd is also a cmdlet, provided either by the snap-in SqlServerCmdletSnapin100 (prior to SQL Server 2012) or by the module SQLPS (SQL Server 2012 and later). So we need one of these loaded into PowerShell (for example, at the beginning of your script) before we can call the cmdlet.

Assuming PowerShell is installed on the SQL Server machine, you can open the Windows PowerShell command prompt and load the snap-ins as below to get into the SQL PowerShell environment.

Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
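
The snap-in approach above applies to SQL Server 2008/2008 R2. On SQL Server 2012 and later, SQLPS ships as a module instead, and the execution policy error (Error # 3) has to be addressed before the module will load. Here is a sketch of those steps (RemoteSigned is one common policy choice; follow your own organization’s policy):

```powershell
# Allow locally created scripts to run, so Sqlps.ps1 can load (addresses Error # 3)
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

# Load the SQLPS module that ships with the client tools
Import-Module SQLPS -DisableNameChecking

# The cmdlet should now be recognized
Invoke-Sqlcmd -Query "SELECT @@VERSION"
```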

Once I followed the above, I was able to use Invoke-Sqlcmd as below.

Hope this helps others find the solution to these errors. How many of you are frequent users of PowerShell to administer and automate SQL Server tasks?
Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – How to use Procedure sp_user_counter1 to sp_user_counter10

There are many performance counters available in SQL Server which can be used to monitor various parameters of the SQL Server engine. Have you ever been in a situation where you want to see the value of a performance counter for a query which you have run? Imagine a situation where you want to track fragmentation or the number of rows in a table over a period of time, and you want to plot those values against the standard performance counters available.

SQL Server has had this feature for a long time. I noticed it recently while working with one of my friends. He was showing me the impact of the number of databases versus the number of threads in an AlwaysOn availability group. He showed me a graph, and my first question was: how did you plot the number of databases with that counter? The answer was: using a user counter. I did further research, learned something new, and am sharing it with you in this blog.

There are ten system procedures available in SQL Server which allow users to set specific counter values: sp_user_counter1, sp_user_counter2, and so on up to sp_user_counter10. That’s why they are called user-settable counters. Below is a screenshot of Performance Monitor (Start > Run > PerfMon).

These counters are populated using stored procedures. The counter object names and stored procedure names match: the value for “User counter 1” is set using sp_user_counter1, the value for “User counter 2” using sp_user_counter2, and so on.

Here is a quick example to demonstrate the usage.
-- Drop database, if already present
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'SQLAuthority')
    DROP DATABASE SQLAuthority;
GO
-- Create New Database
CREATE DATABASE SQLAuthority;
GO
-- Create Objects Needed for Demo
USE SQLAuthority;
CREATE TABLE TrackMe (i INT);   -- a single-column table is enough for the demo
GO
-- WHILE loop to populate the data and populate the counters
DECLARE @NumberOfRows INT = 0, @NumberOfRows_2 INT = 0;
WHILE (@NumberOfRows < 25)
BEGIN
    INSERT INTO TrackMe VALUES (1), (2), (3);
    SELECT @NumberOfRows = COUNT(*) FROM TrackMe;
    EXECUTE sp_user_counter1 @NumberOfRows;      -- 3, 6, 9, 12...
    WAITFOR DELAY '00:00:02';
    SELECT @NumberOfRows_2 = @NumberOfRows * 2;
    EXECUTE sp_user_counter2 @NumberOfRows_2;    -- 6, 12, 18, 24...
    WAITFOR DELAY '00:00:02';
END
-- Loop finished; reset the counters
EXECUTE sp_user_counter1 0;
EXECUTE sp_user_counter2 0;

Here is the Performance counter graph where I have captured the counters.

Let me explain a little of the logic in the script. The script plots the number of rows in the table as counter 1, and double that value as counter 2. A delay of 2 seconds is added between counter updates so that we can see the steps. Once the number of rows in the table crosses 25, the loop ends and the values are reset to zero; that is the drop you see at the end.

If you look at the procedure itself, it is straightforward. Here is the code.

CREATE PROCEDURE sys.sp_user_counter1 @newvalue INT AS
DBCC setinstance ('SQLServer:User Settable', 'Query', 'User counter 1', @newvalue)

This means we can also call the undocumented command DBCC SETINSTANCE directly to set values for the counters.
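
For example, the following direct call (the value 10 is arbitrary) would be equivalent to EXECUTE sp_user_counter1 10:

```sql
-- Undocumented command; sets "User counter 1" directly, bypassing the wrapper procedure
DBCC SETINSTANCE ('SQLServer:User Settable', 'Query', 'User counter 1', 10);
```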

Can you think of more innovative uses of plotting values in Performance Monitor?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix : Msg 230, Level 14, State 1 – The SELECT permission was denied on the object, database, schema – Part II

A couple of days back I wrote a blog on Msg 230 and thought it covered one of the simplest implementations of access control in the system. To follow the code in this blog, please make sure you read that one first before getting into the examples here.

In that blog, I made a statement that DENY takes precedence over GRANT. After that, my good friend Vinod Kumar pinged me to say that this might not always be the case. That statement got me thinking harder, and I was quick to ask him to prove me wrong. More than a challenge, it was a great opportunity to learn some of the finer details that help us understand SQL Server better. I have always been of the opinion that the best way to learn something is by sharing. So here is the repro of the exception, where a DENY at a higher level is overridden by a GRANT at a more granular level.

Similar to our previous post, let us go ahead and create our sample database, create a user inside this database, and create our secure table with some data.
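
The setup script below is abbreviated; the missing statements, reconstructed from the error messages that follow (the user name Pinal and the columns ID, Name, and SSN all appear there), would look roughly like this:

```sql
CREATE DATABASE PermissionsDB;
GO
USE PermissionsDB;
GO
-- The original post used a login; a loginless user keeps this repro self-contained
CREATE USER Pinal WITHOUT LOGIN;
CREATE TABLE SecureTbl (ID INT, Name VARCHAR(100), SSN CHAR(11));
```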

USE PermissionsDB
INSERT INTO SecureTbl VALUES (1, 'Pinal', '111-22-3333')

If we try to access the table using the login “Pinal” at this point, without any explicit permission given to the user, we get the following error:

Msg 229, Level 14, State 5, Line 38
The SELECT permission was denied on the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

Now that we have our secure table ready, let us go ahead and explicitly DENY rights on the table to user ‘Pinal’. Next, we will GRANT at a specific column level, in the command below. In this example, we grant rights on the column “ID” alone.

-- Does a GRANT at column-level override a DENY at the object level?
DENY SELECT ON SecureTbl TO Pinal
GRANT SELECT (ID) ON SecureTbl TO Pinal

Let us next change the user context to “Pinal” and execute the Select command.


Though we have DENY permissions at the object level (the table, in this instance), the GRANT at the specific column level still works: selecting the ID column succeeds, while the other columns are blocked.

Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'Name' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.
Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'SSN' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

If you want to GRANT access to multiple columns, we can do it using the command as shown below.

GRANT SELECT (ID, Name) ON SecureTbl TO Pinal

I am always of the opinion that it is sometimes better to DENY across the board and then grant specific access. As you might notice, compared to the previous article, we have NOT granted SELECT at the table level for everyone here; instead, we were explicit with the DENY.

-- Explicitly grant at object level

Let us make sure there are no unnecessary databases in our server. Here is the cleanup script.

-- Clean up
USE master
GO
DROP DATABASE PermissionsDB

I hope you also learnt something new today. This is a great way to learn the fine print when it comes to working with databases and permissions. Please let me know if you have ever used this technique in your environment, and what the actual scenario was.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Common Sense Data Security – Notes from the Field #055

[Note from Pinal]: This is the 55th episode of the Notes from the Field series. Common sense is not as common as we think; I am sure you agree, from your real-world experience. When it comes to data and its security, there have to be rules along with policy, but common sense is extremely critical too. When I read this article, I found it humorous at points, and some of the examples reminded me of my past experience. If you work in data security, you will have a great time reading these notes; and if you do not, you will still love it.

In this episode of the Notes from the Field series, database expert Kevin Hazzard explains common sense data security and how we can apply it in daily life. Read the experience of Kevin in his own words.

There are many excellent books and articles that address the correct ways to store sensitive user information. Yet, many in IT are still failing to protect customers from loss due to data breaches. Every day, it seems that there’s another retailer or commercial web site in the news for losing passwords or credit card numbers to hackers. As an industry, why are we struggling to secure this type of information when there’s so much good intelligence and so many great tools for getting the job done? It’s a complicated subject so perhaps it’s time to step back a bit and use a bit of common sense to analyze the problem.

No matter the industry, using the right tool for the job is rule number one. Line-of-business databases are all about organizing information and getting it into the hands of people who perform transactions and make decisions with it. As a result, these databases become naturally permissive by nature, especially as they evolve to meet the demands of growing businesses. There are good access controls in modern databases but when it comes to managing ultra-secure bits of data, traditional, relational databases may not be the best fit for the job.

Lightweight Directory Access Protocol (LDAP) servers like ApacheDS, OpenLDAP and Microsoft Active Directory do a much better job of handling sensitive data with less trouble than any custom coding we might do on our own. Moreover, the built-in authentication functions of LDAP are mature and standards-based, making them safe and reusable from many different applications without custom interface development. It’s our duty as technologists and as business people to highlight the high cost of custom security solutions and the huge potential risks to our managers. In particular, when it comes to storing passwords in our line-of-business databases, just say no.

If we must manage financial instruments or personally identifying information in a database like SQL Server, there are three classes of problems to solve:

  1. Keeping the hackers from stealing our stuff,
  2. Detecting when breach attempts occur, and
  3. If data is unfortunately lost, making the information useless.

Let’s think about these efforts from a common sense perspective. Problem one is all about access control. The problem with permissions in any complex system is that they are difficult to maintain over time. Even if the initial configuration and policies safeguard the sensitive data, some future administrator may fail to understand or enforce the rules correctly. We could make those future administrators’ jobs much easier if we followed one simple rule: never mix highly-sensitive data in tables containing non-privileged data.

It’s deceptively simple-sounding but in practice, if sensitive data is always segregated into encrypted tables (http://blog.sqlauthority.com/2009/04/28/sql-server-introduction-to-sql-server-encryption-and-symmetric-key-encryption-tutorial-with-script/) and placed into a separate, secure schema requiring elevated access privileges, mistakes concerning permissions will become less likely over time. Moreover, by denying SELECT, INSERT, UPDATE and DELETE privileges on the secured tables, every query can be routed through stored procedures where problems two and three can be addressed with auditing and data obfuscation controls. Lastly, to ensure that lost data is useless, use the new Backup Encryption feature of SQL Server 2014 or invest in a third-party tool that does the same.
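
As a sketch of that pattern (all object names below are invented for illustration, and the audit table and encryption key are assumed to exist): sensitive data sits in its own schema, direct DML is denied, and every read is routed through a procedure that audits and obfuscates:

```sql
-- Separate schema so sensitive data never mixes with non-privileged tables
CREATE SCHEMA Secure;
GO
CREATE TABLE Secure.CustomerSSN
(
    CustomerID   INT PRIMARY KEY,
    SSNEncrypted VARBINARY(256)   -- e.g. ENCRYPTBYKEY output
);
-- Problem 1: deny direct access across the board
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Secure TO PUBLIC;
GO
-- Problems 2 and 3: route access through a procedure that audits and masks
CREATE PROCEDURE dbo.GetCustomerSSNLastFour @CustomerID INT
WITH EXECUTE AS OWNER
AS
BEGIN
    -- Record the access attempt (dbo.AccessAudit is an assumed audit table)
    INSERT INTO dbo.AccessAudit (CustomerID, AccessedBy, AccessedAt)
    VALUES (@CustomerID, ORIGINAL_LOGIN(), SYSUTCDATETIME());

    -- Return only an obfuscated form; assumes the symmetric key is open
    SELECT CustomerID,
           'XXX-XX-' + RIGHT(CONVERT(VARCHAR(11), DECRYPTBYKEY(SSNEncrypted)), 4) AS SSNMasked
    FROM Secure.CustomerSSN
    WHERE CustomerID = @CustomerID;
END
```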

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Is tempDB behaving like a Normal DB?

The more I work with SQL Server, the more I am baffled by the way it works. In all these interactions, I have seen DBAs and developers think of TempDB as something totally different from a usage point of view and treat it differently. Yes, I agree TempDB is a special-purpose database and needs special attention, because it is shared across the whole SQL Server instance. As the name suggests, it is for SQL Server’s temporary working. In this blog, I am not going to rehash the hundreds of blogs / whitepapers on how important tempDB is and which processes utilize it.

In this blog I will take the middle ground of how I used to work and coach DBAs during performance tuning engagements in my consulting days. These are interesting ways of looking at TempDB because, in my opinion, tempDB is similar to a normal database in multiple ways, with a twist. Whenever I make this statement, many people question how I can say so: if tempDB is like a normal database, then it should also function like one.

I generally take up this challenge and say the twist is there for a reason, but the fundamentals of databases, pages, allocations and so on are all the same, no matter what database we are talking about. Hence, in my opinion, TempDB is similar to a normal database in many ways. I generally use the following to illustrate this.

USE tempdb
GO
CHECKPOINT
GO
SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)

Since TempDB always uses the SIMPLE recovery model, the CHECKPOINT flushes the log, and we are normally left with about 3 rows. To drive home the concept, we will create a simple table and insert a row, as shown below:

CREATE TABLE #Understanding_temp_table
(id INT,
col1 CHAR(20),
col2 DATETIME,
col3 CHAR(50));
INSERT INTO #Understanding_temp_table
VALUES (1, 'Pinal', '01-Dec-2014 07:01:01', '* * DUMMY * *')
SELECT * FROM #Understanding_temp_table

Now, if we execute the fn_dblog() query again, we get 150+ rows, including PFS page allocations, IAM allocations, data page allocations and many more entries. How to read the fn_dblog() output in detail is reserved for another blog post, but let me show you some interesting things to look out for.

SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)

In the output of the above query, we can see two entries in the Description column which are interesting to look at. In my example, I have taken those two rows and shown them for reference.

Changed type HOBT_FIRST_IAM (2) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000008e

Changed type HOBT_FIRST (1) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000007f

If we take the page IDs 0000008e and 0000007f and convert them from hex to decimal, the values are 142 and 127 respectively. Now let us try to map these to the allocations inside our TempDB database.
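
The conversion is easy to verify in T-SQL itself:

```sql
SELECT CONVERT(INT, 0x0000008e) AS IamPage,   -- 142
       CONVERT(INT, 0x0000007f) AS DataPage;  -- 127
```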

SELECT allocated_page_file_id, allocated_page_page_id, allocated_page_iam_page_id, *
FROM sys.dm_db_database_page_allocations(2, NULL , NULL, NULL, 'DETAILED')

From the query output, we can see that pages 142 and 127 are allocated as the IAM page and the data page respectively. This is almost exactly how a normal database functions too; the allocation of pages is similar. The only catch is that the object ID is negative: that is how user-defined temp tables appear inside tempDB, and it is the only difference. We can look up the object name using the query below. I remember reading an article from my good friend Vinod Kumar around Database Page Basics if you want a quick refresher.
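
A query along these lines would return the name (the negative object_id shown here is illustrative; substitute the value from your own output):

```sql
-- 2 is the database id of tempdb
SELECT OBJECT_NAME(-1652524642, 2) AS TempObjectName;
-- Or list the temp tables directly
SELECT name, object_id FROM tempdb.sys.tables WHERE name LIKE '#%';
```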

As noted before, we know the data page is 127. Let us use the DBCC PAGE command to dump the page contents and check what is on the page. When I execute the command, I can see the single row we inserted at the start of this article, very much in line with our understanding of inserting into a table.
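
For reference, the dump command looks like this (DBCC PAGE is undocumented; database id 2 is tempdb, and file id 1 / page 127 come from the earlier output):

```sql
DBCC TRACEON (3604);       -- send DBCC output to the client
DBCC PAGE (2, 1, 127, 3);  -- db id, file id, page id, print option (3 = full detail)
```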

The twist is simple: if we close the session that created the temp table (#Understanding_temp_table), SQL Server cleans up the allocated pages. This can be confirmed by running the allocation query again. The typical output is shown below.

I hope this blog gives you a feel for how allocations happen to tables, irrespective of whether they are temp or normal tables. Secondly, we can also see that once the session is closed, the allocations for temp tables are released. In future blogs, we will take a deeper look into tempdb and how it differs from normal databases.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQLCMD to Build Comma Separated String

We have an active usergroup in Bangalore called SQLBangalore. Whenever I get a chance to attend the local usergroup sessions (which happens often – trust me), I learn something every single time. In one of the recent user group meetings I had the opportunity to attend a tips-and-tricks session by my good friends Balmukund and Vinod Kumar. Having friends who can stump you every now and then is an experience in itself.

Now, the scenario we are talking about is simple: how can I create a comma-separated string of a table’s data? This is like exporting a table’s data using SQLCMD. At first this looks trivial, but it is a great way to learn how SQLCMD works behind the scenes.

We will create an input file from which the output will be generated. In my example, I have created a file called Contacts-People.sql which contains the following statement.

SET NOCOUNT ON   -- suppress the rows-affected footer in the output
SELECT FirstName, LastName FROM AdventureWorks2012.[Person].[Person]
WHERE LastName = 'Ferrier'

This returns close to 16 rows on my database. Now the requirement is to get the output as a comma separated string. We can go in steps to achieve the same.

Step 1: Give the input file to SQLCMD command

C:\Temp> sqlcmd -i Contacts-People.sql

The –i option can be used to define input file.

Step 2: Make the columns width narrow to get a concise output

C:\Temp> sqlcmd -i Contacts-People.sql -W

Step 3: Next step is to add the comma separator between the columns.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s,

This adds the separator.

Step 4: This is the icing on the cake. We want to remove the header so that we get ONLY the column values as part of our output.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1

There you have it. This is an easy and quick way to generate a comma-separated string of the columns of a table. I am sure there is more than one way to do this. Feel free to send the output to a flat file and name it .csv. Do let me know if you are aware of any other method.
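
For example, sqlcmd’s -o switch can write the result straight to a file (the filename here is arbitrary):

```
C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1 -o Contacts-People.csv
```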

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Restricting Access to Contained Databases using Logon Triggers

When I wrote the article “A Simple Example of Contained Databases”, little did I know how people might be planning to use this capability. Having said that, there are environments where people want to use contained databases yet keep the flexibility of restricting users. Recently, a DBA friend of mine had an interesting requirement, and I was pleasantly surprised by the question:

Pinal: Hi Buddy.

DBA: Hi Pinal, Sorry to disturb you in the middle of the night.

Pinal: No problems, please go ahead.

DBA: I have a query around Contained Databases.

Pinal: Sure, let me see if I can help you. Not sure though.

DBA: Well, contained databases create users inside the database, and the database is also used for authentication, right?

Pinal: You are correct in your understanding. Now is there a problem?

DBA: No, there is no problem. I am worried about contained database implementation.

Pinal: Hmmm I don’t quite understand. Did you get a chance to search my site for a simple sample of implementing the same?

DBA: I did read the article on SQLAuthority, which is not the problem. It is about auditing and security processes inside my company.

Pinal: That is an interesting point of view. Please tell me more.

DBA: In our company, we are very strict and want to track every login that gets authenticated inside SQL Server. With contained databases, I want to build a different process.

Pinal: Go ahead, I am all ears.

DBA: We have enabled the creation of Contained Databases on the server. I want to restrict the databases which can be created as Contained Databases.

Pinal: Wow, why would you want to do that?

DBA: Simple: as a DBA and server administrator, I want to make sure that only authorized users and databases can use the contained-database capability. We don’t want random users creating contained databases on the server.

Pinal: Interesting, yes it is quite possible and the idea here is to use Logon Triggers. Let me make sure I try to blog this someday.

This blog is completely inspired by this conversation and let us go through the implementation. Here are the steps:

  1. Create a table in Master to hold databases that can be used for authentication.
  2. Insert the list of databases into the created table.
  3. Create the Logon Trigger to restrict the login available
  4. Try to Logon into the DB as normal user and as Contained Database user.
  5. Delete an entry from our table and check by logging in.
  6. Cleanup the script.

So you might ask, why master database and not any other database? Since logon triggers are server-scoped objects, we will create any necessary additional objects in master.

Note: Please DO NOT use the script as-is, because we are creating a logon trigger. Please test and verify before using the script below.

Let me walk you through the script:

-- Step 1 - Create a table in Master to hold databases that can be used for authentication.
CREATE TABLE [dbo].[authenticated_DBs](dbs INT PRIMARY KEY);
-- We want anyone to be able to access this data for read-only purposes
GRANT SELECT ON [dbo].[authenticated_DBs] TO PUBLIC;

The first step is to create a table which will host all the authenticated DBs we want for this particular instance that the DBA is monitoring. We have created the same in the master so that only authenticated and authorized users can change this.

-- Step 2 - Insert the list of databases into the created table.
-- Add the DB id for all of the DBs authorized to authenticate
-- including/excluding master DB
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('master'));
-- Used the script from previous blog to create the Contained DB
-- http://blog.sqlauthority.com/2011/03/31/sql-server-denali-a-simple-example-of-contained-databases/
-- Insert database id for ContainedDatabase
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));

The next step is to identify which all databases can allow login process. In our example, make sure we have made an entry for “master” and the database called “ContainedDatabase” based on the script from the other blog post.

-- Step 3 - Create the Logon Trigger to restrict the login available
-- This logon trigger will verify the current logon matches with one of the authorized DBs.
-- If it does, it allows the logon process to continue,
-- otherwise it will rollback, causing the session to terminate
CREATE TRIGGER Logon_authenticated_dbs
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF NOT (EXISTS(SELECT * FROM master.[dbo].[authenticated_DBs]
                   WHERE dbs IN (SELECT authenticating_database_id
                                 FROM sys.dm_exec_sessions
                                 WHERE session_id = @@SPID)))
        ROLLBACK;
END;

The Logon Trigger is the tricky part of this whole blog. You can be innovative with other parameters too. But be careful when playing with Logon triggers as this can lock you out of your SQL Server instance. Please use it carefully and cautiously.

-- Step 4 - Try to Logon into the DB as normal user and as Contained Database user.
SELECT * FROM [dbo].[authenticated_DBs]

I am showing the two entries that are on my local machine at the moment. We will try to mimic the login process and see what happens.

-- Step 5 - Delete an entry from our table and check by logging in.
-- Your [ContainedDatabase] database id might not be 7. Change appropriately
DELETE [dbo].[authenticated_DBs] WHERE dbs = 7
-- Try to connect again using ContainedUser
-- ContainedUser fails to connect due to trigger
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));
-- Try connect again as ContainedUser. Now it should work.

If you have an entry in our authenticated table and are a valid user in the “ContainedDatabase”, the login process will succeed, as shown below.

If the entry into the table [dbo].[authenticated_DBs] is removed, we will be presented with an error message as shown below.

We are likely to get a similar error message when connecting to a contained database created by a user for which the DBA has not given explicit rights by adding an entry at the server level.

-- CAUTION: If you drop the table authenticated_DBs before dropping this trigger,
-- you must remove the trigger manually (e.g. via the DAC) or you will be locked out of the server
DROP TRIGGER Logon_authenticated_dbs ON ALL SERVER
DROP TABLE authenticated_DBs

In case you write a bad trigger which always fails, you will be locked out of the server. In those situations a DAC (Dedicated Administrator Connection) can help. I have written about this earlier:

SQL SERVER – Fix : Error : 17892 Logon failed for login due to trigger execution. Changed database context to ‘master’.

Please make sure to do the cleanup, or you are likely to get errors if you are using this script for testing purposes and are a user of contained databases in your environment.

Reference: Pinal Dave (http://blog.sqlauthority.com)