SQL SERVER – SSMS Trick – Generating CSV file using Management Studio

In my previous blog I talked about using SQLCMD to generate a comma-separated file of table data.

Some blog readers asked why we should use SQLCMD or BCP to generate the file when the same can be achieved using SQL Server Management Studio itself. Well, that was a strong point, and I liked the simplicity of the solution. A lot of times it is our failure to learn some of the simple techniques already present in the tools we use that keeps us from being more productive.

Here are the steps to achieve the same. In SQL Server Management Studio, choose Tools from the menu bar and click Options. Then go to “Query Results” > “SQL Server” > “Results to Text”.

Once we choose “Comma delimited” as the output format and hit OK, we will get output like the one below when we execute a query (note that we need to open a new query window for the settings to take effect).
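For illustration, here is a minimal sketch of what to expect (sys.databases is used only as a convenient small example; SSMS may still pad each value to its column width, which is trimmed here for readability):

SELECT name, database_id FROM sys.databases
-- name,database_id
-- master,1
-- tempdb,2
-- model,3
-- msdb,4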

 

Notice that the first row contains the column names. If we want to avoid that and get ONLY the data, we can customize it in the same place where we modified the separator setting: if we uncheck “Include column headers in the result set”, we will get only the data rows as output.

We can also choose a custom delimiter. I have set the delimiter to pipe (|) and unchecked the box mentioned earlier, and below is the output for the same query.

As we can see, the output is pipe-separated and contains only data, no column headings. This can be saved and imported into some other server using the bcp command, the BULK INSERT command or the Import/Export Wizard.
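As a quick sketch of the re-import side (the table name, file path and terminators here are illustrative assumptions, not from the original post):

BULK INSERT dbo.TargetTable
FROM 'C:\Temp\PipeExport.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');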


Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – FIX – The term ‘Invoke-Sqlcmd’ is not recognized as the name of a cmdlet

It’s always fun to learn and share new things I discover about any technology. I decided to spend my weekend quietly at home so I could learn some scripting with PowerShell and SQL Server. My laptop has tons of software installed, so I always use a virtual machine to learn new things, because I want to see how things behave for someone starting out as a learner. My day didn’t start well, as I was hit by errors as soon as I ran Invoke-SQLCMD. This cmdlet is worth a closer look.

This is the first error I got as soon as I tried to run Invoke-SQLCMD:

Error # 1 The term ‘Invoke-Sqlcmd’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

The error occurred because I had installed only the SQL Server engine on the machine, without any client components. Essentially, the error appears because the Invoke-Sqlcmd cmdlet is not included as part of Windows PowerShell itself; it ships with sqlps (the SQL Server PowerShell host). So, before using Invoke-Sqlcmd we should install SSMS or the SQL Server Feature Pack (the latest is SQL Server 2014, which can be downloaded from here).

Once installation was done, I ran the command and I got a new error.

Error # 2 The ‘invoke-sqlcmd’ command was found in the module ‘SQLPS’, but the module could not be loaded. For more information, run ‘Import-Module SQLPS’.

Here is the complete error message

As we can see above, I can run Import-Module SQLPS to learn more. Here is the output of that command.

Error # 3 Import-Module : File E:\Program Files (x86)\Microsoft SQL Server\120\Tools\PowerShell\Modules\SQLPS\Sqlps.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.

Here is the complete error message
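The script-execution error itself is normally cleared by relaxing the execution policy. A minimal sketch (run from an elevated PowerShell prompt, and pick whichever policy your environment allows):

Set-ExecutionPolicy RemoteSigned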

Why these errors? If you know the basics of PowerShell, every command is a cmdlet (pronounced “command-let”). Invoke-Sqlcmd is a cmdlet provided either by the snap-in SqlServerCmdletSnapin100 (prior to SQL Server 2012) or by the module SQLPS (SQL Server 2012 and later). So, we need one of them loaded into PowerShell (for example, at the beginning of the script) before we can call the cmdlet.

Assuming PowerShell is installed on the SQL Server machine, you can open a Windows PowerShell prompt and load the snap-ins as below (on SQL Server 2012 and later, Import-Module SQLPS achieves the same) to get into the SQL Server PowerShell environment.

Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100

Once I followed the above, I was able to use Invoke-SQLCMD as below.
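For reference, a minimal sketch of such a call (the instance name and query here are placeholders, not from the original post):

Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT @@VERSION AS SQLVersion"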

I hope this helps others find solutions to these errors. How many of you are frequent users of PowerShell to administer and automate SQL Server tasks?
Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to use Procedure sp_user_counter1 to sp_user_counter10

There are many performance counters available in SQL Server which can be used to monitor various parameters of the SQL Server engine. Have you ever been in a situation where you want a value you computed with a query to show up as a performance counter? Imagine that you want to track fragmentation or the number of rows in a table over a period of time, and plot those values against the standard performance counters.

SQL Server has had this feature for a long time. I noticed it recently while working with one of my friends. He was showing me the impact of the number of databases vs. the number of threads in an AlwaysOn availability group. He showed me a graph, and my first question was – how did you plot the number of databases against that counter? The answer was – using a user counter. I did further research and learned something new, so I am sharing it with you in the form of this blog.

There are ten system procedures available in SQL Server which allow users to set specific counter values. They are sp_user_counter1, sp_user_counter2 … through sp_user_counter10. That’s why they are called user settable counters. Below is a screenshot of Performance Monitor (Start > Run > PerfMon).

These counters are populated using stored procedures, and the counter names match the procedure names. This means the value for “User counter 1” can be set using sp_user_counter1, the value for “User counter 2” using sp_user_counter2, and so on.

Here is a quick example to demonstrate the usage.
-- Drop database, if already present
USE MASTER
GO
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'SQLAuthority')
BEGIN
    ALTER DATABASE SQLAuthority SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE SQLAuthority;
END
GO
-- Create New Database
CREATE DATABASE SQLAuthority
GO
-- Create Objects Needed for Demo
USE SQLAuthority
GO
CREATE TABLE TrackMe (i INT)
GO
-- While loop to populate the data and populate the counters
SET NOCOUNT ON
DECLARE @NumberOfRows INT = 0, @NumberOfRows_2 INT = 0
WHILE (@NumberOfRows < 25)
BEGIN
    INSERT INTO TrackMe VALUES (1), (2), (3)
    SELECT @NumberOfRows = COUNT(*) FROM TrackMe
    EXECUTE sp_user_counter1 @NumberOfRows        -- 3, 6, 9, 12...
    WAITFOR DELAY '00:00:02'

    SELECT @NumberOfRows_2 = @NumberOfRows * 2
    EXECUTE sp_user_counter2 @NumberOfRows_2      -- 6, 12, 18, 24...
    WAITFOR DELAY '00:00:02'
END
-- Loop finished; reset the counters
EXECUTE sp_user_counter1 0
EXECUTE sp_user_counter2 0

Here is the Performance counter graph where I have captured the counters.

Let me explain a little of the logic of the script. The script plots the number of rows in the table as counter 1 and double that value as counter 2. A delay of 2 seconds is added between each counter update so that we can see the steps. Once the number of rows in the table crosses 25, the loop ends and the counters are reset to zero; that is the drop you see at the end of the graph.

If you look at the procedure itself, it is straightforward. Here is its code.

CREATE PROCEDURE sys.sp_user_counter1 @newvalue INT AS
DBCC setinstance ('SQLServer:User Settable', 'Query', 'User counter 1', @newvalue)

This means we can also call the undocumented command DBCC SETINSTANCE directly to set values for the counters.
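For example, to push the value 10 into the first counter directly (using the same undocumented syntax the procedure wraps):

DBCC setinstance ('SQLServer:User Settable', 'Query', 'User counter 1', 10)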

Can you think of more innovative uses for plotting values in Performance Monitor?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix : Msg 230, Level 14, State 1 – The SELECT permission was denied on the object, database, schema – Part II

A couple of days back I wrote a blog on Msg 230 and thought it was one of the simplest implementations of access control on the system. The code in this blog continues from that one, so please make sure you read it first before getting into the examples here.

In that blog, I made the statement that DENY takes precedence over GRANT. After that, my good friend Vinod Kumar pinged me to say that this is not always the case. That statement got me thinking harder, and I was quick to ask him to prove me wrong. More than a challenge, it was a great opportunity to learn some of the finer details that help us understand SQL Server better. I have always been of the opinion that the best way to learn something is by sharing, so here is the repro of the exception where a DENY at a higher level is overridden by a GRANT at a more granular level.

As in the previous post, let us go ahead and create our sample database, create a user inside this database, and create our secure table with some data.

CREATE DATABASE PermissionsDB
GO
USE PermissionsDB
GO
CREATE USER Pinal WITHOUT LOGIN;
GO
CREATE TABLE SecureTbl (ID INT, Name VARCHAR(50), SSN VARCHAR(20))
GO
INSERT INTO SecureTbl VALUES (1, 'Pinal', '111-22-3333')
GO

If we try to access the table as the user “Pinal” at this point, without any explicit permission given to the user, we will get the following error:

Msg 229, Level 14, State 5, Line 38
The SELECT permission was denied on the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

Now that we have our secure table ready, let us explicitly DENY rights on the table to user ‘Pinal’, and then GRANT at a specific column level, as in the commands below. In this example, we give rights on the column “ID” alone.

-- Does a GRANT at column-level override a DENY at the object level
DENY SELECT ON SecureTbl TO Pinal
GRANT SELECT (ID) ON SecureTbl TO Pinal
GO

Let us next change the user context to “Pinal” and execute the Select command.

EXECUTE AS USER='Pinal'
SELECT * FROM SecureTbl
SELECT ID FROM SecureTbl
REVERT

Even though we have a DENY at the object level (the table in this instance), the GRANT at the column level still works: selecting the ID column succeeds, while the other columns are denied.

Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'Name' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.
Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'SSN' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

If we want to GRANT access to multiple columns, we can do it using the command shown below.

GRANT SELECT (ID, Name) ON SecureTbl TO Pinal
GO
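If we re-run the earlier check under the user context (a quick verification sketch), both granted columns now come back, while SELECT * still fails because SSN remains denied:

EXECUTE AS USER='Pinal'
SELECT ID, Name FROM SecureTbl   -- succeeds: both columns are granted
SELECT * FROM SecureTbl          -- still fails: SSN is not granted
REVERT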

I am of the opinion that it is often better to DENY across the board and then give specific access where needed. Notice that, compared with the previous article, we have NOT granted SELECT at the object level for everyone here (the command below); we were explicit with the DENY instead.

-- Explicitly grant at object level
GRANT SELECT ON SecureTbl TO PUBLIC

Let us make sure there are no unnecessary databases in our server. Here is the cleanup script.

-- Clean up
USE MASTER
GO
DROP DATABASE PermissionsDB

I hope you also learned something new today. This is a great way to learn the fine print when it comes to working with databases and permissions. Please let me know if you have ever used this technique in your environment, and what the actual scenario was.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Common Sense Data Security – Notes from the Field #055

[Note from Pinal]: This is the 55th episode of the Notes from the Field series. Common sense is not as common as we think. I am sure you agree with that from your real-world experience. When it comes to data and its security, there have to be rules and policy, but common sense is extremely critical. When I read this article, I found it humorous at points, and some of the examples reminded me of my own past experience. If you are in data security, you will have a great time reading these notes; if you are not, you will still love it.

In this episode of the Notes from the Field series, database expert Kevin Hazzard explains common sense data security and how we can apply it in the real world. Read the experience of Kevin in his own words.


There are many excellent books and articles that address the correct ways to store sensitive user information. Yet, many in IT are still failing to protect customers from loss due to data breaches. Every day, it seems that there’s another retailer or commercial web site in the news for losing passwords or credit card numbers to hackers. As an industry, why are we struggling to secure this type of information when there’s so much good intelligence and so many great tools for getting the job done? It’s a complicated subject so perhaps it’s time to step back a bit and use a bit of common sense to analyze the problem.

No matter the industry, using the right tool for the job is rule number one. Line-of-business databases are all about organizing information and getting it into the hands of people who perform transactions and make decisions with it. As a result, these databases naturally become permissive, especially as they evolve to meet the demands of growing businesses. There are good access controls in modern databases, but when it comes to managing ultra-secure bits of data, traditional relational databases may not be the best fit for the job.

Lightweight Directory Access Protocol (LDAP) servers like ApacheDS, OpenLDAP and Microsoft Active Directory do a much better job of handling sensitive data with less trouble than any custom coding we might do on our own. Moreover, the built-in authentication functions of LDAP are mature and standards-based, making them safe and reusable from many different applications without custom interface development. It’s our duty as technologists and as business people to highlight the high cost of custom security solutions and the huge potential risks to our managers. In particular, when it comes to storing passwords in our line-of-business databases, just say no.

If we must manage financial instruments or personally identifying information in a database like SQL Server, there are three classes of problems to solve:

  1. Keeping the hackers from stealing our stuff,
  2. Detecting when breach attempts occur, and
  3. If data is unfortunately lost, making the information useless.

Let’s think about these efforts from a common sense perspective. Problem one is all about access control. The problem with permissions in any complex system is that they are difficult to maintain over time. Even if the initial configuration and policies safeguard the sensitive data, some future administrator may fail to understand or enforce the rules correctly. We could make those future administrators’ jobs much easier if we followed one simple rule: never mix highly-sensitive data in tables containing non-privileged data.

It’s deceptively simple-sounding but in practice, if sensitive data is always segregated into encrypted tables (http://blog.sqlauthority.com/2009/04/28/sql-server-introduction-to-sql-server-encryption-and-symmetric-key-encryption-tutorial-with-script/) and placed into a separate, secure schema requiring elevated access privileges, mistakes concerning permissions will become less likely over time. Moreover, by denying SELECT, INSERT, UPDATE and DELETE privileges on the secured tables, every query can be routed through stored procedures where problems two and three can be addressed with auditing and data obfuscation controls. Lastly, to ensure that lost data is useless, use the new Backup Encryption feature of SQL Server 2014 or invest in a third-party tool that does the same.
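A rough sketch of that pattern might look like the following (the schema, table, role and procedure names are purely illustrative, not from the article, and the column is assumed to hold already-encrypted values):

-- Keep the sensitive data in its own locked-down schema, away from non-privileged tables
CREATE SCHEMA Secure AUTHORIZATION dbo;
GO
CREATE TABLE Secure.CustomerSSN (CustomerID INT PRIMARY KEY, EncryptedSSN VARBINARY(256));
GO
CREATE ROLE AppRole;  -- hypothetical application role
GO
-- Deny direct access to the secured table for everyone
DENY SELECT, INSERT, UPDATE, DELETE ON Secure.CustomerSSN TO PUBLIC;
GO
-- Route every read through a procedure, where auditing and obfuscation logic can be added
CREATE PROCEDURE Secure.GetCustomerSSN @CustomerID INT
AS
BEGIN
    SELECT CustomerID, EncryptedSSN
    FROM Secure.CustomerSSN
    WHERE CustomerID = @CustomerID;
END
GO
GRANT EXECUTE ON Secure.GetCustomerSSN TO AppRole;

Because dbo owns both the table and the procedure, ownership chaining lets the procedure read the table even though direct SELECT is denied, so members of the application role reach the data only through the audited path.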

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Is tempDB behaving like a Normal DB?

The more I work with SQL Server, the more I am amazed by the way it works. In my interactions, I have seen DBAs and developers think of tempdb as something totally different from a usage point of view and treat it differently. Yes, I agree tempdb is a special-purpose database and needs special attention, because it is shared across the SQL Server instance. As the name suggests, it is the temporary working area of SQL Server. In this blog, I am not going to rehash the hundreds of blogs and whitepapers on how important tempdb is and which processes utilize it.

In this blog I will take the middle ground I used when coaching DBAs on performance tuning during my consulting days. It is an interesting way of looking at tempdb because, in my opinion, tempdb is similar to a normal database in multiple ways, with a twist. Whenever I make this statement, many people question how I can say so: if tempdb is like a normal database, then it should also function like a normal database.

I generally accept this challenge and say the twist is there for a reason, but the fundamentals of databases, pages, allocations, etc. are all the same no matter which database we are talking about. Hence, in my opinion, tempdb is similar to a normal database in many ways. To illustrate this, I use the following explanation.

USE tempdb
GO
CHECKPOINT
GO
SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)
GO

Since tempdb always uses the SIMPLE recovery model, the CHECKPOINT makes sure the log is flushed and we are normally left with just 3 rows. To drive the concept home, we will create a simple table and insert a row, as shown below.

CREATE TABLE #Understanding_temp_table
(id INT,
col1 CHAR(20),
col2 DATETIME,
col3 CHAR(50));
INSERT INTO #Understanding_temp_table
VALUES (1, 'Pinal', '01-Dec-2014 07:01:01', '* * DUMMY * *')
SELECT * FROM #Understanding_temp_table

Now, if we execute the same fn_dblog() query again, we will get 150+ rows covering PFS page allocations, IAM allocations, data page allocations and many more entries. The details of how to read fn_dblog() are reserved for another blog post, but let me show you some interesting things to look out for.

SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)
GO

In the output of the above query, two entries in the Description column are particularly interesting. In my example, I have pulled out those two rows for reference.

Changed type HOBT_FIRST_IAM (2) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000008e

Changed type HOBT_FIRST (1) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000007f

If we take the page values 0000008e and 0000007f and convert them from hex to decimal, we get 142 and 127 respectively. Now let us map these to the allocations inside the tempdb database.

SELECT allocated_page_file_id, allocated_page_page_id, allocated_page_iam_page_id, *
FROM sys.dm_db_database_page_allocations(2, NULL , NULL, NULL, 'DETAILED')
WHERE OBJECT_ID NOT IN (SELECT ID FROM sysobjects WHERE TYPE IN ('S','IT','SQ'))

From the query output, we can see that page 142 is the IAM page and page 127 is the data page. This is almost exactly how a normal database functions; the page allocation pattern is the same. The only catch is that the object ID is negative, because these are user-defined temp tables inside tempdb – that is the only difference. (I remember reading an article from my good friend Vinod Kumar on Database Page Basics if you want a quick refresher.) We can look at the object name with a query along the lines of the one below.
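Here is a sketch of such a query (the LIKE filter is needed because SQL Server appends a suffix to temp table names inside tempdb):

SELECT name, object_id, create_date
FROM tempdb.sys.objects
WHERE name LIKE '#Understanding_temp_table%';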

As identified before, we know the data page is 127. Let us use the undocumented DBCC PAGE command to dump the page data and check what is on the page. When I execute the command, I can see the single row we inserted at the start of this article, which is very much in line with our understanding of inserting into a table.
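Here is roughly what that call looks like (database id 2 is tempdb, and file 1 / page 127 come from the log output above; trace flag 3604 routes the DBCC PAGE output to the client):

DBCC TRACEON (3604);
GO
DBCC PAGE (2, 1, 127, 3);
GO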

The twist is simple: if we close the session that created the temp table (#Understanding_temp_table), SQL Server cleans up the allocated pages. This can be confirmed by running the page allocation query again; the typical output is shown below.

I hope this blog gives you a feel for how allocations happen to tables, irrespective of whether they are temporary or normal. We can also see that once the session is closed, the allocations for temp tables are released. In future blogs, we will take a deeper look into tempdb and learn how it differs from normal databases.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQLCMD to Build Comma Separated String

We have an active user group in Bangalore called SQLBangalore. Whenever I get a chance to attend the local user group sessions (which happens often – trust me), I learn something every single time. In one of the recent meetings I had the opportunity to attend a tips-and-tricks session by my good friends Balmukund and Vinod Kumar. Having friends who can stump you every now and then is an experience in itself.

Now, the scenario we are talking about is simple: how can I create a comma-separated string of a table’s data? This is essentially exporting a table’s data using SQLCMD. At first it just looks interesting, but it is also a great way to learn how SQLCMD works behind the scenes.

We will first create a file containing the query whose output we want. In my example, I have created a file called Contacts-People.sql which contains the following statement.

SET NOCOUNT ON
SELECT FirstName, Lastname
FROM Adventureworks2012.[Person].[Person]
WHERE Lastname = 'Ferrier'

This returns close to 16 rows on my database. Now the requirement is to get the output as a comma-separated string. We can achieve this in steps.

Step 1: Give the input file to SQLCMD command

C:\Temp> sqlcmd -i Contacts-People.sql

The -i option specifies the input file.

Step 2: Make the column widths narrow to get concise output

C:\Temp> sqlcmd -i Contacts-People.sql -W

Step 3: Add the comma separator between the columns.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s,

This adds the separator.

Step 4: This is the icing on the cake. We want to remove the header so that we get ONLY the column values as part of our output.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1

There you have it. This is a quick and easy way to generate a comma-separated string of a table’s columns. I am sure there is more than one way to do this. Feel free to send the output to a flat file and name it .csv, as shown below. Do let me know if you are aware of any other method.
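For instance, with the same input file, the -o switch (a quick sketch; the output file name is just an example) sends the result straight to a file that can carry a .csv extension:

C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1 -o Contacts-People.csv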

Reference: Pinal Dave (http://blog.sqlauthority.com)