SQL SERVER – Fix : Msg 230, Level 14, State 1 – The SELECT permission was denied on the object, database, schema – Part II

A couple of days back I wrote a blog post on Msg 230, and I thought it was one of the simplest implementations of access control on the system. The code here continues from that post, so please read it first before getting into the examples below.

In that blog, I stated that DENY takes precedence over GRANT. Soon after, my good friend Vinod Kumar pinged me to say that this is not always the case. That statement got me thinking harder, and I was quick to ask him to prove me wrong. More than a challenge, it was a great opportunity to dig into the finer details that help us understand SQL Server better. I have always been of the opinion that the best way to learn something is by sharing. So here is the repro for the exception: a DENY at a higher level being overridden by a GRANT at a more granular level.

Similar to the previous post, let us create our sample database, a user inside it, and a secure table with some data.

CREATE DATABASE PermissionsDB
GO
USE PermissionsDB
GO
CREATE USER Pinal WITHOUT LOGIN;
GO
CREATE TABLE SecureTbl (ID INT, Name VARCHAR(50), SSN VARCHAR(20))
GO
INSERT INTO SecureTbl VALUES (1, 'Pinal', '111-22-3333')
GO

If we try to access the table as user “Pinal” at this point, without any explicit permission granted to the user, we get the following error:

Msg 229, Level 14, State 5, Line 38
The SELECT permission was denied on the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

Now that our secure table is ready, let us DENY rights on the whole table to user ‘Pinal’ and then GRANT at a specific column level. In the command below, we give rights on the column “ID” alone.

-- Does a GRANT at column-level override a DENY at the object level
DENY SELECT ON SecureTbl TO Pinal
GRANT SELECT (ID) ON SecureTbl TO Pinal
GO

Let us next change the user context to “Pinal” and execute the SELECT commands.

EXECUTE AS USER='Pinal'
SELECT * FROM SecureTbl
SELECT ID FROM SecureTbl
REVERT

Though we have a DENY at the object level (the table in this instance), the GRANT still works at the specific column level. Selecting the other columns fails:

Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'Name' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.
Msg 230, Level 14, State 1, Line 38
The SELECT permission was denied on the column 'SSN' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

If we want to GRANT access to multiple columns, we can do it with the command shown below.

GRANT SELECT (ID, Name) ON SecureTbl TO Pinal
GO
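Following the same pattern as before, we can verify the effect under Pinal’s context; with the object-level DENY still in place, only the granted columns are readable:

EXECUTE AS USER='Pinal'
SELECT ID, Name FROM SecureTbl -- succeeds: both columns are granted
SELECT SSN FROM SecureTbl      -- still fails with Msg 230: SSN was never granted
REVERT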

I am always of the opinion that it is sometimes better to DENY across the board and then grant specific access. Notice that, compared to the previous article, we have NOT granted SELECT at the object level to everyone here; we relied on the explicit DENY instead. In the previous article, the object-level grant looked like this:

-- Explicitly grant at object level
GRANT SELECT ON SecureTbl TO PUBLIC

Let us make sure there are no unnecessary databases in our server. Here is the cleanup script.

-- Clean up
USE MASTER
GO
DROP DATABASE PermissionsDB

I hope you learnt something new today. This is a great way to learn the fine print when working with databases and permissions. Please let me know if you have ever used this technique in your environment and what the actual scenario was.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Common Sense Data Security – Notes from the Field #055

[Note from Pinal]: This is the 55th episode of the Notes from the Field series. Common sense is not as common as we think. I am sure you agree, from your real-world experience. When it comes to data and its security, there have to be rules and a policy, but common sense is extremely critical. When I read this article, I found it humorous at points, and some of the examples reminded me of my past experience. If you are in data security, you will have a great time reading these notes, and if you are not, you will still love it.

In this episode of the Notes from the Field series, database expert Kevin Hazzard explains common sense data security and how we can apply it in the real world. Read the experience of Kevin in his own words.


There are many excellent books and articles that address the correct ways to store sensitive user information. Yet, many in IT are still failing to protect customers from loss due to data breaches. Every day, it seems that there’s another retailer or commercial web site in the news for losing passwords or credit card numbers to hackers. As an industry, why are we struggling to secure this type of information when there’s so much good intelligence and so many great tools for getting the job done? It’s a complicated subject, so perhaps it’s time to step back and apply a bit of common sense to the problem.

No matter the industry, using the right tool for the job is rule number one. Line-of-business databases are all about organizing information and getting it into the hands of people who perform transactions and make decisions with it. As a result, these databases become permissive by nature, especially as they evolve to meet the demands of growing businesses. There are good access controls in modern databases, but when it comes to managing ultra-secure bits of data, traditional relational databases may not be the best fit for the job.

Lightweight Directory Access Protocol (LDAP) servers like ApacheDS, OpenLDAP and Microsoft Active Directory do a much better job of handling sensitive data with less trouble than any custom coding we might do on our own. Moreover, the built-in authentication functions of LDAP are mature and standards-based, making them safe and reusable from many different applications without custom interface development. It’s our duty as technologists and as business people to highlight the high cost of custom security solutions and the huge potential risks to our managers. In particular, when it comes to storing passwords in our line-of-business databases, just say no.

If we must manage financial instruments or personally identifying information in a database like SQL Server, there are three classes of problems to solve:

  1. Keeping the hackers from stealing our stuff,
  2. Detecting when breach attempts occur, and
  3. If data is unfortunately lost, making the information useless.

Let’s think about these efforts from a common sense perspective. Problem one is all about access control. The problem with permissions in any complex system is that they are difficult to maintain over time. Even if the initial configuration and policies safeguard the sensitive data, some future administrator may fail to understand or enforce the rules correctly. We could make those future administrators’ jobs much easier if we followed one simple rule: never mix highly-sensitive data in tables containing non-privileged data.

It’s deceptively simple-sounding but in practice, if sensitive data is always segregated into encrypted tables (http://blog.sqlauthority.com/2009/04/28/sql-server-introduction-to-sql-server-encryption-and-symmetric-key-encryption-tutorial-with-script/) and placed into a separate, secure schema requiring elevated access privileges, mistakes concerning permissions will become less likely over time. Moreover, by denying SELECT, INSERT, UPDATE and DELETE privileges on the secured tables, every query can be routed through stored procedures where problems two and three can be addressed with auditing and data obfuscation controls. Lastly, to ensure that lost data is useless, use the new Backup Encryption feature of SQL Server 2014 or invest in a third-party tool that does the same.
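To make this concrete, here is a minimal T-SQL sketch of the pattern. All of the names (the Secure schema, the CustomerSSN table, the GetCustomerSSN procedure, the SSNReaders role) are hypothetical illustrations, not a prescribed design:

-- A separate schema segregates the sensitive, encrypted data
CREATE SCHEMA Secure;
GO
CREATE TABLE Secure.CustomerSSN (CustomerID INT PRIMARY KEY, SSN VARBINARY(128)); -- holds encrypted values
GO
-- Deny direct access to the secured table
DENY SELECT, INSERT, UPDATE, DELETE ON Secure.CustomerSSN TO PUBLIC;
GO
-- Route every read through a stored procedure, where auditing and
-- obfuscation logic can be added; ownership chaining lets callers
-- with EXECUTE rights read the table without direct SELECT permission
CREATE PROCEDURE Secure.GetCustomerSSN @CustomerID INT
AS
BEGIN
    SELECT SSN FROM Secure.CustomerSSN WHERE CustomerID = @CustomerID;
END
GO
-- Only an elevated, hypothetical role may call the procedure
CREATE ROLE SSNReaders;
GRANT EXECUTE ON Secure.GetCustomerSSN TO SSNReaders;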

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Is tempDB behaving like a Normal DB?

The more I work with SQL Server, the more I am baffled by the way it works. In all these interactions, I have seen DBAs and developers think of TempDB as something totally different from a usage point of view and treat it differently. Yes, I agree TempDB is a special-purpose database and needs special attention because it is shared across the SQL Server instance. As the name suggests, it is for the temporary working of SQL Server. In this blog I am not going to rehash the hundreds of blogs / whitepapers on how important tempDB is and which processes utilize it.

Instead, I will take the middle ground I used when coaching DBAs on performance tuning during my consulting days. It is an interesting way of looking at TempDB because, in my opinion, tempDB is similar to a normal database in multiple ways, with a twist. Whenever I say this, many ask how I can claim it: if tempDB is like a normal database, then it should also function like one.

I generally take up this challenge and say the twist is there for a reason. But the fundamentals of databases, pages, allocations and so on are all the same, no matter which database we are talking about. Hence, in my opinion, TempDB is similar to a normal database in many ways. I generally use the following demonstration to illustrate this.

USE tempdb
GO
CHECKPOINT
GO
SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)
GO

Since TempDB only supports the SIMPLE recovery model, the CHECKPOINT will flush the log, and we will normally be left with just 3 rows. To drive home the concept, we will create a simple table and insert a row, as shown below:

CREATE TABLE #Understanding_temp_table
(id INT,
col1 CHAR(20),
col2 DATETIME,
col3 CHAR(50));
INSERT INTO #Understanding_temp_table
VALUES (1, 'Pinal', '01-Dec-2014 07:01:01', '* * DUMMY * *')
SELECT * FROM #Understanding_temp_table

Now, if we execute the query against fn_dblog() again, we will get 150+ rows, including PFS page allocations, IAM allocations, data page allocations and many more entries. The values and the way to read fn_dblog() deserve a separate blog post, but let me show you some interesting things to look out for.

SELECT Operation, Context, [Transaction ID], AllocUnitId, AllocUnitName, [Page ID],
[Transaction Name], [Description]
FROM sys.fn_dblog(NULL, NULL)
GO

In the output, there are two entries in the Description column that are interesting to look at. From my run, I have pulled out the two rows for reference.

Changed type HOBT_FIRST_IAM (2) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000008e

Changed type HOBT_FIRST (1) for AU 1008806317071335424 in rowset 1008806317066092544 from page 0000:00000000 to page 0001:0000007f

If we take the page ids 0000008e and 0000007f and convert them from hex to decimal, the values are 142 and 127 respectively.
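The conversion can be sanity-checked in T-SQL itself (the binary literals below are simply the page ids from my run):

SELECT CONVERT(INT, 0x0000008E) AS IAMPage, -- 142
       CONVERT(INT, 0x0000007F) AS DataPage -- 127

Now let us try to map these pages to the allocations inside our TempDB database.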

SELECT allocated_page_file_id, allocated_page_page_id, allocated_page_iam_page_id, *
FROM sys.dm_db_database_page_allocations(2, NULL , NULL, NULL, 'DETAILED')
WHERE OBJECT_ID NOT IN (SELECT ID FROM sysobjects WHERE TYPE IN ('S','IT','SQ'))

From the query output, we can see that page 142 is allocated as the IAM page and page 127 as the data page. This is almost exactly how a normal database functions too; the page allocations are similar. The only catch is that the object ID is negative: these are user-defined temp tables inside tempDB, and that is the only difference. I remember reading an article from my good friend Vinod Kumar around Database Page Basics if you want a quick refresher. We can take a look at the object name using the query below.
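Here is a minimal sketch of such a lookup (assuming ours is the only temp table with this name; temp table names carry a system-generated suffix, hence the LIKE):

SELECT name, object_id, create_date
FROM tempdb.sys.objects
WHERE name LIKE '#Understanding_temp_table%';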

As noted above, we know the data page is 127. Let us use the DBCC PAGE command to dump the page data and check what is on the page. When I execute the command, I can see the single row we inserted at the start of this article, very much in line with our understanding of inserting into a table.
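A sketch of that dump, using the undocumented DBCC PAGE command and assuming data file 1 (the arguments are database, file id, page id, print option):

DBCC TRACEON(3604);             -- send DBCC PAGE output to the client
DBCC PAGE('tempdb', 1, 127, 3); -- dump page 127 of file 1 with full row detail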

The twist is simple: if we close the session that created the temp table (#Understanding_temp_table), SQL Server cleans up the pages that were allocated. This can be confirmed by running the allocation query again; the rows for our table disappear.

I hope this blog gives you a feel for how allocations happen to tables, irrespective of whether they are temp or normal tables. Secondly, we saw that once the session is closed, the allocations for temp tables are released. In future blogs, we will take a deeper look into tempdb and learn how it differs from normal databases.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQLCMD to Build Comma Separated String

We have an active usergroup in Bangalore called SQLBangalore. Whenever I get a chance to attend the local usergroup sessions (which happens often – trust me), I learn something every single time. In one of the recent meetings I attended a tips-and-tricks session by my good friends Balmukund and Vinod Kumar. Having friends who can stump you every now and then is quite an experience.

The scenario we are talking about is simple: how can I create a comma-separated string of a table’s data? This is like exporting a table’s data using SQLCMD. At first it merely looks interesting, but it turns out to be a great lesson in how SQLCMD works behind the scenes.

We first create the input file containing the query. In my example, I have created a file called Contacts-People.sql with the following statement.

SET NOCOUNT ON
SELECT FirstName, LastName
FROM AdventureWorks2012.[Person].[Person]
WHERE LastName = 'Ferrier'

This returns close to 16 rows on my database. Now the requirement is to get the output as a comma-separated string. We can get there in steps.

Step 1: Give the input file to the SQLCMD command

C:\Temp> sqlcmd -i Contacts-People.sql

The -i option defines the input file.

Step 2: Use -W to remove trailing spaces and get concise output

C:\Temp> sqlcmd -i Contacts-People.sql -W

Step 3: Add the comma separator between the columns with the -s option.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s,

This adds the separator.

Step 4: The icing on the cake: -h-1 removes the header so that we get ONLY the column values in our output.

C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1

There you have it. This is a quick and easy way to generate a comma-separated string of a table’s columns. I am sure there is more than one way to do this. Feel free to send the output to a flat file and name it .csv, as sketched below.
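For instance, sqlcmd’s -o option writes the result straight to a file (the .csv file name is just an illustration):

C:\Temp> sqlcmd -i Contacts-People.sql -W -s, -h-1 -o Contacts-People.csv

Do let me know if you are aware of any other method.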

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Restricting Access to Contained Databases using Logon Triggers

When I wrote the article “A Simple Example of Contained Databases”, little did I know how people might plan to use the capability. There are environments where people want to use contained databases yet keep the flexibility of restricting users. Recently, a DBA friend of mine came to me with an interesting requirement, and I was pleasantly surprised by the question:

Pinal: Hi Buddy.

DBA: Hi Pinal, Sorry to disturb you in the middle of the night.

Pinal: No problems, please go ahead.

DBA: I have a query around Contained Databases.

Pinal: Sure, let me see if I can help you. Not sure though.

DBA: Well, contained databases create users inside the database, and those users are used for authentication, right?

Pinal: You are correct in your understanding. Now is there a problem?

DBA: No, there is no problem. I am worried about contained database implementation.

Pinal: Hmmm I don’t quite understand. Did you get a chance to search my site for a simple sample of implementing the same?

DBA: I did read the article on SQLAuthority, which is not the problem. It is about auditing and security processes inside my company.

Pinal: That is an interesting point of view. Please tell me more.

DBA: In our company, we are very strict and want to track every login that gets authenticated inside SQL Server. With contained databases, I want to build a different process.

Pinal: Go ahead, I am all ears.

DBA: We have enabled the creation of Contained Databases on the server. I want to restrict the databases which can be created as Contained Databases.

Pinal: Wow, why would you want to do that?

DBA: Simple: as a DBA and server administrator, I want to make sure only authorized users and databases can use the contained-database capability. We don’t want random users to create contained databases on the server.

Pinal: Interesting. Yes, it is quite possible, and the idea here is to use Logon Triggers. Let me try to blog about this someday.

This blog is completely inspired by this conversation and let us go through the implementation. Here are the steps:

  1. Create a table in master to hold databases that can be used for authentication.
  2. Insert the list of databases into the created table.
  3. Create the Logon Trigger to restrict which logins are allowed.
  4. Try to log on to the database as a normal user and as a contained database user.
  5. Delete an entry from our table and check by logging in again.
  6. Clean up.

So you might ask, why master database and not any other database? Since logon triggers are server-scoped objects, we will create any necessary additional objects in master.

Note: Please DO NOT use the script as-is, because we are creating a logon trigger. Please test and verify before using the script below.

Let me walk you through the script:

-- Step 1 - Create a table in Master to hold databases that can be used for authentication.
USE MASTER
GO
CREATE TABLE [dbo].[authenticated_DBs](dbs INT PRIMARY KEY);
GO
-- We want anyone to be able to access this data for read-only purposes
GRANT SELECT ON [dbo].[authenticated_DBs] TO PUBLIC;
GO

The first step is to create a table which will hold all the databases authorized to authenticate on this particular instance. We create it in master so that only authorized users can change the list.

-- Step 2 - Insert the list of databases into the created table.
-- Add the DB id for all of the DBs authorized to authenticate
-- including/excluding master DB
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('master'));
GO
-- Used the script from previous blog to create the Contained DB
-- http://blog.sqlauthority.com/2011/03/31/sql-server-denali-a-simple-example-of-contained-databases/
-- Insert database id for ContainedDatabase
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));
GO

The next step is to identify which databases are allowed to take part in the login process. In our example, make sure there is an entry for “master” and for the database called “ContainedDatabase”, created with the script from the earlier blog post.

-- Step 3 - Create the Logon Trigger to restrict the login available
-- This logon trigger will verify the current logon matches with one of the authorized DBs.
-- If it does, it allows the logon process to continue,
-- otherwise it will rollback, causing the session to terminate
CREATE TRIGGER Logon_authenticated_dbs
ON ALL SERVER FOR LOGON
AS
BEGIN
    IF NOT EXISTS (SELECT * FROM [dbo].[authenticated_DBs]
                   WHERE dbs IN (SELECT authenticating_database_id
                                 FROM sys.dm_exec_sessions
                                 WHERE session_id = @@SPID))
        ROLLBACK;
END;
GO

The Logon Trigger is the tricky part of this whole blog. You can be innovative with other parameters too, but be careful when playing with logon triggers, as they can lock you out of your SQL Server instance. Please use them carefully and cautiously.

-- Step 4 - Verify the entries, then try to log on as a normal user and as the contained database user
SELECT * FROM [dbo].[authenticated_DBs]

I am showing the two entries that are on my local machine at the moment. We will try to mimic the login process and see what happens.
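From a client machine, the logon attempt can be mimicked with sqlcmd by authenticating directly against the contained database (server name, user and password below are placeholders):

C:\> sqlcmd -S localhost -d ContainedDatabase -U ContainedUser -P <password>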

-- Step 5 - Delete an entry from our table and check by logging in.
-- Your [ContainedDatabase] database id might not be 7. Change appropriately
DELETE [dbo].[authenticated_DBs] WHERE dbs = 7
-- Try to connect again using ContainedUser
-- ContainedUser fails to connect due to trigger
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));
GO
-- Try connect again as ContainedUser. Now it should work.

If the database has an entry in our authenticated table and you are a valid user in the “ContainedDatabase”, the login process will succeed as shown below.

If the entry into the table [dbo].[authenticated_DBs] is removed, we will be presented with an error message as shown below.

We are likely to get a similar error message if we connect to a contained database created by a user for which the DBA has not granted explicit rights by adding an entry at the server node.

USE MASTER
GO
-- CAUTION: Drop this trigger BEFORE dropping the table authenticated_DBs.
-- If the table is dropped first, the trigger fails for every login and you
-- cannot get into master either; it then has to be removed via the DAC.
DROP TRIGGER Logon_authenticated_dbs
ON ALL SERVER
GO
DROP TABLE authenticated_DBs
GO

In case you write a bad trigger that always fails, you will be locked out of the server. In that situation the DAC (Dedicated Administrator Connection) can help. I have written about this earlier:

SQL SERVER – Fix : Error : 17892 Logon failed for login due to trigger execution. Changed database context to ‘master’.

Please make sure to do the cleanup; otherwise, if you are using this script for testing and you use contained databases in your environment, you are likely to run into these errors.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Change Audit Location?

The auditing capability inside SQL Server is a hidden gem and, in my opinion, not known to many. If you have a requirement to audit your SQL Server environment, activities such as DBCC commands, backup, restore, failed logins, login creations, database access, database permission changes, user password changes, trace changes and much more can be tracked at the server node. A similarly long list is available at the database audit node too. In this blog post, let me talk about the simple task of changing the location of the audit log after it has been created.

This post is inspired by one of the sessions I attended, where an attendee asked: if we store the audit information on a network share, what happens when the share becomes unavailable because of network issues? Is there a seamless way to switch the location? This was an interesting question. From SQL Server 2012 onwards, audit resilience is available, and we can also change the share which holds the audit information using T-SQL.

In this blog post we will go in sequence below:

  1. Create an Audit on a network share
  2. Enable the Audit
  3. Create a Server Audit Specification
  4. Enable the Server Audit Specification
  5. Look at the metadata and access the logs
  6. Create a failure by unsharing the network drive
  7. Change the Audit location to a local drive
  8. Access the Audit logs

So let us go through the scripts for the above steps:

-- Step 1 - Create an Audit on a network share
CREATE SERVER AUDIT [NetworkAudit]
TO FILE(FILEPATH='\\SQLAuthority\Audit\Network') /* substitute your network share here */
WITH (ON_FAILURE=FAIL_OPERATION, QUEUE_DELAY=0);

-- Step 2 - Enable the Audit
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=ON);

-- Step 3 - Create a Server Audit Specification
CREATE SERVER AUDIT SPECIFICATION [complianceServerSpec] FOR SERVER AUDIT [NetworkAudit]
ADD (SCHEMA_OBJECT_ACCESS_GROUP);

-- Step 4 - Enable the Server Audit Specification
ALTER SERVER AUDIT SPECIFICATION [complianceServerSpec] WITH (STATE=ON);

If you ask me, these are the most common steps to create ANY audit. As outlined above, we have created a server audit that captures any schema object access.

-- Step 5 - Look at the metadata and access the Logs
SELECT * FROM sys.server_file_audits
SELECT * FROM sys.fn_get_audit_file('\\SQLAuthority\Audit\Network\*', NULL, NULL);
SELECT * FROM sys.dm_server_audit_status

Let us simulate an error on the network path where the audit files are located. In the metadata, a file size of 0 indicates that we have lost the network location.

-- Step 6 - Create a failure by unsharing the network drive
SELECT * FROM sys.dm_server_audit_status

If we try to read the file location we will get the following error:

SELECT * FROM sys.fn_get_audit_file('\\SQLAuthority\Audit\Network\*', NULL, NULL);

Msg 33224, Level 16, State 4, Line 20
The specified pattern did not return any files or does not represent a valid file share. Verify the pattern parameter and rerun the command. 

Since this network drive is unavailable, we need to move the auditing to a local drive. Here is how we can do the same.

-- Step 7 - Change the Audit location to a local drive
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=OFF);
ALTER SERVER AUDIT [NetworkAudit] TO FILE(FILEPATH='D:\Audit');
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=ON);

In this example I am moving the audit to the local D:\ drive. Often we might not have permission to write to a local folder on the server. If that is the case, we are likely to get the following error when we set STATE=ON:

Msg 33222, Level 16, State 1, Line 29
Audit 'NetworkAudit' failed to start. For more information, see the SQL Server error log. You can also query sys.dm_os_ring_buffers where ring_buffer_type = 'RING_BUFFER_XE_LOG'.

If we query sys.dm_os_ring_buffers, it will mention the file-creation error; this is because we do not have permission to write to the folder. Change the permissions appropriately and rerun the command.
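The query is the one the error message itself suggests:

SELECT record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_XE_LOG';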

<Record id="443" type="RING_BUFFER_XE_LOG" time="4084184"><XE_LogRecord message="file: file create or open failed (last error: 5)"></XE_LogRecord></Record>

There is yet another error that can come up when configuring the location on the local server. If the folder specified in ALTER SERVER AUDIT is not accessible or does not exist, SQL Server will raise this error:

Msg 33072, Level 16, State 1, Line 2
The audit log file path is invalid.
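If the folder is missing, it can even be created from T-SQL with the undocumented xp_create_subdir (a sketch, assuming the service account has rights on the parent path):

EXEC master.dbo.xp_create_subdir 'D:\Audit';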

So make sure the folder specified exists before issuing the command. Let us check the metadata to see if the changes have taken effect:

-- Step 8 - Access the Audit logs
SELECT * FROM sys.dm_server_audit_status
SELECT * FROM sys.fn_get_audit_file('D:\Audit\*', NULL, NULL);

In the output, take note that the file size is now non-zero:

This concludes the logical steps one needs to do to setup and change the audit file location inside SQL Server. As a final step, here are the steps to clean up the objects created in this blog post.

-- Cleanup
ALTER SERVER AUDIT SPECIFICATION [complianceServerSpec] WITH (STATE=OFF);
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=OFF);
DROP SERVER AUDIT SPECIFICATION [complianceServerSpec];
DROP SERVER AUDIT [NetworkAudit];

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix – Msg 230, Level 14, State 1 – The SELECT permission was denied on the column of the object, database, schema

Being a DBA is one of the most rewarding experiences, because you control the server as well as the way permissions are given to each individual user. In my career I have often seen people underestimate the use of GRANT, DENY and REVOKE. Perhaps less so the third, but the first two are super important and super critical.

In this blog post, let me take you through a common requirement that I hear from application developers and business users about how to secure the environment. Let us take a simple scenario to illustrate the basics of GRANT and DENY in this article.

Assume you have a table inside your database which holds sensitive personally identifiable information like SSN. Your organization mandates that no one should be able to read the SSN column, but reading the other columns is absolutely fine. These kinds of requirements are all over the place, and I have seen developers struggle to implement them.

Here is my simple shot at the scenario. Let us create our database and the table having the SSN field:

CREATE DATABASE PermissionsDB
GO
USE PermissionsDB
GO
CREATE USER Pinal WITHOUT LOGIN;
GO
CREATE TABLE SecureTbl (ID INT, Name VARCHAR(50), SSN VARCHAR(20))
GO
INSERT INTO SecureTbl VALUES (1, 'Pinal', '111-22-3333')
GO

Additionally, I have created a user called ‘Pinal’ who needs to be denied permission. Now that we have our object in question, I am going to do two simple steps: a) GRANT SELECT rights to everyone using this database (this could be restricted to a specific user account too), and b) explicitly DENY access to the SSN column.

-- Explicitly grant at object level
GRANT SELECT ON SecureTbl TO PUBLIC
-- DENY at the column level
DENY SELECT (SSN) ON SecureTbl TO Pinal
GO

The next step is to check whether the user ‘Pinal’ has SELECT privileges on the table. To illustrate this, we change the user context using the EXECUTE AS command and try to access the table.

-- Switch the context
EXECUTE AS USER='Pinal'
-- below would give Error as we are selecting SSN too
SELECT * FROM SecureTbl
-- No Error from below as SSN column is not in select list
SELECT ID, Name FROM SecureTbl
REVERT

In the above query, the first SELECT statement results in the error below, because we have explicitly DENIED user ‘Pinal’ access to the SSN field. The error message states this clearly.

Msg 230, Level 14, State 1, Line 21
The SELECT permission was denied on the column 'SSN' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

Hence, for all practical purposes, even though we have explicitly granted read permission on the table object, the DENY makes sure we cannot read the denied column’s values. This is one of the simplest ways to look at the GRANT and DENY implementation inside SQL Server.
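As an aside on REVOKE, the third command mentioned at the start: it removes an existing GRANT or DENY rather than blocking access itself. A quick sketch to undo the column-level DENY above:

-- Removes the DENY; ‘Pinal’ can then read SSN again through the PUBLIC grant
REVOKE SELECT (SSN) ON SecureTbl FROM Pinal
GO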

Do let me know if you have used this technique inside your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)