SQL SERVER – Restricting Access to Contained Databases using Logon Triggers

When I wrote the article on “A Simple Example of Contained Databases”, little did I know how people might plan to use this capability. Having said that, there are environments where people want to use the concept of contained databases yet have the flexibility of restricting the users. Recently, a DBA friend of mine had an interesting requirement, and I was pleasantly surprised with the question:

Pinal: Hi Buddy.

DBA: Hi Pinal, Sorry to disturb you in the middle of the night.

Pinal: No problems, please go ahead.

DBA: I have a query around Contained Databases.

Pinal: Sure, let me see if I can help you. Not sure though.

DBA: Well, contained databases create users inside the database, and those users are also used for authentication, right?

Pinal: You are correct in your understanding. Now is there a problem?

DBA: No, there is no problem. I am worried about contained database implementation.

Pinal: Hmmm I don’t quite understand. Did you get a chance to search my site for a simple sample of implementing the same?

DBA: I did read the article on SQLAuthority, which is not the problem. It is about auditing and security processes inside my company.

Pinal: That is an interesting point of view. Please tell me more.

DBA: In our company, we are very strict and want to track every login that gets authenticated inside SQL Server. With contained databases, I want to build a different process.

Pinal: Go ahead, I am all ears.

DBA: We have enabled the creation of Contained Databases on the server. I want to restrict the databases which can be created as Contained Databases.

Pinal: Wow, why would you want to do this?

DBA: Simple: as a DBA and Server Administrator, I want to make sure that only authorized users and databases can use the contained database capability. We don’t want random users creating contained databases on the server.

Pinal: Interesting. Yes, it is quite possible, and the idea here is to use Logon Triggers. Let me make sure I blog about this someday.

This blog post is completely inspired by that conversation, so let us go through the implementation. Here are the steps:

  1. Create a table in Master to hold databases that can be used for authentication.
  2. Insert the list of databases into the created table.
  3. Create the Logon Trigger to restrict which logins are allowed.
  4. Try to log on to the DB as a normal user and as a Contained Database user.
  5. Delete an entry from our table and check by logging in.
  6. Clean up the script.

So you might ask: why the master database and not any other database? Since logon triggers are server-scoped objects, we will create any necessary additional objects in master.
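Because they are server-scoped, a quick, hedged way to see which logon triggers already exist on an instance is the server-level catalog view:

-- Server-scoped triggers live in sys.server_triggers, not in any one database's sys.triggers
SELECT name, is_disabled FROM sys.server_triggers;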

Note: Please DO NOT use the script as-is, because it creates a Logon trigger. Please test and verify before using the script below.

Let me walk you through the script:

-- Step 1 - Create a table in Master to hold databases that can be used for authentication.
USE MASTER
GO
CREATE TABLE [dbo].[authenticated_DBs](dbs INT PRIMARY KEY);
GO
-- We want anyone to be able to access this data for read-only purposes
GRANT SELECT ON [dbo].[authenticated_DBs] TO PUBLIC;
GO

The first step is to create a table that will hold the IDs of all the databases we want to allow to authenticate logins on this particular instance. We create it in master so that only authorized users can change it.

-- Step 2 - Insert the list of databases into the created table.
-- Add the DB id for all of the DBs authorized to authenticate
-- including/excluding master DB
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('master'));
GO
-- Used the script from previous blog to create the Contained DB
-- http://blog.sqlauthority.com/2011/03/31/sql-server-denali-a-simple-example-of-contained-databases/
-- Insert database id for ContainedDatabase
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));
GO

The next step is to identify which databases are allowed in the login process. In our example, make sure we have made an entry for “master” and for the database called “ContainedDatabase”, created using the script from the other blog post.

-- Step 3 - Create the Logon Trigger to restrict the login available
-- This logon trigger will verify the current logon matches with one of the authorized DBs.
-- If it does, it allows the logon process to continue,
-- otherwise it will rollback, causing the session to terminate
CREATE TRIGGER Logon_authenticated_dbs
ON ALL SERVER FOR LOGON
AS
BEGIN
    IF NOT EXISTS (SELECT *
                   FROM [dbo].[authenticated_DBs]
                   WHERE dbs IN (SELECT authenticating_database_id
                                 FROM sys.dm_exec_sessions
                                 WHERE session_id = @@SPID))
        ROLLBACK;
END;
GO

The Logon Trigger is the tricky part of this whole post. You can be innovative with other parameters too. But be careful when playing with logon triggers, as a bad one can lock you out of your SQL Server instance. Please use them carefully and cautiously.

-- Step 4 - Try to log on to the DB as a normal user and as a Contained Database user
SELECT * FROM [dbo].[authenticated_DBs]

I am showing the two entries that are on my local machine at the moment. We will try to mimic the login process and see what happens.
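To see which database actually authenticated a session, we can peek at sys.dm_exec_sessions. Here is a minimal sketch, assuming you are connected as the ContainedUser from the earlier blog post:

-- Which database authenticated the current session?
-- A contained user shows the contained database;
-- a traditional login shows master (database id 1).
SELECT session_id,
    login_name,
    authenticating_database_id,
    DB_NAME(authenticating_database_id) AS authenticating_database
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;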

-- Step 5 - Delete an entry from our table and check by logging in.
-- Look up the database id instead of hard-coding it (it may not be 7 on your server)
DELETE [dbo].[authenticated_DBs] WHERE dbs = DB_ID('ContainedDatabase');
-- Try to connect again using ContainedUser
-- ContainedUser fails to connect due to trigger
INSERT INTO [dbo].[authenticated_DBs] VALUES (DB_ID('ContainedDatabase'));
GO
-- Try to connect again as ContainedUser. Now it should work.

If you have an entry in our authenticated table and are a valid user in the “ContainedDatabase”, then the login process will succeed as shown below.

If the entry into the table [dbo].[authenticated_DBs] is removed, we will be presented with an error message as shown below.

We are likely to get a similar error message when connecting to a contained database that a user created but for which the DBA has not given explicit rights by adding an entry into the table.

-- Step 6 - Cleanup the script
USE MASTER
GO
-- CAUTION: Drop the trigger BEFORE dropping the table authenticated_DBs. If the table is dropped first, the trigger will fail for every login (including to master) and you will need to remove it manually over the DAC.
DROP TRIGGER Logon_authenticated_dbs
ON ALL SERVER
GO
DROP TABLE authenticated_DBs
GO

If you write a bad trigger that is always going to fail, you will be locked out of the server. In that situation, the DAC (Dedicated Administrator Connection) can help. I have written about this earlier:

SQL SERVER – Fix : Error : 17892 Logon failed for login due to trigger execution. Changed database context to ‘master’.
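As a rough sketch of the recovery path (assuming you can reach the server), connect over the DAC, on which logon triggers do not fire, and drop the trigger:

-- Connect using the Dedicated Administrator Connection, for example:
--   sqlcmd -S YourServerName -E -A
-- and then remove the offending trigger:
DROP TRIGGER Logon_authenticated_dbs ON ALL SERVER;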

Please make sure to do the cleanup; otherwise you are likely to get errors if you use this script for testing and are a user of contained databases in your environment.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – How to Change Audit Location?

The auditing capability inside SQL Server is a hidden gem and, in my opinion, not known to many. If you have a requirement to audit your SQL Server environment, activities such as DBCC commands, backup, restore, failed logins, login creations, database access, database permission changes, user password changes, trace changes and much more can be audited at the server node. A similar, longer list is available at the database audit node too. In this blog post let me talk about the simple task of changing the location of the audit log after it was created.

This post is inspired by one of the sessions I attended, where an attendee asked: if we audit and store the audit information on a network share, what will happen when the network share is unavailable because of network issues? Is there a seamless way to switch or change the network location? This was an interesting question, and I do know that from SQL Server 2012 the concept of audit resilience is available. Moreover, we can also change the share which holds the audit information using T-SQL.

In this blog post we will go in sequence below:

  1. Create an Audit on a network share
  2. Enable the Audit
  3. Create a Server Audit Specification
  4. Enable the Server Audit Specification
  5. Look at the metadata and access the Logs
  6. Create a failure by unsharing the Network drive
  7. Change the Audit location to a local drive
  8. Access the Audit logs

So let us go through the scripts for the above steps:

-- Step 1 - Create an Audit on a network share
CREATE SERVER AUDIT [NetworkAudit]
TO FILE(FILEPATH='\\SQLAuthority\Audit\Network') /* substitute in here network drive */
WITH (ON_FAILURE=FAIL_OPERATION, QUEUE_DELAY=0);

-- Step 2 - Enable the Audit
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=ON);

-- Step 3 - Create a Server Audit Specification
CREATE SERVER AUDIT SPECIFICATION [complianceServerSpec] FOR SERVER AUDIT [NetworkAudit]
ADD (SCHEMA_OBJECT_ACCESS_GROUP);

-- Step 4 - Enable the Server Audit Specification
ALTER SERVER AUDIT SPECIFICATION [complianceServerSpec] WITH (STATE=ON);

If you ask me, these are the most common steps we take to create ANY audit. As outlined above, we have created a server audit where we are auditing any Schema Object Access.

-- Step 5 - Look at the metadata and access the Logs
SELECT * FROM sys.server_file_audits
SELECT * FROM sys.fn_get_audit_file('\\SQLAuthority\Audit\Network\*', NULL, NULL);
SELECT * FROM sys.dm_server_audit_status

Let us simulate an error on the network path where the audit files are located. Check the metadata: a file size of 0 indicates we have lost the network location.
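A more targeted check, assuming the audit name from the script above:

-- An audit_file_size of 0 points at an unreachable target
SELECT name, status_desc, audit_file_path, audit_file_size
FROM sys.dm_server_audit_status
WHERE name = 'NetworkAudit';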

-- Step 6 - Create a failure by unsharing the Network drive
SELECT * FROM sys.dm_server_audit_status

If we try to read the file location we will get the following error:

SELECT * FROM sys.fn_get_audit_file('\\SQLAuthority\Audit\Network\*', NULL, NULL);

Msg 33224, Level 16, State 4, Line 20
The specified pattern did not return any files or does not represent a valid file share. Verify the pattern parameter and rerun the command. 

Since this network drive is unavailable, we need to move the auditing to a local drive. Here is how we can do the same.

-- Step 7 - Change the Audit location to a local drive
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=OFF);
ALTER SERVER AUDIT [NetworkAudit] TO FILE(FILEPATH='D:\Audit');
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=ON);

In this example I am moving the audit to the local D: drive. Quite often we might not have permission to write to a local folder on the server. If that is the case, we are likely to get the following error when we try to set STATE=ON.

Msg 33222, Level 16, State 1, Line 29
Audit 'NetworkAudit' failed to start. For more information, see the SQL Server error log. You can also query sys.dm_os_ring_buffers where ring_buffer_type = 'RING_BUFFER_XE_LOG'.

If we query dm_os_ring_buffers, it will mention the file creation error. This is because we do not have enough permissions to write to the folder. Go ahead and change the permissions appropriately and rerun the command.
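A hedged sketch of that lookup (this ring buffer holds recent Extended Events log records):

-- Look for XE log records mentioning the audit file failure
SELECT CAST(record AS XML) AS record_xml
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_XE_LOG';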

<Record id="443" type="RING_BUFFER_XE_LOG" time="4084184"><XE_LogRecord message="file: file create or open failed (last error: 5)"></XE_LogRecord></Record>

There is yet another error that can occur when configuring the location on a local server. If the folder specified in the ALTER SERVER AUDIT command is not accessible or does not exist, SQL Server will raise an error:

Msg 33072, Level 16, State 1, Line 2
The audit log file path is invalid.

So make sure the folder specified exists before issuing the command.
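One hedged way to create the folder without leaving T-SQL is the xp_create_subdir procedure that maintenance plans rely on (an assumption on my part, not part of the original steps; it requires elevated permissions):

-- Create the target folder from T-SQL before altering the audit
EXECUTE master.dbo.xp_create_subdir N'D:\Audit';

Now let us check the metadata to see if the changes have taken effect: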

-- Step 8 - Access the Audit logs
SELECT * FROM sys.dm_server_audit_status
SELECT * FROM sys.fn_get_audit_file('D:\Audit\*', NULL, NULL);

The output would look like the following; take note that the file size is non-zero:

This concludes the logical steps one needs to do to setup and change the audit file location inside SQL Server. As a final step, here are the steps to clean up the objects created in this blog post.

-- Cleanup
ALTER SERVER AUDIT SPECIFICATION [complianceServerSpec] WITH (STATE=OFF);
ALTER SERVER AUDIT [NetworkAudit] WITH (STATE=OFF);
DROP SERVER AUDIT SPECIFICATION [complianceServerSpec];
DROP SERVER AUDIT [NetworkAudit];

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix – Msg 230, Level 14, State 1 – The SELECT permission was denied on the column of the object, database, schema

Being a DBA is one of the most rewarding experiences, because you control the server as well as the way permissions are given to each individual user. In my career I have often seen people underestimate the use of GRANT, DENY and REVOKE. Not so much the third one, but the first two are super important and super critical.

In this blog post, let me take you through a common requirement that I hear from application developers and business users about how to secure the environment. Let us take a simple scenario to illustrate the basics of GRANT and DENY in this article.

Assume you have a table inside your database which has sensitive personally identifiable information like SSN. Your organization mandates that no one should be able to read the SSN column, but reading the other columns is absolutely fine. These kinds of requirements are all over the place, and I have seen developers struggle to implement them.

Here is my simple shot at the scenario. Let us create our database and the table having the SSN field:

CREATE DATABASE PermissionsDB
GO
USE PermissionsDB
GO
CREATE USER Pinal WITHOUT LOGIN;
GO
CREATE TABLE SecureTbl (ID INT, Name VARCHAR(50), SSN VARCHAR(20))
GO
INSERT INTO SecureTbl VALUES (1, 'Pinal', '111-22-3333')
GO

Additionally, I have created a user called ‘Pinal’ who needs to be denied permission. Now that we have our object in question, I am going to do two simple steps: a) first, GRANT SELECT rights to everyone using this database (this can be restricted to a specific user account too); b) explicitly DENY access to the SSN column.

-- Explicitly grant at object level
GRANT SELECT ON SecureTbl TO PUBLIC
-- DENY at the column level
DENY SELECT (SSN) ON SecureTbl TO Pinal
GO

The next step for us is to check whether the user ‘Pinal’ has SELECT privileges on the table. To illustrate this, we change the user context using the EXECUTE AS command and try to access the table.

-- Switch the context
EXECUTE AS USER='Pinal'
-- below would give Error as we are selecting SSN too
SELECT * FROM SecureTbl
-- No Error from below as SSN column is not in select list
SELECT ID, Name FROM SecureTbl
REVERT

In the above query, the first SELECT statement will result in the error below. This is because we have explicitly DENIED the user ‘Pinal’ permission to access the SSN field. The error also mentions this clearly.

Msg 230, Level 14, State 1, Line 21
The SELECT permission was denied on the column 'SSN' of the object 'SecureTbl', database 'PermissionsDB', schema 'dbo'.

Hence, for all practical purposes, even though we have explicitly given permission to read the table object, the DENY makes sure we will not be able to read the column values. This is one of the simplest ways to look at GRANT and DENY implementation inside SQL Server.
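For completeness, here is a hedged sketch of REVOKE, the third command mentioned at the start. REVOKE does not block access itself; it removes a previously issued GRANT or DENY. Removing the column-level DENY would let ‘Pinal’ read SSN again through the table-level GRANT to PUBLIC:

-- Remove the column-level DENY; 'Pinal' then inherits PUBLIC's SELECT right
REVOKE SELECT (SSN) ON SecureTbl FROM Pinal;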

Do let me know if you have used this technique inside your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – User Defined Audit with SQL Server

Talk to any financial institution or bank and they will be super paranoid when it comes to security and auditing policies applied to their organizations. In a recent session with one of our customers, I had to enter their premises and go through a number of screenings. From the entrance, car parking, reception and elevators, everywhere there was someone asking us for some information. The more I thought about it, the more frustrated I became. After returning home, I thought through the complete incident with my family. I was pleasantly surprised by the calmness with which they talked to me about the process.

Wow, it is a great way to learn patience and why it is important. Organizations have a reason: they need their data secure and they want procedures so that there is no data loss, no data theft and more. In this competitive world this is super critical, and hence these procedures are important and critical for survival. This blog post is therefore inspired by the auditing capability in SQL Server 2012 and what I found interesting about it.

Creating our Audit

Creating an audit is as simple as going through a wizard with some basic data. Go to SQL Server Management Studio -> Security -> Audits and create a new audit. Go ahead and add the FilePath; in our example I have added it as “C:\Audit”.

The TSQL equivalent for the same is:

CREATE SERVER AUDIT [Audit-20141115-213944]
TO FILE
(  FILEPATH = N'C:\Audit'
,MAXSIZE = 0 MB
,MAX_ROLLOVER_FILES = 2147483647
,RESERVE_DISK_SPACE = OFF)
WITH
(  QUEUE_DELAY = 1000
,ON_FAILURE = CONTINUE)
GO

After creating it, we will make sure this audit is enabled. The audit destination determines where our auditing data will go; in our example it is configured to a file as shown above.
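The T-SQL equivalent of enabling it is a one-liner; a small sketch using the audit name generated above:

ALTER SERVER AUDIT [Audit-20141115-213944] WITH (STATE = ON);
GO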

Once we enable the Audit node, the next step is to add the audit operation. Here we are going to add the “User Defined Audit Group”:

CREATE SERVER AUDIT SPECIFICATION [ServerAuditSpecification-20141115-214433]
FOR SERVER AUDIT [Audit-20141115-213944]
ADD (USER_DEFINED_AUDIT_GROUP)
GO
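The specification also needs to be enabled before events start flowing. A minimal sketch, mirroring the SSMS checkbox:

ALTER SERVER AUDIT SPECIFICATION [ServerAuditSpecification-20141115-214433] WITH (STATE = ON);
GO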

Creating User defined Audit entries

This is a capability that was introduced with SQL Server 2012, and I personally feel it is quite powerful for a couple of reasons. Apart from the standard auditing capability, applications can now raise specific audit events into the audit log so that we can track or audit logically from an application’s point of view.

The simplest way to raise an audit record from an application is the stored procedure sp_audit_write. A typical command looks like:

EXEC sys.sp_audit_write 1, 0, N'This is an audit from Pinal'

This command will now put an entry into the audit we just defined in the previous section. To view the entry, select the “View Audit Log” option on the audit we just created. The Log viewer looks like below:

In the example above, we can see the details. Please note the audit shows “Succeeded” as False because we passed 0 as the second parameter to sys.sp_audit_write. If you want it recorded as True, pass the value as 1. Also, the statement that got us this information is available as part of the “Statements” line item.
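For reference, a hedged sketch of the same call with the documented parameter names, this time recording success:

-- @succeeded maps to the Succeeded column in the audit log
EXEC sys.sp_audit_write @user_defined_event_id = 1,
    @succeeded = 1,
    @user_defined_information = N'This is a successful audit from Pinal';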

If you want to view all the entries into our audit file, we can use the sys.fn_get_audit_file function.

SELECT * FROM sys.fn_get_audit_file('c:\Audit\*', NULL, NULL)

I hope you got a flavor of how custom user defined audits can be defined. This is a great way to use the auditing feature from your application’s point of view. Do let me know if you found this useful and if you are planning to use the same in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Filtering CPU Bound Execution Plans with Extended Events

Let me take a tour of what we are talking about here from a scenario point of view. Prior to SQL Server 2012, tracing the execution plan was an all or nothing approach. In Profiler, if we select execution plan events, we will get every occurrence of every execution plan for all the statements executed inside SQL Server captured by the trace. This bloats up the trace file very quickly and makes it impossible for us to do any sort of analysis. So the general performance templates used inside Profiler do not capture execution plans – and rightly so.

I have been waiting for the ability to capture the execution plan only when a query executes longer than a certain amount of time. Yes, I can add a few filters in Profiler, but this wasn’t the most efficient way to work. In this blog we will look at the Extended Event query_post_execution_showplan (similar to Showplan XML Statistics Profile) in SQL Server 2012, which has the interesting addition of cpu_time and duration for filtering. Personally, I thought this was a great addition to the usage of Extended Events.

Why use this?

We want to use this to monitor overall server performance and to troubleshoot when things go wrong. When server responses are slow and we have not identified any particular query, we can choose to capture all the execution plans whose cpu_time or duration exceeds a certain threshold. This is one of the ways this feature can be used effectively.

Extended Event UI

If you are not familiar with the Extended Events UI, here is a great start for this task. Go to SQL Server Management Studio -> Management Node -> Extended Events -> Sessions -> Right Click and select “New Session…”.

This starts a wizard which makes our configuration quite easy. In the Event library, search using the keyword “showplan“. query_post_execution_showplan will show up as a search result. Assuming you are on SQL Server 2012 or above, note that cpu_time and duration are among the available event fields.

Next, select this event and move the entry to the “Selected events:” list. This enables the “Configure” button at the top of the dialog box.

Click the Filter (Predicate) tab. You can choose cpu_time, an operator and a value. Note that cpu_time is in microseconds. For example, if you want to capture execution plans for all queries taking 10 seconds of CPU, you will enter 10000000. The same can be done for queries that run longer than 10 seconds by selecting the “duration” field instead.

Extended Event T-SQL

Instead of showing the UI version of the command, I have gone ahead and scripted out the same extended event session using the duration field. The command is:

CREATE EVENT SESSION [CPU_XEvent] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan(
    ACTION(package0.process_id, sqlserver.database_name, sqlserver.nt_username, sqlserver.sql_text)
    WHERE ([duration] >= (10000000)))
GO

In addition to the filter, I have gone ahead and added a few actions (global fields) we might be interested in as part of the collection process. Use this with caution and be careful about collecting every single event using Extended Events. I was pleasantly surprised to see cpu_time and duration in SQL Server 2014 as well, and hence thought it was worth a mention.
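If you prefer filtering on CPU consumption rather than elapsed time, here is a hedged variant of the same session (the session name is my own; both fields are measured in microseconds):

CREATE EVENT SESSION [CPU_XEvent_ByCPU] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
    WHERE ([cpu_time] >= (10000000))) -- 10 seconds of CPU time
GO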

Finally, if you are doing this on a live server, make sure to start the Extended Events session; we can then use the “Watch Live Data” option in SSMS, which I found very useful.
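A one-line sketch of starting the session created above:

ALTER EVENT SESSION [CPU_XEvent] ON SERVER STATE = START;
GO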

If you have used other Extended Events in your environment, I would like to learn from you how you used them and which scenarios they enabled.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Validation Rules: Code or Database? – Notes from the Field #054

[Note from Pinal]: This is the 54th episode of the Notes from the Field series. Whom do we blame for a mistake or error in the system? Well, developers blame the DBA and the DBA blames the developers. Sometimes there is no solution to this catch-22 situation. I have been there and I am very sure that you have been there too. Well, this is an amazing and beautiful article by G. Andrew Duthie. He has attempted to demystify a problem which we all face every day.

In this episode of the Notes from the Field series, database expert Andrew Duthie explains validation rules and where they should be implemented. Read the experience of Andrew in his own words.



If you’re a DBA dealing with developers, you may run into the question of whether it’s better to allow the developers to write validation rules in their .NET app layer, or whether you should put your foot down and insist that the validation be implemented in stored procedures. The answer, as you might expect from a consultant, is “it depends.”

Advantages of Code-based Validation

One of the things that might inform your decision on what to use for validation is the skills possessed by the development team. If you’ve got a developer or developers who are well-versed in C# and LINQ, but don’t have a lot of experience writing stored procedures, you may want to cut them a break and let them use the tools they’re more familiar with.

Writing validation rules in code at the application layer allows developers to stay within the realm of .NET objects, which can result in faster development time.

Disadvantages of Code-based Validation

While there are probably more that could be discussed, I’ll mention just two of the significant disadvantages to writing validation rules in code.

First, if the code for the validation rules is using LINQ, particularly if the rules are complex, there’s the possibility of queries that generate sub-optimal SQL under the covers. This can be mitigated by profiling the queries to make sure that any performance hogs are caught as early as possible, but it’s certainly a valid concern.

Second, from a maintainability standpoint, having rules in the app means that adding rules requires the app to be recompiled and redeployed. For some apps and environments, this may not be a big deal, but in others, it could definitely be a deal-breaker.

Advantages of Stored Procedure-based Validation

Using stored procedures for validation provides some key advantages. One is proximity to the data. Unlike code-based validation, which may require pumping significant amounts of data over the wire from the database to the app tier, stored procedure-based validation keeps the logic on the DB tier, so performance may be significantly better.

Another advantage is that with a good execution design (for example, a master stored procedure that executes a list of validation rules in a specified order based on a configuration table), it can be relatively easy to introduce new rules with less disruption than having to recompile and redeploy an entire application.
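To make that concrete, here is a minimal, hedged sketch of the configuration-table pattern; every object name here is hypothetical:

-- Hypothetical configuration table: one row per validation rule
CREATE TABLE dbo.ValidationRules
(
    RuleId         INT IDENTITY PRIMARY KEY,
    RuleProcName   SYSNAME NOT NULL, -- each rule lives in its own stored procedure
    ExecutionOrder INT NOT NULL,
    IsEnabled      BIT NOT NULL DEFAULT 1
);
GO
-- Hypothetical master procedure: runs the enabled rules in order;
-- each rule procedure is expected to raise an error on validation failure
CREATE PROCEDURE dbo.usp_RunValidationRules
AS
BEGIN
    DECLARE @proc SYSNAME;
    DECLARE rule_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT RuleProcName
        FROM dbo.ValidationRules
        WHERE IsEnabled = 1
        ORDER BY ExecutionOrder;
    OPEN rule_cursor;
    FETCH NEXT FROM rule_cursor INTO @proc;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC @proc; -- execute the rule procedure named in the table
        FETCH NEXT FROM rule_cursor INTO @proc;
    END
    CLOSE rule_cursor;
    DEALLOCATE rule_cursor;
END;
GO

Adding a rule then becomes an INSERT into the table plus a new procedure, with no application redeployment.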

Disadvantages of Stored Procedure-based Validation

The major disadvantage of using stored procedures for validation, speaking as an app developer, is the basic impedance mismatch between .NET code (C# or Visual Basic) and T-SQL. While it’s certainly possible for developers to master both, there’s a mental cost in switching between these environments, and a potential for mistakes when transitioning from one to the other.

The other downside of stored procedures is the mixing of application logic between the app tier and the database tier. While validation close to the data can, as noted, improve performance, if some parts of the application logic live in both the app and database tiers, this could make for more costly maintenance down the road.

Consistency is Key

One additional point I’d like to make is that it’s probably wise to choose one option or the other, not both. If you have multiple applications in development (or even in maintenance mode), having a mix of app-based and sproc-based validation will likely give you headaches at some point. So get your team together and have a discussion about how you’re currently handling things, and whether there might be a better way.

Summary

The short answer to “which is better” is really “either.” It all depends on the skills of your developers, the performance you need from your app, and the other factors I’ve discussed. Although I’m coming at this from the perspective of an app developer, I’ve recently become more comfortable with the idea of stored procedure-based validation, particularly in instances where more than one app may be targeting the same database, since this can help reduce redundancy, and centralize management of rules.

I’d love to get your feedback on how you’ve handled validation rules in your environment, so feel free to share a comment below.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Msg 4864, Level 16, State 1 – Bulk load data conversion error

Working with SQL Server is very rewarding, and most of the time I get a chance to revisit and explore more of these errors. A long time ago, I wrote a blog post on reading data from a CSV/text file and inserting it into a SQL Server table using the BULK INSERT command. You can read it here: SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server

There have been many comments on that blog post about errors received by my readers. This blog post fixes a few of the errors mentioned there.

Msg 4860, Level 16, State 1

This is one of the most common errors reported by readers. Let’s have a look at the steps to reproduce it:

CREATE TABLE library_books_loan
(
student_id VARCHAR(10) NOT NULL,
book_id    VARCHAR(10) NOT NULL,
branch_id  SMALLINT NOT NULL,
id_no      VARCHAR(10) NOT NULL,
date_out   DATE NULL DEFAULT NULL,
due_date   DATE NULL DEFAULT NULL,
date_in    DATE NULL DEFAULT NULL
)

Here is the sample text file I want to insert into this table:

student_id,book_id,branch_id,id_no,date_out,due_date,date_in<>
1,0399147020,1,9019,2013-11-22,2013-12-06,2013-12-01<>
2,0030059380,4,9007,2013-12-01,2013-12-15,2013-12-16<>
3,0671880756,5,9018,2013-12-08,2013-12-22,2013-12-22<>
4,0911625291,3,9013,2014-01-02,2014-01-16,2014-01-12<>
5,0688161995,5,9022,2014-02-10,2014-02-24,2014-03-01<>
6,0911625291,2,9011,2014-03-03,2014-03-17,2014-03-16<>
7,1861003730,3,9034,2014-04-17,2014-05-01,NULL<>
12,0911625607,2,9018,2014-04-19,2014-05-03,NULL<>

We can save the file as C:\Temp\Books_Library.txt. Here is the command which we will run to import the data.

BULK INSERT library_books_loan
FROM 'C:\Temp\Books_Library.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '<>',
FIRSTROW = 2
)
A common mistake users make is that the file gets saved locally (on their own computer) while the command is executed on a remote instance of SQL Server. Here is the error you would receive if you do this.

Msg 4860, Level 16, State 1, Line 12
Cannot bulk load. The file "C:\Temp\Books_Library.txt.txt" does not exist. 

So please make sure that the file exists on the machine where SQL Server is running and that the path is correct on the server itself.

Msg 4864, Level 16, State 1

If the file is saved correctly on the server and BULK INSERT is attempted, we are likely to get the error messages below:

Msg 4864, Level 16, State 1, Line 12
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 8, column 7 (date_in).
Msg 4864, Level 16, State 1, Line 12
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 9, column 7 (date_in).

If we read the error message carefully, it is complaining about rows 8 and 9, column 7 (date_in). If we look back at the data, we can see that NULL is provided as the value.

This means that SQL Server is treating that value as the string “NULL” and trying to insert it into a column which is defined as a date.

Here is a simple demo explaining it:

CREATE TABLE SQLAuth (d DATE)
GO
INSERT INTO SQLAuth VALUES ('NULL')
GO

Msg 241, Level 16, State 1, Line 3
Conversion failed when converting date and/or time from character string.

Solution

Modify the text file and don’t pass any value for that column; it will be treated as NULL automatically. Here is the modified version:

student_id,book_id,branch_id,id_no,date_out,due_date,date_in<>
 1,0399147020,1,9019,2013-11-22,2013-12-06,2013-12-01<>
 2,0030059380,4,9007,2013-12-01,2013-12-15,2013-12-16<>
 3,0671880756,5,9018,2013-12-08,2013-12-22,2013-12-22<>
 4,0911625291,3,9013,2014-01-02,2014-01-16,2014-01-12<>
 5,0688161995,5,9022,2014-02-10,2014-02-24,2014-03-01<>
 6,0911625291,2,9011,2014-03-03,2014-03-17,2014-03-16<>
 7,1861003730,3,9034,2014-04-17,2014-05-01,<>
 12,0911625607,2,9018,2014-04-19,2014-05-03,<>

If we run the same command now, the data should be inserted. As highlighted, we can see NULL values inserted into our destination table.
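A quick, hedged verification query (table and column names from the script above):

-- Rows with student_id 7 and 12 came from lines with an empty date_in field
SELECT student_id, book_id, date_out, due_date, date_in
FROM library_books_loan
WHERE student_id IN ('7', '12');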

These are some common errors one can get using the BULK INSERT command. In case you are facing other errors, please comment and I shall try to respond in a future post for sure.

Reference: Pinal Dave (http://blog.sqlauthority.com)