SQL SERVER – The Basics of the Execute Process Task – Notes from the Field #084

[Note from Pinal]: This is a new episode of the Notes from the Field series. I usually give an introductory note; however, this time there is no need for one. This note is from Andy, and as we all know, he is an amazing person when it comes to explaining the fundamentals. He has written this blog post in such an interesting way that you must read it to understand the very basics of the Execute Process Task.



Many data integration scenarios involve executing some other process, whether starting a custom application or performing an operating system operation.

Remember: SSIS is a software development platform. With “SQL Server” included in the name, it is easy for people to mistake SSIS for a database tool or accessory, but Control Flow Tasks put that confusion to rest.

SSIS provides several Control Flow tasks; the Execute Process Task is among those I find myself using most often.

In this article I provide an example of configuring the SSIS Execute Process Task, shown in Figure 1:

Figure 1: SSIS Execute Process Task

As with the File System Task, the Execute Process Task provides yet another way to implement an SSIS Design Pattern for source file archival via file compression. When you first open the Execute Process Task Editor, you will note several properties in the property grid, as shown in Figure 2:

Figure 2: SSIS Execute Process Task Properties

An important property is the Executable property, which holds the path to the application or process you wish to start with the Execute Process Task. In this case, I am going to start the 7-Zip command line executable to zip a data file. 7-Zip is a free file compression utility, and its command line utility is pretty cool. On my system, the 7-Zip command line utility is located at “C:\Program Files\7-Zip\7z.exe”, so I configure the Executable property of the Execute Process Task as shown in Figure 3:

Figure 3: The Execute Process Task Editor with the Executable Property Configured

The Arguments property allows me to specify command line arguments to the executable. For 7-Zip, the “a” argument informs the application that I want to add files to a compressed file. The text following the “a” argument specifies the name of the compressed file. The argument that follows the name of the compressed file configures the file (or files) to add. My arguments property reads:

a E:\Projects\7Zip\data\archive1.7z E:\Projects\7Zip\data\test.csv

These arguments tell the 7z.exe executable that I want to add the E:\Projects\7Zip\data\test.csv file to a compressed file named E:\Projects\7Zip\data\archive1.7z, as shown in Figure 4:
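
If you would like to verify the command outside SSIS first, the Executable and Arguments properties above combine into a single command line you can try at a command prompt (the paths are from my example; adjust them for your system):

"C:\Program Files\7-Zip\7z.exe" a E:\Projects\7Zip\data\archive1.7z E:\Projects\7Zip\data\test.csv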

Figure 4: The Execute Process Task Editor with the Arguments Property Configured

I can configure other Execute Process Task properties. For example, I choose to hide the command line window for 7-Zip when it executes. To do so, I set the WindowStyle property to Hidden, as shown in Figure 5:

Figure 5: The Execute Process Task Editor’s WindowStyle Property

The SSIS Execute Process Task is now configured to compress a file. Let’s test it! Click the OK button to close the Execute Process Task Editor. Press the F5 key or select SSIS->Start Debugging to test your work. My result is shown in Figure 6:

Figure 6: Successful Test Execution of the SSIS Execute Process Task

Viewing the source and destination directories, we see the file was successfully compressed – shown in Figure 7:

Figure 7: The File, Compressed!

The SSIS Execute Process Task is powerful, flexible, and robust. This article has demonstrated another way you can use the Execute Process Task to compress files for archiving. Archiving files after loading the data they contain is a common practice in data integration.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Security Auditing With ApexSQL Audit

SQL Server instances and their databases are probably the most critical IT assets of an enterprise. The most important and confidential data are stored there, and even a potential issue with that data can be business critical. Yet however significant the preventive and protective measures applied, they will not help in determining the cause of an incident once it occurs. For that purpose, establishing continuous SQL Server auditing, and especially being informed of security configuration changes in a timely manner, is of the utmost importance.

SQL Server security auditing with ApexSQL Audit ensures early or immediate detection of any SQL Server security change, allowing DBAs to prevent it from causing an unwanted security breach or, even worse, potential data loss, interruption, or hindered server operations. Being informed promptly about any security issue, or even a potential one, matters not only when immediate action is required, but also for continuous auditing purposes.

To fulfill more than just basic auditing requirements and to ensure granular and precise auditing, ApexSQL Audit features two types of auditing filters: Simple and Advanced.

Simple filter

Even though it is named “Simple” for its ease of use, this is quite a comprehensive filter which allows a high level of granularity. When SQL Server security auditing is required, the following security-related events can be specified for auditing at the server and database level.

Server level security related events:


Additionally, there is an easy option to include or exclude specific logins from auditing using the Logins filter when such a requirement exists (excluding trusted users, for example):


Database level security related events:


ApexSQL Audit’s Simple filter allows specifying SQL Server security auditing requirements for each security operation independently; what’s more, ApexSQL Audit allows specifying filtering conditions for every individual SQL Server auditing operation at both the server and database levels.

With a few simple mouse clicks, the desired SQL Server security auditing can be set according to specific security requirements, and the Simple filter is the fastest way to configure auditing for any important auditing requirement.

Advanced filter

Even though it is highly granular, the Simple filter still cannot meet the fine-tuning requirements imposed by some advanced users or specific auditing scenarios, so ApexSQL Audit also offers a highly granular filter that can meet even the most demanding SQL Server auditing/compliance requirements with full precision. The Advanced filter uses a set of logical operators for defining the auditing filter condition.

The image below displays all data fields available in the advanced auditing filter:


Depending on the data field selected, the is, is not, contains, and does not contain operators will be available for filtering. By using the Advanced filter, the user can create auditing rules that ensure any level of auditing precision required:


Behind the scenes, the Advanced filter is interpreted as a standard logical expression, where indentation represents brackets and grouped conditions are treated as enclosed within them. There is no limit on the number or depth of the filtering rules used.

The Text data filter additionally allows meeting some very specific requirements, as it can parse the actual T-SQL of an audited event and then collect or filter out the event depending on the condition used:


In the example above, the filtering condition will audit and log only those ALTER LOGIN and CREATE LOGIN events that do not fulfill the company rule that Enforce password policy and Enforce password expiration must be enabled, so the DBA can be notified in time and take the necessary steps to fix it.
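
For illustration, here is the kind of statement such a filter would flag: a login created with both policy checks disabled. The login name and password below are purely hypothetical.

CREATE LOGIN AuditTestLogin
WITH PASSWORD = 'Str0ng!Passw0rd',
CHECK_POLICY = OFF,     -- violates the "Enforce password policy" rule
CHECK_EXPIRATION = OFF  -- violates the "Enforce password expiration" rule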

As already stated, it is very important, and usually required, to be informed promptly about any SQL Server security auditing issue or event. To meet that requirement, ApexSQL Audit provides a capable alerting system that consists of:

  • Built-in system alerts
  • User defined data alerts for alerting on captured events
  • Custom script alerts for alerting on any value that can be retrieved via SQL script

ApexSQL Audit provides true real-time alerting: the alerting engine intercepts and processes each audited event against the defined alerting conditions even before the information is stored in the central repository database, allowing instant notification when an alert is raised.

Some highlights of the ApexSQL Audit alerting system:

  • Fully customizable alert names and notification options (user-definable alert name, subject, and body text)
  • The alerting engine uses the same advanced filter engine as the auditing filter, which allows very precise alerting conditions to be created
  • Alert email notification, with the ability to define an individual recipient email address for each alerting condition created
  • An unlimited number of alerts
  • Editing of built-in system alerts to adjust them to individual needs
  • Editing, disabling, or deleting of user-created alerts
  • An alert history viewer with an unlimited number of stored alerts
  • Detailed information about the event in the alert body


All collected and stored SQL Server security auditing events should be presentable in a human-readable form, whether for internal purposes or at the request of an auditor. Besides reporting within the standard GUI, ApexSQL Audit features web-based reporting, which allows full reporting from any computer on the local network using a standard web browser. The GUI and the web-based reports are equally capable and equipped. Regardless of which the user chooses, there are two types of reporting: common reports and custom reports.

Common reports consist of 11 predefined basic reports with comprehensive filtering abilities, which ensure a quick and easy preview of collected events as well as precise reporting on the required audited events:


The security-related reports are Security configuration history, Logon activity history, Permission changes, and Unauthorized access:


For those who want more from reporting, ApexSQL Audit has a built-in custom reports feature that grants full freedom in creating and organizing reports on the data collected during SQL Server auditing. The advanced filter form is used here as well (it is consistent throughout the application), which makes customization in custom reports, or any other part of the application, easy and consistent. Custom reports allow creating precise filtering conditions that can be saved and reused whenever needed:


There is no limit on how many reports the user can create and save, which allows building a general overview, thematic reports, or very precise reports that match very specific requirements in just a few mouse clicks:


With ApexSQL Audit, SQL Server security auditing, and indeed any SQL Server auditing and compliance work, has never been more reliable or simpler. There is quite a bit more to ApexSQL Audit than what is outlined here, but this shows how capable it is of meeting most auditing concerns directly out of the box.

ApexSQL Audit shares the ease of installation, setup, and administration of other ApexSQL solutions, and is focused on meeting the requirements for easy and comprehensive auditing and compliance for SQL Server. Its most impressive traits are that it is lightweight, that it uses a central repository database with a built-in tamper-proofing mechanism, and that it offers an unprecedented ability to achieve high-precision auditing, alerting, and reporting.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Puzzle with Year Function

A couple of weeks back, I ran a contest with the MONTH() function and close to 300 of you answered the puzzle. It was a great brain teaser and we had an awesome response. Most of you got the answer right, and it is great to see you folks getting the basics right.

While on that thought, I decided to run this second contest. Sometimes our basics can get tricked up if we add a twist to the setup. Below are two queries; can you guess what the output is, and why?

Query 1:

What is the output for the below query? And why?

DECLARE @dt DATE = '0001-1-1'
SELECT MONTH(@dt), YEAR(@dt)

Part of the answer is in the earlier puzzle I mentioned at the start of this blog. But what would the year value show? Take a guess and let me know.

Query 2:

I have made a small change in the default value for the second query. Now can you guess what the output from this query is?

DECLARE @dt DATE = '1-1-1'
SELECT MONTH(@dt), YEAR(@dt)

As a hint, I can tell you that the MONTH function returns the same value. But there is something different about the YEAR function.

So here is the quiz – Why are the values different? What is the reason behind this?
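
If you want to check your guess, a quick way is to select the variables back and see the full dates SQL Server resolved each literal to. Run it only after you have committed to an answer, since it gives the game away.

DECLARE @dt1 DATE = '0001-1-1'
DECLARE @dt2 DATE = '1-1-1'
SELECT @dt1 AS first_literal, @dt2 AS second_literal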

Please leave the correct answer in the comments below the blog.

I will announce the winners of this contest in 48 hours. With two people who give the right answer, I will share a learning resource worth USD 29 on June 11 via email.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Know Your Backup Before Deleting Database

Earlier last week I wrote a blog, SQL SERVER – FIX – Msg 3702, Level 16, State 3 – Cannot Drop Database “DB_Name” Because it is Currently in Use. The premise of that blog was driven by some of the demos I show at conferences. During one of the UG meets, I saw my good friend Balmukund doing something different, and that inspired me to write that blog.

When I wrote that, many people wrote back stating it can be dangerous, etc. I sort of agree, if you are on a production box. As my use case was based on a demo environment for sessions, I thought less about that. Having said that, I immediately thought it would be good to back it up with a blog that will prevent us from getting into trouble.

So I wrote back to the friend who emailed me about the dangers. Yes, we need to be careful while dropping databases, and be prepared for contingencies. If you have taken a backup of your database, make sure to check that it is available by querying msdb. A typical script looks like this:

USE msdb
GO
SELECT  msdb.dbo.backupset.database_name,
        msdb.dbo.backupset.backup_start_date,
        msdb.dbo.backupset.backup_finish_date,
        CASE msdb.dbo.backupset.type
            WHEN 'D' THEN 'Database'
            WHEN 'L' THEN 'Log'
            WHEN 'I' THEN 'Differential'
            WHEN 'F' THEN 'Filegroup'
        END AS backup_type,
        msdb.dbo.backupmediafamily.physical_device_name
FROM    msdb.dbo.backupmediafamily
        INNER JOIN msdb.dbo.backupset
            ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
-- Add the WHERE condition if you want it for a specific database
-- WHERE msdb.dbo.backupset.database_name = 'AdventureWorks2014'
ORDER BY msdb.dbo.backupset.backup_finish_date

The above script is a classic way to find all the backups done for a given database or the databases on a given instance.

As a best practice, I like to check these msdb tables to confirm that backups were taken of every single database before working on them. I am sure that, as seasoned DBAs, you are all aware of this important step when working on production databases.
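
If all you need is the most recent full backup per database, a compact sketch like this works as well (the LEFT JOIN keeps databases that were never backed up visible, with a NULL date):

SELECT  d.name AS database_name,
        MAX(bs.backup_finish_date) AS last_full_backup
FROM    sys.databases d
        LEFT JOIN msdb.dbo.backupset bs
            ON bs.database_name = d.name
           AND bs.type = 'D' -- full database backups only
GROUP BY d.name
ORDER BY last_full_backup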

I know each organization has a restore strategy, so can you let me know about your restore strategies for critical databases? When do you take FULL and Differential backups, and how often do you take TLog backups, in your production environments? Sharing this is a great way to tell blog readers about typical usage patterns.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #023 – Error Handling with TRY…CATCH

Here is one of the most popular questions I see people discussing in SQL interviews.

Question – How do you handle errors with the help of TRY…CATCH?

Answer –

TRY…CATCH helps you write logic that separates the action code from the error-handling code. The code meant for the action is enclosed in the TRY block, and the code for error handling is enclosed in the CATCH block. If the code within the TRY block fails, control automatically jumps to the CATCH block, where you can roll back the transaction and resume execution. In addition, the CATCH block captures and provides error information that shows you the number, message text, state, severity, and transaction state of an error.

The functions that can be used in the CATCH block are:

  • ERROR_NUMBER: returns the error number; this is the same value as @@ERROR.
  • ERROR_SEVERITY: returns the severity level of the error that invoked the CATCH block.
  • ERROR_STATE: returns the state number of the error.
  • ERROR_LINE: returns the line number where the error occurred.
  • ERROR_PROCEDURE: returns the name of the stored procedure or trigger for which the error occurred.
  • ERROR_MESSAGE: returns the full message text of the error. The text includes the values supplied for any substitutable parameters, such as lengths, object names, or times.

You can use these functions anywhere inside a CATCH block, and they will return information regarding the error that has occurred. These functions will return the value null outside of the CATCH block.

Syntax:
BEGIN TRY
{ sql_statement |
statement_block }
END TRY
BEGIN CATCH
{ sql_statement |
statement_block }
END CATCH

The TRY or CATCH block can contain a single T-SQL statement or a series of statements. The CATCH block must follow immediately after the TRY block. A TRY/CATCH block cannot span more than a single batch. In addition, a TRY/CATCH block cannot span an IF/ELSE statement.
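
For a quick illustration, here is a minimal sketch that forces a divide-by-zero error and reads the functions listed above from within the CATCH block:

BEGIN TRY
    SELECT 1 / 0 AS result -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS error_number,
           ERROR_SEVERITY() AS error_severity,
           ERROR_STATE() AS error_state,
           ERROR_LINE() AS error_line,
           ERROR_MESSAGE() AS error_message
END CATCH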

You can read more about this subject over here: Explanation of TRYCATCH and ERROR Handling.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX: Msg 3102, Level 16, State 1 – RESTORE cannot process database ‘Name’ because it is in use by this session

Some of the simplest learnings can come from unexpected places, and this blog post is a classic example of that statement. Recently I got an email from one of my readers about the below error message:

Msg 3102, Level 16, State 1, Line 2
RESTORE cannot process database 'DBProductDevelopment' because it is in use by this session. It is recommended that the master database be used when performing this operation.
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.

As I always do, I searched my own blog using http://search.sqlauthority.com and found a few earlier posts where readers had reported the error that is the subject of this blog:

SQL SERVER – FIX : Error 3154: The backup set holds a backup of a database other than the existing database

SQL SERVER – FIX : ERROR : Msg 3159, Level 16, State 1, Line 1 – Msg 3013, Level 16, State 1, Line 1

One of my readers asked an interesting question related to this error:

Does it mean that I need to restore master database first and then restore this? How is that possible? Is there something wrong with error message?

Answer: No, the error message is not asking you to restore master. All it is saying is that the restore command is running in the context of the same database for which the restore is being attempted. The error message is asking you to change context to the master database before performing the restore.

Here is the repro of the problem:

CREATE DATABASE SQLAuthority
GO
USE SQLAuthority
GO
BACKUP DATABASE SQLAuthority TO DISK = 'SQLAuthority.bak'
GO
RESTORE DATABASE SQLAuthority FROM DISK = 'SQLAuthority.bak' WITH REPLACE
GO


As we can see, SQL Server can’t complete the restore operation because the user database has an active connection within our current session. We’ll need to change our database connection to a different database before we attempt to perform the restore. We can use the T-SQL USE command:

Here is the modified script, reconstructed from the fix described above; the changed line is marked with a comment:
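
CREATE DATABASE SQLAuthority
GO
USE SQLAuthority
GO
BACKUP DATABASE SQLAuthority TO DISK = 'SQLAuthority.bak'
GO
USE master -- the change: move this session off the database being restored
GO
RESTORE DATABASE SQLAuthority FROM DISK = 'SQLAuthority.bak' WITH REPLACE
GO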

Here is the cleanup script.


/* clean up*/
USE MASTER
GO
ALTER DATABASE SQLAuthority SET single_user WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE SQLAuthority
GO

Have you ever faced a similar error in your environments? Do you have any similar experience to share? Let me know in the comments section below.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Msg 3702, Level 16, State 3 – Cannot Drop Database “DB_Name” Because it is Currently in Use

We learn from mistakes, and we improvise with experience. I couldn’t think of a better example of this than the one below. I have been watching many sessions by various speakers and found that many times they have struggled to get rid of the below error message:

Msg 3702, Level 16, State 3, Line 1
Cannot drop database “DemoDB” because it is currently in use.

The meaning of the error message is pretty clear: someone is using the database, so it can’t be deleted/dropped. First, make sure that it is not our own connection. To be safe, always change context before dropping the database, as shown below:


USE MASTER
GO
DROP DATABASE DemoDB
GO

If we are still getting the above error, there are two approaches to solve it:

  1. Long approach: Find the connections which are blocking us from dropping the database; essentially, find the sessions which are using this database and then kill those sessions. Here is a script to generate the kill commands:

SELECT 'kill ' + CONVERT(VARCHAR(100), session_id)
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('DemoDB')
AND session_id <> @@SPID

This would give one 'kill <session_id>' command per session using the database.

We can run the generated kill commands and try dropping the database again. If it succeeds, we are done; otherwise, repeat the loop.

  2. Short approach: SQL Server has built-in functionality to kick out all connections from a database. There is a clause called “WITH ROLLBACK IMMEDIATE” which can be used to kill the other connections, forcing a rollback of their work:

USE [master]
GO
ALTER DATABASE [DemoDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
DROP DATABASE [DemoDB]
GO

If you want to learn about other rollback options available with ALTER DATABASE, you can read my earlier blog as well:

SQL SERVER – Difference Between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE

Hope this helps and you will use the new technique in your presentations.

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Basic Statistics Maintenance – Notes from the Field #083

[Note from Pinal]: This is the 83rd episode of the Notes from the Field series. Maintenance of a database is a very critical activity and I have always seen DBAs take it very seriously. There is only one problem: there is no single solution or guidance for how to maintain a database. Everybody has their own opinion and their own way of doing different tasks. Statistics are one of the most important aspects of a database. The performance of an entire application can depend on statistics, as they give the SQL engine the intelligence to pick an optimal execution plan.

In this episode of the Notes from the Field series, database expert Stuart Ainsworth explains basic statistics maintenance.



Statistics maintenance is an important (but often overlooked) aspect of performance tuning for SQL Server.  The query optimizer relies on distribution statistics to determine how a query will be executed, with a particular emphasis on SEEKS vs SCANS as well as estimates of the effort needed (the cost threshold for parallelism).  Out-of-date stats can impact performance significantly.

Luckily, the default settings cover most database performance scenarios; SQL Server offers three basic settings for statistics maintenance, each of which can be toggled with ALTER DATABASE (see the sketch after this list):

  • Auto Create Statistics – SQL Server will create statistics during an index creation script, or when deemed necessary to satisfy a query; enabled by default.
  • Auto Update Statistics – SQL Server will update statistics when it deems them to be outdated; enabled by default.
  • Auto Update Statistics Asynchronously – When enabled, SQL Server will update statistics after the execution of a query if it determines that an update is necessary; if disabled, the statistics will be updated first, and then the query executed. This setting is disabled by default, and there’s some controversy over whether or not it’s useful.
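
As promised, a quick sketch of how these three settings map to T-SQL; the database name is a placeholder, and the values shown mirror the defaults described above:

ALTER DATABASE [YourDatabase] SET AUTO_CREATE_STATISTICS ON         -- enabled by default
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS ON         -- enabled by default
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC OFF  -- disabled by default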


For the most part, SQL Server does a good job of maintaining statistics if these options are left at their defaults; however, statistics can still become stale over time if the data changes at a slow rate.  If the statistics in your database are more than a year old, it’s probably time to do a refresh.

But how do you tell when your stats are out of date? There’s a catalog view in SQL Server called sys.stats that will give you a lot of information about statistics, but it’s very detailed; data is collected down to the column level, which may be overwhelming if you have lots of databases.  I suggest starting at a higher level, by taking a look at how out of date statistics are across all of your databases.   I use the following query to help me quickly identify troublesome databases:

/* Checks the last updated stats date for all databases and calculates the
   percentage of stats updated within the last 24 hours. Useful to determine
   how out-of-date statistics are. Also identifies whether auto update stats
   and asynchronous updates are enabled. */
CREATE TABLE #dbs
    (
      database_name VARCHAR(100)
    , oldest_stats DATETIME
    , newest_stats DATETIME
    , percent_update_within_last_day DECIMAL(5, 2)
    , is_auto_update_stats_on BIT
    , is_auto_update_stats_async_on BIT
    )

DECLARE @dynsql NVARCHAR(4000) = 'USE ?;
IF DB_ID() > 4  -- skip the system databases
BEGIN
    WITH x AS ( SELECT STATS_DATE(s.object_id, stats_id) AS date_updated
                FROM   sys.stats s
                       JOIN sys.tables t ON t.object_id = s.object_id
                WHERE  t.is_ms_shipped = 0
              ),
         x1 AS ( SELECT MIN(date_updated) AS oldest_stats
                      , MAX(date_updated) AS newest_stats
                 FROM   x
               )
    SELECT  DB_NAME() AS database_name
          , oldest_stats
          , newest_stats
          , SUM(CASE WHEN DATEDIFF(d, date_updated, newest_stats) <= 1
                     THEN 100.00 ELSE 0.00 END) / COUNT(*) AS percent_update_within_last_day
          , d.is_auto_update_stats_on
          , d.is_auto_update_stats_async_on
    FROM    x
            CROSS JOIN x1
            CROSS JOIN sys.databases d
    WHERE   d.database_id = DB_ID()
    GROUP BY oldest_stats
          , newest_stats
          , d.is_auto_update_stats_on
          , d.is_auto_update_stats_async_on
END'

INSERT INTO #dbs
    ( database_name
    , oldest_stats
    , newest_stats
    , percent_update_within_last_day
    , is_auto_update_stats_on
    , is_auto_update_stats_async_on
    )
EXEC sp_MSforeachdb @dynsql

SELECT * FROM #dbs d
DROP TABLE #dbs

Results will look similar to the following:


Looking at the results, if I see that the oldest database stats are more than a year old, it’s a pretty good indicator that statistics are not being maintained by some form of ongoing maintenance operation.  If the defaults are not being used, that’s also something that needs to be investigated.

The percent_update_within_last_day is also a good trigger for me to investigate a potential issue; if less than 20% of the statistics in a database were updated in the last 24 hours, the odds of a less-than-optimal execution plan increase significantly.  If the tables in the database are large, it may take several million rows of changed data to trigger a refresh.

To refresh outdated statistics, I recommend Ola Hallengren’s maintenance scripts; they offer a lot of flexibility for developing a customized index and statistics maintenance plan, including the ability to update statistics on a scheduled basis, and focus on updating statistics that need to be refreshed.  Basic maintenance (such as this) can help prevent performance problems from occurring.
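
If you just need a one-off refresh while you evaluate a longer-term maintenance plan, plain T-SQL also works; a minimal sketch (the table name is a placeholder):

UPDATE STATISTICS dbo.SalesOrders WITH FULLSCAN -- refresh one table, scanning every row
GO
EXEC sp_updatestats -- or refresh statistics across the whole current database
GO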

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Out Identity Column Without Using Column Name

There is an interesting observation to be made when querying the identity column of a table.

Let us create the following tables:

CREATE TABLE test1
(
id INT IDENTITY (1,1)
)
GO
CREATE TABLE test2
(
id INT
)

Note that the table test1 has an identity column and the table test2 does not.

Now you can find the values of the identity column without using the actual column name:

SELECT $identity FROM test1

You will not get an error; look at the column name in the result. It is id, which is the identity column of test1.
A quick way to check whether a table has an identity column, and to learn its name if it does, is the following:

SELECT $identity FROM test1 WHERE 1=0

If the table does not have an identity column, you will get an error:

SELECT $identity FROM test2 WHERE 1=0

The error is:

Msg 207, Level 16, State 1, Line 1
 Invalid column name '$identity'.
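
If you prefer a metadata lookup over relying on an error, the sys.identity_columns catalog view exposes the same information; a quick sketch:

-- One row per identity column; an empty result means the table has none
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS identity_column
FROM sys.identity_columns
WHERE object_id = OBJECT_ID('test1')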

Reference : Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Errors and Limitations of Working with SPARSE Columns in SQL Server

I visit the local user group whenever I get a chance; it keeps the community juices flowing for me. I get an opportunity to share some of my learnings and meet tons of new people. In our recent meetup at the SQLBangalore UG, I met a lot of young talent joining the IT field. Many take the time to walk up to me and share a moment now and then. It is this continued journey that inspired me to write this blog.

When I was at the UG meet, I said I write on a lot of topics and showed this blog. It is rare that someone fails to stumble onto this space. But one kid walked up to me and asked: what are SPARSE columns? After I explained what they were, he immediately asked me: is there any catch to them? Are there restrictions and limitations? Though I mentioned a few on the spot, I thought I would write a few down as part of this blog.

NULL Effect

Sparse columns MUST be nullable. Trying to create a sparse column as NOT NULL will fail with an error:

CREATE TABLE noNullSparse (Name VARCHAR(20) SPARSE NOT NULL)

Msg 1731, Level 16, State 1, Line 1

Cannot create the sparse column ‘Name’ in the table ‘noNullSparse’ because an option or data type specified is not valid. A sparse column must be nullable and cannot have the ROWGUIDCOL, IDENTITY, or FILESTREAM properties. A sparse column cannot be of the following data types: text, ntext, image, geometry, geography, or user-defined type.

As the error message states, columns marked as SPARSE cannot be NOT NULL fields.
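
For contrast, here is a minimal sketch of a definition SQL Server will happily accept, since the SPARSE column is nullable and of an ordinary data type (the table name is just for illustration):

CREATE TABLE okSparse (Name VARCHAR(20) SPARSE NULL)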

Datatypes to Watch

The above error gives away a lot of information about which data types will cause errors if marked as SPARSE:

  • geography
  • geometry
  • image
  • ntext
  • text
  • timestamp
  • user-defined data types (UDT)

Attributes like IDENTITY, FILESTREAM, and ROWGUIDCOL are also not allowed.

SPARSE with Computed Column

Less well known is that a computed column itself cannot be marked SPARSE. We can, however, use a SPARSE column inside a computed column. Below is a classic example:

-- Trying to mark a computed column as SPARSE fails with an incorrect syntax error
CREATE TABLE Sales1 (MRP INT, Loss TINYINT, Profit AS (MRP - Loss) SPARSE)
-- However, referencing an existing SPARSE column in a computed column is allowed
CREATE TABLE Sales2 (MRP INT, Loss TINYINT SPARSE, Profit AS (MRP - Loss))
-- Cleanup (only Sales2 exists; the Sales1 statement failed)
DROP TABLE Sales2

Final Note

The other cases where we are likely to get an error are marking a SPARSE column with a default value, a primary key, a clustered index, a partition key, a user-defined table type, and so on. These are some of the restrictions when working with SPARSE columns; each will raise an error if attempted.

It would be great to know if anyone uses SPARSE columns in their designs. In what scenarios have you found them useful? Let me know via your comments, as it would be a great learning for all.

Reference: Pinal Dave (http://blog.sqlauthority.com)