SQL Server supports efficient error handling with the help of TRY and CATCH blocks. If an error occurs, you can get the exact error message using the system function ERROR_MESSAGE().

I will demonstrate it using the following code:
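The original code sample was lost with the screenshot; a minimal sketch that reproduces the behavior looks like this:

```sql
BEGIN TRY
    -- Adding 1 forces an implicit conversion of 'SQL' to int, which fails
    SELECT 'SQL' + 1;
END TRY
BEGIN CATCH
    -- Inside the CATCH block, ERROR_MESSAGE() returns the full error text
    SELECT ERROR_MESSAGE() AS ErrorMessage;
END CATCH
```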



The result you get is

Conversion failed when converting the varchar value ‘SQL’ to data type int.

This is because the number 1 cannot be added to a string; the attempted implicit conversion of ‘SQL’ to int fails.

However, ERROR_MESSAGE() works only inside the CATCH block. If you use it outside of it, you will get nothing.
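The script for this second demo is also missing; presumably it looked like this:

```sql
BEGIN TRY
    SELECT 'SQL' + 1;
END TRY
BEGIN CATCH
    -- Resultset 1: returns the conversion error text
    SELECT ERROR_MESSAGE() AS InsideCatch;
END CATCH

-- Resultset 2: outside the CATCH block, ERROR_MESSAGE() returns NULL
SELECT ERROR_MESSAGE() AS OutsideCatch;
```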


If you execute the above, you get the following two result sets:


Resultset 1:

Conversion failed when converting the varchar value ‘SQL’ to data type int.

Resultset 2:


As you can see, ERROR_MESSAGE() returns NULL when used outside of the CATCH block.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Security Considerations for Contained Databases

I have written a number of blogs around contained databases in the past which you can search. In today’s blog we will talk about security considerations when working with contained databases. With contained authentication, the database can contain authentication information for the database users. This makes it considerably easier to move databases between servers. Some of the other blogs for reference are:

Importance of User Without Login – T-SQL Demo Script

How to Migrate Existing Database to Contained Databases

SQL Authentication process

For SQL Server Authentication against a contained database, the connection attempt must specify an initial catalog, which should be the contained database. If so, authentication is first attempted against contained users. If no such user exists, SQL Server falls back to server-level authentication.

If the user exists and the password does not match, this is a typical authentication failure. No second chances are provided. SQL Server does not attempt a server level authentication in this situation. Therefore, if a contained user and a SQL account in master share the same username, connection attempts can fail.

Windows Authentication process

For Windows authentication against a contained database, an initial catalog must be specified. This initial catalog should be a contained database. If so, authentication is first attempted at the server level. If no matching login or group exists in the master, SQL falls back to database level authentication.

If a contained user with a matching Windows account or group name does not exist in the database, the result is again an authentication failure.

Note: The two levels of authentication possible are database level and server level. The order is opposite for SQL Server authentication and Windows authentication. This is important knowledge while troubleshooting login failures.

Contained authentication brings with it some additional security caveats about which a database administrator has to be aware. Here are some of the important considerations:

  • Delegation of access control – Database containment delinks Server administration from database maintenance to a certain extent. Administrators need to be aware that contained users with ALTER ANY USER privilege can add other users. This privilege should be carefully delegated. The users in contained databases should be periodically audited.
  • Guest account can allow access to other databases – Contained users can access other databases where a guest account is enabled. To avoid this, ensure guest account is disabled for all user databases.
  • Duplicate logins – In cases where SQL authentication is used, a contained user whose name matches a server login but whose password differs can intentionally or accidentally cause a denial of service for that login. Windows authentication is attempted first at the server level, so it is not as severely affected.
  • Users with password cannot take advantage of password policies – This makes it harder to enforce password lifetimes and history requirements.
  • Contained Database should not have AUTO_CLOSE set – Contained databases marked for AUTO_CLOSE can significantly increase the cost of authentication, possibly making Denial of Service attacks easier.
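As a hedged illustration of the first two points, a DBA could periodically run checks like these (the database name is hypothetical):

```sql
-- List users authenticated at the database level (contained users with password)
SELECT name, authentication_type_desc
FROM sys.database_principals
WHERE authentication_type = 2;

-- Disable the guest account in a user database (hypothetical name)
USE MyUserDB;
REVOKE CONNECT FROM GUEST;
```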

As I conclude this blog, I wanted to bring out some of these nuances to readers, as they are not very well documented or widely known when working with contained databases.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Reading Statistics IO data with SQL Server

I am a supporter of always going back to the basics when it comes to performance troubleshooting. Whenever I have done any perf tuning exercise, I start by enabling Statistics IO as part of my debugging. When STATISTICS IO is enabled, SQL Server maintains I/O statistics for the queries on a per-table basis (as opposed to a cumulative fashion for all tables). This output is sent back as a message after the query completes.
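Enabling the setting is straightforward; for example, assuming the classic pubs sample database:

```sql
SET STATISTICS IO ON;

-- Per-table I/O statistics appear in the Messages tab after the query completes
SELECT * FROM authors;

SET STATISTICS IO OFF;
```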

For each table referenced in the query, there will be one row that is similar to this:

Table ‘authors’. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

Scan count: shows the number of times that the OLE DB row set corresponding to the table or index was opened for reading. This value depends on the type of plan chosen and where this table is accessed in relation to the other tables listed in the query. Note that just because this says scan, it does not necessarily mean that a table scan or index scan occurred; it may have been an index seek.

Logical reads: counts the number of times that a request was made to access a page belonging to that particular table. This is incremented in the underlying calls to one of the page suppliers (linked pages, unordered page, etc.). This counter gets incremented regardless of whether the page was already in cache or a disk I/O was required.

Physical reads: is incremented in the same place that increments the logical reads, but only if the underlying request read the page from the disk. Note that each one of these requests is essentially synchronous: the page was requested and the caller had to wait for the I/O to complete before continuing execution.

Read ahead reads: is incremented when the page(s) were not already found in cache and had to be read from the disk. Here, the caller generally is able to do the other work after calling this and will use the page (latch it) at a later time. It is possible for the caller to request the page(s) to be read, but never actually use it. This might happen in the case of queries with TOP or SET ROWCOUNT, semi joins or anti semi joins where we can break out of the loop on the first match/non-match etc. Pages that are included in the read ahead counter are not included in the physical read counter.

Now that we have this understanding in place, I hope you will use this information at some point while doing performance tuning in your environments. Do let me know if you have ever used this information in your tuning, and how you troubleshot with it.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #037 – What are the Properties of Relational Tables?

The other day in an interview, I heard this question, and it sent me back to my college days, as I used to hear similar questions from my professors. Here is the same question again, along with its answer.

Question: What are the properties of the relational tables?


Answer: Relational tables have six properties:

Values Are Atomic
This property implies that columns in a relational table do not contain repeating groups or arrays. The key benefit of this single-value property is that it simplifies the data manipulation logic. Such tables are referred to as being in the “first normal form” (1NF).

Column Values Are of the Same Kind
In relational terms this means that all values in a column come from the same domain. A domain is a set of values which a column may have. This property simplifies data access because developers and users can be certain of the type of data contained in a given column. It also simplifies data validation. Because all values are from the same domain, the domain can be defined and enforced with the Data Definition Language (DDL) of the database software.

Each Row is Unique
This property ensures that no two rows in a relational table are identical; there is at least one column, or set of columns, the values of which uniquely identify each row in the table. Such columns are called primary keys. This property guarantees that every row in a relational table is meaningful and that a specific row can be identified by specifying the primary key value.
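For illustration, a hypothetical table whose primary key enforces this property:

```sql
-- AuthorID uniquely identifies each row; duplicate values are rejected
CREATE TABLE Authors
(
    AuthorID   INT           NOT NULL PRIMARY KEY,
    AuthorName NVARCHAR(100) NOT NULL
);
```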

The Sequence of Columns is Insignificant
This property states that the ordering of the columns in the relational table has no meaning. Columns can be retrieved in any order and in various sequences. The benefit of this property is that it enables many users to share the same table without concern of how the table is organized. It also permits the physical structure of the database to change without affecting the relational tables.

The Sequence of Rows is Insignificant
This property is analogous to the one above, but applies to rows instead of columns. The main benefit is that the rows of a relational table can be retrieved in different order and sequences. Adding information to a relational table is simplified and does not affect existing queries.

Each Column Has a Unique Name
Because the sequence of columns is insignificant, columns must be referenced by name and not by position. A column name need not be unique within an entire database, but only within the table to which it belongs.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Edition Upgrade from Evaluation to Enterprise Core – AlwaysOn Warning!

While preparing for a demo about SQL migration, one of the demos was to show the steps needed to upgrade the Evaluation edition to a full version of SQL Server. The steps are pretty simple and already blogged about in many places on the internet. One of my own blogs shows them:

SQL SERVER – Evaluation Period Has Expired – How to Activate SQL Server?

While doing that, I saw a strange warning message during upgrade:

The AlwaysOn Availability Groups feature is enabled on this instance of SQL Server and it is not supported in the edition being changed to. Before proceeding, disable AlwaysOn Availability Groups on the server instance. For more information, see SQL Server Books Online.

Here is what I saw in SystemConfigurationCheck_Report.htm, saved in the same folder where the setup logs are stored. Warnings like this push me to go over the internet and do my own research, and to get help from experts online and from my circle of friends. I have been fortunate to get help from a number of them lately, especially when working with AlwaysOn environments.


Then I searched on the internet and checked Detail.txt to see if there was anything interesting.

(10) 2013-08-21 05:31:39 Slp: Sco: Attempting to connect script
(10) 2013-08-21 05:31:39 Slp: Connection string: Data Source=.;Initial Catalog=master;Integrated Security=True;Pooling=False;Connect Timeout=300;Application Name=SqlSetup
(10) 2013-08-21 05:31:39 Slp: Connected successfully…
(10) 2013-08-21 05:31:39 SQLEngine: –IsAlwaysOnFeatureEnabled: Engine_IsAlwaysOnFeatureEnabled: IsHADREnabled: = 1
(10) 2013-08-21 05:31:39 Slp: Sco: Attempting to dispose script
(10) 2013-08-21 05:31:39 Slp: Sco: Attempting to disconnect script
(10) 2013-08-21 05:31:39 Slp: Sco:SqlScriptStatementCompleted: Rows affected 1
(10) 2013-08-21 05:31:39 Slp: Current SqlServer Connection closed…
(10) 2013-08-21 05:31:39 Slp: Evaluating rule        : Engine_IsAlwaysOnFeatureEnabled
(10) 2013-08-21 05:31:39 Slp: Rule evaluation done   : Warning
(10) 2013-08-21 05:31:39 Slp: Rule evaluation message: The AlwaysOn Availability Groups feature is enabled on this instance of SQL Server and it is not supported in the edition being changed to. Before proceeding, disable AlwaysOn Availability Groups on the server instance. For more information, see SQL Server Books Online.

The log above confirmed that setup connected to SQL Server to check for the feature. I verified my setup media by installing an instance on a test server; to double-check, on the page where I had to accept the license terms, I could see it was the Enterprise Core edition. So I was wondering why I received this warning. I also checked the Summary.txt file and saw the following:

Package properties:
Description:                   Microsoft SQL Server 2012
ProductName:                   SQL Server 2012
Type:                          RTM
Version:                       11
SPLevel:                       0
Installation location:         E:\x64\setup\
Installation edition:          Enterprise Edition: Core-based Licensing


I checked with a few of my expert friends, and they told me that if the destination is Enterprise, then I can ignore the warning and proceed with the upgrade. I would guess it is an issue with the setup. In my case, the edition upgrade completed successfully and I was able to continue using the AlwaysOn Availability Group.

If you ever receive this warning, you should be able to move forward and trust me, nothing would happen to the availability group. Have you ever received this warning earlier and what did you do?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – FIX – Replication Error: SQL Server replication requires the actual server name to make a connection to the server 

Replication is quite an old and proven technology. After the introduction of SQL Server AlwaysOn in SQL Server 2012, I hardly see folks talking about replication. For me, replication will always be the technology that allows partial (selected tables/articles) synchronization, which a lot of people in the large database world still require.

Recently, I was trying to configure replication while playing around for one of my upcoming internal sessions. I was getting the same error for both “Configure Distribution” and “New Publication”. Here is the text of the error message:

TITLE: Configure Distribution Wizard
SQL Server is unable to connect to server ‘SQL16NODEB\SQL2014’.
SQL Server replication requires the actual server name to make a connection to the server. Specify the actual server name, ‘MyServer‘. (Replication.Utilities)

On my machine, here are the steps to reproduce this problem:

  1. Open Microsoft SQL Server Management Studio and connect to Server
  2. Expand the server node.
  3. Expand Replication
  4. Right click “Local Publications” and select “New Publication …”



  1. Open Microsoft SQL Server Management Studio and connect to Server
  2. Expand the server node.
  3. Right click “Replication” and choose “Configure Distribution”


First, notice that the error message mentions two names: SQL16NODEB\SQL2014 and MyServer. I captured a Profiler trace on the SQL instance to find out what SQL Server does to raise this error. Here are the two meaningful queries:
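The captured queries did not survive extraction; presumably they were the standard pair that replication compares:

```sql
-- Name currently registered in the server metadata
SELECT @@SERVERNAME;

-- Actual network name of the instance
SELECT SERVERPROPERTY('ServerName');
```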


When I run them individually in SSMS, I get below output:


After seeing this, I realized that I had changed my server name recently for some other demo, and that was causing the problem. Here is the fix:

DECLARE @actualname NVARCHAR(100)
DECLARE @currentname NVARCHAR(100)
-- The actual network name of the instance
SELECT @actualname = CONVERT(NVARCHAR(100), SERVERPROPERTY(N'servername'))
-- The stale name still registered in the server metadata
SELECT @currentname = @@SERVERNAME
EXEC sp_dropserver @currentname
EXEC sp_addserver @actualname, local

After running the above, we need to restart the SQL Server service. Then, make sure that you get the same output from the queries below:
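The verification queries are missing from the extracted text; presumably they are the same pair compared earlier:

```sql
SELECT @@SERVERNAME;
SELECT SERVERPROPERTY('ServerName');
-- After the restart, both should return the same name
```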


Have you ever encountered the same issue and found some other solution? Please comment and let me know so that we can learn from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Notes about scheduler with Operating Systems

Last week I visited my hometown and also made a trip to my college to relive some of my learning days. I bumped into one of my professors who was going to a class. I requested to attend and wanted to contribute. The discussion then moved to a core topic in computer science called scheduling and how it is to be done. Though a generic topic, I was interested in contributing. Here are some of the notes to relive those moments. Scheduling is designed to be mostly invisible to an application developer or a normal system user.

It is taken for granted that the operating system transparently schedules the applications. This is also true for operating system kernel components such as the Memory Manager or synchronization primitives.

A sound knowledge of the scheduler, its mechanics, and its influence on an application’s execution can prove helpful while troubleshooting certain classes of problems, such as high CPU usage by applications and system or application responsiveness issues. Alternatively, it could be utilized during application design, keeping in mind how the application design and scheduler algorithms could work effectively in tandem.

For operating systems that support multi-processing and multi-tasking, the kernel acts as a controlling authority for all the tasks that execute. It makes sure the tasks use system resources promptly. The kernel component that controls processor usage forms the scheduler. It is crucial to monitor the tasks and control the way they utilize the system resources that are shared by all tasks. If this were not managed:

  • Certain tasks can over-use global resources such as system memory or processors while depriving other tasks.
  • Certain tasks could dominate the system itself, or may never get a chance to run at all.

The scheduler, being part of the kernel, manages processor usage to be as efficient as possible, with the utmost fairness it can provide to all the tasks on the system. Apart from the scheduler at the core of the operating system, some applications such as SQL Server implement their own wrapper of user-mode scheduling on top of the operating system’s scheduling policies.

Understanding the Evolution

Older operating systems (before Windows NT, and several non-Windows operating systems) segregated each task at the software level as a process with its own virtual address space boundary and context. The scheduler component then scheduled processes on the processor. Hardware started evolving with multi-processors (to provide true multi-tasking), and operating system developers started to see that processors were not effectively utilized even when there was enough work pending.

For example, when a process itself was an execution unit and there was a blocking request made by the process, the operating system either scheduled a different process that had some work (if there was one) or put the processor into an idle loop. In either case, it put the blocking process into a wait state. While this worked very well, processor utilization was not efficient.

A process was a bundle of execution logic that could be executed independently of other processes. These could be executed in parallel, and when it was required to synchronize with each other, waits could be volunteered by these tasks.

Thus was the origin of multi-threaded applications. The process now became more a container than an execution unit itself, and new constructs called threads were born.

The scheduler then started scheduling the threads instead of processes. The original developers of Windows NT realized this and designed Windows NT to have threads as a basic scheduling entity.

Note: The multi-tasking scheme where the tasks regularly give up the execution time to other tasks on the system is generally referred to as co-operative multi-tasking. This approach relies on all tasks to co-operate and contribute to reliability of the system. A task that violates this rule could make the whole system starve.

I elaborated on the session later with a little research to complete this blog. This has nothing to do with SQL Server, but the one hour back at college got me nostalgic. How many of you do this from time to time? Going back to college has its fun elements and gets us back to our roots of learning.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database source control with ApexSQL software

This article covers the main features of ApexSQL tools that are involved with SQL source control. For the purpose of the article, we will use ApexSQL Source Control, ApexSQL Diff and ApexSQL Build.

The latest version for any of the mentioned tools can be downloaded from ApexSQL download section, and you can play along while reading the article.

ApexSQL Source Control

ApexSQL Source Control is a SQL Server Management Studio add-in that allows version controlling SQL database objects. It natively supports all major source control systems such as Git, Mercurial, Perforce, Subversion and Team Foundation Server (including Visual Studio Online), requiring no additional client tools to be installed.

Source control development “models”

ApexSQL Source Control supports two database development models: Shared and Dedicated.

In the dedicated model, each developer works on a local instance of the database. It allows developers to work independently and test changes without affecting the rest of the team. However, it requires a separate SQL Server license for each developer. Developers have to be aware of conflicts that may occur when more than one developer works on the same object and wants to commit it to the repository. We will explain conflicts later in this article.

In the shared model, all developers work on the same database; all changes are applied directly against the database and committed to the repository afterwards. Conflicts cannot occur in the shared model. However, a developer can override another’s change, so it is essential that the entire team is aware of all changes and all objects currently in use by someone else.

Initial database linking

To link a database, right-click on it in the Object Explorer pane and from the ApexSQL Source Control menu click the Link database to source control option:


Choose the source control system and database development model:


In case the shared model is selected, an additional option allows you to choose a host database for the framework objects that ApexSQL Source Control is using.

Select objects to be included in version control. By default all supported objects are included (Learn more):


Specify credentials for the repository, along with the repository path and particular project inside the repository if needed (Learn more):


After the linking is done, all objects included in the linking process will appear in the Action center tab. This is the main window used by ApexSQL Source Control in communication with the repository.

It shows the comparison results for each object, comparing its current state in the database with the latest version on the repository.


After the initial commit is performed, the Action center tab will show that the local database and the repository are synced. Each change made after the initial commit will appear upon refreshing the Action center tab, no matter which user made a change.

Check out an object

To check out an object, right click on it from the Object Explorer, and choose the Check out option from the ApexSQL Source Control menu:


Checking out serves informational purposes only, meaning it can be overridden by another developer.

Lock an object

To prevent other developers from working on an object, it is necessary to use the Checkout and lock option. It is located in the ApexSQL Source Control menu:


By locking an object, the rest of the team will be prevented from making any changes, until an object is unlocked, or after the change made against an object is committed to the repository.

Each object status is followed by the appropriate icon in the Object Explorer pane, so the entire team is aware of objects that are checked out, locked, or edited:


Besides following the object status in the Object Explorer pane, the same information is available through the Object status form, where all objects are listed along with the status of each object and the user who performed an action:


Resolving conflicts in dedicated development model

Conflicts occur within the dedicated database development model when developer A modifies an object and commits the change to the repository while developer B was working on the same object version prior to the change. In this situation, when developer B tries to commit the change, a conflict occurs, since the version that developer B used was not the latest one from the repository. ApexSQL Source Control does not allow any action (push to the repository or pull to a database) until the conflict is resolved. For resolution, all changes from the database can be committed to the repository (keep local), the latest changes from the repository can be applied to the database (take remote), or each conflict can be resolved manually using either the built-in line-by-line merging tool or one of the 3rd-party merging tools (Learn more):


Using Labels/Tags

A label or tag represents the current state of the repository (a snapshot or checkpoint). It is often used when a stage of database development is finished, a milestone is achieved, or testing is complete. A label is created so the database can easily be reverted to that state if needed.

To create a label using ApexSQL Source Control, right click the database in the Object Explorer and from the ApexSQL Source Control menu click the Create label option. For reviewing and applying specific labels, use the Show labels option from the same menu:


Applying a label with ApexSQL Source Control will initiate the dependency aware mechanism that will take the created label and create a script that will be applied against the database. The entire process is wrapped as a transaction which means that either all is applied or nothing. Before applying a label, a developer can explore the script, modify it or save it for later usage:



Through the history form, a developer will be able to explore each commit (change set) and to compare each version of an object committed to the repository with the current version from the database or any other version as well. Even more, the selected version can be retrieved from the repository and applied to the database with a single click (Learn more):


Compare/sync the repository with database

Using ApexSQL Diff, a SQL Server database schema comparison and synchronization tool, a database can be compared with the repository.

ApexSQL Diff compares the specified sources: either the latest version from the repository against the database, or the database against a specific label.


Source control, as the destination, will by default be the latest version in the repository, unless a label is specified:


Build a database from the repository

ApexSQL Build can be used to take the latest changes from the repository (or a specific label) and create a deployment script, and/or immediately create or update the target database.

To build a database directly from the source control repository, select the Build new database option:


Specify the mechanism of deployment:


Specify the repository that holds the database to be deployed:


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – ERROR: Autogrow of file ‘MyDB_log’ in database ‘MyDB’ was cancelled by user or timed out after 30121 milliseconds

One of the most difficult and haunting issues for any DBA is doing a root cause analysis (RCA) of a performance issue with SQL Server. That is why monitoring of SQL Server is very important, and companies spend a lot of time buying tools for it. Sometimes, when there is no monitoring, we have to look at all the logs available. I have been a big believer in checking the SQL Server ERRORLOG for any unexpected behavior; sometimes we do get a good piece of information from there. If you don’t know the location of the ERRORLOG file, you should read the blog below.

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Below is one of the error messages that I have seen in one of my clients’ ERRORLOG files.

Autogrow of file ‘<File Name>’ in database ‘<Database Name>’ was cancelled by user or timed out after %d milliseconds.  Use ALTER DATABASE to set a smaller FILEGROWTH value for this file or to explicitly set a new file size.

The error indicates that the transaction/session could not wait until the file growth completed, so the session was cancelled; as a result, the file growth was also cancelled. The root cause is that the file growth could not complete in time. In the message above, we need to note a few things: a) which file, b) which database, and c) how much time passed before the timeout. When a database file (MDF or LDF) is full and set to grow automatically, the growth kicks in and the transaction is halted until the growth completes. The time taken to grow depends on two factors: 1) the size of the growth and 2) disk speed.

Let’s assume the database transaction log is 50 GB and the file growth is 10%; it will grow by 5 GB. The time needed for this size increase can exceed the time the session is able to wait. Most applications have a timeout of 30 seconds, which may not be enough for such an auto-grow to complete. If the disk responds much faster, the issue may not appear at all.

A seasoned DBA performs transaction log backups regularly and avoids such a large transaction log. A large transaction log can also cause a long database startup time after a restart if a problem occurred previously. To avoid the issue in our case, we can set the file growth to a smaller value such as 1 GB or 500 MB. Microsoft has always recommended manual growth so that the size stays in a predictable range; autogrow is like a band-aid. As per SQL industry best practices, we should avoid file auto-growth and initialize the file to a reasonable size according to the application requirement. Then we can leave the file growth at 1 GB or a larger size according to the disk response time. This applies to both data files (MDF or NDF) and transaction log files (LDF).
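For example, setting a fixed 1 GB growth increment on the log file from the error above (the file and database names come from the error title):

```sql
-- Fixed-size growth avoids ever-larger 10% increments
ALTER DATABASE MyDB
MODIFY FILE (NAME = 'MyDB_log', FILEGROWTH = 1GB);
```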

Since SQL Server 2005, SQL Server has had a feature called “instant file initialization” which can accelerate file growth. It’s important to keep in mind that only data files can be initialized instantaneously by this feature. Instant file initialization is activated when the SQL Server (MSSQLSERVER) service account has been granted SE_MANAGE_VOLUME (Perform Volume Maintenance Tasks). https://technet.microsoft.com/en-us/library/ms175935.aspx

To summarize:

  • Grow the file in smaller increments. Prefer manual growth or a fixed-size auto-grow.
  • Consider providing “Perform Volume Maintenance Tasks” permission to SQL Service account. This would allow “instant file initialization” for faster growth of data files.
  • Perform transaction log backups regularly to avoid a huge transaction log in the full recovery model.
  • Make sure disk performance is good.

Have you ever seen such errors on your production environment?

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #036 – What is the difference between @@DATEFIRST and SET DATEFIRST?

Question: What is the difference between @@DATEFIRST and SET DATEFIRST with respect to SQL Server?

Answer: With US English as the default language, SQL Server sets DATEFIRST to 7 (Sunday) by default. We can set any day as the first day of the week using SET DATEFIRST.
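The statement shown in the original post is missing here; presumably it was:

```sql
SET DATEFIRST 5;
```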


This will set Friday as the first day of the week.
@@DATEFIRST returns the current value, for the session, of SET DATEFIRST.

-- Assumption: the first language set here was lost in extraction;
-- British English (default DATEFIRST 1) matches the stated result
SET LANGUAGE british
SELECT @@DATEFIRST ----This will return result as 1(Monday)
SET LANGUAGE us_english
SELECT @@DATEFIRST ----This will return result as 7(Sunday)

In this way, @@DATEFIRST and SET DATEFIRST are related. When I learned about this feature, I was very glad, as our company had started to serve global clients, and a simple feature like this helps a lot to avoid confusion.

Reference: Pinal Dave (http://blog.sqlauthority.com)