Interview Question of the Week #032 – Best Practices Around FileGroups

Here is another question I heard during a recent interview at a multinational company. I have observed thousands of interviews, but this is the first time I have heard this topic being discussed, and I was very glad to hear it. I have listed the candidate's answer right below it as well.

Question: What are the best practices for filegroups in SQL Server?

Answer:

It is suggested to place transaction logs on separate physical hard drives. In this manner, data can be recovered up to the second in the event of a media failure.

Separating tables and their non-clustered indexes into different filegroups can improve performance, because modifications can be written to the table and the index at the same time.

If a table and its corresponding indexes are placed in different filegroups, the two filegroups must be backed up as a single unit; they cannot be backed up separately.

Set a reasonable initial size for your database and transaction log (roughly 25% of the database size for the log).

Leave the Autogrow feature ON for both the data files and the log files, with a reasonable autogrow increment.

Place the log files on a different physical disk array than the data files to improve I/O performance.
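To put several of these recommendations together, here is a minimal sketch of creating a database with a separate filegroup for non-clustered indexes and the log file on a different drive. The database name, drive letters, paths, and sizes are assumptions for illustration; adjust them to your environment.

CREATE DATABASE SalesDB
ON PRIMARY
(NAME = N'SalesDB_Data', FILENAME = N'D:\Data\SalesDB.mdf', SIZE = 100MB, FILEGROWTH = 64MB),
FILEGROUP [Indexes]
(NAME = N'SalesDB_Idx', FILENAME = N'E:\Index\SalesDB_Idx.ndf', SIZE = 50MB, FILEGROWTH = 64MB)
LOG ON
(NAME = N'SalesDB_Log', FILENAME = N'F:\Log\SalesDB_Log.ldf', SIZE = 25MB, FILEGROWTH = 32MB);
GO
-- Non-clustered indexes can then be placed on the separate filegroup, for example:
-- CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) ON [Indexes];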

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Identify Locked Table in SQL Server?

Here is a quick script which will help users identify locked tables in SQL Server.

SELECT OBJECT_NAME(p.object_id) AS TableName,
       l.resource_type,
       l.resource_description
FROM sys.dm_tran_locks l
INNER JOIN sys.partitions p ON l.resource_associated_entity_id = p.hobt_id

When you run the above script, it will display the table name and the locks held on it.

I wrote code to lock the Person table in the database. After locking the table, when I ran the above script it returned the Person table along with the locks held on it.
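For reference, here is a minimal sketch of the kind of code that produces such a lock. The table name is an assumption; the open transaction holds an exclusive table lock until it is committed or rolled back.

BEGIN TRANSACTION;
-- TABLOCKX requests an exclusive table-level lock; it is held until the transaction ends
SELECT * FROM dbo.Person WITH (TABLOCKX, HOLDLOCK);
-- Leave the transaction open in this session, then run the lock-detection script from another session
-- ROLLBACK TRANSACTION; -- run this to release the lock when done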


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Unable to Bring SQL Cluster Resource Online – Online Pending and then Failed

Here is the situation my client explained when asking for my help.

Hi Pinal,
We have a 2-node Windows cluster with 3 clustered SQL Server instances, running on Windows 2012 R2 on VMware. We have one instance that will start from services.msc but not from the Failover Cluster Manager when attempting to bring the service online. In reality the service does start, because during the ‘Online pending’ state I am able to connect and query the databases on that instance.

Do you know what could be the problem?

My first question to him was: is there any error in the event log? From the email, it sounds like the SQL Server service is able to start, but the cluster is not able to connect to SQL Server. Here are the errors in the event viewer:

  1. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = 08001; native error = ffffffff; message = [Microsoft][SQL Server Native Client 10.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
  2. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = HYT00; native error = 0; message = [Microsoft][SQL Server Native Client 10.0]Login timeout expired
  3. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = 08001; native error = ffffffff; message = [Microsoft][SQL Server Native Client 10.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.

As I suspected, there is a connectivity issue from the local machine to the SQL instance. The Cluster Service attempts to connect to the SQL service every few minutes (a setting on the SQL cluster resource) for the IsAlive check; if this fails, the SQL resource is restarted even if the instance was online. Here are the steps I generally follow.

  1. Start the SQL service via net start, NOT via Failover Cluster Manager.
net start MSSQL$SQL2014

You need to change the instance name/server name. For me, it is a named instance of SQL Server called SQL2014. If you have a default instance, then it would be called MSSQLServer.

  2. Once it has started successfully, make a connection to SQL Server using SQLCMD.
SQLCMD -S<ServerName\InstanceName>

To get the exact name, open the ERRORLOG and look for the “Server name is” keyword. If you don’t know the ERRORLOG location, then follow the post below.

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

  3. In most cases, you would get an error similar to the ones I have provided above. Here are the various things to try.
  4. Make sure the SQL Browser service is running.
  5. Create a TCP alias to the SQL instance by giving the IP and port.
  6. Make sure the firewall is allowing the connections.
  7. Create exceptions for sqlservr.exe and sqlbrowser.exe in the firewall.

Most of the troubleshooting steps are available in my earlier blog post:

SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: )

  8. Once the connectivity issue is resolved, stop the SQL service via
net stop MSSQL$SQL2014
  9. Now try to bring the SQL Server resource online in the cluster.

My friend informed me that they already had a TCP alias created, but the port number of SQL Server had changed and hence connections were failing. After creating the correct alias, SQL Server was able to come online in Failover Cluster Manager.
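As a side note, here is a quick way to check which TCP port the instance is currently listening on, so the alias can point at the right port. This is a sketch run from a working connection (for example via net start plus SQLCMD, as above); sys.dm_exec_connections is available in SQL Server 2008 and later.

SELECT DISTINCT local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;

The port is also printed in the ERRORLOG at startup, in a message along the lines of “Server is listening on [ 'any' <ipv4> 1433]”.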

Have you ever encountered such situation? Is there anything which you would like to share? Please comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Monitoring for Analysis Services – Notes from the Field #093

[Note from Pinal]: This is a new episode of the Notes from the Field series. When we build any application, we build it with zeal and enthusiasm. We believe it will do well for many years, but that is not always the case. After a while, performance starts to go down and everything is no longer in the best shape. This is the time when we need to monitor performance and take action based on our analysis.


In this episode of the Notes from the Field series, I asked SQL Expert Bill Anton a very crucial question – how do you monitor the performance of Analysis Services? Bill was very kind to answer the question and provided plenty of information about how a novice developer can work with SSAS.


When it comes to ad-hoc query performance in business intelligence solutions, very few technologies rival a well-designed Analysis Services Multidimensional cube or Tabular model. And when that cube/tabular model is performing well, as it usually does in the beginning, life is good!


Over time, however, things can change. The underlying business evolves (hopefully) and information workers start asking different questions, resulting in new query patterns. The business grows and now more users are asking more questions of larger data volumes. Performance starts to deteriorate. Queries are taking longer to complete and nightly processing starts to run past the end of the maintenance window into business hours. Users are complaining, management is unhappy, and life is no longer good.


How did we not see this coming?

If this describes your personal situation, don’t worry, you’re not alone. In fact the majority of clients who bring me in to troubleshoot and resolve their Analysis Services performance problems ask the same question (or some variation of it).

In my experience, 80% of the time the reason no one sees this type of issue coming is that there wasn't a performance monitoring solution in place. The other 20% who have a performance monitoring solution simply aren't using it or reviewing the information being collected.

I don’t know why so many Analysis Services environments are neglected (I’m not a Business or IT therapist) but I’m going to tell you what steps need to be taken if you want to avoid it.

The Secret is Simple

The first step to maintaining a well running Analysis Services instance is simple. Take measurements!


Taking regular measurements is the only way to know how things are performing over time. This seems ridiculously obvious, but so few companies actually do it. My hunch is that these folks just don't know what needs to be measured.

Analysis Services can be broken down into 2 fundamental types of workloads:

Processing Workloads

Processing is the term used to describe how data gets loaded into the Analysis Services database. This workload is usually done at night, outside of business hours, so as not to coincide with user activity.

From a performance perspective, the primary things to keep an eye on are:

Processing Duration
This is the amount of time it takes to load data into the Analysis Services database. As long as the processing duration fits within the scheduled maintenance window, there’s not much to worry about. However, if this duration is increasing over time and you think it will eventually run past that window, then you’ll need to review the rest of the information to figure out “why” and “what to do”.

How long does it take to process the Analysis Services database?
Is the processing duration increasing over time?

Resource Consumption
Keeping an eye on resource consumption (e.g. CPU, memory, disk, network) during processing is also a good idea. This kind of information can help shed some light on bottlenecks and provide guidance when coming up with a solution to processing related problems.

Is the processing workload requiring more and more memory? Are we maxing out CPU? How’s disk space?

There are many solutions to problems faced during processing, but without some insight into what’s happening on the system, it’s hard to know which solution is the optimal one.

For example, say we have a Multidimensional cube and notice that the processing duration for one of the measure groups is steadily increasing over time. We review the resource consumption and see that there’s plenty of CPU/Memory/IO to spare. In this case, we may consider partitioning this particular fact table and only processing the most recent partition or processing the partitions in parallel.

Pro Tip: Breaking up processing into stages will provide context to the information above and make it much easier to see which part(s) of the Analysis Services database are contributing to the increase in processing duration or memory consumption.

Query Workloads

This refers to any activity that generates queries against the SSAS database. It could be users running reports, loading dashboards, analyzing data via pivot tables, etc. Because users typically don’t run the same queries at the same time every day, this workload can be much more complicated to monitor.

The key to success is to start with the high level stuff and only go into the details if/when necessary.

Queries
The single most important thing to track for this type of workload is a log of the actual queries being executed against the Analysis Services database.

Not only will you be able to see which queries are slow, but you’ll also have the actual MDX/DAX executed. You won’t have to wait for the user to complain (telling you their report is taking too long to load) because you’ll already have the query and can start reviewing it immediately.

Some Analysis Services implementations actually have service level agreements (SLA) with criteria such as “no query should take more than 30 seconds to complete” and “the average query response time should be less than 5 seconds”. If you’re tracking every query against the Analysis Services database, not only will you know if the SLA has been violated, but you’ll know which query or queries it was that led up to the violation and can start troubleshooting immediately.

Users
Tracking the number of folks using your system (and when they are using it) will prove very helpful for knowing if/when to start considering options for scaling the system up and/or out.

This information can usually be extracted from whatever mechanism you use to track queries being executed against the Analysis Services database, but it is important enough to deserve its own section.

Resource Consumption
In the same vein as the above discussion around tracking resource consumption of the processing workload, you’ll also want to track the same measures (e.g. CPU, memory, disk, network) throughout the day. This information may provide some clues as to why a query is slow.

For example, say you’re reviewing the top 10 slowest queries from the previous week and find several of the queries are now running very fast. At this point you can switch over and start looking at the state of the system last week at the time the query was slow (from a resource consumption perspective) and perhaps find some clues, such as memory pressure or CPU bottleneck caused by a spike in activity.

Here are some examples of the types of questions you should be able to answer with the above information:

What are the top 10 slowest queries each week?

Who are your top users and what is the average number of queries they execute per day/week/month?

What is the average number of users per day/week/month?

What is the max number/average number of concurrent users per day/week/month?
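As a sketch of how these questions get answered, suppose query activity is being logged to a relational table. The table and column names below are hypothetical, standing in for whatever your collection mechanism captures (query text, user, start time, duration).

-- Top 10 slowest queries over the past week (hypothetical dbo.SSASQueryLog table)
SELECT TOP (10)
       QueryText,
       UserName,
       StartTime,
       DurationMs
FROM dbo.SSASQueryLog
WHERE StartTime >= DATEADD(DAY, -7, GETDATE())
ORDER BY DurationMs DESC;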

Pro Tip: Some folks incorrectly assume the OlapQueryLog tracks queries. However, this table only tracks parts of queries (storage engine requests). A single query executed against an Analysis Services database could potentially generate tens of records in this table. The implication is that it doesn't give you the whole story, and you won't always be able to determine which queries are slow.

Next Steps

Now that you know what types of information you should be collecting and reviewing regularly, the next step is to figure out how you're going to collect that information. The good news is that there are quite a few options available. The bad news is that you'll have to wait until next time to find out. Here is the SQLPASS session which discusses the same concept: Analysis Services: Show Me Where It Hurts.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Understanding the Basics of Write Ahead Logging (WAL) Protocol

My journey in IT started way back in my college days, more than a decade ago. It was a booming time, and it was exciting to see everyone eyeing the Computer Science stream. Almost every other person in those days wanted to ride this new trend. Riding the waves was one part of the excitement, while the other was to be unique in whatever we do. It is easy to get stereotyped, and it is hard to be different from the herd. I always believed in doing the basics right – this blog project that I started more than 8 years ago was to bring these basic learnings back. I am glad I have been doing this.

When I was in my hometown, I planned to visit my college, spend some quality time with the students, and relive some old moments. I always dream of giving time back to the institution that made me who I am now. At this moment, I request every reader to spend a couple of hours in a year with students from your college. I just wish I had the same exposure back in my days.

During my last visit, I introduced myself as a database expert (based on what people say). One of the students asked me how durability is achieved in modern databases. Did I know anything about it, and how is it done?

This question got me thinking, and I immediately said the first place to search would be my blog. I quickly checked, only to figure out that I hadn't written about this subject. To complete the conversation in real college style, I took the whiteboard and explained the simple WAL protocol. After a couple of hours of sessions on various topics, I came out exhausted, and then I thought of writing it down here.

The concept of Write Ahead Logging is common to database systems. This protocol ensures that no modification to a database page is flushed to disk until the transaction log records associated with that modification are written to disk first. Why do we do this? The simple answer is to maintain the ACID properties of a transaction.

What if the database modifications were flushed first and a power failure occurred before the transaction log was written? Well, if the entire transaction was committed and all changes to all pages were also written to disk, it wouldn't be a problem. But what if the page changes were flushed to disk due to a lazy write to free up buffers, and the page changes were part of an active transaction? Transactional consistency would be compromised: the database page on disk would contain changes that are part of an uncommitted transaction (because the log records don't exist to roll back the change).

This is the reason we write to the log file first, and hence the term "write-ahead logging". Because the transaction is persisted in the log first, when a power outage happens, the data files and data pages can be appropriately rolled forward (in the case of committed transactions) or rolled back (in the case of failed/rolled-back transactions) after the abrupt shutdown.
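If you want to see the log records that get written ahead of the data pages, here is a minimal sketch using the undocumented sys.fn_dblog function (fine for learning on a test database, not something to rely on in production). The table and column names are hypothetical.

BEGIN TRANSACTION;
UPDATE dbo.MyTable SET Col1 = 1 WHERE ID = 1; -- hypothetical table and columns
-- The log records for this modification already exist in the log,
-- even though the dirty data page may not have been flushed to disk yet.
SELECT [Current LSN], Operation, [Transaction ID]
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_BEGIN_XACT', 'LOP_MODIFY_ROW');
ROLLBACK TRANSACTION;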

More information on this topic can be found in SQL Server Books Online under the topic "Write-Ahead Transaction Log".

Other sources on this topic include: INF: SQL Server 7.0 and SQL Server 2000 Logging and Data Storage

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error while starting SQL Server logFileSize <= BlkToByteOffset ((ULONG)-1)

Recently I was playing with the tempdb database in SQL Server and made a mistake. Due to that, I was not able to start the SQL service at all. Since it was an interesting problem, it is worth blogging about.

Let's have a look at the problem. Whenever there is SQL startup trouble, always start from the SQL ERRORLOG. If you are not sure where the ERRORLOG is located, refer to the post below:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location


2015-08-08 15:07:16.67 spid10s Clearing tempdb database.
…
2015-08-08 15:07:16.99 spid10s Error: 17066, Severity: 16, State: 1.
2015-08-08 15:07:16.99 spid10s SQL Server Assertion: File: <"logmgr.cpp">, line=14870 Failed Assertion = 'logFileSize <= BlkToByteOffset ((ULONG)-1)'. This error may be timing-related. If the error persists after rerunning the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.

2015-08-08 15:07:16.99 spid10s Error: 3624, Severity: 20, State: 1.
2015-08-08 15:07:16.99 spid10s A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a Hotfix from Technical Support.

2015-08-08 15:07:17.00 spid10s Error: 5173, Severity: 16, State: 1.
2015-08-08 15:07:17.00 spid10s One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup.

2015-08-08 15:07:17.00 spid10s Error: 1802, Severity: 16, State: 4.
2015-08-08 15:07:17.00 spid10s CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
2015-08-08 15:07:17.00 spid10s Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

Above is a snippet of the ERRORLOG, where I have removed many lines to keep the blog clear. Here is something interesting: if we look at the file size of the tempdb transaction log file, it's 0 KB (yes, zero KB). I thought that since it's tempdb, the files would be recreated anyway, so I deleted them, but still no joy. I had another SQL instance with the same build of SQL Server, so I picked the files from there and copied them to the desired location, but SQL startup automatically replaced those files.


Solution:

A) Start SQL Server with the startup parameter /f, which stands for minimal configuration. You need to change the instance name/server name; for me, it is a named instance of SQL Server called SQL2014.

net start MSSQL$SQL2014 /f

If you have a default instance, then it would be called MSSQLServer.

B) Connect to SQL Server via SQLCMD. You need to change the instance name/server name.

SQLCMD -S.\SQL2014

C) After the connection is made, check the file sizes of the tempdb database. Note that the size column is reported in 8 KB pages.

SELECT size FROM sys.master_files WHERE database_id = 2

In my case I got 0 for the LDF file, and that was the cause of the whole problem.

D) If you get zero, then alter the database and change the size again

ALTER DATABASE [tempdb] MODIFY FILE(NAME=N'templog',SIZE= 10240KB);

E) Check the size again by running the same command as in step C, then exit from SQLCMD.

F) Stop SQL Service

net stop MSSQL$SQL2014

G) Start SQL normally; if you were hitting the same issue, it should now be fixed. Use the same command as in step A, but without the /f parameter.

net start MSSQL$SQL2014


What caused the problem?

As I said in the beginning, I was playing with TempDB and by mistake I ran below command

ALTER DATABASE [tempdb] MODIFY FILE(NAME=N'templog',SIZE= 0);

That zero in size has caused the whole problem.

Note: Please do not run the above command on a production server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error: Msg 4305, Level 16, State 1 – The log in this backup set terminates at LSN, which is too early to apply to the database.

While working with transaction log backup restores or log shipping, you must have seen the below error message.

Msg 4305, Level 16, State 1, Line 47
The log in this backup set begins at LSN 33000000048000001, which is too recent to apply to the database. An earlier log backup that includes LSN 33000000044800001 can be restored.
Msg 3013, Level 16, State 1, Line 47
RESTORE LOG is terminating abnormally.

Let's have a look at the cause of this message. Before that, let's get our sample database ready with backups so that we can play with them.

--Create an empty database
CREATE DATABASE SQLAuthority
GO
-- use the database
USE SQLAuthority
GO
--create first test table.
CREATE TABLE MyTestTable1 (iCol INT)
-- perform full backup F1
BACKUP DATABASE SQLAuthority TO DISK = 'c:\SQLAuthority_F1.bak' WITH FORMAT
--create second test table.
CREATE TABLE MyTestTable2 (a INT)
-- perform transaction log backup T1
BACKUP LOG SQLAuthority TO DISK = 'c:\SQLAuthority_T1.trn' WITH FORMAT
--create third test table.
CREATE TABLE MyTestTable3 (a INT)
-- perform transaction log backup T2
BACKUP LOG SQLAuthority TO DISK =  'c:\SQLAuthority_T2.trn' WITH FORMAT
--create forth test table.
CREATE TABLE MyTestTable4 (a INT)
-- perform full backup F2
BACKUP DATABASE SQLAuthority TO DISK = 'c:\SQLAuthority_F2.bak' WITH FORMAT

Now, we have backups taken in the below order:

  1. Full backup ‘c:\SQLAuthority_F1.bak’
  2. First transaction log backup ‘c:\SQLAuthority_T1.trn’
  3. Second transaction log backup ‘c:\SQLAuthority_T2.trn’
  4. Full backup ‘c:\SQLAuthority_F2.bak’
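If you ever need to confirm what a given backup file contains, RESTORE HEADERONLY reads the backup header without restoring anything. A quick sketch against one of the files created above:

RESTORE HEADERONLY FROM DISK = 'c:\SQLAuthority_T1.trn'
GO

The FirstLSN and LastLSN columns in the output are what must line up across consecutive log backups in a restore sequence.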

Reproduce the Error

-- Let's perform the restore now, but before that, let's drop the database
USE MASTER
GO
ALTER DATABASE SQLAuthority SET single_user WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE SQLAuthority
GO
RESTORE DATABASE SQLAuthority FROM DISK = 'c:\SQLAuthority_F1.bak' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T2.trn' WITH NORECOVERY
GO

Here is the error.

Processed 280 pages for database ‘SQLAuthority’, file ‘SQLAuthority’ on file 1.
Processed 6 pages for database ‘SQLAuthority’, file ‘SQLAuthority_log’ on file 1.
RESTORE DATABASE successfully processed 286 pages in 0.148 seconds (15.097 MB/sec).
Msg 4305, Level 16, State 1, Line 43
The log in this backup set begins at LSN 33000000026400001, which is too recent to apply to the database. An earlier log backup that includes LSN 33000000023200001 can be restored.
Msg 3013, Level 16, State 1, Line 43
RESTORE LOG is terminating abnormally.

The error appeared because we skipped T1. As the message says, “An earlier log backup that includes … can be restored”. We can get the list of backups taken from the msdb database on the source server.

DECLARE @db_name VARCHAR(100)
SELECT @db_name = 'SQLAuthority'
--Get Backup History for required database
SELECT TOP (30) s.database_name
,m.physical_device_name
,CAST(CAST(s.backup_size / 1000000 AS INT) AS VARCHAR(14)) + ' ' + 'MB' AS bkSize
,CAST(DATEDIFF(second, s.backup_start_date, s.backup_finish_date) AS VARCHAR(4)) + ' ' + 'Seconds' TimeTaken
,s.backup_start_date
,CAST(s.first_lsn AS VARCHAR(50)) AS first_lsn
,CAST(s.last_lsn AS VARCHAR(50)) AS last_lsn
,CASE s.[type]
WHEN 'D'
THEN 'Full'
WHEN 'I'
THEN 'Differential'
WHEN 'L'
THEN 'Transaction Log'
END AS BackupType
,s.server_name
,s.recovery_model
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
WHERE s.database_name = @db_name
ORDER BY backup_start_date DESC
,backup_finish_date

We can also get the same information from the ERRORLOG.

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here are the sample messages. I have trimmed them for clarity.

2015-08-08 17:49:06.340 Backup Database backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), pages dumped: 291, first LSN: 33:136:179, last LSN: 33:232:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_F1.bak’}).

2015-08-08 17:49:06.550 Backup Log was backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), first LSN: 33:136:179, last LSN: 33:264:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_T1.trn’}).

2015-08-08 17:49:06.680 Backup Log was backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), first LSN: 33:264:1, last LSN: 33:280:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_T2.trn’}).

2015-08-08 17:49:07.110 Backup Database backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), pages dumped: 291, first LSN: 33:312:143, last LSN: 33:392:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_F2.bak’}).
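Once we see from the history that T1 (first LSN 33:136:179, last LSN 33:264:1) fills the gap, the fix is simply to restore the log backups in sequence, T1 before T2:

RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T1.trn' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T2.trn' WITH NORECOVERY
GO
-- bring the database online once all log backups are applied
RESTORE DATABASE SQLAuthority WITH RECOVERY
GO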

So, once we find the missing transaction log backup, we need to restore the backups in sequence, as shown above. Do you have any other trick to solve this issue? If you can share, it would help others as well.
Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #031 – How to do Case Sensitive SQL Query Search

Question – How to do case sensitive search on any column of the table?

Answer

If Column1 of Table1 has the following values ‘CaseSearch, casesearch, CASESEARCH, CaSeSeArCh’, the following statement will return all four records.

SELECT Column1
FROM Table1
WHERE Column1 = 'casesearch'

To make the query case sensitive and retrieve only one record (“casesearch”) from the above query, the collation of the query needs to be changed as follows.

SELECT
Column1
FROM Table1
WHERE Column1 COLLATE Latin1_General_CS_AS = 'casesearch'

Adding COLLATE Latin1_General_CS_AS makes the search case sensitive.

The default collation of a typical SQL Server installation, SQL_Latin1_General_CP1_CI_AS, is not case sensitive (CI stands for Case Insensitive).
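Here is a self-contained sketch to try this out, using the sample values above. The table is created ad hoc for illustration, and SERVERPROPERTY('Collation') shows what your instance defaults to.

CREATE TABLE Table1 (Column1 VARCHAR(20));
INSERT INTO Table1 (Column1)
VALUES ('CaseSearch'), ('casesearch'), ('CASESEARCH'), ('CaSeSeArCh');
GO
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
-- Returns all four rows under a case-insensitive collation
SELECT Column1 FROM Table1 WHERE Column1 = 'casesearch';
-- Returns only the exact-case match
SELECT Column1 FROM Table1 WHERE Column1 COLLATE Latin1_General_CS_AS = 'casesearch';
GO
DROP TABLE Table1;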

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Adding File to Database in AlwaysOn Availability Group

Writing typical interview questions has always been on my list. I look for new ways of challenging the candidate, and most importantly, some of the interview questions are designed to make sure the person has strong fundamentals. This blog is one such attempt to understand the basics.

This is one of the common interview questions – what would happen if you add a new file (data file MDF or transaction log file LDF) to a database which is part of a high availability solution? As you might know, there are various high availability solutions like failover clustering, database mirroring, and log shipping, and the effect would be different in each solution. Out of the many high availability solutions, in this blog we will talk about the availability group feature, which was introduced in SQL Server 2012.

Let us talk about the basics of availability groups first. Any modification operation done on the database is written to the transaction log file as a log record. Since the database is part of an availability group, the same operation is sent to all secondary replicas via these log records. This means that adding a data or log file to the primary database essentially roll-forwards the same command on all secondary replicas; the ADD FILE operation will be executed on the secondary servers. Using the below code, I added a file on drive E on the primary instance.

USE [master]
GO
ALTER DATABASE [MyConDB]
ADD FILE (NAME = N'SQLAuthority_2',
FILENAME = N'E:\SQLAuthority.ndf' ,
SIZE = 4096KB, FILEGROWTH = 1024KB) TO FILEGROUP [PRIMARY]
GO

If this drive does not exist on the secondary, then this operation will fail, resulting in a suspended secondary replica. Here is the information seen in the ERRORLOG (SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location):

2015-01-01 17:52:14.19 spid45s     Error: 5123, Severity: 16, State: 1.
2015-01-01 17:52:14.19 spid45s     CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file ‘E:\SQLAuthority.ndf’.

2015-01-01 17:52:14.27 spid45s     Error: 5183, Severity: 16, State: 1.
2015-01-01 17:52:14.27 spid45s     Cannot create the file “SQLAuthority_2”. Use WITH MOVE to specify a usable physical file name. Use WITH REPLACE to overwrite an existing file.

2015-01-01 17:52:14.30 spid45s     AlwaysOn Availability Groups data movement for database ‘SQLAuthority’ has been suspended for the following reason: “system” (Source ID 2; Source string: ‘SUSPEND_FROM_REDO’). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online.

2015-01-01 17:52:14.33 spid45s     Error: 3313, Severity: 21, State: 2.
2015-01-01 17:52:14.33 spid45s     During redoing of a logged operation in database ‘SQLAuthority’, an error occurred at log record ID (34:358:1). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.

If there is a possibility to recreate the same path on the secondary replica, then we can resume the data movement using the below command.

ALTER DATABASE [SQLAuthority] SET HADR RESUME;
GO

If it is not possible to recreate the path on the secondary, then we need to reinitialize this secondary again, which means we need to restore a full backup and a log backup from the primary.
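Here is a rough sketch of that reinitialization, run on the secondary replica; the backup file names and the availability group name (MyAG) are assumptions for illustration.

-- On the secondary replica:
RESTORE DATABASE [SQLAuthority] FROM DISK = 'c:\SQLAuthority_Full.bak' WITH NORECOVERY;
GO
RESTORE LOG [SQLAuthority] FROM DISK = 'c:\SQLAuthority_Log.trn' WITH NORECOVERY;
GO
-- Rejoin the database to the availability group
ALTER DATABASE [SQLAuthority] SET HADR AVAILABILITY GROUP = [MyAG];
GO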

Have you encountered the same problem in log shipping or clustering? What steps did you take to solve it? Do let me know via the comments below; I am sure we can learn from each other this way.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to do IF…THEN in SQL SERVER?

Just the other day, I received the following question –

“Pinal,

I am from DotNet expertise and I do not know how to convert the logic of IF…THEN into SQL statement, would you please help?

~Rajesh”

I love simple questions as they have simple answers. In SQL Server, we can use a CASE expression to implement IF…THEN logic inside a query. Here is an example of the CASE expression.

Example 1:

If your logic is as follows:

IF -1 < 1 THEN 'TRUE'
 ELSE 'FALSE'

You can just use a CASE expression as follows:

-- SQL Server 2008 and earlier version solution
SELECT CASE
WHEN -1 < 1 THEN 'TRUE'
ELSE 'FALSE' END AS Result
GO

-- SQL Server 2012 solution
SELECT IIF ( -1 < 1, 'TRUE', 'FALSE' ) AS Result;
GO

If you are interested in further details about how IIF works in SQL Server 2012, read the blog post.

Well, in our example the condition we have used is pretty simple, but in the real world the logic can be very complex.
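For instance, here is a sketch of a multi-branch CASE expression; the table and column names are hypothetical, purely for illustration.

SELECT OrderID,
       CASE
           WHEN Quantity >= 100 THEN 'Bulk'
           WHEN Quantity >= 10 THEN 'Standard'
           ELSE 'Small'
       END AS OrderSize
FROM dbo.Orders; -- hypothetical table
-- Branches are evaluated top to bottom; the first matching WHEN wins.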

You can read more about this subject in the blog post over here: SQL SERVER – Implementing IF … THEN in SQL SERVER with CASE Statements.

Reference: Pinal Dave (http://blog.sqlauthority.com)