SQL SERVER – DBCC CHECKDB errors – Msg 2520 during executions

Some of the simplest recommendations can lead to bizarre output that we never anticipated. I always recommend running DBCC CHECKDB on mission critical databases because we need to be absolutely sure, from time to time, that the database disk system is healthy. During one of my sessions, I gave this as a generic recommendation and, guess what, I got a strange message from one of the attendees. They claimed that they were getting errors with Msg 2520. On further probing and requesting the actual error, I understood the problem. They were running CHECKDB on the ResourceDB. I asked, “Why are you doing this on ResourceDB?” They answered, “Pinal, isn’t ResourceDB a database? Wouldn’t it get corrupt too? What is wrong in doing a CHECKDB on it? And if I can’t, how else can I check it?”

This was a fully loaded question and I had to answer it elaborately. So I took this blog post to explain some of the basics.

The resource database is a system database introduced in SQL Server 2005. Since this database has some special attributes and differs from regular databases in some respects, these differences indirectly affect the ability to perform some operations on it. One such area is the ability to perform DBCC checks and repairs on the resource database. The following are the key points to keep in mind when you plan to run any kind of check against the resource database:

When the server is running in multi-user mode, running the command
DBCC CHECKDB ( mssqlsystemresource )

will give the following output:

Msg 2520, Level 16, State 11, Line 1
Could not find database ‘mssqlsystemresource’. The database either does not exist, or was dropped before a statement tried to use it. Verify if the database exists by querying the sys.databases catalog view.

This is applicable for commands like DBCC CHECKALLOC and DBCC CHECKCATALOG.
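
Even though it is a real database, mssqlsystemresource is hidden from the sys.databases catalog view, which is exactly why Msg 2520 is raised. If you want to confirm that the resource database is present and see which build it belongs to, a minimal sketch using the documented SERVERPROPERTY values looks like this:

-- Returns no rows: the resource database is not exposed in sys.databases
SELECT name FROM sys.databases WHERE name = 'mssqlsystemresource'

-- But its version and last update time are exposed at the server level
SELECT SERVERPROPERTY('ResourceVersion') AS ResourceVersion,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdateDateTime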

When the server is running in multi-user mode, running DBCC CHECKDB on the master database will run DBCC CHECKDB on resource database as well. In the DBCC CHECKDB output result set, we will notice the following sections:

DBCC CHECKDB ( MASTER )

CHECKDB found 0 allocation errors and 0 consistency errors in database ‘master’.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
CHECKDB found 0 allocation errors and 0 consistency errors in database ‘mssqlsystemresource’.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Errorlog will show the following entries:

2015-08-25 14:44:22.650                spid63   DBCC CHECKDB (master) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.  Internal database snapshot has split point LSN = 000000e4:0000017c:0001 and first LSN = 000000e4:00000179:0001.
2015-08-25 14:44:22.840                spid63   DBCC CHECKDB (mssqlsystemresource) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.

When the above checks are run, the internal transient replica (database snapshot) is created only for the master database checks. For the resource database, no replica is created for the checks.

When the server is started in single-user mode to perform repairs on master, a special mechanism is used. Only for the master database is the check command run with the repair option. When it comes to checking the resource database, the normal check is done with repair disabled. You can see the effect of that in the errorlog entries:

2015-08-25 15:44:22.650 spid61      DBCC CHECKDB (master, repair_allow_data_loss) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
2015-08-25 15:44:22.650 spid61      DBCC CHECKDB (mssqlsystemresource) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.

If there is a corruption problem in the resource database and it needs to be repaired for SQL Server to function properly, then you will need to replace the resource database files. Normally, users do not have the ability to write to this database, so unless there is a hardware problem, there is little chance that this database will ever get corrupted.

Though this blog started with an unusual error message, I thought it was important to talk about some of this fine print when working with the ResourceDB. I hope you have never encountered any error with the ResourceDB; if you have, do let me know via comments what you did in that situation. I would love to learn from you.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Rewriting Database History – Notes from the Field #094

[Note from Pinal]: This is the 94th episode of the Notes from the Field series. When I read the title of this article, I was extremely intrigued by it – Rewriting Database History! When I was a kid, history was my favorite subject, and even today, whenever I have to deal with history, I jump in first to read and enjoy it. When I saw this article from Kevin, I was delighted. I started to read it immediately, and I did not stop until it was finished. It was 20 minutes well spent. The journey starts from the ORM and ends at the ORM; however, by the time it comes full circle there are so many things in between. Trust me, Kevin could be a great historical novel writer.

In this episode of the Notes from the Field series database expert Kevin Hazzard explains the interesting subject of rewriting database history. Read the experience of Kevin in his own words.



Way back in June of 2006, my friend Ted Neward wrote a very detailed piece called The Vietnam of Computer Science. Juxtaposing America’s involvement in Southeast Asia in the last century with the battles we fight in modern, business-driven enterprises, that article examined the unwinnable war that Object-Relational Mapping (ORM) tools have become. The title of the article is probably a bit politically incorrect these days given that Vietnam’s world standing and economy have improved vastly over the last decade. Moreover, lots of database and application developers working today don’t even remember the war in Vietnam so the allegory may fail for them. But for those of us who grew up in the 1960s and 1970s, the writing evokes plenty of painful memories that paint ORMs as technological quagmires.

ORM tools have evolved quite a bit since 2006 but they’re still generally pretty awful. The metadata-driven code generators buried within tools like Hibernate and Microsoft’s Entity Framework do a much better job of writing decently-performing queries than they did in the early days. But the cruft, wacky idioms and generally nonsensical constraints one must suffer to use these tools are overwhelming for all but a handful of gladiators who gain some odd sense of purpose in buttressing their companies for long-running conflicts. Nowhere is this more apparent than in the Ruby on Rails space, where some sort of addiction to total database ignorance seems to have infected the vast majority of that community. If you want to understand why Ruby on Rails has been in decline in recent years, look no further than the chatty, miserably-performing queries that the Active Record ORM generates when accessing complex, relational data structures. Any DBA or database developer who has been enlisted to debug performance problems in those sorts of melees knows there’s a better way to win hearts and minds.

For most developers who have shunned traditional ORMs, I often hear them speak about how unwieldy, brittle or inflexible they found the tools to be. The relational abstractions that they provide are sometimes too weak, making common business operations difficult to express in languages like C# and Java. To solve that problem, the abstraction may become leaky, exposing details that create dependencies which complicate deployments or lock you into vendor- or version-specific features.

For database people, ORM aversion is typically related to the naiveté of machine-generated queries. Chattiness is a big problem with full-service ORMs, for example. Mature ORMs do a better job today emitting correlated sub-queries or common table expressions than they did in the past. But that often depends on good mappings which depend in turn on good metadata. Unfortunately, perfect database metadata is quite rare. Anyone who watches the tracing of a database server suffering under the garrulous assault of an ORM-driven application understands just how little the ORM really knows about the database.

If the protestors can convince management that the ORM war is unwinnable, the response can move in several potential directions. One might rationalize that the relational representation of data is the real foe, i.e. that because data is rarely used in third normal form within applications, it shouldn’t be stored that way on disks. The pundits of the NoSQL movement say that relational storage models are a holdover from an age when storage and bandwidth were much more expensive than they are today. That is a massive oversimplification for sure, but there are some cases where a document-oriented database makes a lot of sense.

For example, I worked in a shop long ago that used a UniVerse database to manage large caches of medical claims data. As a multi-valued database much like today’s NoSQL databases, UniVerse stores data with all the data that it references in a single document. So a patient’s demographic information can be stored in line with the related medical claims, diagnostic history and lab results. For that system, there was only one index on the database which was used to find individual, semi-structured blobs of patient records. The rules engine which sat atop the database always needed all of a single patient’s information whenever it ran a rule so storing the entire document describing a patient as a single database entity was both practical and highly efficient.

In those days, UniVerse was an IBM product that competed against their own DB2 relational database. For some reason, IBM was keen on getting our company to move from UniVerse to DB2 so we invited them in for a proof of concept to build a relational model of our medical database in DB2. When they were done, they loaded several terabytes of data into the new relational database and aimed the rules engine there. After weeks of tweaking, rewriting and hand-wringing, they never could make the relational database outperform UniVerse. Moreover, it wasn’t even a close race. The UniVerse database running on much older hardware could fetch whole patient documents in a few milliseconds. The best we could get DB2 to do using its normalized implementation was measured in hundreds of milliseconds. The difference between five milliseconds and half a second may not sound like a major problem, but when you’re handling millions of queries per day and trying to satisfy a three second, end-to-end Service Level Agreement with your Point of Sale providers, those delays add up very quickly.

Of course, document-oriented databases like Mongo and multi-valued databases like UniVerse are not always viable solutions to bridging the gulf between on-disk and in-memory data representations. For 20 years, I’ve made the claim that databases are an unfortunate consequence of history. Bear with me because I don’t mean to disparage database technologies at all. Quite the opposite: I love and respect modern database systems of many types. However, it’s very true that if you could travel back in time to 1943 and give the developers of the ENIAC computer at the University of Pennsylvania a large scale memory device to attach to their invention, databases as we know them would never have evolved. Instead, all of the rich query tools and consistency features of modern transactional databases would have been developed inside of our applications instead.

In that sense, document-oriented databases and ORMs are both attempts to undo history. The former solution means to eliminate the mismatch between applications and databases altogether by making the data storage model match the common data utilization pattern. The problem is that by coercing the database to adapt to the application’s model, many of the features that evolved in transactional systems simply aren’t available. So-called eventual consistency scares developers who have come to depend on databases that offer transactions which are Atomic, Consistent, Isolated and Durable (ACID). This is most unfortunate because had the ENIAC developers had unlimited in-memory storage available to them, I believe that in-memory data handling semantics would have evolved to be ACID-compliant just as they are in modern relational databases. Document-oriented databases redo history in a way that doesn’t honor so many of the good choices that were made in real history.

Rather than attempting to undo history per se, ORMs try to hide history from us by making it seem that the underlying database’s storage and query semantics exist inside the application’s space. While this is a better way to undo history because it preserves the best parts of it, this is perhaps the hardest problem in all of computer science today. The reasons that it is so difficult are myriad. Incomplete metadata is often the culprit. The cardinality between entities, for example, is nearly impossible to get right simply by inspecting a database’s available foreign keys. Moreover, the relationships between entities in separate databases, or in separate but related data integration processes, often have no metadata to tie them together. The Master Data Management (MDM) structures and processes used to manage code assignments can also have a huge impact on an ORM’s query generation. Languages like C# and Java lack even the most basic Design by Contract (DbC) features, so it’s no wonder that their handling of database constraints is also quite weak. For example, imagine a simple database constraint that limits an integer value to multiples of ten. Now imagine an ORM trying to convey and enforce that constraint in a language like C#. You get the picture. This is a very difficult, effectively impossible, problem to solve.
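
To make that example concrete, here is a rough sketch of the kind of constraint Kevin describes, using a hypothetical Orders table. Declaring it in T-SQL is a one-liner, while surfacing the same rule through an ORM’s C# or Java model is anything but:

-- Hypothetical table and column, for illustration only
ALTER TABLE dbo.Orders
    ADD CONSTRAINT CK_Orders_Quantity_MultipleOfTen
    CHECK (Quantity % 10 = 0)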

So what’s the solution? Should ORMs be used at all? Should we abandon relational databases and move entirely over to document-oriented databases? Yes and no, not necessarily in that order. ORMs have taught us a lot over the past 20 years. The number one thing I’ve learned from them is to be pragmatic about my approach to data. I use stored procedures whenever there’s a way to reduce the chattiness and excessive round trip calls that ORMs often induce. That keeps my DBAs happy. For simple table accesses that are indexed well for my common query predicates, I’ll use an ORM just to save me some time.

Nowadays, I rely on so-called micro-ORMs and auto-mapper tools more than traditional ORMs. Micro-ORMs are an admission that full-service ORMs are often too complex or that they unnecessarily constrain my use of data. Using less ambitious micro-ORMs like Dapper gives me the benefits I desire without all the weight and ceremony of a full-featured ORM. Lastly, when document-oriented storage seems warranted, I evaluate the criteria objectively and openly admit when NoSQL is the correct choice. However, this is less common than avid NoSQL proponents would have you believe. Relational databases are still the best choices for storage and data handling in the vast majority of cases, mostly because of their maturity, rich feature sets, high reliability and great support.

Nine years later and my friend Ted is still mostly correct. ORMs are an unwinnable war. But they’ve taught us a lot and there are other options for solving what is perhaps the hardest problem out there today: undoing history and redoing it as we think it might have been done under different circumstances. No matter which way the market goes, the technical equivalent of sober diplomacy should triumph over conflict.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Login failed for user . Reason: Token-based server access validation failed with an infrastructure error

This is one of the most common errors searched on my blog search (http://search.sqlauthority.com), and lately I realized that I have not written any blog about its cause and how to fix it.

If you ever talk to a SQL expert about a login failed for user error, he/she might ask for the state of the message or the complete error message. All “login failed for user” messages have error number 18456, but the state of the message in the ERRORLOG tells the exact cause of the login failure. It is important to note that SQL Server does not reveal the exact state to the client; the client always sees state 1.

Login failed for user ‘sa’. (.Net SqlClient Data Provider)
——————————
Server Name: BIGPINAL
Error Number: 18456
Severity: 14
State: 1
Line Number: 65536

If you are new to SQL Server, then use the blog below to find the ERRORLOG location:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here is the message in the ERRORLOG file:

Error: 18456, Severity: 14, State: 58.
Login failed for user ‘sa’. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: <local machine>]

As we can see, the message in the ERRORLOG file has state 58 and the exact reason.
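
If you do not want to open the ERRORLOG file manually, you can pull the relevant entries directly with xp_readerrorlog (the third parameter is a search string); a quick sketch:

-- Search the current ERRORLOG (0) of the SQL Server log type (1) for login failures
EXEC sys.xp_readerrorlog 0, 1, N'Login failed'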

Coming back to the message in the title, here is the complete message:

Error: 18456, Severity: 14, State: 11.
Login failed for user ‘domain\user$’. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 10.10.10.10]

There are a few things to note about this message.

  1. It comes for Windows accounts only.
  2. If you are seeing $ in the user name, then it is a machine account / computer account. This mostly happens when a “service” running on the remote machine is connecting to SQL Server.
  3. If you are not seeing $ in the user name, then it is a user account.

Possible causes and solutions for State 11

  1. If you are seeing a login failure for a machine account, then you may want to give permission to the machine account. You need to run T-SQL like below (replace the domain and machine name):

CREATE LOGIN [<Domain>\<Machine account>$] FROM WINDOWS

  2. If it’s a Windows domain user account, then it needs to have CONNECT permission:

GRANT CONNECT SQL TO [DOMAIN\User]

  3. In a few cases, the account is part of a group (or many groups), and then you need to make sure there is no “DENY” permission inherited via group membership (see the sketch after this list for one way to check an account’s permission path):

-- Lists server principals that have been denied a permission at the server level
SELECT sp.[name],sp.type_desc
FROM sys.server_principals sp
INNER JOIN sys.server_permissions PERM ON sp.principal_id = PERM.grantee_principal_id
WHERE PERM.state_desc = 'DENY'

  4. If the message only comes with local connectivity and the same account works fine remotely, then it could be a UAC issue. This means that elevation is required to connect properly, but you do not see any of the normal prompts to elevate. To use elevation (while launching SSMS to connect to a locally running instance of SQL Server), right-click and select “Run as administrator”.
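
If you suspect a group membership problem (point 3 above), one way to see the permission path through which a Windows account gets access is xp_logininfo. A sketch, using a hypothetical domain account:

-- 'CONTOSO\AppUser' is a hypothetical account name; replace with the failing account
-- The output shows whether access comes directly or via a Windows group
EXEC xp_logininfo 'CONTOSO\AppUser', 'all'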

Depending on the situation, any of the four options above might work. If none of them worked for you, please comment and let me know. I would love to learn from you too.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Migrate Existing Database to Contained Databases

Most of the queries that land in my inbox are based on something I have already written or something people want to explore further. When it comes to learning and exploring skills, there is no better way than people asking me tough questions. This blog is an extension of a couple of blogs I have already published on a concept called Contained Databases.

SQL SERVER – ‘Denali’ – A Simple Example of Contained Databases

Beginning Contained Databases – Notes from the Field #037

The question asked was: how can we convert an existing database to a contained database? What is the process and how do I need to plan? I thought this blog could serve as a direction in that journey.

There are some initial considerations to determine if there are any existing containment breaches in the database. SQL Server makes this easy by providing a DMV and an XEvent to highlight the containment breaches for the database. The complete steps to convert a database to a contained database are as follows:

Identify database containment breaches

There are two tools to help identify the containment status of your database. The sys.dm_db_uncontained_entities (Transact-SQL) view shows all the potentially uncontained entities in your database. The database_uncontained_usage XEvent fires when an actual uncontained entity is identified at run time. Once the containment breaches have been identified and resolved, you are ready to go to the next step of the migration.

View : sys.dm_db_uncontained_entities

This view shows any entities in your database that have the potential to be uncontained. This includes those user entities that may use objects outside the application model. However, because the containment of some entities (for example, those using dynamic SQL) cannot be determined until run time, the view may show some entities that are not actually uncontained.

SELECT * FROM sys.dm_db_uncontained_entities
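
The view reports IDs rather than names, so it often helps to map the potential breaches back to the objects that contain them. A minimal sketch joining to sys.objects (class 1 covers objects and columns):

SELECT ue.feature_name,
       ue.feature_type_name,
       o.name AS object_name,
       o.type_desc
FROM sys.dm_db_uncontained_entities AS ue
LEFT JOIN sys.objects AS o
    ON ue.major_id = o.object_id
WHERE ue.class = 1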

XEvent : database_uncontained_usage

This Xevent fires whenever an uncontained entity is identified at run time, including entities originating in client code. Since this XEvent will fire for actual uncontained entities at run time, it will not identify any uncontained user entities that you have not run.

----------------------------------------------------------
-- Setup the Xevent to log into a ring buffer as follows.
--
-- Create the event session
CREATE EVENT SESSION track_uncontained_usage ON SERVER
ADD EVENT sqlserver.database_uncontained_usage
(
    ACTION(
        sqlserver.session_id,
        sqlserver.session_nt_username,
        sqlserver.client_app_name,
        sqlserver.client_pid,
        sqlserver.client_hostname,
        sqlserver.database_id,
        sqlserver.database_name,
        sqlserver.sql_text
    )
    WHERE
        sqlserver.database_name='MyDatabase'
)
ADD TARGET package0.ring_buffer
GO
	
-- Start the event just created
ALTER EVENT SESSION track_uncontained_usage ON SERVER STATE=START
GO
		
-- Dump events logged so far
DECLARE @X XML
SELECT @X=CAST(XET.TARGET_DATA AS XML) 
FROM SYS.DM_XE_SESSION_TARGETS XET
    JOIN SYS.DM_XE_SESSIONS XE
    ON (XE.ADDRESS = XET.EVENT_SESSION_ADDRESS)
WHERE XE.NAME = 'track_uncontained_usage'
-- Note: the xml data type methods (.value/.nodes) are case-sensitive and must be lowercase
SELECT EVENTS.value('(@timestamp)[1]','datetime') AS EVENT_TIME
    , D.value('(@name)[1]','varchar(100)') AS FIELD
    , D.value('(value)[1]', 'varchar(max)') AS VALUE
FROM (SELECT @X AS RINGBUF) B
CROSS APPLY B.RINGBUF.nodes('//RingBufferTarget/event') AS RB(EVENTS)
CROSS APPLY RB.EVENTS.nodes('data') AS EV(D)

Convert the database to a contained database

The steps here are super simple as follows:

  1. Using Management Studio
    1. In Object Explorer, expand Databases, right-click the database you need to convert, and then click Properties.
    2. On the Options page, change the Containment type option to Partial.
    3. Click OK.
  2. Using T-SQL, we take advantage of the new CONTAINMENT option of the ALTER DATABASE command:
USE [master]
GO
ALTER DATABASE [Accounting] SET CONTAINMENT = PARTIAL
GO
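
Two related steps are worth calling out here as a sketch. The instance must allow contained database authentication before contained users can connect, and any existing users mapped to SQL logins can be converted to contained users with sp_migrate_user_to_contained (the user name below is hypothetical):

-- Enable contained database authentication at the instance level (required once)
EXEC sys.sp_configure 'contained database authentication', 1
RECONFIGURE
GO

-- Inside the now partially contained database, convert a login-mapped user
USE [Accounting]
GO
EXEC sp_migrate_user_to_contained
    @username = N'AccountingUser',
    @rename = N'keep_name',
    @disablelogin = N'do_not_disable_login'
GO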

Determine if the containment option has taken effect

The sys.databases view has two columns, namely containment and containment_desc, which can be used to determine the containment state of the databases. The following T-SQL will select all databases with containment enabled:

SELECT * FROM sys.databases WHERE containment > 0

As I conclude this blog, I hope this gives you a rough process to work with as you move an existing database to a contained database in your environment.

On a side note, I would love to know if any of you are using contained databases in your environments. What are some of the use cases for using these? Let me know via your comments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – PowerShell – Knowing SQL Server Information

If you search this blog, there are a number of PowerShell scripts that I have shared in the recent past. Some of them include:

Using PowerShell and Native Client to run queries in SQL Server

Identify Disk Latency on SQL Server Box Using PowerShell

How to Find Logical and Physical Processors on SQL Server?

As I write a number of them, every time I explore and do something totally different. So in this blog, I want to show a neat trick using PowerShell. Here we will try to get the details of the SQL Server running on my localhost, and then we will try to create an HTML page which will hold and show the data.

$OutputFile = "c:\temp\test.html"
$SQLServer = "localhost"
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$Result = @()
$SQLServerInfo = new-object "Microsoft.SqlServer.Management.Smo.Server" $SQLServer
$Result += $SQLServerInfo | Select Name, Edition,ProductLevel, Version, ServerType, Platform, IsClustered, PhysicalMemory, Processors
if($Result -ne $null)
{
$HTML = '<style type="text/css">
#Header{font-family:"Trebuchet MS", Arial, Helvetica, sans-serif;width:100%;border-collapse:collapse;}
#Header td, #Header th {font-size:14px;border:1px solid #98bf21;padding:3px 7px 2px 7px;}
#Header th {font-size:14px;text-align:left;padding-top:5px;padding-bottom:4px;background-color:#A7C942;color:#fff;}
#Header tr.alt td {color:#000;background-color:#EAF2D3;}
</Style>'
$HTML += "<HTML><BODY><Table border=1 cellpadding=0 cellspacing=0 width=100% id=Header>
<TR>
<TH><B>Server Name</B></TH>
<TH><B>Edition</B></TH>
<TH><B>ProductLevel</B></TH>
<TH><B>Version</B></TH>
<TH><B>ServerType</B></TH>
<TH><B>Platform</B></TH>
<TH><B>IsClustered</B></TH>
<TH><B>PhysicalMemory (MB)</B></TH>
<TH><B>Logical Processors</B></TH>
</TR>"
$HTML += "<TR>
<TD>$($Result.Name)</TD>
<TD>$($Result.Edition)</TD>
<TD align=center>$($Result.ProductLevel)</TD>
<TD>$($Result.Version)</TD>
<TD>$($Result.ServerType)</TD>
<TD>$($Result.Platform)</TD>
<TD align=center>$($Result.IsClustered)</TD>
<TD align=center>$($Result.PhysicalMemory)</TD>
<TD align=center>$($Result.Processors)</TD>
</TR>"
$HTML += "</Table></BODY></HTML>"
$HTML | Out-File $OutputFile
}

The output of this will be a file in our c:\temp folder. If we open it in a browser, you can see the SQL Server information about version, edition, server type, memory and processors on the box. This is a neat way to explore and play with a scripting language like PowerShell to create something interesting.
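
If you prefer to pull roughly the same details with T-SQL instead of SMO, a sketch like the following works; memory and CPU come from sys.dm_os_sys_info (on builds older than SQL Server 2012 the memory column is physical_memory_in_bytes instead of physical_memory_kb):

SELECT SERVERPROPERTY('ServerName') AS Name,
       SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('ProductVersion') AS Version,
       SERVERPROPERTY('IsClustered') AS IsClustered,
       osi.physical_memory_kb / 1024 AS PhysicalMemoryMB,
       osi.cpu_count AS LogicalProcessors
FROM sys.dm_os_sys_info AS osi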

Have you ever created something really interesting like this in your environments? Would you be willing to share it via the comments so that others can also take advantage of this collective knowledge?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Interview Question of the Week #032 – Best Practices Around FileGroups

Here is another question I just heard during a recent interview at a multinational company. I have observed thousands of interviews, but this is the first time I have heard this being discussed, and I was very glad to hear it. I have listed the candidate’s answer right below it as well.

Question: What are the best practices for filegroups in SQL Server?

Answer:

It is suggested to place transaction logs on separate physical hard drives. In this manner, data can be recovered up to the second in the event of a media failure.

Separating tables and their non-clustered indexes into separate filegroups can improve performance, because modifications to the table can be written to both the table and the index at the same time.

If tables and their corresponding indexes are placed in different filegroups, the two filegroups must be backed up as a single unit; they cannot be backed up separately.

Set a reasonable size for your database and transaction log (the log at roughly 25% of the database size).

Leave the Autogrow feature ON for the data files and the log files, with a reasonable autogrow increment.

Place the log files on a different physical disk array than the data files to improve I/O performance.
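
As a rough sketch of how a couple of these recommendations look in T-SQL (the database name, filegroup name, file paths and table are all hypothetical), you can add a filegroup on a separate drive and place a non-clustered index on it:

-- Add a filegroup and a file on a separate physical drive
ALTER DATABASE Sales ADD FILEGROUP IndexFG
ALTER DATABASE Sales ADD FILE
    (NAME = 'Sales_Index1',
     FILENAME = 'E:\SQLData\Sales_Index1.ndf',
     SIZE = 512MB,
     FILEGROWTH = 256MB)
TO FILEGROUP IndexFG

-- Create a non-clustered index on the new filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON IndexFG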

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Identify Locked Table in SQL Server?

Here is a quick script which will help users identify locked tables in SQL Server.

-- Resolve hobt-level lock resources (keys, pages, RIDs) back to their table names
SELECT
OBJECT_NAME(p.OBJECT_ID) AS TableName,
l.resource_type, l.resource_description
FROM
sys.dm_tran_locks l
JOIN sys.partitions p ON l.resource_associated_entity_id = p.hobt_id

When you run the above script, it will display the table name and the lock held on it.

I have written code to lock the Person table in the database. After locking the table, when I ran the above script, it returned a resultset showing the locked table along with its lock details.

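If you want to reproduce this yourself, a minimal sketch for holding a lock from one session (assuming the AdventureWorks Person.Person table; any table will do) looks like this – leave the transaction open, run the script above from another session, and then roll back:

-- Session 1: take and hold an exclusive table lock (test systems only)
BEGIN TRANSACTION
SELECT TOP (1) * FROM Person.Person WITH (TABLOCKX, HOLDLOCK)
-- ... run the lock-detection script from another session ...
-- ROLLBACK TRANSACTION -- when done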

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Unable to Bring SQL Cluster Resource Online – Online Pending and then Failed

Here is the situation my client explained when asking for my help.

Hi Pinal,
We have a 2-node Windows cluster with 3 clustered SQL Server instances, running on Windows 2012 R2 on VMware. We have one instance that will start from services.msc but not from Failover Cluster Manager when attempting to bring the service online. In reality the service does start, because during ‘Online Pending’ I am able to connect to and query the databases on that instance, although it stays in the ‘Online Pending’ state.

Do you know what could be the problem?

My first question to him was: is there any error in the event log? From the email it sounds like the SQL Server service is able to start, but the cluster is not able to connect to SQL Server. Here are the errors in the event viewer:

  1. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = 08001; native error = ffffffff; message = [Microsoft][SQL Server Native Client 10.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
  2. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = HYT00; native error = 0; message = [Microsoft][SQL Server Native Client 10.0]Login timeout expired
  3. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = 08001; native error = ffffffff; message = [Microsoft][SQL Server Native Client 10.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.

As I suspected, there is a connectivity issue from the local machine to the SQL instance. The Cluster Service attempts to connect to the SQL service every few minutes (a setting on the SQL cluster resource) for the IsAlive check; if this fails, the SQL resource is restarted even if the instance was online. Here are the steps I generally follow.

  1. Start the SQL service via Net Start, NOT via Failover Cluster Manager:
net start MSSQL$SQL2014

You need to change the instance name/server name. For me it is a named instance of SQL Server called SQL2014. If you have a default instance, then it would be called MSSQLSERVER.

  2. Once it’s started successfully, we need to make a connection to SQL Server using SQLCMD:
SQLCMD -S<ServerName\InstanceName>

To get the exact name, you need to open the ERRORLOG and look for the “Server name is” keyword. If you don’t know the ERRORLOG location, then follow the blog below:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

  3. In most of the cases, you would get an error similar to the ones I have provided above. Here are the various things to try:
    1. Make sure the SQL Browser service is running.
    2. Create a TCP alias to the SQL instance by giving the IP and port (see the sketch below to find the port).
    3. Make sure the firewall is allowing the connections.
    4. Create exceptions for sqlservr.exe and sqlbrowser.exe in the firewall.

Most of the troubleshooting steps are available in my earlier blog

SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: )
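
Since the TCP alias must point at the right port, it helps to confirm which port the instance is actually listening on. A quick sketch using xp_readerrorlog (the third and fourth parameters are search strings that are ANDed together):

-- Look in the current ERRORLOG for the 'Server is listening on' entries
EXEC sys.xp_readerrorlog 0, 1, N'Server is listening on', N'ipv4'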

  4. Once the connectivity issue is resolved, stop the SQL service via:
net stop MSSQL$SQL2014
  5. Now try to bring the SQL Server resource online in the cluster.

My friend informed me that they already had a TCP alias created, but the port number of SQL Server had changed and hence the connections were failing. After creating the correct alias, SQL Server was able to come online in Failover Cluster Manager.

Have you ever encountered such a situation? Is there anything you would like to share? Please comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Monitoring for Analysis Services – Notes from the Field #093

[Note from Pinal]: This is a new episode of the Notes from the Field series. When we build any application, we build it with zeal and enthusiasm. We believe it will do well for many years, but that is not always the case. After a while, performance starts to go down and everything is not in the best shape. This is the time when we need to monitor performance and take action based on our analysis.


In this episode of the Notes from the Field series I asked SQL expert Bill Anton a very crucial question – how do you monitor the performance of Analysis Services? Bill was very kind to answer the question and provide plenty of information about how a novice developer can work with SSAS.


When it comes to ad-hoc query performance in business intelligence solutions, very few technologies rival a well-designed Analysis Services Multidimensional cube or Tabular model. And when that cube/tabular model is performing well, as it usually does in the beginning, life is good!


Photo Credit: SuperCar-RoadTrip.fr

Over time, however, things can change. The underlying business evolves (hopefully) and information workers start asking different questions, resulting in new query patterns. The business grows and now more users are asking more questions of larger data volumes. Performance starts to deteriorate. Queries are taking longer to complete and nightly processing starts to run past the end of the maintenance window into business hours. Users are complaining, management is unhappy, and life is no longer good.


Photo Credit: Charlie

How did we not see this coming?

If this describes your personal situation, don’t worry, you’re not alone. In fact the majority of clients who bring me in to troubleshoot and resolve their Analysis Services performance problems ask the same question (or some variation of it).

In my experience, 80% of the time the reason no one sees this type of issue coming is that there wasn’t a performance monitoring solution in place. The other 20% have a performance monitoring solution but simply aren’t using it or reviewing the information being collected.

I don’t know why so many Analysis Services environments are neglected (I’m not a Business or IT therapist) but I’m going to tell you what steps need to be taken if you want to avoid it.

The Secret is Simple

The first step to maintaining a well running Analysis Services instance is simple. Take measurements!


Photo Credit: Official U.S. Navy Page

Taking regular measurements is the only way to know how things are performing over time. This seems ridiculously obvious, but so few companies actually do it. My hunch is that these folks just don’t know what needs to be measured.

Analysis Services can be broken down into 2 fundamental types of workloads:

Processing Workloads

Processing is the term used to describe how data gets loaded into the Analysis Services database. This workload is usually done at night, outside of business hours, so as not to coincide with user activity.

From a performance perspective, the primary things to keep an eye on are:

Processing Duration
This is the amount of time it takes to load data into the Analysis Services database. As long as the processing duration fits within the scheduled maintenance window, there’s not much to worry about. However, if this duration is increasing over time and you think it will eventually run past that window, then you’ll need to review the rest of the information to figure out “why” and “what to do”.

How long does it take to process the Analysis Services database?
Is the processing duration increasing over time?

Resource Consumption
Keeping an eye on resource consumption (e.g. CPU, memory, disk, network) during processing is also a good idea. This kind of information can help shed some light on bottlenecks and provide guidance when coming up with a solution to processing related problems.

Is the processing workload requiring more and more memory? Are we maxing out CPU? How’s disk space?

There are many solutions to problems faced during processing, but without some insight into what’s happening on the system, it’s hard to know which solution is the optimal one.

For example, say we have a Multidimensional cube and notice that the processing duration for one of the measure groups is steadily increasing over time. We review the resource consumption and see that there’s plenty of CPU/Memory/IO to spare. In this case, we may consider partitioning this particular fact table and only processing the most recent partition or processing the partitions in parallel.

Pro Tip: Breaking up processing into stages will provide context to the information above and make it much easier to see which part(s) of the Analysis Services database are contributing to the increase in processing duration or memory consumption.

Query Workloads

This refers to any activity that generates queries against the SSAS database. It could be users running reports, loading dashboards, analyzing data via pivot tables, etc. Because users typically don’t run the same queries at the same time every day, this workload can be much more complicated to monitor.

The key to success is to start with the high level stuff and only go into the details if/when necessary.

Queries
The single most important thing to track for this type of workload is a log of the actual queries being executed against the Analysis Services database.

Not only will you be able to see which queries are slow, but you’ll also have the actual MDX/DAX executed. You won’t have to wait for the user to complain (telling you their report is taking too long to load) because you’ll already have the query and can start reviewing it immediately.

Some Analysis Services implementations actually have service level agreements (SLA) with criteria such as “no query should take more than 30 seconds to complete” and “the average query response time should be less than 5 seconds”. If you’re tracking every query against the Analysis Services database, not only will you know if the SLA has been violated, but you’ll know which query or queries it was that led up to the violation and can start troubleshooting immediately.
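
If you do not yet have a query log in place, you can at least see who is connected and what each session last ran by querying the documented DISCOVER_SESSIONS schema rowset from an MDX query window connected to the SSAS instance; a minimal sketch:

-- Run against the Analysis Services instance, not the relational engine
SELECT SESSION_SPID,
       SESSION_USER_NAME,
       SESSION_CURRENT_DATABASE,
       SESSION_CPU_TIME_MS,
       SESSION_LAST_COMMAND
FROM $SYSTEM.DISCOVER_SESSIONS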

Users
Tracking the number of folks using your system (and when they are using it) will prove very helpful for knowing if/when to start considering options for scaling the system up and/or out.

This information can usually be extracted from whatever mechanism you use to track queries being executed against the Analysis Services database, but it is important enough to deserve its own section.

Resource Consumption
In the same vein as the above discussion around tracking resource consumption of the processing workload, you’ll also want to track the same measures (e.g. CPU, memory, disk, network) throughout the day. This information may provide some clues as to why a query is slow.

For example, say you’re reviewing the top 10 slowest queries from the previous week and find several of the queries are now running very fast. At this point you can switch over and start looking at the state of the system last week at the time the query was slow (from a resource consumption perspective) and perhaps find some clues, such as memory pressure or CPU bottleneck caused by a spike in activity.

Here are some examples of the types of questions you should be able to answer with the above information:

What are the top 10 slowest queries each week?

Who are your top users and what is the average number of queries they execute per day/week/month?

What is the average number of users per day/week/month?

What is the max number/average number of concurrent users per day/week/month?

Pro Tip: some folks incorrectly assume the OLAPQueryLog tracks queries. However, this table only tracks parts of queries (storage engine requests). A single query executed against an Analysis Services database could potentially generate 10s of records in this table. The implication is that it doesn’t give you the whole story and you won’t always be able to determine which queries are slow.

Next Steps

Now that you know what types of information you should be collecting and reviewing regularly, the next step is to figure out how you’re going to collect that information. The good news is that there are quite a few options available. The bad news is that you’ll have to wait until next time to find out. Here is the SQLPASS session which discusses the same concept: Analysis Services: Show Me Where It Hurts.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Understanding the Basics of Write Ahead Logging (WAL) Protocol

My journey into IT started way back in my college days, more than a decade ago. It was a booming time and it was exciting to see everyone eyeing the Computer Science stream; almost every other person in those days wanted to ride this new trend. Riding the wave was one part of the excitement, while the other was being unique in whatever we do. It is easy to get stereotyped, and hard to be different from the herd. I always believed in doing the basics right – this blog project that I started more than 8 years ago was meant to bring these basic learnings back. I am glad I have been doing this.

When I was in my hometown, I planned to visit my college and spend some quality time with the students and relive some old moments. I always dream of giving time back to the institution that made me who I am now. At this moment I request every reader to spend a couple of hours in a year with students from your college. I just wish I had the same exposure in my days back then.

During my last visit, I introduced myself as a database expert (based on what people say). One of the students asked me how durability is achieved in modern databases. Did I know anything about it and how it is done?

This question got me thinking, and I immediately said the first place to search would be my blog. I quickly checked, only to figure out that I hadn’t written about this subject. To complete the conversation in real college style, I took the whiteboard and explained the simple WAL protocol. After a couple of hours of sessions on various topics, I came out exhausted – and after getting out, I thought of writing it down here.

The concept of Write Ahead Logging is very common to database systems. This process ensures that no modification to a database page will be flushed to disk until the transaction log records associated with that modification are written to disk first. Why do we do this? The simple answer is to maintain the ACID properties of a transaction.

What if the database modifications were flushed first and a power failure occurred before the transaction log was written? Well, if the entire transaction was committed and all changes to all pages were also written to disk, it wouldn’t be a problem. But what if the page changes were flushed to disk due to a lazy write to free up buffers and the page changes were part of an active transaction? Transactional consistency would be compromised: the database page on disk would contain changes that are part of an uncommitted transaction, because the log records needed to roll back the change don’t exist.

This is the reason we write to the log file first, and hence the term “write ahead logging”. Because the transaction is persisted in the log first, if a power outage happens the data files and data pages can be appropriately rolled forward (for committed transactions) or rolled back (for failed/rolled-back transactions) after the abrupt shutdown.
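
You can actually watch the log records appear before any data page is flushed by peeking into the log with fn_dblog. It is undocumented and unsupported, so treat the following as a sketch for a throwaway test database only (the table and column are hypothetical):

-- Test database only: fn_dblog is undocumented
BEGIN TRANSACTION
UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE AccountId = 1

-- The log records describing the change already exist,
-- even though the modified data page may not have been written to disk yet
SELECT [Current LSN], Operation, Context, AllocUnitName
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_BEGIN_XACT', 'LOP_MODIFY_ROW')

ROLLBACK TRANSACTION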

More information on this topic can be found in SQL Server Books Online under the topic “Write-Ahead Transaction Log”.

Other sources on this topic include: INF: SQL Server 7.0 and SQL Server 2000 Logging and Data Storage

Reference: Pinal Dave (http://blog.sqlauthority.com)