SQL SERVER – Login failed for user . Reason: Token-based server access validation failed with an infrastructure error

This is one of the most commonly searched errors on my blog search (http://search.sqlauthority.com), and I recently realized that I have not written any blog post about the cause of this error and how to fix it.

If you ever talk to a SQL expert about a login failure, he/she might ask for the state of the message or the complete error message. All "login failed for user" messages have error number 18456, but the state of the message in the ERRORLOG tells the exact cause of the login failure. It is important to note that SQL Server does not report the exact state to the client; the client always sees state 1.

Login failed for user ‘sa’. (.Net SqlClient Data Provider)
——————————
Server Name: BIGPINAL
Error Number: 18456
Severity: 14
State: 1
Line Number: 65536

If you are new to SQL Server, then use the post below to find the ERRORLOG:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here is the message in the ERRORLOG file:

Error: 18456, Severity: 14, State: 58.
Login failed for user ‘sa’. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: <local machine>]

As we can see, the message in the ERRORLOG file has state 58 and the exact reason.

Coming back to the message in the title, here is the complete message:

Error: 18456, Severity: 14, State: 11.
Login failed for user ‘domain\user$’. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 10.10.10.10]

There are a few things to note about this message.

  1. It appears for Windows accounts only.
  2. If you are seeing $ in the user name, then it is a machine account / computer account. This mostly happens when a “service” running on the remote machine is connecting to SQL Server.
  3. If you are not seeing $ in the user name, then it is a user account.

Possible causes and solutions for State 11

  1. If you are seeing a login failure for a machine account, then you may want to grant access to the machine account. You need to run T-SQL like below (replace the domain and machine name):

CREATE LOGIN [<Domain>\<Machine account>$] FROM WINDOWS

  2. If it’s a Windows domain user account, then it needs to have CONNECT permission:

GRANT CONNECT SQL TO [DOMAIN\User]

  3. In a few cases, the account is part of a group (or many groups), so you need to make sure there is no “DENY” permission inherited via group membership.

SELECT sp.[name],sp.type_desc
FROM sys.server_principals sp
INNER JOIN sys.server_permissions PERM ON sp.principal_id = PERM.grantee_principal_id
WHERE PERM.state_desc = 'DENY'
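
If the login in question is a Windows account that might be denied through one of its groups, xp_logininfo can show the permission path, i.e. which group grants (or maps) its access. This is a quick sketch; 'Domain\User' below is a placeholder.

EXEC xp_logininfo 'Domain\User', 'all'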

  4. If the message only comes with local connectivity and the same account works fine remotely, then it could be a UAC issue. This means that elevation is required to connect properly, but you do not see any of the normal prompts to elevate. To use elevation while launching SSMS to connect to a locally running instance of SQL Server, right-click SSMS and select “Run as administrator”.

Depending on the situation, any of these four options might work. If none of them worked for you, please comment and let me know. I would love to learn from you too.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Migrate Existing Database to Contained Databases

Most of the queries that land in my inbox are based on something I have already written or something people want to explore further. When it comes to learning and exploring skills, there is no better way than people asking me tough questions. This blog is an extension of a couple of blogs I have already published on a concept called contained databases.

SQL SERVER – ‘Denali’ – A Simple Example of Contained Databases

Beginning Contained Databases – Notes from the Field #037

The question asked was: how can we convert an existing database to a contained database? What is the process and how do I need to plan? I thought this blog would provide direction for that journey.

There are some initial considerations to determine if there are any existing containment breaches in the database. SQL Server makes this easy by providing a DMV and an XEvent to highlight the containment breaches for the database. The complete steps to convert a database to a contained database are as follows:

Identify database containment breaches

There are two tools to help identify the containment status of your database. sys.dm_db_uncontained_entities is a view that shows all the potentially uncontained entities in your database. The database_uncontained_usage XEvent fires when any actual uncontained entity is identified at run time. Once the containment breaches have been identified and resolved, you are ready to move to the next step of the migration.

View : sys.dm_db_uncontained_entities

This view shows any entities in your database that have the potential to be uncontained. This includes those user entities that may use objects outside the application model. However, because the containment of some entities (for example, those using dynamic SQL) cannot be determined until run time, the view may show some entities that are not actually uncontained.

SELECT * FROM sys.dm_db_uncontained_entities
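
The raw output identifies entities by class and ID. A slightly friendlier variant (a sketch; the class = 1 filter assumes the breaches are objects such as procedures or views) resolves the object names:

SELECT UE.class_desc, OBJECT_NAME(UE.major_id) AS [object_name]
FROM sys.dm_db_uncontained_entities AS UE
WHERE UE.class = 1 -- objects; other classes cover users, types, assemblies, etc.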

XEvent : database_uncontained_usage

This Xevent fires whenever an uncontained entity is identified at run time, including entities originating in client code. Since this XEvent will fire for actual uncontained entities at run time, it will not identify any uncontained user entities that you have not run.

----------------------------------------------------------
-- Setup the Xevent to log into a ring buffer as follows.
--
-- Create the event session
CREATE EVENT SESSION track_uncontained_usage ON SERVER
ADD EVENT sqlserver.database_uncontained_usage
(
    ACTION(
        sqlserver.session_id,
        sqlserver.session_nt_username,
        sqlserver.client_app_name,
        sqlserver.client_pid,
        sqlserver.client_hostname,
        sqlserver.database_id,
        sqlserver.database_name,
        sqlserver.sql_text
    )
    WHERE
        sqlserver.database_name='MyDatabase'
)
ADD TARGET package0.ring_buffer
GO
	
-- Start the event just created
ALTER EVENT SESSION track_uncontained_usage ON SERVER STATE=START
GO
		
-- Dump events logged so far
DECLARE @X XML
SELECT @X=CAST(XET.TARGET_DATA AS XML)
FROM SYS.DM_XE_SESSION_TARGETS XET
    JOIN SYS.DM_XE_SESSIONS XE
    ON (XE.ADDRESS = XET.EVENT_SESSION_ADDRESS)
WHERE XE.NAME = 'track_uncontained_usage'
-- Note: the xml data type methods (value/nodes) must be lowercase
SELECT EVENTS.value('(@timestamp)[1]','datetime') AS EVENT_TIME
    , D.value('(@name)[1]','varchar(100)') AS FIELD
    , D.value('(value)[1]', 'varchar(max)') AS VALUE
FROM (SELECT @X AS RINGBUF) B
CROSS APPLY B.RINGBUF.nodes('//RingBufferTarget/event') AS RB(EVENTS)
CROSS APPLY RB.EVENTS.nodes('data') AS EV(D)
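
Once the investigation is complete, the session can be stopped and dropped so it does not keep collecting data:

-- Stop and remove the event session
ALTER EVENT SESSION track_uncontained_usage ON SERVER STATE=STOP
GO
DROP EVENT SESSION track_uncontained_usage ON SERVER
GO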

Convert the database to a contained database

The steps here are super simple as follows:

  1. Using Management Studio
    1. In Object Explorer, expand Databases, right-click the database you need to convert, and then click Properties.
    2. On the Options page, change the Containment type option to Partial.
    3. Click OK.
  2. Using T-SQL, we take advantage of the new CONTAINMENT option of the ALTER DATABASE command:
USE [master]
GO
ALTER DATABASE [Accounting] SET CONTAINMENT = PARTIAL
GO
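
One prerequisite worth calling out (this reflects my understanding of the feature, so verify on your build): the instance-level "contained database authentication" option should be enabled before contained databases and contained users are used. A sketch:

-- Enable contained database authentication at the instance level (assumption: required before use)
EXEC sp_configure 'contained database authentication', 1
GO
RECONFIGURE
GO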

Determine if the containment option has taken effect

The sys.databases view has two columns, namely containment and containment_desc, which can be used to determine the containment state of the databases. The following T-SQL will select all databases with containment enabled:

SELECT * FROM sys.databases WHERE containment > 0

As I conclude this blog, I hope this gives you a rough process to work with as you move an existing database to a contained database in your environment.

On a side note, I would love to know if any of you are using contained databases in your environments. What are some of the use cases for using them? Let me know via your comments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – PowerShell – Knowing SQL Server Information

If you search this blog, there are a number of PowerShell scripts that I have shared in the recent past. Some of them include:

Using PowerShell and Native Client to run queries in SQL Server

Identify Disk Latency on SQL Server Box Using PowerShell

How to Find Logical and Physical Processors on SQL Server?

As I write a number of them, every time I explore and do something totally different. So in this blog, I want to show a neat trick using PowerShell. Here we will try to get the details of the SQL Server running on my localhost and then create an HTML page which will hold and show the data.

$OutputFile = "c:\temp\test.html"
$SQLServer = "localhost"
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$Result = @()
$SQLServerInfo = new-object "Microsoft.SqlServer.Management.Smo.Server" $SQLServer
$Result += $SQLServerInfo | Select Name, Edition,ProductLevel, Version, ServerType, Platform, IsClustered, PhysicalMemory, Processors
if($Result -ne $null)
{
$HTML = '<style type="text/css">
#Header{font-family:"Trebuchet MS", Arial, Helvetica, sans-serif;width:100%;border-collapse:collapse;}
#Header td, #Header th {font-size:14px;border:1px solid #98bf21;padding:3px 7px 2px 7px;}
#Header th {font-size:14px;text-align:left;padding-top:5px;padding-bottom:4px;background-color:#A7C942;color:#fff;}
#Header tr.alt td {color:#000;background-color:#EAF2D3;}
</Style>'
$HTML += "<HTML><BODY><Table border=1 cellpadding=0 cellspacing=0 width=100% id=Header>
<TR>
<TH><B>Server Name</B></TH>
<TH><B>Edition</B></TH>
<TH><B>ProductLevel</B></TH>
<TH><B>Version</B></TH>
<TH><B>ServerType</B></TH>
<TH><B>Platform</B></TH>
<TH><B>IsClustered</B></TH>
<TH><B>PhysicalMemory (MB)</B></TH>
<TH><B>Logical Processors</B></TH>
</TR>"
$HTML += "<TR>
<TD>$($Result.Name)</TD>
<TD>$($Result.Edition)</TD>
<TD align=center>$($Result.ProductLevel)</TD>
<TD>$($Result.Version)</TD>
<TD>$($Result.ServerType)</TD>
<TD>$($Result.Platform)</TD>
<TD align=center>$($Result.IsClustered)</TD>
<TD align=center>$($Result.PhysicalMemory)</TD>
<TD align=center>$($Result.Processors)</TD>
</TR>"
$HTML += "</Table></BODY></HTML>"
$HTML | Out-File $OutputFile
}

The output of this will be a file in our c:\temp folder. If we open it in a browser, we can see the SQL Server information about version, edition, server type, memory, and processors on the box. This is a neat way to explore and play with a scripting language like PowerShell to create something interesting.
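
If you only need the raw values rather than the HTML report, roughly the same details are available from plain T-SQL. A quick sketch; the sys.dm_os_sys_info column names below are those used on SQL Server 2012 and later:

SELECT SERVERPROPERTY('ServerName') AS [ServerName],
 SERVERPROPERTY('Edition') AS [Edition],
 SERVERPROPERTY('ProductLevel') AS [ProductLevel],
 SERVERPROPERTY('ProductVersion') AS [Version],
 SERVERPROPERTY('IsClustered') AS [IsClustered]
GO
-- physical_memory_kb is the column name on SQL Server 2012 and later
SELECT cpu_count AS [LogicalProcessors],
 physical_memory_kb / 1024 AS [PhysicalMemoryMB]
FROM sys.dm_os_sys_info
GO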

Have you ever created something really interesting like this in your environment? Would you be willing to share it via the comments so that others can also take advantage of this collective knowledge?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Interview Question of the Week #032 – Best Practices Around FileGroups

Here is another question I heard during a recent interview at a multinational company. I have observed thousands of interviews, but this is the first time I have heard this being discussed, and I was very glad to hear it. I have listed the candidate's answer right below it as well.

Question: What are the best practices for filegroups in SQL Server?

Answer:

It is suggested to place transaction logs on separate physical hard drives. In this manner, data can be recovered up to the second in the event of a media failure.

Separating tables and their non-clustered indexes into separate filegroups can improve performance, because modifications to the table can be written to both the table and the index at the same time (see the sketch at the end of this answer).

If tables and their corresponding indexes are placed in different filegroups, the two filegroups must be backed up as a single unit; they cannot be backed up separately.

Set a reasonable initial size for your database and transaction log (roughly 25% of the database size for the log).

Leave the Autogrow feature ON for the data and log files, with a reasonable autogrow increment.

Place the log files on different physical disk arrays than those holding the data files to improve I/O performance.
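
As a quick illustration of the table/index separation point above, here is a minimal sketch (the database, filegroup, file names, and paths are all illustrative):

ALTER DATABASE SalesDB ADD FILEGROUP FG_Indexes
GO
ALTER DATABASE SalesDB ADD FILE
 (NAME = N'SalesDB_Indexes1', FILENAME = N'D:\Data\SalesDB_Indexes1.ndf', SIZE = 256MB, FILEGROWTH = 64MB)
TO FILEGROUP FG_Indexes
GO
-- The table stays on the default filegroup; its non-clustered index goes to FG_Indexes
CREATE TABLE dbo.Orders (OrderID INT NOT NULL PRIMARY KEY, CustomerID INT NOT NULL, OrderDate DATETIME NOT NULL)
GO
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) ON FG_Indexes
GO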

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Identify Locked Table in SQL Server?

Here is a quick script which will help users identify locked tables in SQL Server.

SELECT
OBJECT_NAME(p.OBJECT_ID) AS TableName,
resource_type, resource_description
FROM
sys.dm_tran_locks l
JOIN sys.partitions p ON l.resource_associated_entity_id = p.hobt_id

When you run the above script, it will display the table name and the lock on it.
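
For testing, a lock like this can be held from a separate session; the table name below is just a stand-in (Person.Person from AdventureWorks), and any table will do:

-- Run in a separate session and leave the transaction open
BEGIN TRANSACTION
SELECT TOP (1) * FROM Person.Person WITH (TABLOCKX, HOLDLOCK)
-- ROLLBACK TRANSACTION  -- run this when done, to release the lock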

I wrote code along those lines to lock the Person table in the database. After taking the lock, when I ran the detection script above, it gave the following resultset.

[Screenshot: query result showing the lock held on the Person table]

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Unable to Bring SQL Cluster Resource Online – Online Pending and then Failed

Here is the situation which my client explained when asking for help.

Hi Pinal,
We have a 2-node Windows cluster with 3 clustered SQL Server instances running on Windows 2012 R2 on VMware. We have one instance that will start from services.msc but not from the Failover Cluster Manager when attempting to bring the service online. In reality the service does start, because during 'Online pending' I am able to connect and query the databases on that instance, although it remains in the 'Online pending' state.

Do you know what could be the problem?

My first question to him was: is there any error in the event log? From the email it sounds like the SQL Server service is able to start, but the cluster is not able to connect to SQL Server. Here are the errors in the event viewer:

  1. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = 08001; native error = ffffffff; message = [Microsoft][SQL Server Native Client 10.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
  2. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = HYT00; native error = 0; message = [Microsoft][SQL Server Native Client 10.0]Login timeout expired
  3. [sqsrvres] ODBC sqldriverconnect failed
    [sqsrvres] checkODBCConnectError: sqlstate = 08001; native error = ffffffff; message = [Microsoft][SQL Server Native Client 10.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.

As I suspected, there is a connectivity issue from the local machine to the SQL instance. The Cluster Service attempts to connect to the SQL service every few minutes (a setting in the SQL cluster resource) for the IsAlive check; if this fails, the SQL resource is restarted even if the instance was online. Here are the steps I generally follow.

  1. Start the SQL service via net start, NOT via Failover Cluster Manager.
net start MSSQL$SQL2014

You need to change the instance name/server name. For me it is a named instance of SQL Server called SQL2014. If you have a default instance, then the service would be called MSSQLSERVER.

  2. Once it's started successfully, we need to make a connection to SQL Server using SQLCMD:
SQLCMD -S<ServerName\InstanceName>

To get the exact name you need to open the ERRORLOG and look for the "Server name is" keyword. If you don't know the ERRORLOG location, then refer to the post below:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

  3. In most of the cases, you would get an error similar to the one I have provided above. Here are the various things to try:
    1. Make sure the SQL Browser service is running.
    2. Create a TCP alias to the SQL instance by giving the IP and port (see the port-check sketch after this list).
    3. Make sure the firewall is allowing the connections.
    4. Create exceptions for sqlservr.exe and sqlbrowser.exe in the firewall.
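
To find the TCP port the instance is actually listening on (useful when creating the alias), here is a quick check you can run from the SQLCMD session. This is a sketch: sys.dm_tcp_listener_states is available on newer versions, and local_tcp_port is NULL when the connection uses shared memory or named pipes. The "Server is listening on" messages in the ERRORLOG carry the same information.

-- Ports the instance is listening on (newer versions of SQL Server)
SELECT ip_address, port, state_desc FROM sys.dm_tcp_listener_states WHERE type_desc = 'TSQL'
-- Port used by the current connection (NULL over shared memory / named pipes)
SELECT local_tcp_port FROM sys.dm_exec_connections WHERE session_id = @@SPID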

Most of the troubleshooting steps are available in my earlier blog

SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: )

  4. Once the connectivity issue is resolved, stop the SQL service via:
net stop MSSQL$SQL2014
  5. Now try to bring the SQL Server resource online in the cluster.

My friend informed me that they already had a TCP alias created, but the port number of SQL Server had changed and hence connections were failing. After correcting the alias, SQL Server was able to come online in Failover Cluster Manager.

Have you ever encountered such a situation? Is there anything which you would like to share? Please comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Monitoring for Analysis Services – Notes from the Field #093

[Note from Pinal]: This is a new episode of the Notes from the Field series. When we build any application, we build it with zeal and enthusiasm. We believe it will do well for many years, but that is not always the case. After a while, performance starts to go down and everything is not in the best shape. This is the time when we need to monitor performance and take action based on our analysis.


In this episode of the Notes from the Field series I asked SQL expert Bill Anton a very crucial question: how do you monitor the performance of Analysis Services? Bill was very kind to answer the question and provide plenty of information about how a novice developer can work with SSAS.


When it comes to ad-hoc query performance in business intelligence solutions, very few technologies rival a well-designed Analysis Services Multidimensional cube or Tabular model. And when that cube/tabular model is performing well, as it usually does in the beginning, life is good!


Photo Credit: SuperCar-RoadTrip.fr

Over time, however, things can change. The underlying business evolves (hopefully) and information workers start asking different questions, resulting in new query patterns. The business grows and now more users are asking more questions of larger data volumes. Performance starts to deteriorate. Queries are taking longer to complete and nightly processing starts to run past the end of the maintenance window into business hours. Users are complaining, management is unhappy, and life is no longer good.


Photo Credit: Charlie

How did we not see this coming?

If this describes your personal situation, don’t worry, you’re not alone. In fact the majority of clients who bring me in to troubleshoot and resolve their Analysis Services performance problems ask the same question (or some variation of it).

In my experience, 80% of the time the reason no one sees this type of issue coming is because there wasn't a performance monitoring solution in place. The other 20% who have a performance monitoring solution simply aren't using it or reviewing the information being collected.

I don’t know why so many Analysis Services environments are neglected (I’m not a Business or IT therapist) but I’m going to tell you what steps need to be taken if you want to avoid it.

The Secret is Simple

The first step to maintaining a well running Analysis Services instance is simple. Take measurements!


Photo Credit: Official U.S. Navy Page

Taking regular measurements is the only way to know how things are performing over time. This seems ridiculously obvious, but so few companies actually do it. My hunch is that these folks just don't know what needs to be measured.

Analysis Services can be broken down into 2 fundamental types of workloads:

Processing Workloads

Processing is the term used to describe how data gets loaded into the Analysis Services database. This workload is usually done at night, outside of business hours, so as not to coincide with user activity.

From a performance perspective, the primary things to keep an eye on are:

Processing Duration
This is the amount of time it takes to load data into the Analysis Services database. As long as the processing duration fits within the scheduled maintenance window, there’s not much to worry about. However, if this duration is increasing over time and you think it will eventually run past that window, then you’ll need to review the rest of the information to figure out “why” and “what to do”.

How long does it take to process the Analysis Services database?
Is the processing duration increasing over time?

Resource Consumption
Keeping an eye on resource consumption (e.g. CPU, memory, disk, network) during processing is also a good idea. This kind of information can help shed some light on bottlenecks and provide guidance when coming up with a solution to processing related problems.

Is the processing workload requiring more and more memory? Are we maxing out CPU? How’s disk space?

There are many solutions to problems faced during processing, but without some insight into what’s happening on the system, it’s hard to know which solution is the optimal one.

For example, say we have a Multidimensional cube and notice that the processing duration for one of the measure groups is steadily increasing over time. We review the resource consumption and see that there’s plenty of CPU/Memory/IO to spare. In this case, we may consider partitioning this particular fact table and only processing the most recent partition or processing the partitions in parallel.

Pro Tip: Breaking up processing into stages will provide context to the information above and make it much easier to see which part(s) of the Analysis Services database are contributing to the increase in processing duration or memory consumption.

Query Workloads

This refers to any activity that generates queries against the SSAS database. It could be users running reports, loading dashboards, analyzing data via pivot tables, etc. Because users typically don’t run the same queries at the same time every day, this workload can be much more complicated to monitor.

The key to success is to start with the high level stuff and only go into the details if/when necessary.

Queries
The single most important thing to track for this type of workload is a log of the actual queries being executed against the Analysis Services database.

Not only will you be able to see which queries are slow, but you’ll also have the actual MDX/DAX executed. You won’t have to wait for the user to complain (telling you their report is taking too long to load) because you’ll already have the query and can start reviewing it immediately.

Some Analysis Services implementations actually have service level agreements (SLA) with criteria such as “no query should take more than 30 seconds to complete” and “the average query response time should be less than 5 seconds”. If you’re tracking every query against the Analysis Services database, not only will you know if the SLA has been violated, but you’ll know which query or queries it was that led up to the violation and can start troubleshooting immediately.

Users
Tracking the number of folks using your system (and when they are using it) will prove very helpful for knowing if/when to start considering options for scaling the system up and/or out.

This information can usually be extracted from whatever mechanism you use to track queries being executed against the Analysis Services database, but it is important enough to deserve its own section.

Resource Consumption
In the same vein as the above discussion around tracking resource consumption of the processing workload, you’ll also want to track the same measures (e.g. CPU, memory, disk, network) throughout the day. This information may provide some clues as to why a query is slow.

For example, say you’re reviewing the top 10 slowest queries from the previous week and find several of the queries are now running very fast. At this point you can switch over and start looking at the state of the system last week at the time the query was slow (from a resource consumption perspective) and perhaps find some clues, such as memory pressure or CPU bottleneck caused by a spike in activity.

Here are some examples of the types of questions you should be able to answer with the above information:

What are the top 10 slowest queries each week?

Who are your top users and what is the average number of queries they execute per day/week/month?

What is the average number of users per day/week/month?

What is the max number/average number of concurrent users per day/week/month?

Pro Tip: some folks incorrectly assume the OLAPQueryLog tracks queries. However, this table only tracks parts of queries (storage engine requests). A single query executed against an Analysis Services database could potentially generate 10s of records in this table. The implication is that it doesn’t give you the whole story and you won’t always be able to determine which queries are slow.

Next Steps

Now that you know what types of information you should be collecting and reviewing regularly, the next step is to figure out how you're going to collect that information. The good news is that there are quite a few options available. The bad news is that you'll have to wait until next time to find out. Here is the SQLPASS session which discusses the same concept: Analysis Services: Show Me Where It Hurts.

If you want to get started on performance monitoring with SSAS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Understanding the Basics of Write Ahead Logging (WAL) Protocol

My journey into IT started way back in my college days, more than a decade ago. It was a booming time and it was exciting to see everyone eyeing the Computer Science stream. Almost every other person in those days wanted to ride this new trend. Riding the waves was one part of the excitement, while the other was to be unique in whatever we do. It is easy to get stereotyped and never be different from the herd. I always believed in doing the basics right; this blog project that I started more than 8 years ago was meant to bring these basic learnings back. I am glad I have been doing this.

When I was in my hometown, I planned to visit my college, spend some quality time with the students, and relive some old moments. I always dream of giving time back to the institution that made me who I am now. At this moment I request every reader to spend a couple of hours a year with students from your college. I just wish I had the same exposure back in my days.

During my last visit, I introduced myself as a database expert (based on what people say). One of the students asked me how durability is achieved in modern databases. Did I know anything about it and how it is done?

This question got me thinking, and I immediately said that the first place to search would be my blog. I quickly checked, only to figure out that I hadn't written about this subject. To complete the conversation in real college style, I took the whiteboard and explained the simple WAL protocol. After a couple of hours of sessions on various topics, I came out exhausted; afterwards, I thought of writing it up here.

The concept of Write Ahead Logging is very common to database systems. This process ensures that no modification to a database page will be flushed to disk until the transaction log records associated with that modification are written to disk first. Why do we do this? The simple answer is to maintain the ACID properties of a transaction.

What if the database modifications were flushed first and a power failure occurred before the transaction log was written? Well, if the entire transaction was committed and all changes to all pages were also written to disk, it wouldn't be a problem. But what if the page changes were flushed to disk due to a lazy write to free up buffers, and the page changes were part of an active transaction? Transactional consistency would be compromised: the database page on disk would contain changes that are part of an uncommitted transaction, with no log records available to roll back the change.

This is the reason we write to the log file first, and hence the term “write-ahead logging”. Because the transaction is persisted in the log first, the data files and data pages can be appropriately rolled forward (for committed transactions) or rolled back (for failed/rolled-back transactions) after a power outage or abrupt shutdown.
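
To see the log records for a modification right after it happens, here is a minimal sketch on a throwaway database. fn_dblog is an undocumented function and the database name is illustrative, so treat this purely as a learning exercise on a test system:

CREATE DATABASE WALDemo
GO
USE WALDemo
GO
CREATE TABLE dbo.WALTest (c INT)
GO
BEGIN TRANSACTION
INSERT INTO dbo.WALTest VALUES (1)
-- Log records for the INSERT already exist, even though the modified
-- data page may not have been written to the data file yet.
SELECT TOP (5) [Current LSN], Operation, AllocUnitName
FROM sys.fn_dblog(NULL, NULL)
ORDER BY [Current LSN] DESC
COMMIT TRANSACTION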

More information on this topic can be found in SQL Server Books Online under the topic “Write-Ahead Transaction Log”.

Other sources on this topic include: INF: SQL Server 7.0 and SQL Server 2000 Logging and Data Storage

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error while starting SQL Server logFileSize <= BlkToByteOffset ((ULONG)-1)

Recently I was playing with the tempdb database in SQL Server and made a mistake. Due to that, I was not able to start the SQL service at all. Since it was an interesting problem, it is worth blogging about.

Let's have a look at the problem. Whenever there is SQL startup trouble, always start with the SQL Server ERRORLOG. If you are not sure where the ERRORLOG is located, refer to:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location


2015-08-08 15:07:16.67 spid10s Clearing tempdb database.
…
2015-08-08 15:07:16.99 spid10s Error: 17066, Severity: 16, State: 1.
2015-08-08 15:07:16.99 spid10s SQL Server Assertion: File: <"logmgr.cpp">, line=14870 Failed Assertion = 'logFileSize <= BlkToByteOffset ((ULONG)-1)'. This error may be timing-related. If the error persists after rerunning the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.

2015-08-08 15:07:16.99 spid10s Error: 3624, Severity: 20, State: 1.
2015-08-08 15:07:16.99 spid10s A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a Hotfix from Technical Support.

2015-08-08 15:07:17.00 spid10s Error: 5173, Severity: 16, State: 1.
2015-08-08 15:07:17.00 spid10s One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup.

2015-08-08 15:07:17.00 spid10s Error: 1802, Severity: 16, State: 4.
2015-08-08 15:07:17.00 spid10s CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
2015-08-08 15:07:17.00 spid10s Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

Above is a snippet of the ERRORLOG where I have removed many things to make it clearer for the blog. Here is something interesting: if we look at the file size of the tempdb transaction log file, it's 0 KB (yes, zero KB). I thought that since it's tempdb it would be recreated anyway, so I deleted the files, but still no joy. I had another SQL instance with the same build of SQL Server, so I picked the files from there and copied them to the desired location, but SQL startup automatically replaced those files.


Solution:

A) Start SQL Server with the startup parameter /f, which stands for minimal configuration. You need to change the instance name/server name. For me it is a named instance of SQL Server called SQL2014.

net start MSSQL$SQL2014 /f

If you have a default instance, then the service would be called MSSQLSERVER.

B) Connect to SQL Server via SQLCMD. You need to change the instance name/server name:

SQLCMD -S.\SQL2014

C) After the connection is made, check the file sizes of the tempdb database (note that the size column in sys.master_files is reported in 8 KB pages):

SELECT name, size FROM sys.master_files WHERE database_id = 2

In my case I got 0 for LDF file and that was the cause of the whole problem.

D) If you get zero, then alter the database and change the size again

ALTER DATABASE [tempdb] MODIFY FILE(NAME=N'templog',SIZE= 10240KB);

E) Check the size again by running the same command as in step C. Then exit from SQLCMD.

F) Stop SQL Service

net stop MSSQL$SQL2014

G) Start SQL Server normally; if you were hitting the same issue, it should now be fixed. This is the same command as in step A, but without the /f parameter.

net start MSSQL$SQL2014


What caused the problem?

As I said in the beginning, I was playing with tempdb and by mistake I ran the below command:

ALTER DATABASE [tempdb] MODIFY FILE(NAME=N'templog',SIZE= 0);

That zero in size has caused the whole problem.

Note: Please do not run the above command on a production server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Error: Msg 4305, Level 16, State 1 – The log in this backup set begins at LSN, which is too recent to apply to the database.

While working with transaction log backup restores or log shipping, you must have seen the below error message.

Msg 4305, Level 16, State 1, Line 47
The log in this backup set begins at LSN 33000000048000001, which is too recent to apply to the database. An earlier log backup that includes LSN 33000000044800001 can be restored.
Msg 3013, Level 16, State 1, Line 47
RESTORE LOG is terminating abnormally.

Let’s have a look at the cause of this message. Before that let’s have our sample database ready with backups so that we can play with them.

--Create an empty database
CREATE DATABASE SQLAuthority
GO
-- use the database
USE SQLAuthority
GO
--create first test table.
CREATE TABLE MyTestTable1 (iCol INT)
-- perform full backup F1
BACKUP DATABASE SQLAuthority TO DISK = 'c:\SQLAuthority_F1.bak' WITH FORMAT
--create second test table.
CREATE TABLE MyTestTable2 (a INT)
-- perform transaction log backup T1
BACKUP LOG SQLAuthority TO DISK = 'c:\SQLAuthority_T1.trn' WITH FORMAT
--create third test table.
CREATE TABLE MyTestTable3 (a INT)
-- perform transaction log backup T2
BACKUP LOG SQLAuthority TO DISK =  'c:\SQLAuthority_T2.trn' WITH FORMAT
--create fourth test table.
CREATE TABLE MyTestTable4 (a INT)
-- perform full backup F2
BACKUP DATABASE SQLAuthority TO DISK = 'c:\SQLAuthority_F2.bak' WITH FORMAT

Now, we have backups taken in the below order:

  1. Full backup ‘c:\SQLAuthority_F1.bak’
  2. First transaction log backup ‘c:\SQLAuthority_T1.trn’
  3. Second transaction log backup ‘c:\SQLAuthority_T2.trn’
  4. Full backup ‘c:\SQLAuthority_F2.bak’

Reproduce the Error

-- Let’s perform restore now but before that, lets drop the database
USE MASTER
GO
ALTER DATABASE SQLAuthority SET single_user WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE SQLAuthority
GO
RESTORE DATABASE SQLAuthority FROM DISK = 'c:\SQLAuthority_F1.bak' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T2.trn' WITH NORECOVERY
GO

Here is the error.

Processed 280 pages for database ‘SQLAuthority’, file ‘SQLAuthority’ on file 1.
Processed 6 pages for database ‘SQLAuthority’, file ‘SQLAuthority_log’ on file 1.
RESTORE DATABASE successfully processed 286 pages in 0.148 seconds (15.097 MB/sec).
Msg 4305, Level 16, State 1, Line 43
The log in this backup set begins at LSN 33000000026400001, which is too recent to apply to the database. An earlier log backup that includes LSN 33000000023200001 can be restored.
Msg 3013, Level 16, State 1, Line 43
RESTORE LOG is terminating abnormally.

The error appeared because we skipped T1. As the message says, “An earlier log backup that includes … can be restored.” We can get a list of the backups taken from the msdb database on the source server:

DECLARE @db_name VARCHAR(100)
SELECT @db_name = 'SQLAuthority'
--Get Backup History for required database
SELECT TOP (30) s.database_name
,m.physical_device_name
,CAST(CAST(s.backup_size / 1000000 AS INT) AS VARCHAR(14)) + ' ' + 'MB' AS bkSize
,CAST(DATEDIFF(second, s.backup_start_date, s.backup_finish_date) AS VARCHAR(4)) + ' ' + 'Seconds' TimeTaken
,s.backup_start_date
,CAST(s.first_lsn AS VARCHAR(50)) AS first_lsn
,CAST(s.last_lsn AS VARCHAR(50)) AS last_lsn
,CASE s.[type]
WHEN 'D'
THEN 'Full'
WHEN 'I'
THEN 'Differential'
WHEN 'L'
THEN 'Transaction Log'
END AS BackupType
,s.server_name
,s.recovery_model
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
WHERE s.database_name = @db_name
ORDER BY backup_start_date DESC
,backup_finish_date

We can get the same information from the ERRORLOG as well.

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here are the sample messages. I have trimmed them for clarity.

2015-08-08 17:49:06.340 Backup Database backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), pages dumped: 291, first LSN: 33:136:179, last LSN: 33:232:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_F1.bak’}).

2015-08-08 17:49:06.550 Backup Log was backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), first LSN: 33:136:179, last LSN: 33:264:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_T1.trn’}).

2015-08-08 17:49:06.680 Backup Log was backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), first LSN: 33:264:1, last LSN: 33:280:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_T2.trn’}).

2015-08-08 17:49:07.110 Backup Database backed up. Database: SQLAuthority, creation date(time): 2015/08/08(17:49:05), pages dumped: 291, first LSN: 33:312:143, last LSN: 33:392:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {‘c:\SQLAuthority_F2.bak’}).

So, once we find the missing transaction log backup, we need to restore the backups in sequence; a sketch of the restore order for our example is shown below. Do you have any other trick to solve this issue? If you can share, it would help others as well.
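
Here is how the restore sequence might look with the sample backups created above, restoring T1 before T2 and then recovering the database:

RESTORE DATABASE SQLAuthority FROM DISK = 'c:\SQLAuthority_F1.bak' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T1.trn' WITH NORECOVERY
GO
RESTORE LOG SQLAuthority FROM DISK = 'c:\SQLAuthority_T2.trn' WITH NORECOVERY
GO
RESTORE DATABASE SQLAuthority WITH RECOVERY
GO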
Reference: Pinal Dave (http://blog.sqlauthority.com)