SQL SERVER – Database Mirroring login attempt failed – Connection handshake failed. An OS call failed: (8009030c)

The best way I have learnt about building SQL Server AlwaysOn Availability Groups is by contacting my close friends who have written the only book on this topic. It is not common that I ping them for such technical queries, but that is what friendship is made of. Recently, while preparing a demo for an AlwaysOn Availability Group, I ran into an interesting issue and was able to fix it as well.

First, let us look at the configuration.

SERVER1              –              Primary Replica
SERVER2              –              Secondary Replica

They are synchronous and configured for automatic failover. I shut down SERVER1 and, as expected, SERVER2 became primary. As soon as SERVER1 came back, it became secondary, but the databases were NOT synchronizing. That was a surprise to me, as I was expecting to see data movement from SERVER2 to SERVER1 now.

As always, the ERRORLOG is the first point of investigation on my troubleshooting checklist. If you have not seen it earlier, you can use the below blog to learn how to find the ERRORLOG:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

On SERVER1, I found the below message multiple times.

2015-08-24 01:24:18.480 Logon        Database Mirroring login attempt failed with error: 'Connection handshake failed. An OS call failed: (8009030c) 0x8009030c(The logon attempt failed). State 67.'.  [CLIENT: 10.0.0.2]

There are some interesting learnings/findings from the above message.

  1. Database Mirroring and AlwaysOn use the same endpoint, and that is why we see Database Mirroring in the error message even though I am using an AlwaysOn Availability Group.
  2. The IP address in the message belongs to SERVER2 (which is primary as of now).

This means that SERVER2 is not able to talk to SERVER1 via the mirroring port. Searching on the hex code 8009030c did not help much because it simply means "The logon attempt failed", which is already stated in the same message.
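
Before going further, it also helps to confirm that the mirroring endpoint actually exists, is STARTED, and is listening on the expected port on both replicas. Here is a small sanity-check query (my own addition, not part of the original troubleshooting):

SELECT e.name,
       e.state_desc,   -- should be STARTED
       e.role_desc,    -- role used for mirroring/AlwaysOn traffic
       te.port         -- default endpoint port is 5022
FROM sys.database_mirroring_endpoints AS e
INNER JOIN sys.tcp_endpoints AS te
    ON te.endpoint_id = e.endpoint_id;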

Steps which I tried after searching on the internet (and they didn't help):

  1. I logged in to SERVER2 via remote desktop and tried making a connection to SERVER1 via Management Studio, and it was working fine.
  2. Restarted the endpoints on both replicas using the script below. No joy!

USE MASTER
GO
ALTER ENDPOINT hadr_endpoint STATE = STOPPED;
GO
ALTER ENDPOINT hadr_endpoint STATE = STARTED;
GO

  3. PING from and to each other. This also worked well, which means DNS was not an issue.
  4. Verified that SQL was running under a domain account and the account was not locked.

None of the above helped. Then I started backtracking to recall what I had done with my SQL service account. It did not take long to realize that I had changed the service account's domain password in the recent past. I rarely use the AlwaysOn Availability Group setup, so I changed the service account password on SERVER1 but forgot to do it on SERVER2.

While looking at the ERRORLOG, I also found the below message:

SQL Server failed to communicate with filter daemon launch service  (Windows error: The service did not start due to a logon failure.). Full-Text filter daemon process failed to start. Full-text search functionality will not be available.

The above message came when SQL Server was trying to start the full-text service automatically. Since I hadn't updated the password for the full-text service either, I was seeing the above message.

Solution (which worked for me): Correct the service account password for the SQL Server related services via SQL Server Configuration Manager.
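
A quick way to double-check which account each SQL Server related service runs under, without opening Configuration Manager, is the sys.dm_server_services DMV (available from SQL Server 2008 R2 SP1 onwards, as far as I recall); this is my own verification step, not part of the original fix:

SELECT servicename,      -- e.g. SQL Server (MSSQLSERVER), Full-text Filter Daemon Launcher
       service_account,
       status_desc
FROM sys.dm_server_services;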


When I corrected service account password, the databases were synchronized and I was happy again.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Fix @@ServerName Property Value When Incorrect

Our software systems are funny and sometimes give us strange values. I don't usually venture out trying to fiddle around with system-level properties on my servers, but having said that, this query came from one of my colleagues who was working on his SQL Server instance and found something strange.

When working on a server and an error occurs, I generally ask for the @@VERSION output. This is a standard practice for me because I don't have to ask which version is installed, what service pack or CU has been applied on the server, etc. This output gives me all of that information in black and white.
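
For reference, that check is nothing more than:

SELECT @@VERSION;
GO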

As I was so used to this kind of output, I was surprised when my friend said that the value returned by his @@SERVERNAME was NULL. I wanted to see it in real life, and he was kind enough to show me via Skype that this was indeed the case. This got me thinking and I needed to solve this trivial problem. I had gone to bed early as it was a lazy Sunday, but I couldn't put myself to sleep as this was plaguing my mind – I needed a solution. Finally, after an hour and some help from my SQL buddies, I found the way to crack this problem. Strange, but I am sure many of you have encountered this somewhere and were looking for a solution.

The @@ServerName refers to the local server that SQL Server is running on.  This property is set during installation, but there are instances where @@ServerName may be NULL or may not be correct.

Running the following T-SQL will show the current value of this property:

SELECT @@SERVERNAME;
GO

To Fix the Problem

If the @@ServerName property is NULL, running the following will fix the issue – but only after you restart the SQL Server instance. Yep, fixing this requires an outage.

EXEC sp_addserver '<LocalServerName>', local;
GO

If the @@ServerName property is incorrect, run the following to correct the issue:

EXEC sp_dropserver 'old_name';
GO
EXEC sp_addserver 'new_name', 'local';
GO
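
After the restart, a quick way to confirm the fix (my own verification step, not from the original post) is to compare @@SERVERNAME with SERVERPROPERTY('ServerName'), which always reflects the current network name of the instance; the two values should now match:

SELECT @@SERVERNAME AS ReportedName,
       SERVERPROPERTY('ServerName') AS ActualNetworkName;
GO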

Though the solution to this was simple, I did find the behavior strange. I am yet to come to terms with how this can ever happen. If you have ever encountered this error on your server, can you please let us know how you landed in this situation? I am eager to learn the cause rather than just looking at the solution to the problem.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #033 – How to Invalidate Procedure Cache of SQL Server?

Question: How to invalidate all the stored procedure plans in SQL Server?

Answer: DBCC FREEPROCCACHE will invalidate all stored procedure plans that the optimizer has cached in memory and force SQL Server to compile new plans the next time those procedures are run.

Remember not to run this on a production server without proper planning, because once you run this DBCC command it will invalidate all the stored procedure plans. This means that if you run it during peak time, many stored procedures will have to compile at the same time, which can create a resource bottleneck.
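
For reference, the basic command is below. On SQL Server 2008 and later you can also pass a plan_handle to evict a single plan instead of flushing everything; the procedure name in the filter is just a hypothetical example.

-- Flush the entire plan cache (use with caution on a production server)
DBCC FREEPROCCACHE;
GO

-- To target a single plan instead, find its plan_handle first
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%usp_MyProcedure%';   -- hypothetical procedure name
-- and then: DBCC FREEPROCCACHE (<plan_handle from the query above>);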

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – 100 Million Views and Onwards

I will always remember August 20, 2015 as a landmark milestone for SQLAuthority.com.


On August 20th, this blog crossed 100 Million Views. When I started this journey on November 1, 2006, I had no idea that there would ever be over 100 Million Views on this blog. I am indeed very proud and happy that my humble attempt at writing about my experience online has been appreciated by YOU!

No matter how much I want to thank you, I seriously have no words to express. I am happy that you accepted SQLAuthority.com and showered your love.


Thank you! Seriously, Thank YOU!

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – DBCC CHECKDB errors – Msg 2520 during executions

Some of the simplest recommendations can lead to bizarre output that we never anticipated. I always recommend running DBCC CHECKDB on mission critical databases because we need to be 200% sure, from time to time, that the database disk system is healthy. During one of my sessions, I gave this as a generic recommendation and, guess what, I got a strange message from one of the attendees. They claimed that they were getting Msg 2520 errors. On further probing and requesting the actual error, I understood the problem. They were running CHECKDB on the ResourceDB. I asked, "Why are you doing this on ResourceDB?" They answered, "Pinal, isn't ResourceDB a database? Wouldn't it also get corrupt? What is wrong in doing a CHECKDB on it? If I can't, how can I do it?"

This was a fully loaded question and I had to answer it elaborately. So I am using this blog post to explain some of the basics.

The resource database is a system database introduced in SQL Server 2005. Since this database has some special attributes and differs from regular databases in some respects, these indirectly affect the ability to do some operations on it. One of these areas is the ability to perform DBCC checks and repair on the resource database. The following are the key points to keep in mind when you plan to run any kind of check against the resource database:

When the server is running in multi-user mode, running the command
DBCC CHECKDB ( mssqlsystemresource )

will give the following output:

Msg 2520, Level 16, State 11, Line 1
Could not find database 'mssqlsystemresource'. The database either does not exist, or was dropped before a statement tried to use it. Verify if the database exists by querying the sys.databases catalog view.

This is applicable for commands like DBCC CHECKALLOC and DBCC CHECKCATALOG.

When the server is running in multi-user mode, running DBCC CHECKDB on the master database will run DBCC CHECKDB on resource database as well. In the DBCC CHECKDB output result set, we will notice the following sections:

DBCC CHECKDB ( MASTER )

CHECKDB found 0 allocation errors and 0 consistency errors in database ‘master’.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
CHECKDB found 0 allocation errors and 0 consistency errors in database ‘mssqlsystemresource’.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Errorlog will show the following entries:

2015-08-25 14:44:22.650                spid63   DBCC CHECKDB (master) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.  Internal database snapshot has split point LSN = 000000e4:0000017c:0001 and first LSN = 000000e4:00000179:0001.
2015-08-25 14:44:22.840                spid63   DBCC CHECKDB (mssqlsystemresource) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.

When the above checks are run, the internal transient replica (database snapshot) is created only for the master database checks. For the resource database, no replica is created for the checks.

When the server is started in single-user mode to perform repairs on master, a special mechanism is used. Only for the master database is the check command run with the repair option. When it comes to checking the resource database, the normal check is done with repair disabled. You can see the effect of that in the errorlog entries:

2015-08-25 15:44:22.650 spid61      DBCC CHECKDB (master, repair_allow_data_loss) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
2015-08-25 15:44:22.650 spid61      DBCC CHECKDB (mssqlsystemresource) executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.

If there is a corruption problem in the resource database and it needs to be fixed for the efficient functioning of SQL Server, then you will need to replace the resource database, since repairs cannot be run against it. Normally users will not have the ability to write to this database, so unless there is a hardware problem, there is little chance that this database will ever get corrupted.

Though this blog started with an unusual error message, I thought it was important to talk about some of this fine print when working with ResourceDB. I hope you have never encountered any error with ResourceDB; if you have, do let me know via the comments what you did in that situation. I would love to learn from you.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Rewriting Database History – Notes from the Field #094

[Note from Pinal]: This is the 94th episode of the Notes from the Field series. When I read the title of this article, I was extremely intrigued – Rewriting Database History! When I was a kid, history was my favorite subject, and even today when I have to deal with history, I always jump in first to read and enjoy it. When I saw this article from Kevin, I was delighted. I started to read it immediately and did not stop until it was finished. It was 20 minutes well spent. The journey starts with ORMs and ends with ORMs, however, when it comes full circle there are so many things in between. Trust me, Kevin could be a great historical novel writer.

In this episode of the Notes from the Field series database expert Kevin Hazzard explains the interesting subject of rewriting database history. Read the experience of Kevin in his own words.



Way back in June of 2006, my friend Ted Neward wrote a very detailed piece called The Vietnam of Computer Science. Juxtaposing America’s involvement in Southeast Asia in the last century with the battles we fight in modern, business-driven enterprises, that article examined the unwinnable war that Object-Relational Mapping (ORM) tools have become. The title of the article is probably a bit politically incorrect these days given that Vietnam’s world standing and economy have improved vastly over the last decade. Moreover, lots of database and application developers working today don’t even remember the war in Vietnam so the allegory may fail for them. But for those of us who grew up in the 1960s and 1970s, the writing evokes plenty of painful memories that paint ORMs as technological quagmires.

ORM tools have evolved quite a bit since 2006 but they're still generally pretty awful. The metadata-driven code generators buried within tools like Hibernate and Microsoft's Entity Framework do a much better job of writing decently-performing queries than they did in the early days. But the cruft, wacky idioms and generally non-sensible constraints one must suffer to use these tools are overwhelming for all but a handful of gladiators who gain some odd sense of purpose in buttressing their companies for long-running conflicts. Nowhere is this more apparent than in the Ruby on Rails space where some sort of addiction to total database ignorance seems to have infected the vast majority of that community. If you want to understand why Ruby on Rails has been in decline in recent years, look no further than the chatty, miserably-performing queries that the Active Record ORM generates when accessing complex, relational data structures. Any DBA or database developer who has been enlisted to debug performance problems in those sorts of melees knows there's a better way to win hearts and minds.

For most developers who have shunned traditional ORMs, I often hear them speak about how unwieldy, brittle or inflexible they found the tools to be. The relational abstractions that they provide are sometimes too weak, making common business operations difficult to express in languages like C# and Java. To solve that problem, the abstraction may become leaky, exposing details that create dependencies which complicate deployments or lock you into vendor or version-specific features.

For database people, ORM aversion is typically related to the naiveté of machine-generated queries. Chattiness is a big problem with full-service ORMs, for example. Mature ORMs do a better job today emitting correlated sub-queries or common table expressions than they did in the past. But that often depends on good mappings which depend in turn on good metadata. Unfortunately, perfect database metadata is quite rare. Anyone who watches the tracing of a database server suffering under the garrulous assault of an ORM-driven application understands just how little the ORM really knows about the database.

If the protestors can convince management that the ORM war is unwinnable, the response can move in several potential directions. One might rationalize that the relational representation of data is the real foe, i.e. that because data is rarely used in third normal form within applications, it shouldn't be stored that way on disks. The pundits of the NoSQL movement say that relational storage models are a holdover from an age when storage and bandwidth were much more expensive than they are today. That is a massive oversimplification for sure, but there are some cases where a document-oriented database makes a lot of sense.

For example, I worked in a shop long ago that used a UniVerse database to manage large caches of medical claims data. As a multi-valued database much like today’s NoSQL databases, UniVerse stores data with all the data that it references in a single document. So a patient’s demographic information can be stored in line with the related medical claims, diagnostic history and lab results. For that system, there was only one index on the database which was used to find individual, semi-structured blobs of patient records. The rules engine which sat atop the database always needed all of a single patient’s information whenever it ran a rule so storing the entire document describing a patient as a single database entity was both practical and highly efficient.

In those days, UniVerse was an IBM product that competed against their own DB2 relational database. For some reason, IBM was keen on getting our company to move from UniVerse to DB2 so we invited them in for a proof of concept to build a relational model of our medical database in DB2. When they were done, they loaded several terabytes of data into the new relational database and aimed the rules engine there. After weeks of tweaking, rewriting and hand-wringing, they never could make the relational database outperform UniVerse. Moreover, it wasn’t even a close race. The UniVerse database running on much older hardware could fetch whole patient documents in a few milliseconds. The best we could get DB2 to do using its normalized implementation was measured in hundreds of milliseconds. The difference between five milliseconds and half a second may not sound like a major problem, but when you’re handling millions of queries per day and trying to satisfy a three second, end-to-end Service Level Agreement with your Point of Sale providers, those delays add up very quickly.

Of course, document-oriented databases like Mongo and multi-valued databases like UniVerse are not always viable solutions to bridging the gulf between on-disk and in-memory data representations. For 20 years, I've made the claim that databases are an unfortunate consequence of history. Bear with me because I don't mean to disparage database technologies at all. Quite the opposite, I love and respect modern database systems of many types. However, it's very true that if you could travel back in time to 1943 and give the developers of the ENIAC computer at the University of Pennsylvania a large scale memory device to attach to their invention, databases as we know them would never have evolved. Instead, all of the rich query tools and consistency features of modern transactional databases would have been developed inside of our applications.

In that sense, document-oriented databases and ORMs are both attempts to undo history. The former solution means to eliminate the mismatch between applications and databases altogether by making the data storage model match the common data utilization pattern. The problem is that by coercing the database to adapt to the application’s model, many of the features that evolved in transactional systems simply aren’t available. So-called eventual consistency scares developers who have come to depend on databases that offer transactions which are Atomic, Consistent, Isolated and Durable (ACID). This is most unfortunate because had the ENIAC developers had unlimited in-memory storage available to them, I believe that in-memory data handling semantics would have evolved to be ACID-compliant just as they are in modern relational databases. Document-oriented databases redo history in a way that doesn’t honor so many of the good choices that were made in real history.

Rather than attempting to undo history per se, ORMs try to hide history from us by making it seem that the underlying database's storage and query semantics exist inside the application's space. While this is a better way to undo history because it preserves the best parts of it, this is perhaps the hardest problem in all of computer science today. The reasons that it is so difficult are myriad. Incomplete metadata is often the culprit. The cardinality between entities, for example, is nearly impossible to get right simply by inspecting a database's available foreign keys. Moreover, the relationships between entities in separate databases or in separate but related data integration processes often have no metadata to tie them together. The Master Data Management (MDM) structures and processes used to manage code assignments can also have a huge impact on an ORM's query generation. Languages like C# and Java lack even the most basic Design by Contract (DbC) features so it's no wonder that their handling of database constraints is also quite weak. For example, imagine a simple database constraint that limits an integer value to multiples of ten. Now imagine an ORM trying to convey and enforce that constraint in a language like C#. You get the picture. This is a very difficult, effectively impossible, problem to solve.

So what's the solution? Should ORMs be used at all? Should we abandon relational databases and move entirely over to document-oriented databases? Yes and no, not necessarily in that order. ORMs have taught us a lot over the past 20 years. The number one thing I've learned from them is to be pragmatic about my approach to data. I use stored procedures whenever there's a way to reduce the chattiness and excessive round trip calls that ORMs often induce. That keeps my DBAs happy. For simple table accesses that are indexed well for my common query predicates, I'll use an ORM just to save me some time.

Nowadays, I rely on so-called micro-ORMs and auto-mapper tools more than traditional ORMs. Micro-ORMs are an admission that full-service ORMs are often too complex or that they unnecessarily constrain my use of data. Using less ambitious micro-ORMs like Dapper gives me the benefits I desire without all the weight and ceremony of a full-featured ORM. Lastly, when document-oriented storage seems warranted, I evaluate the criteria objectively and openly admit when NoSQL is the correct choice. However, this is less common than avid NoSQL proponents would have you believe. Relational databases are still the best choices for storage and data handling in the vast majority of cases, mostly because of their maturity, rich feature sets, high reliability and great support.

Nine years later and my friend Ted is still mostly correct. ORMs are an unwinnable war. But they’ve taught us a lot and there are other options for solving what is perhaps the hardest problem out there today: undoing history and redoing it as we think it might have been done under different circumstances. No matter which way the market goes, the technical equivalent of sober diplomacy should triumph over conflict.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Login failed for user . Reason: Token-based server access validation failed with an infrastructure error

This is one of the most commonly searched errors on my blog search (http://search.sqlauthority.com), and lately I realized that I have not written any blog about the cause of this error and how to fix it.

If you ever talk to a SQL expert about a "login failed for user" error, he/she might ask for the state of the message or the complete error message. All "login failed for user" messages have error number 18456, but the state of the message in the ERRORLOG tells the exact cause of the login failure. It is important to note that SQL Server does not report the exact state to the client; the client always sees state 1.

Login failed for user ‘sa’. (.Net SqlClient Data Provider)
——————————
Server Name: BIGPINAL
Error Number: 18456
Severity: 14
State: 1
Line Number: 65536

If you are new to SQL Server, then use the below blog to find the ERRORLOG:

SQL SERVER – Where is ERRORLOG? Various Ways to Find its Location

Here is the message in the ERRORLOG file:

Error: 18456, Severity: 14, State: 58.
Login failed for user ‘sa’. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: <local machine>]

As we can see, the message in the ERRORLOG file has state 58 and the exact reason.
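
If you prefer not to open the physical file, you can also search the current ERRORLOG from T-SQL with the undocumented (but widely used) xp_readerrorlog procedure; treat the parameter order below as a sketch and verify it on your build:

-- 0 = current log, 1 = SQL Server log (2 would be SQL Agent log), then two search strings
EXEC xp_readerrorlog 0, 1, N'Login failed', N'sa';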

Coming back to the message in the title, here is the complete message:

Error: 18456, Severity: 14, State: 11.
Login failed for user ‘domain\user$’. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 10.10.10.10]

There are a few things to note about this message.

  1. It would come for Windows accounts only.
  2. If you are seeing $ in the user name, then it is a machine account / computer account. This mostly happens when a "service" running on the remote machine is connecting to SQL Server.
  3. If you are not seeing $ in the user name, then it is a user account.

Possible causes and solutions for State 11

  1. If you are seeing a login failure for a machine account, then you may want to give permission to the machine account. You need to run T-SQL like below (replace domain and machine name):

CREATE LOGIN [<Domain>\<Machine account>$] FROM WINDOWS

  1. If it’s a windows domain user account, then it needs to have connect permission

GRANT CONNECT SQL TO [DOMAIN\User]

  3. In a few cases, the account is part of a group (or many groups), and you need to make sure there is no "DENY" permission inherited via group membership:

SELECT sp.[name],sp.type_desc
FROM sys.server_principals sp
INNER JOIN sys.server_permissions PERM ON sp.principal_id = PERM.grantee_principal_id
WHERE PERM.state_desc = 'DENY'

  4. If the message only comes with local connectivity and the same account works fine remotely, then it could be a UAC issue. This means that elevation is required to connect properly, but you do not see any of the normal prompts to elevate. To use elevation (while launching SSMS to connect to a locally running instance of SQL Server), right click -> select "Run as administrator".

Depending on the situation, any of the four options above might work. If none of them worked for you, please comment and let me know. I would love to learn from you too.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – How to Migrate Existing Database to Contained Databases

Most of the queries that land in my inbox are based on something I have already written or something people want to explore more. When it comes to learning and exploring the skills, there is no better way than people asking me some tough questions. This blog is an extension to a couple of blogs I have already published on a concept called Contained Databases.

SQL SERVER – ‘Denali’ – A Simple Example of Contained Databases

Beginning Contained Databases – Notes from the Field #037

The question asked was: how can we convert an existing database to a contained database? What is the process and how do I need to plan? I thought this blog could be a direction in that journey.

There are some initial considerations to determine if there are any existing containment breaches in the existing database. SQL Server makes this easy by providing a DMV and an XEvent to highlight the containment breaches for the database. The complete steps to convert a database to a contained database are as follows:

Identify database containment breaches

There are two tools to help identify the containment status of your database. The sys.dm_db_uncontained_entities (Transact-SQL) is a view that shows all the potentially uncontained entities in your database. The database_uncontained_usage Xevent fires when any actual uncontained entity is identified at run time. Once the containment breaches have been identified and resolved, you are ready to go to the next step as part of migration.

View : sys.dm_db_uncontained_entities

This view shows any entities in your database that have the potential to be uncontained. This includes those user entities that may use objects outside the application model. However, because the containment of some entities (for example, those using dynamic SQL) cannot be determined until run time, the view may show some entities that are not actually uncontained.

SELECT * FROM sys.dm_db_uncontained_entities
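
SELECT * works, but major_id alone is not very readable. A slightly friendlier variation (my own, relying on the documented columns of this DMV) resolves object-level breaches back to their names:

SELECT o.name AS breaching_object,
       ue.class_desc,
       ue.feature_name,
       ue.feature_type_name
FROM sys.dm_db_uncontained_entities AS ue
LEFT JOIN sys.objects AS o
    ON ue.class = 1              -- class 1 = object or column
   AND ue.major_id = o.object_id;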

XEvent : database_uncontained_usage

This Xevent fires whenever an uncontained entity is identified at run time, including entities originating in client code. Since this XEvent will fire for actual uncontained entities at run time, it will not identify any uncontained user entities that you have not run.

----------------------------------------------------------
-- Setup the Xevent to log into a ring buffer as follows.
--
-- Create the event session
CREATE EVENT SESSION track_uncontained_usage ON SERVER
ADD EVENT sqlserver.database_uncontained_usage
(
    ACTION(
        sqlserver.session_id,
        sqlserver.session_nt_username,
        sqlserver.client_app_name,
        sqlserver.client_pid,
        sqlserver.client_hostname,
        sqlserver.database_id,
        sqlserver.database_name,
        sqlserver.sql_text
    )
    WHERE
        sqlserver.database_name='MyDatabase'
)
ADD TARGET package0.ring_buffer
GO
	
-- Start the event just created
ALTER EVENT SESSION track_uncontained_usage ON SERVER STATE=START
GO
		
-- Dump events logged so far
DECLARE @X XML
SELECT @X=CAST(XET.TARGET_DATA AS XML) 
FROM SYS.DM_XE_SESSION_TARGETS XET
    JOIN SYS.DM_XE_SESSIONS XE
    ON (XE.ADDRESS = XET.EVENT_SESSION_ADDRESS)
WHERE XE.NAME = 'track_uncontained_usage'
-- note: the XML methods value() and nodes() are case-sensitive and must be lowercase
SELECT EVENTS.value('(@timestamp)[1]','datetime') AS EVENT_TIME
    , D.value('(@name)[1]','varchar(100)') AS FIELD
    , D.value('(value)[1]', 'varchar(max)') AS VALUE
FROM (SELECT @X AS RINGBUF ) B
CROSS APPLY B.RINGBUF.nodes('//RingBufferTarget/event') AS RB(EVENTS) 
CROSS APPLY RB.EVENTS.nodes('data') AS EV(D)
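
Once the investigation is complete, remember to stop and drop the session so it does not keep firing in the background:

-- Stop and remove the event session when done
ALTER EVENT SESSION track_uncontained_usage ON SERVER STATE = STOP;
GO
DROP EVENT SESSION track_uncontained_usage ON SERVER;
GO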

Convert the database to a contained database

The steps here are super simple as follows:

  1. Using Management Studio
    1. In Object Explorer, expand Databases, right-click the database you need to convert, and then click Properties.
    2. On the Options page, change the Containment type option to Partial.
    3. Click OK.
  2. Using T-SQL, we take advantage of the new CONTAINMENT option of the ALTER DATABASE command:

USE [master]
GO
ALTER DATABASE [Accounting] SET CONTAINMENT = PARTIAL
GO
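
One prerequisite worth calling out (it is easy to miss): the instance-level 'contained database authentication' option must be enabled, otherwise the ALTER DATABASE statement above will refuse to run. If it is not already on, enable it first:

EXEC sys.sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO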

Determine if the containment option has taken effect

The sys.databases view has two columns, namely containment and containment_desc, which can be used to determine the containment state of the databases. The following T-SQL will select all databases with containment enabled:

SELECT * FROM sys.databases WHERE containment > 0

As I conclude this blog, I hope this gives you a rough process to work with as you look to convert an existing database to a contained database in your environment.

On a side note, I would love to know if any of you are using contained databases in your environments. What are some of the use cases for using these? Let me know via your comments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – PowerShell – Knowing SQL Server Information

If you search this blog, there are a number of PowerShell scripts that I have shared in the recent past. Some of them include:

Using PowerShell and Native Client to run queries in SQL Server

Identify Disk Latency on SQL Server Box Using PowerShell

How to Find Logical and Physical Processors on SQL Server?

As I write a number of them, every time I explore and do something totally different. So in this blog, I want to show a neat trick using PowerShell. Here we will try to get the details of the SQL Server running on my localhost and then create an HTML page which will hold and show the data.

$OutputFile = "c:\temp\test.html"
$SQLServer = "localhost"
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$Result = @()
$SQLServerInfo = new-object "Microsoft.SqlServer.Management.Smo.Server" $SQLServer
$Result += $SQLServerInfo | Select Name, Edition,ProductLevel, Version, ServerType, Platform, IsClustered, PhysicalMemory, Processors
if($Result -ne $null)
{
$HTML = '<style type="text/css">
#Header{font-family:"Trebuchet MS", Arial, Helvetica, sans-serif;width:100%;border-collapse:collapse;}
#Header td, #Header th {font-size:14px;border:1px solid #98bf21;padding:3px 7px 2px 7px;}
#Header th {font-size:14px;text-align:left;padding-top:5px;padding-bottom:4px;background-color:#A7C942;color:#fff;}
#Header tr.alt td {color:#000;background-color:#EAF2D3;}
</Style>'
$HTML += "<HTML><BODY><Table border=1 cellpadding=0 cellspacing=0 width=100% id=Header>
<TR>
<TH><B>Server Name</B></TH>
<TH><B>Edition</B></TH>
<TH><B>ProductLevel</B></TH>
<TH><B>Version</B></TH>
<TH><B>ServerType</B></TH>
<TH><B>Platform</B></TH>
<TH><B>IsClustered</B></TH>
<TH><B>PhysicalMemory (MB)</B></TH>
<TH><B>Logical Processors</B></TH>
</TR>"
$HTML += "<TR>
<TD>$($Result.Name)</TD>
<TD>$($Result.Edition)</TD>
<TD align=center>$($Result.ProductLevel)</TD>
<TD>$($Result.Version)</TD>
<TD>$($Result.ServerType)</TD>
<TD>$($Result.Platform)</TD>
<TD align=center>$($Result.IsClustered)</TD>
<TD align=center>$($Result.PhysicalMemory)</TD>
<TD align=center>$($Result.Processors)</TD>
</TR>"
$HTML += "</Table></BODY></HTML>"
$HTML | Out-File $OutputFile
}

The output of this will be a file in our c:\temp folder. If we open the same in a browser, we can see the SQL Server information about version, edition, server type, memory, and processors on the box. This is a neat way to explore and play with a scripting language like PowerShell to create something interesting.

Have you ever created something really interesting like this in your environment? Would you be willing to share it via the comments so that others can also take advantage of this collective knowledge?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Interview Question of the Week #032 – Best Practices Around FileGroups

Here is another question I heard during a recent interview at a multinational company. I have observed 1000s of interviews, but this is the first time I have heard this being discussed, and I was very glad to hear it. I have listed the candidate's answer right below it as well.

Question: What are the best practices for filegroups in SQL Server?

Answer:

It is suggested to place transaction logs on separate physical hard drives. In this manner, data can be recovered up to the second in the event of a media failure.

Separating tables and their non-clustered indexes into separate filegroups can improve performance, because modifications to the table can be written to both the table and the index at the same time (a sketch of setting this up follows the answer).

If tables and their corresponding indexes are placed in different filegroups, the two filegroups must be backed up together as a single unit; they cannot be backed up separately.

Set a reasonable size of your database and transaction log (25% of database size).

Leave the Autogrow feature ON for the data files and for the log files, with a reasonable autogrow increment.

Place the log files on different physical disk arrays than the data files to improve I/O performance.
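
To make the filegroup suggestions above concrete, here is a minimal sketch of adding a dedicated filegroup and placing a non-clustered index on it; the database name, file path, sizes, and index are all hypothetical examples.

-- Add a filegroup and a data file to it (names and paths are examples only)
ALTER DATABASE SalesDB ADD FILEGROUP FG_Indexes;
GO
ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesDB_Indexes_01,
    FILENAME = 'E:\SQLData\SalesDB_Indexes_01.ndf',
    SIZE = 512MB,
    FILEGROWTH = 256MB
)
TO FILEGROUP FG_Indexes;
GO
-- Create a non-clustered index on the new filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
ON FG_Indexes;
GO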

Reference: Pinal Dave (http://blog.SQLAuthority.com)