SQLAuthority News – Presenting 3 Technology Session at GIDS 2015

Great Indian Developer Summit is my favorite technology event, and I have been extremely happy to present technology sessions here for over 5 years. Just like every year, this year I will be presenting three technology sessions on SQL Server 2014. This time the event is at two locations: the first is in Bangalore and the second is in Hyderabad.

If you are able to attend this event in person, do show up at my sessions as I will have some goodies to share. Here is the link to the website of the event. If you are not going to be at the event, I suggest you sign up for my newsletter, as I will be sending out all the scripts and demos for this event in email. The event organizer is not planning to record the sessions.

Performance in 60 Seconds – SQL Tricks Everybody MUST Know

Date and Time: APRIL 21, 2015 14:00-15:00
Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Sir C.V.Raman Avenue, Bangalore, India

Data and databases are a very important aspect of application development for businesses. Developers often come across situations where they face a slow server response even though their hardware specifications are above par. This session is for all the developers who want their server to perform at blazing fast speed but want to invest very little time to make it happen. We will go over various database tricks which require almost no time to master and practically no SQL coding at all. After attending this session, developers will only need 60 seconds to improve the performance of their database server in their implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session.

Troubleshooting CPU Performance Issue for SQL Developers

Date and Time: APRIL 21, 2015 17:30-18:30
Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Sir C.V.Raman Avenue, Bangalore, India

Developers are in the most challenging situation when they see the CPU running at 100%. There are many solutions to this situation, but there is very little time to implement them. In this critical situation developers need a sure solution which gives stability to their system and buys more time to troubleshoot the problem. Many believe Performance Tuning and Troubleshooting is an art which has been lost in time. The truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources which, when bottlenecked, create performance problems: CPU, IO, and Memory. In this session we will focus on some of the common performance issues and their resolution. If time permits we will cover other performance-related tips and tricks. At the end of the session attendees will have clear ideas and action items regarding what to do when facing any of the above resource-intensive scenarios.

SQL Server 2014 – A 60 Minutes Crash Course

Date and Time: APRIL 25, 2015 09:55-10:55
Hyderabad Marriott Hotel & Convention Centre, Tank Bund Road, Opposite Hussain Sagar Lake, Hyderabad, India

Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2014 has introduced many new features. In this 60-minute session we will be learning quite a few of them. Here is a glimpse of the features we will cover:

Live plans for long running queries
Transaction durability and its impact on queries
New cardinality estimate for optimal performance
In-memory OLTP optimization for superior query performance
Resource governor and IO enhancements
Columnstore indexes and performance tuning
And many more tricks and tips
Developers will walk out with scripts and knowledge that can be applied to their servers, immediately post the session. Additionally, all attendees of the session will have access to learning material presented in the session. This one hour will be the most productive one hour for any developer who wants to quickly jump start with SQL Server 2014 and its new features.
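
As a small taste of the transaction durability item in the list above, here is a minimal sketch of SQL Server 2014's delayed durability option (the database name is hypothetical, and you should weigh the small data-loss window before enabling it):

-- Allow delayed durability at the database level (hypothetical database name)
ALTER DATABASE [MyDatabase] SET DELAYED_DURABILITY = ALLOWED;
GO
-- Opt in per transaction: the COMMIT returns before the log flush hits disk
BEGIN TRANSACTION;
-- ... your DML here ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
GO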

Remember – do not worry if you can’t attend the event. Just subscribe to the newsletter and I will share all the scripts and slides in email right after the event.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Walking the Table Hierarchy in Microsoft SQL Server Database – Notes from the Field #076

[Note from Pinal]: This is the 76th episode of the Notes from the Field series. Hierarchy is one of the most important concepts in SQL Server, but there are few clear tutorials for it. I have often observed that this simple concept is ignored or poorly handled due to lack of understanding.

In this episode of the Notes from the Field series, database expert Kevin Hazzard explains how to walk the table hierarchy in a Microsoft SQL Server database. Read the experience of Kevin in his own words.


When you need to manage a set of tables in Microsoft SQL Server, it’s good to know the required order of operations. The order could be hard-coded into the process but such approaches tend to fail when the database schema evolves. Instead, I prefer to use the catalog view named [sys].[foreign_keys] to discover the relationships between tables dynamically. Long ago, I wrote a function called [LoadLevels] that I’ve used in hundreds of processes to make them reusable and more resilient. The code for that function is shown in Listing 1:

-- ==========================================================================
-- Description: Get the load levels by tracing foreign keys in the database.
-- License:     Creative Commons (Free / Public Domain)
-- Rights:      This work (Linchpin People LLC Database Load Levels Function,
--              by W. Kevin Hazzard), identified by Linchpin People LLC, is
--              free of known copyright restrictions.
-- Warranties:  This code comes with no implicit or explicit warranties.
--              Linchpin People LLC and W. Kevin Hazzard are not responsible
--              for the use of this work or its derivatives.
-- ==========================================================================
CREATE FUNCTION [dbo].[LoadLevels]()
RETURNS @results TABLE
(
    [SchemaName] SYSNAME,
    [TableName] SYSNAME,
    [LoadLevel] INT
)
AS
BEGIN
    WITH
    [key_info] AS
    (
        SELECT
            [parent_object_id] AS [from_table_id],
            [referenced_object_id] AS [to_table_id]
        FROM [sys].[foreign_keys]
        WHERE
            [parent_object_id] <> [referenced_object_id]
            AND [is_disabled] = 0
    ),
    [level_info] AS
    (
        SELECT -- anchor part
            [st].[object_id] AS [to_table_id],
            0 AS [LoadLevel]
        FROM [sys].[tables] AS [st]
        LEFT OUTER JOIN [key_info] AS [ki] ON
            [st].[object_id] = [ki].[from_table_id]
        WHERE [ki].[from_table_id] IS NULL
        UNION ALL
        SELECT -- recursive part
            [ki].[from_table_id],
            [li].[LoadLevel] + 1
        FROM [key_info] AS [ki]
        INNER JOIN [level_info] AS [li] ON
            [ki].[to_table_id] = [li].[to_table_id]
    )
    INSERT @results
    SELECT
        OBJECT_SCHEMA_NAME([to_table_id]) AS [SchemaName],
        OBJECT_NAME([to_table_id]) AS [TableName],
        MAX([LoadLevel]) AS [LoadLevel]
    FROM [level_info]
    GROUP BY [to_table_id];
    RETURN
END

The [LoadLevels] function walks through the table relationships in the database to discover how they’re connected to one another. As the function moves from one relationship to the next, it records the levels where they exist in the hierarchy. A partial output of the function as executed against Microsoft’s AdventureWorks2014 sample database is shown in Figure 1.
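
If you want to see the most dependent tables first, a minimal usage sketch looks like this:

SELECT [SchemaName], [TableName], [LoadLevel]
FROM [dbo].[LoadLevels]()
ORDER BY [LoadLevel] DESC, [SchemaName], [TableName];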

Ordering to show the highest load levels first, notice that the most dependent table in the AdventureWorks2014 database is [Sales].[SalesOrderDetail]. Since the load levels are zero-based in the function output, that table sits eight levels deep in the hierarchy. In other words, if I were developing an Extract, Transform & Load (ETL) system for [Sales].[SalesOrderDetail], there are at least seven other tables that need to be loaded before it. For all 71 tables in the AdventureWorks2014 database, the function reveals some interesting facts about the load order:

  • Level 0 – 25 tables, these can be loaded first
  • Level 1 – 8 tables, these can be loaded after level 0
  • Level 2 – 8 tables, these can be loaded after level 1
  • Level 3 – 19 tables, …
  • Level 4 – 7 tables
  • Level 5 – 1 table
  • Level 6 – 1 table
  • Level 7 – 2 tables, these must be loaded last

The [LoadLevels] function uses two Common Table Expressions (CTE) to do its work. The first one is called [key_info]. It is non-recursive and gathers just the foreign keys in the database that aren’t self-referencing and aren’t disabled. The second CTE is called [level_info] and it is recursive. It starts by left joining the tables in the database to the foreign keys from the first CTE, picking out just those tables that have no dependencies. For the AdventureWorks2014 database, for example, these would be the 25 tables at level zero (0).

Then the recursion begins by joining the output from the previous iteration back to the key information. This time, however, the focus is on the target of each foreign key. Whenever matches are found, the reference level is incremented by one to indicate the layer of recursion where the relationship was discovered. Finally, the results are harvested from the [level_info] CTE by grouping the table object identifiers, resolving the schema and table names, and picking off the maximum load level discovered for each entity.

The reason for grouping and selecting the maximum load level for any table becomes clear if you remove the GROUP BY clause and the MAX() operator from the code. Doing that reveals every foreign key relationship in the database. For example, in the AdventureWorks2014 database, the [Sales].[SalesOrderDetail] table appears in 22 different relationships, ranging from three levels deep in the hierarchy to eight levels deep, output as [LoadLevel] 7 in Figure 1. By grouping and selecting the maximum level for any table, I’m certain to avoid loading tables too early in my dynamic ETL processes.
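
Inside the function, that modified final SELECT would look something like this (a sketch with the GROUP BY and MAX() removed, returning one row per relationship instead of one per table):

    SELECT
        OBJECT_SCHEMA_NAME([to_table_id]) AS [SchemaName],
        OBJECT_NAME([to_table_id]) AS [TableName],
        [LoadLevel]
    FROM [level_info];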

In summary, you can use the [LoadLevels] function to identify dependencies enforced by foreign key constraints between tables in your database. This information is very useful when developing a process to copy this data to another database while preserving the referential integrity of the source data.

If you want to get started with SQL Server with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding What Policies Are Enabled on Our Databases

When I wrote about Policy Management last week (SQL SERVER – Introduction to Policy Management), little did I know I would get queries from blog readers on the basics. As I have always said, Policy Management is an underappreciated capability inside SQL Server. In one of the mails, I was asked – “How do I know which policies are enabled on my server?” Before I answer that with a simple query against our metadata, let me walk through what happens in words before going into the solution.

When the SQL Server “policy engine” executes a policy, it determines which objects to evaluate against the policy’s condition. It uses the target set and target set level data to determine this. The information can be viewed in the syspolicy_target_sets and syspolicy_target_set_levels views. A target set has data for the target type (for example, [Database]) and the target skeleton (for example, [Server/Database]). This target skeleton includes all databases on the server. The target set can be further narrowed with a filter. This filter can be another condition. This is the data stored in the target set level. Below is a query to help see this metadata.

/* Query the metadata to find the Policy, Condition, and Target Set information in our database */
SELECT p.policy_id,
       p.is_enabled,
       p.name AS 'policy_name',
       c.condition_id,
       c.name AS 'condition_name',
       c.expression AS 'condition_expression',
       ts.target_set_id,
       ts.[type],
       ts.type_skeleton,
       tsl.condition_id AS 'target_set_condition_id'
FROM msdb.dbo.syspolicy_policies p
INNER JOIN msdb.dbo.syspolicy_conditions c
    ON p.condition_id = c.condition_id
INNER JOIN msdb.dbo.syspolicy_target_sets ts
    ON ts.object_set_id = p.object_set_id
INNER JOIN msdb.dbo.syspolicy_target_set_levels tsl
    ON ts.target_set_id = tsl.target_set_id
-- WHERE p.is_enabled <> 0 -- Use this to get only enabled Policies on the DB

If you plan to run this on a SQL Server 2012 (or later) box, then you are likely to see the AlwaysOn-related policies for reference. Go ahead and uncomment the last line to see only the policies that are enabled on your SQL Server box. I would be curious to know how many of you actively use this capability and for what reasons. Do let me know via your comments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script – Removing Multiple Databases from Log Shipping

Have you ever come across a situation where you have a large number of databases in log shipping and you have to remove all of them? If you use SQL Server Management Studio, it would take a long time because you have to right-click on each database, go to Properties, choose the Transaction Log Shipping tab, choose Remove for the secondary, uncheck the box, and then hit OK. Though monotonous, these steps are painful when the number of databases is really huge.

In the background, SSMS executes stored procedures to remove the metadata from the log shipping related tables in the MSDB database.

Below is the sample which runs on the primary. I have changed the input parameters.

-- primary
EXEC master.dbo.sp_delete_log_shipping_primary_secondary
     @primary_database = N'Primary_Database_Name'
    ,@secondary_server = N'Secondary_Server_Name'
    ,@secondary_database = N'Secondary_Database_Name'
GO
EXEC master.dbo.sp_delete_log_shipping_primary_database @database = N'Primary_Database_Name'
GO

And below is what runs on the secondary (here also I have changed the input parameter).

-- secondary
EXEC master.dbo.sp_delete_log_shipping_secondary_database
     @secondary_database = N'Secondary_Database_Name'

Essentially, if we want to remove log shipping we need the primary database name, the secondary database name, and the secondary server name. I have used the metadata tables to find those details.

-- Script for removal of Log Shipping from primary
SET NOCOUNT ON
GO
-- Accumulate one EXEC statement per log-shipped database
DECLARE @ExecString VARCHAR(MAX) = ''
SELECT @ExecString = @ExecString + 'EXEC master.dbo.sp_delete_log_shipping_primary_secondary
 @primary_database = N''' + pd.primary_database + '''
,@secondary_server = N''' + ps.secondary_server + '''
,@secondary_database = N''' + ps.secondary_database + '''
GO
'
FROM msdb.dbo.log_shipping_primary_secondaries ps
INNER JOIN msdb.dbo.log_shipping_primary_databases pd
    ON ps.primary_id = pd.primary_id
SELECT @ExecString
GO
DECLARE @ExecString VARCHAR(MAX) = ''
SELECT @ExecString = @ExecString + 'EXEC master.dbo.sp_delete_log_shipping_primary_database @database = N''' + primary_database + '''
GO
'
FROM msdb.dbo.log_shipping_primary_databases
SELECT @ExecString
GO

Once you run the script, you will get output with the execute statements; just copy and paste them into a new query window and run.

Here is a similar script for the secondary server.

-- Script for removal of LS from Secondary
DECLARE @ExecString VARCHAR(MAX) = ''
SELECT @ExecString = @ExecString + 'EXEC master.dbo.sp_delete_log_shipping_secondary_database @secondary_database = N''' + secondary_database + '''
GO
'
FROM msdb.dbo.log_shipping_secondary_databases
SELECT @ExecString
GO

Note: The above scripts generate commands which you need to copy and paste into a new query window on the respective servers. Please verify the output before running it on a production server.

Trying to automate and use the power of T-SQL is one of the best things, I have always felt. Do let me know if you have done these sorts of things in your environments. Can you share some of your experiences?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Is XP_CMDSHELL Enabled on the Server?

I am not a big fan of using command line utilities in general. But from time to time I do explore and play around with command line tools that make my life easier. Having said that, when working with SQL Server, I often give out the recommendation not to use the native xp_cmdshell commands with SQL Server. Even the SQL_Server_2012_Security_Best_Practice_Whitepaper talks about xp_cmdshell and recommends avoiding it if possible. These are basic building blocks of security that I highly recommend to my customers from time to time.

In this blog, let me show you two ways of working with this setting.

TSQL Way

The easier way using standard T-SQL commands is shown below.

-- To allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO
-- To update the currently configured value for advanced options.
RECONFIGURE
GO
-- Disable xp_cmdshell option now
EXEC sp_configure 'xp_cmdshell', 0
GO
RECONFIGURE
GO

This works great when you have direct access to the SQL Server box and you want to just do it on the box under question.
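
If you only want to check the current state without changing anything, a quick query against sys.configurations works as well (a minimal sketch):

-- value_in_use = 1 means xp_cmdshell is currently enabled
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'xp_cmdshell';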

PowerShell Way

For the world of administrators who manage literally tens or hundreds of servers in a datacenter, going server-by-server with a T-SQL script doesn’t work. They want something scriptable. Below is a PowerShell script that will find out whether the xp_cmdshell option is enabled on a server or not.

# Load the SQL Server Management Objects (SMO) assembly
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

# Connect to the local default instance; use "MachineName\InstanceName" for a named instance
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server(".")

# XPCmdShellEnabled is a configuration property; RunValue holds the currently active setting
if ($srv.Configuration.XPCmdShellEnabled.RunValue -eq 1)
{
    Write-Host "xp_cmdshell is enabled on instance" $srv.Name
}
else
{
    Write-Host "xp_cmdshell is disabled on instance" $srv.Name
}

I am sure this can be run in a loop for all your servers. Here the local default instance is being used; substitute the appropriate machine and instance name for your servers. I am sure you will be able to run the same script in PowerShell ISE to learn your instance’s status with respect to xp_cmdshell. I would highly recommend you share your experience and how you have used such scripts in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #015 – How to Move TempDB to Different Drive

Here is one of the most popular questions I often come across:

Question – How do you move TempDB to a different drive when the log files are full?

Answer – In most of the cases I have observed, one has to move TempDB to a different drive when the TempDB log file has filled up, or when one believes moving it to a different drive will give the file room to grow. Sometimes users also move it to a different drive for performance reasons, as keeping TempDB on a different drive from your main databases helps.

Here is the error users usually encounter when they come across TempDB log file growth:

The log file for database ‘tempdb’ is full.
Back up the transaction log for the database to free up some log space.

Here is my earlier blog post where I have described how one can change the TempDB location to another drive: SQL SERVER – TempDB is Full. Move TempDB from one drive to another drive.
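
For quick reference, the core of the approach described there is a pair of ALTER DATABASE statements (a minimal sketch – the drive and folder here are assumptions, and the new location takes effect only after the SQL Server service restarts):

-- tempdev and templog are the default logical file names for TempDB
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
GO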

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – CONCAT function and NULL values

CONCAT is a new T-SQL function introduced in SQL Server 2012. It is used to concatenate values: it accepts two or more parameter values separated by commas, and all parameter values are concatenated into a single string.

A simple example is

SELECT CONCAT('SQL Server',' 2012')

which results in SQL Server 2012

The same can be done using + in earlier versions

SELECT 'SQL Server'+' 2012'

which results in SQL Server 2012

But did you know the advantage of the CONCAT function over +?

SELECT 'SQL Server'+' 2012'+NULL

When you execute the above, the result is NULL

But the CONCAT function will simply ignore NULL values

SELECT CONCAT('SQL Server',' 2012',NULL)

The result is SQL Server 2012

So by using the CONCAT function, you do not need to worry about handling NULL values.
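
On versions before SQL Server 2012, you would have to guard each potentially NULL value yourself, for example:

-- Pre-2012 equivalent: wrap each nullable value with ISNULL
DECLARE @middle VARCHAR(10) = NULL;
SELECT 'SQL Server' + ' 2012' + ISNULL(@middle, '')

which also results in SQL Server 2012.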

How many of you know this?

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Login Failed For User – Reason Server is in Script Upgrade Mode

I just can’t get enough of the error messages that land in my Inbox almost every single day. I do my due diligence of searching my blogs, and I try to hang around in the forums from time to time to learn what errors people are getting. While browsing through the forums, I found that this error has been faced by many users. It is one of the most common errors that seems to come up. Let me know if you have ever faced the following error in your environments:

Login failed for user ‘MyDomain\Username’. Reason: Server is in script upgrade mode. Only administrator can connect at this time. (.Net SqlClient Data Provider)

I did some more research and found the below facts:

  • Whenever any SQL Server patch is applied, setup patches the binaries first.
  • During the restart of the instance, SQL Server startup goes through “script upgrade mode” during the recovery phase.
  • Script upgrade mode is the phase where objects inside the databases are upgraded based on the recently applied patch.
  • Based on the features installed and the number of databases available, this can take a varying amount of time.

The best way to track the status of the upgrade scripts is to keep observing the ERRORLOG continuously. Since we would not be able to connect to SQL Server via SSMS or client tools, we need to open the ERRORLOG via Windows Explorer by going to its physical location. Once the “Recovery is complete” message is printed in the ERRORLOG, we should not see the login error any further.
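
If you are not sure where the ERRORLOG lives, you can look up the path ahead of time, while the server is still accepting connections (a minimal sketch; SERVERPROPERTY('ErrorLogFileName') is available on recent versions):

SELECT SERVERPROPERTY('ErrorLogFileName') AS [ErrorLogPath];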

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – The Basics of the File System Task – Part 2 – Notes from the Field #075

[Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the key essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications.

In this episode of the Notes from the Field series I asked SSIS expert Andy Leonard a very crucial question – What are the basics of the File System Task and where do we start with it? Andy was very kind to answer the question and provide plenty of information about how a novice developer can learn SSIS from the beginning and become an expert in the technology.


Many data integration scenarios involve reading data stored in flat files or performing extracts from a relational (or legacy) system into flat files. Learning how to configure and use the SQL Server Integration Services (SSIS) File System Task will support your efforts when loading data to and from flat files. In a previous article, I described configuring the File System Task to archive a file. In this article, I will repeat the exercise, but I will add flexibility (and complexity – the two always go together) by using SSIS Variables to manage the Source File and Destination Directory locations. This article is an edited version of The Basics of the File System Task, Part 1. I chose to write it this way for those who find this article but haven’t read Part 1.

Remember: SSIS is a software development platform. With “SQL Server” included in the name, it is easy for people to mistake SSIS for a database tool or accessory, but the Control Flow Tasks put that confusion to rest.

SSIS provides several Control Flow tasks.

In this article I provide an advanced example of configuring the SSIS File System Task, shown in Figure 1:


Figure 1: SSIS File System Task

The File System Task provides one way to implement an SSIS Design Pattern for source file archival. When you first open the File System Task Editor, you will note several properties in the property grid. Whenever you see an Operation property in an SSIS task editor, know that that property drives the other property selections. Options for the Operation property of the SSIS File System Task are shown in Figure 2:

Figure 2: SSIS File System Task Operation Property Options

The Operation options are:

  • Copy directory
  • Copy file (default)
  • Create directory
  • Delete directory
  • Delete directory content
  • Delete file
  • Move directory
  • Move file
  • Rename file
  • Set Attributes

I stated the Operation property drives the other property selections. Take a look at the File System Task Editor when I change the Operation option from “Copy file” (Figure 2) to “Delete file” as shown in Figure 3:

Figure 3: The File System Task Editor with the “Delete file” Operation Selected

See? There are fewer properties required for the “Delete file” operation. The available properties differ even more for the “Set Attributes” operation, shown in Figure 4:

Figure 4: The File System Task Editor with the “Set Attributes” Operation Selected

The Operation property changes the editable properties, exposing some and hiding others. With flexibility comes complexity. Even though the File System Task is complex, I’ve found the task to be stable and extremely useful. Let’s look at a practical example: using the File System Task to archive a flat file.

To begin configuring the SSIS File System Task for file archival, select the “Move file” operation as shown in Figure 5:

Figure 5: SSIS File System Task with the “Move file” Operation Selected

Using the IsSourcePathVariable and IsDestinationPathVariable properties extends the flexibility of the File System Task and further changes the list of available properties in the property grid, as shown in Figure 6:

Figure 6: Opting to Use Variables for Source and Destination Paths

Note the SourceConnection and DestinationConnection properties are hidden and the SourceVariable and DestinationVariable properties are available in their place. Click the SourceVariable property dropdown, and click “<New variable…>” as shown in Figure 7:

Figure 7: Selecting “<New variable…>” from the SourceVariable Property

When the Add Variable window displays, enter “SourceFilePath” for the variable name property and a full path to your source file in the Value textbox, as shown in Figure 8:

Figure 8: Configuring the SourceFilePath SSIS Variable

Click the OK button to close the Add Variable window and return to the File System Task Editor. Click the DestinationVariable property dropdown, and then click “<New variable…>” to open a new Add Variable window. Configure the new variable by setting the Name property to “DestinationFolder” and the Value property to the location you wish to move the file to, as shown in Figure 9:

Figure 9: Configuring the DestinationFolder SSIS Variable

Click the OK button to close the Add Variable window and return to the File System Task Editor. You have configured an SSIS File System Task to move a file using SSIS Variables to manage the source and destination of the file, as shown in Figure 10:

Figure 10: An SSIS File System Task Configured to Move a File Using SSIS Variables

The SSIS File System Task is now configured to archive a file. Let’s test it! Click the OK button to close the File System Task Editor. Press the F5 key or select SSIS->Start Debugging to test your work. My result is shown in Figure 11:

Figure 11: Successful Test Execution of the SSIS File System Task

Viewing the source and destination directories, we see the file was successfully moved – shown in Figure 12:

Figure 12: The File, Moved!

One tricky part when configuring the SSIS File System Task to move a file is realizing that you need to select the actual file for the source and the directory for the destination.

As I stated earlier, the SSIS File System Task is powerful, flexible, and robust. This article has demonstrated another way you can use the File System Task to archive files. Archiving files after loading the data they contain is a common practice in data integration.

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Introduction to Policy Management

As March comes around, most of us are chasing our dreams and hoping the much-awaited year-end review comes out well for us. I know many pray and don’t realize it is just an assessment of what was done over the whole of last year. It is not a myopic view, but a view based on rules about how consistent we have been the complete year. These rules and policy decisions are generally governed by how our organizations work and how policy decisions are made in each of the regions where we work. Though we understand how these policies work, we rarely look at a policy as a guiding rail for how we work. In the same context, I wish all of us would also understand the nuances of Policy Management with SQL Server too.

SQL Server 2008 introduced this neat feature, which is less appreciated by many in the SQL Server world. This blog is to set some context into what the building blocks of SQL Server Policy Management are. If we understand these, then in the future we can build on some real-world examples.

Target Set

Targets are entities that can be managed by Policy Based Management. All targets of a similar type within an instance of SQL Server form a target hierarchy. A target filter is then applied to all targets within the target hierarchy to form a target set. The Policy Based Management engine, which is implemented in SQLCLR, traverses the target set and compares the policy’s condition to each target in the target set to determine compliance or a violation.

Facet

A facet has properties that model the behavior or characteristics of managed targets. The properties are built into the facet and are determined by the facet developer. SQL Server currently supports Microsoft built-in facets.

Condition

A condition is a Boolean expression that sets the allowed states of a particular facet property for the managed target. A condition must evaluate to either true or false. A condition can include multiple properties of a facet combined with AND/OR.

Policy

A policy includes a condition based on facet properties that is evaluated against a set of targets. A policy can have only one condition. The policy is executed with one of the evaluation modes, which determines how it is implemented and whether it logs or prevents out-of-compliance violations.

Evaluation Mode

The On Demand evaluation mode can either check the validity of the policy or configure the target to adhere to the policy. Not all facet properties are settable; some are read-only. These read-only properties cannot be configured.

The Check On Change (prevent out of compliance) evaluation mode uses a DDL Server trigger to enforce a policy. When the policy is created, a DDL Server trigger is updated to fire on create, drop or alter activity that starts the policy execution. If the policy is violated, it will roll back the transaction which prevents the change.

The Check On Change (log out of compliance) evaluation mode uses event notification and service broker to log a policy violation. When the policy is created, an event notification is updated to capture create, drop or alter activity which starts the policy execution. If the policy is violated, it will log the policy violation along with information to the Policy Based Management execution history and health state tables in msdb.

The Check on Schedule (log out of compliance) evaluation mode uses a SQL Server Agent job and schedule to start the policy execution. If the policy is violated, it will log the policy violation along with information to the Policy Based Management execution history and health state tables in msdb.
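
For the logging modes, the evaluation results land in msdb, so a quick way to review recent outcomes is to query the execution history (a minimal sketch, assuming the syspolicy execution history view in msdb):

-- Most recent policy evaluations first
SELECT h.policy_id,
       h.start_date,
       h.result,
       h.exception_message
FROM msdb.dbo.syspolicy_policy_execution_history h
ORDER BY h.start_date DESC;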

Subscriptions

A subscription allows a database to subscribe to a policy group. All of the policies within the policy group will be applied to a single database. Policy groups and subscriptions make it easier to manage a large group of policies for a database.

Now that we have the basics cleared up, in subsequent blog posts we will take examples of each of the above building blocks and bring in some real-world use cases. As readers, I would like to know how many of you are using Policy Management in your environments today. Can you share a few examples where you found it useful?

Reference: Pinal Dave (http://blog.sqlauthority.com)