SQL SERVER – PowerShell Script – When Was SQL Server Last Restarted?

I have always wanted to learn a bit of scripting, and I was searching for a good reason to learn something new. As I said, this is my learning exercise, and I am open to picking up scripting tricks from you too, so do let me know in the comments how the script below can be optimized. Now, what was my reason to write this script?

Someone asked me, “How can I find out when SQL Server was started?” There are multiple ways to find out, but I took the simple route and said, “Why don’t you check the SQL Server error logs?” I know the SQL Server error logs can get recycled and this data may not be available, but in most cases this gives us some indication.
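For instance, when the error logs have been recycled, a couple of DMV-based queries report the start time directly. A quick sketch (the sqlserver_start_time column exists only in newer versions; the tempdb trick works everywhere, since tempdb is recreated at every startup):

-- Start time as recorded by the server itself
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
-- tempdb is recreated at every startup, so its creation date also works
SELECT create_date FROM sys.databases WHERE name = 'tempdb';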

The follow-up question was: how can we script this requirement? The script below reads the SQL Server error logs, searches for a specific text, and reports the number of times the string was found. It also shows when the first and the most recent occurrences happened.

param
(
    $serverinstance = ".", # Change this to your instance name
    $stringToSearchFor = "Starting up database ''master''" # Doubled quotes escape the quote inside the filter expression
)
# Load the SMO assembly
[Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo') | Out-Null
# Connect to the instance
$server = New-Object -TypeName "Microsoft.SqlServer.Management.Smo.Server" -ArgumentList $serverinstance
$searchCriteria = "Text like '%{0}%'" -f $stringToSearchFor
$SQLErrorLogs = $server.EnumErrorLogs()
$SearchEntry = @()
ForEach ($SQLErrorLog in $SQLErrorLogs)
{
    # Read each archived error log and keep only the matching rows
    $SearchResult = $server.ReadErrorLog($SQLErrorLog.ArchiveNo).Select($searchCriteria) | Select-Object -Property LogDate, Text
    $SearchEntry = $SearchEntry + $SearchResult
}
# Count occurrences and find the earliest/latest LogDate
$MeasureOccurences = $SearchEntry | Measure-Object -Property LogDate -Minimum -Maximum
$SQLSearchInfo = New-Object psobject -Property @{
    SearchText = $stringToSearchFor
    Occurances = $MeasureOccurences.Count
    MinDateOccurred = $MeasureOccurences.Minimum
    MaxDateOccurred = $MeasureOccurences.Maximum
}
Write-Output $SQLSearchInfo | Format-Table -AutoSize

Do let me know if you have ever used such a scripting capability to search your error logs. You can change the search string and the default instance to suit your environment.
I wrote this generically because I was planning to use it to find error messages inside the error logs, such as “I/O requests taking longer than 15 seconds to complete”. The script can be modified to your needs. Do share your scripts in the comments so that others can benefit too.
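If you prefer to stay inside T-SQL rather than PowerShell, the undocumented but widely used sys.xp_readerrorlog procedure performs a similar search against one log file at a time. A sketch (parameters: log number, log type where 1 = SQL Server error log, and up to two search strings; being undocumented, its behavior can change between versions):

-- Search the current error log (0) for lines containing both strings
EXEC sys.xp_readerrorlog 0, 1, N'Starting up database', N'master';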

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Identify Page Splits Using Extended Events in SQL Server

Digging through some of my older posts revealed I had already written a blog on page splits in SQL Server. This was no surprise, because I am in constant need to reinvent myself as I write these blog posts. Now you might ask why I was searching. Well, when I was speaking on a performance tuning topic at a conference, I described the harmful effects of page splits, and one of the attendees asked: “How can we track page splits inside SQL Server?”

The easiest way to track whether page splits are happening in SQL Server is to use the PerfMon counters. For the record, you can start with “SQL Server: Access Methods -> Page Splits/sec”. This counter is quite handy for understanding whether this behavior is happening on our SQL Server instance. Then came an interesting counter-question: “Is there a way to know about each of the page splits that happen in the system?”
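The same counter can also be read from T-SQL through sys.dm_os_performance_counters. Keep in mind that the DMV exposes the raw cumulative value since startup, so to derive a per-second rate you need to sample it twice and compare the two readings:

-- Cumulative page split count since the instance started
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec';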

This question got me thinking, and I wanted to find out how this could be done. To my surprise, I found the capability had been available to us all along with Extended Events. So here is what I did.

TSQL Script

Let us first create a database for our experiment. We will also create our Extended Event to track the PageSplits.

-- Create a dummy database
CREATE DATABASE PageSplits
GO
-- Get the DB ID, we will use it in next step
SELECT DB_ID('PageSplits')
GO
-- Create Session
CREATE EVENT SESSION [Catch_PageSplit] ON SERVER
ADD EVENT sqlserver.page_split(
WHERE ([package0].[equal_uint64]([database_id],(10))))  -- NOTE: Please make sure you change the DATABASE_ID
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF)
GO
-- Start the event session
ALTER EVENT SESSION [Catch_PageSplit] ON SERVER
STATE = START;
GO

Next, let us create a table with some values. Later we will use this table to create a page split scenario.

USE PageSplits
GO
CREATE TABLE mytest (
myCompany CHAR(5) NOT NULL CONSTRAINT pk_mytest PRIMARY KEY CLUSTERED,
FillData VARCHAR(3000) NOT NULL
)
GO
INSERT mytest ( myCompany, FillData )
VALUES ( '00001', REPLICATE( 'A', 3000 ) ),
( '00002', REPLICATE( 'B', 1000 ) ),
( '00003', REPLICATE( 'C', 3000 ) ),
( '00004', REPLICATE( 'A', 3000 ) ),
( '00005', REPLICATE( 'B', 1000 ) ),
( '00006', REPLICATE( 'C', 3000 ) ),
( '00007', REPLICATE( 'A', 3000 ) ),
( '00008', REPLICATE( 'B', 1000 ) ),
( '00009', REPLICATE( 'C', 3000 ) )
GO

With the table created and populated, open the “Watch Live Data” viewer for our session in SSMS so we can see any page splits as they happen.

[Image: page splits xEvent 01]

Let us create the scenario of page split by updating a row with some extra data.

-- Update to introduce a page split
UPDATE mytest
SET FillData = REPLICATE( 'B', 3000)
WHERE myCompany = '00002'
GO

Don’t forget to look at the Live Data feed for entries. If a page split happens, you will see something like this:

[Image: page splits xEvent 02]

As you can see, this page split happened because of an UPDATE. The two pages in question are also shown, as well as the ID of the database in which the page split occurred.
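Instead of the Live Data viewer, the ring_buffer target can also be queried with T-SQL. This is a sketch of the usual pattern (the exact event field names can vary by version, so adjust the XPath as needed):

SELECT
    evt.value('@timestamp', 'datetime2') AS event_time,
    evt.value('(data[@name="database_id"]/value)[1]', 'int') AS database_id
FROM (
    -- Pull the ring buffer contents for our session as XML
    SELECT CAST(st.target_data AS XML) AS target_xml
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.[address] = st.event_session_address
    WHERE s.name = 'Catch_PageSplit'
      AND st.target_name = 'ring_buffer'
) AS rb
CROSS APPLY rb.target_xml.nodes('//RingBufferTarget/event') AS x(evt);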

I personally thought this was a wonderful capability hidden among the tons of features of Extended Events. It is awesome to learn these finer points.

Now that we have learned something new, let us clean up the database we just created.

-- Clean up time
USE MASTER
GO
DROP DATABASE PageSplits
GO
DROP EVENT SESSION [Catch_PageSplit] ON SERVER
GO

Hope you found this interesting. Do let me know how you have used Extended Events to learn something new inside SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Introduction to Database Engine Tuning Advisor (a.k.a. DTA)

While talking to one of the attendees after my session at GIDS, I realized that I don’t have much information on my blog about one of the fantastic performance tuning tools: the Database Engine Tuning Advisor. It is also called the Database Tuning Advisor, or DTA for short.

The good thing about this tool is that it is part of the product itself. Once the SQL Server client tools are installed, DTA is installed along with them. The tool can suggest index and statistics recommendations for a query given as input, and you need not be an expert in query optimization to use it.

There are various ways to provide the workload input. For a quick demonstration, I will use the sample database AdventureWorksDW2012 and tune a query against it. If you are not aware of this sample database, you can read the information provided in my earlier blog:

SQL Authority News – Download and Install Adventure Works 2014 Sample Databases

Once AdventureWorksDW2012 is restored, let’s assume that we need to tune the query below.

SELECT [ProductKey]
,[DateKey]
,[MovementDate]
,[UnitCost]
,[UnitsIn]
,[UnitsOut]
,[UnitsBalance]
FROM [AdventureWorksDW2012].[dbo].[FactProductInventory]
WHERE [MovementDate] =  '2005-07-06'


There are multiple ways to provide input to the DTA tool. The simplest is to right-click in a Management Studio query window and choose “Analyze Query in Database Engine Tuning Advisor”, as shown below.

[Image: DTA 01]

Once that is clicked, the tool opens as shown below.

[Image: DTA 02]

We need to choose the proper database in “Database for workload analysis:” and in “Select databases and tables to tune”. In our example we are using the AdventureWorksDW2012 database, so we will choose it in both places. Once the database is selected, we can start the analysis via “Start Analysis” under the “Actions” menu, or press the corresponding toolbar button.

[Image: DTA 03]

It takes some time to perform the analysis…

[Image: DTA 04]

…and finally provide recommendation(s).

[Image: DTA 05]

One of the interesting pieces highlighted in the above image is “Estimated Improvement”. For the example we have selected, we see a possible improvement of 99%. The recommendation provided is to create an index.

To apply that improvement, go to the “Actions” menu and choose either “Apply Recommendations” or “Save Recommendations”.

[Image: DTA 06]

Here is the recommendations file which I have saved and opened in SSMS.

[Image: DTA 07]
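I cannot reproduce the generated script exactly here, but for a query like ours the recommendation typically has the shape of a covering nonclustered index along these lines (the index name below is hypothetical; DTA generates its own _dta_index_* names):

-- Hypothetical shape of the recommended covering index
CREATE NONCLUSTERED INDEX [IX_FactProductInventory_MovementDate] -- hypothetical name
ON [dbo].[FactProductInventory] ([MovementDate])
INCLUDE ([ProductKey], [DateKey], [UnitCost], [UnitsIn], [UnitsOut], [UnitsBalance]);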

Here is the comparison of query plan before and after recommendations.

Plan before Index

[Image: DTA 08]

And here is the plan after the index. We can see that the clustered index scan has changed to a non-clustered index seek.

[Image: DTA 09]

In future blogs I will show some more examples and ways to tune a workload using the Database Engine Tuning Advisor.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #019 – How to Reset Identity of Table

Here is the question which I received in email:

Question: I have a table with an identity column. I recently deleted old records and archived them to another location. Now I want to reset the identity of the table to the original value of 1. How should I do that?

Answer: It is very easy to do: you can use the command DBCC CHECKIDENT.

If a table has to start with an identity of 1 on the next insert, the table should be reseeded with an identity of 0. If the identity seed is set below values already in the table, the uniqueness constraint will be violated as soon as the values start to duplicate, and an error will be generated.

DBCC CHECKIDENT ('yourtable', RESEED, 0)
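Here is a minimal, self-contained illustration with a hypothetical table, so you can see the reseed behavior end to end:

-- Hypothetical demo table
CREATE TABLE dbo.IdentityDemo (id INT IDENTITY(1,1) PRIMARY KEY, val CHAR(1));
INSERT dbo.IdentityDemo (val) VALUES ('a'), ('b'), ('c'); -- ids 1, 2, 3
DELETE dbo.IdentityDemo;                                  -- old rows archived elsewhere
DBCC CHECKIDENT ('dbo.IdentityDemo', RESEED, 0);          -- next insert gets identity 1
INSERT dbo.IdentityDemo (val) VALUES ('d');
SELECT id, val FROM dbo.IdentityDemo;                     -- id starts again at 1
DROP TABLE dbo.IdentityDemo;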

Here is the quick video I have created to demonstrate the same example.

http://www.youtube.com/watch?v=V6yRvhwUs9Y

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script: Current IO Related Waits on SQL Server

Playing around with DMVs is a lot of fun. Many a time we need a little inspiration and a problem in hand to solve. Among the many things I monitor, I have personally seen folks have trouble finding what is going wrong when a server turns slow. In a recent conversation, a friend who was troubleshooting a SQL Server instance running inside a VM on Azure was not sure what had suddenly gone wrong, and he was curious to know what was going on LIVE with his SQL Server.

This was an interesting question, and I asked what he wanted to monitor. From his experience he was quick to reply that it was around IO. I went back into my handy little scripts folder to find what I had, and below is the script I shared for this problem.

/* sys.dm_io_pending_io_requests : It is important to recognize that these are current active waits that are I/O related. */
SELECT f.database_id, f.[file_id], DB_NAME(f.database_id) AS database_name,
       f.name AS logical_file_name, f.type_desc,
       CAST(CASE
              -- Handle UNC paths (e.g. '\\fileserver\DBs\readonly_dw.ndf')
              WHEN LEFT(LTRIM(f.physical_name), 2) = '\\'
                THEN LEFT(LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), 3) + 1) - 1)
              -- Handle local paths (e.g. 'C:\Program Files\...\master.mdf')
              WHEN CHARINDEX('\', LTRIM(f.physical_name), 3) > 0
                THEN UPPER(LEFT(LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), 3) - 1))
              ELSE f.physical_name
            END AS NVARCHAR(255)) AS logical_disk,
       fs.io_stall/1000/60 AS io_stall_min,
       fs.io_stall_read_ms/1000/60 AS io_stall_read_min,
       fs.io_stall_write_ms/1000/60 AS io_stall_write_min,
       (fs.io_stall_read_ms / (1.0 + fs.num_of_reads)) AS avg_read_latency_ms,
       (fs.io_stall_write_ms / (1.0 + fs.num_of_writes)) AS avg_write_latency_ms,
       ((fs.io_stall_read_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_read_pct,
       ((fs.io_stall_write_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_write_pct,
       ABS((fs.sample_ms/1000)/60/60) AS sample_HH,
       ((fs.io_stall/1000/60)*100) / (ABS((fs.sample_ms/1000)/60)) AS io_stall_pct_of_overall_sample,
       PIO.io_pending_ms_ticks,
       PIO.scheduler_address
FROM sys.dm_io_pending_io_requests AS PIO
INNER JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
        ON fs.file_handle = PIO.io_handle
INNER JOIN sys.master_files AS f
        ON fs.database_id = f.database_id AND fs.[file_id] = f.[file_id]


A small extension to this script adds sys.dm_os_schedulers, to see whether there is any skew in scheduler usage. The same query is modified to include it. I know it is just an extension, but I thought I would add it here in a single location for completeness.

/* The query below can be used to determine if pending IO is causing work to be queued on the scheduler. */
SELECT f.database_id, f.[file_id], DB_NAME(f.database_id) AS database_name,
       f.name AS logical_file_name, f.type_desc,
       CAST(CASE
              -- Handle UNC paths (e.g. '\\fileserver\DBs\readonly_dw.ndf')
              WHEN LEFT(LTRIM(f.physical_name), 2) = '\\'
                THEN LEFT(LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), 3) + 1) - 1)
              -- Handle local paths (e.g. 'C:\Program Files\...\master.mdf')
              WHEN CHARINDEX('\', LTRIM(f.physical_name), 3) > 0
                THEN UPPER(LEFT(LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), 3) - 1))
              ELSE f.physical_name
            END AS NVARCHAR(255)) AS logical_disk,
       fs.io_stall/1000/60 AS io_stall_min,
       fs.io_stall_read_ms/1000/60 AS io_stall_read_min,
       fs.io_stall_write_ms/1000/60 AS io_stall_write_min,
       (fs.io_stall_read_ms / (1.0 + fs.num_of_reads)) AS avg_read_latency_ms,
       (fs.io_stall_write_ms / (1.0 + fs.num_of_writes)) AS avg_write_latency_ms,
       ((fs.io_stall_read_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_read_pct,
       ((fs.io_stall_write_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_write_pct,
       ABS((fs.sample_ms/1000)/60/60) AS sample_HH,
       ((fs.io_stall/1000/60)*100) / (ABS((fs.sample_ms/1000)/60)) AS io_stall_pct_of_overall_sample,
       PIO.io_pending_ms_ticks,
       PIO.scheduler_address,
       os.scheduler_id,
       os.pending_disk_io_count,
       os.work_queue_count
FROM sys.dm_io_pending_io_requests AS PIO
INNER JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
        ON fs.file_handle = PIO.io_handle
INNER JOIN sys.master_files AS f
        ON fs.database_id = f.database_id AND fs.[file_id] = f.[file_id]
INNER JOIN sys.dm_os_schedulers AS os
        ON PIO.scheduler_address = os.scheduler_address

This is a simple script, but one I have used. Do let me know: which component of SQL Server matters most to you? Has IO ever been a problem area for you? Have you used scripts similar to this in your environments? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backup Failure – Error: 3203, Severity: 16, State: 1

In the recent past I have been writing about different error messages in SQL Server that one encounters. Thanks to my blog readers, who have been sharing a lot of errors with me, either by email or in blog comments, and asking for solutions. When I find something interesting and helpful to others, I write a blog about it. This is one such blog.

2015-05-05 10:00:00.440 Backup       Error: 3203, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       Read on “M:\MSSQL\TLog\Production_1.ldf” failed: 1(Incorrect function.)
2015-05-05 10:00:00.440 Backup       Error: 3041, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       BACKUP failed to complete the command BACKUP LOG ProductionDB. Check the backup application log for detailed messages.

When I was searching for other emails with error message, I found another one as below.

2014-01-05 16:07:38.19 Backup   Error: 3203, Severity: 16, State: 1.
2014-01-05 16:07:38.19 Backup   Read on “M:\DatabaseName_DefaultData.ndf” failed: 1117(failed to retrieve text for this error. Reason: 15100)

This error varies based on the actual cause. In the first error message it is operating system error 1, and in the second case it is operating system error 1117. To convert any operating system error number to text, we can use the Windows net command (net helpmsg) from a command prompt, as shown below.
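For example, running it for the two error numbers above returns something like the following (output reproduced from memory; exact wording may vary slightly by Windows version):

C:\> net helpmsg 1
Incorrect function.

C:\> net helpmsg 1117
The request could not be performed because of an I/O device error.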

[Image: nethelpmsg 01]

If you notice the second message, we are not seeing the text of message 1117; instead we see “failed to retrieve text for this error. Reason: 15100”.

What should you do in case of such an error?

You should start looking at the system and application event logs, based on the operating system error. The example errors shown above are due to hardware-level issues. Here is what we found in the event log:

Log Name: System
Source: Disk
Date: 5/5/2015 09:32:11 AM
Event ID: 51
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: SQLServerMachine
Description: An error was detected on device \Device\Harddisk4\DR4 during a paging operation.

Here is an article that explains Event ID 51: https://support.microsoft.com/en-us/kb/244780/

Have you ever seen similar errors? Put them in the comments section to help others and share your knowledge. The best thing about these errors is that we get to learn interesting concepts from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using Package Configurations in SSIS 2012 and Beyond – Notes from the Field #079

[Notes from Pinal]: I know quite a lot about SSIS, but every single time I read notes from the field, I realize there are so many small but very important features. One such feature is using package configurations in SSIS 2012 and beyond. Packages are the most critical part of SSIS, and configuring them correctly is extremely important.

[Image: Tim Mitchell] Linchpin People are database coaches and wellness experts for a data-driven world. In this 79th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about using package configurations in SSIS 2012 and beyond.


If you are considering upgrading from an older version of SSIS to version 2012 or 2014 but are worried that you’ll lose the ability to use those package configurations you spent so much time developing, there is good news. Although it is not a heavily advertised feature in later versions, the classic package configuration option is still alive and well in SSIS 2012 and 2014.

The Configuration Design Pattern

Storing runtime configuration data outside of SSIS packages is a critical feature of a mature ETL process. Building a design pattern that externalizes values such as database connection strings, file paths, and other data that may change over time can reduce the amount of maintenance effort required later when those values need to be updated.

In versions of SSIS before 2012, the most common way to externalize connection strings and other runtime values was to use one or more SSIS package configurations. Although package configurations could be a little clunky at times, they provided a very effective means through which the ETL developer could avoid hard-coding variable data in packages.

This configuration pattern evolved significantly in 2012.  For new development in SSIS 2012 and later, the typical setup now involves using the SSIS catalog (which was first released with version 2012) to store and execute packages. Similarly, those designs usually include the use of package parameters and SSIS environments to supply runtime values for said parameters. As a result, the package configuration option is no longer the preferred method for variable externalization in new package development.

However, there are many organizations with significant investments in the old-style package configurations. One of the more common questions I’m asked about upgrading SSIS is whether package configurations can still be used in newer versions of SSIS. I’m happy to report that package configurations are still around (albeit a bit harder to find) and are just as usable in later versions of SSIS as they were in prior versions.

Using Package Configurations in SSIS 2012 and Later

In SSIS 2005 and 2008, you could access package configurations by simply right-clicking on an empty space in the package and selecting Package Configurations similar to what is shown below.

[Image: notf 79 1]

However, if you’re using SSIS 2012 or 2014 in project deployment mode (the default setting for new projects), this option no longer exists.

[Image: notf 79 2]

Even though the option no longer appears in this shortcut menu, it can still be accessed directly by using the package properties. In the package properties window, there is a collection called Configurations that will allow you to set one or more package configurations.

[Image: notf 79 3]

Clicking on the ellipsis next to this collection brings up the familiar package configurations menu, in which you can create XML, environment variable, or table storage configurations.

[Image: notf 79 4]

There are a couple of things to keep in mind on using package configurations in SSIS 2012 and beyond. First of all, you can use package configurations in addition to newer configuration methods (including package parameterization and SSIS environments). However, my recommendation is that you choose just one configuration method per project to avoid confusion or conflicting values. Also, be aware that the way package configuration values are logged differs from the way package parameter and SSIS environment values are logged in the SSIS catalog. If you do use the classic package configuration design pattern, be sure to review your execution logs to confirm that you’re getting all of the information you need to test, troubleshoot, and audit your package executions.

Conclusion

The old-style SSIS package configurations have largely yielded to the newer and more popular package parameters and SSIS environments. However, package configurations are still around and are fully accessible in later versions of the product.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Server – Knowing the Use of Deprecated or Discontinued Features

Upgrading your SQL Server environment is inevitable, and I know most of you are doing it, given that SQL Server 2005 support ends in the first half of next year. Whenever I get in front of customers, questions about upgrades always come my way. After a recent session, one of the attendees sent me a mail stating that I had gotten him into trouble. I was taken aback and wrote back to find out what was going wrong and how I could help him get out of trouble. This is what he wrote back:

“Hi Pinal,

It was great to be part of your session the other day, but after the session my management had an ask which I was not sure and want your guidance.

As a DBA, I have just upgraded from a previous version of SQL Server on one of our production environment. Prior to upgrading I utilized the SQL Server 2014 Upgrade Advisor to identify deprecated and discontinued features in SQL Server 2014. Our database and development teams have regression tested the changes identified in the Upgrade Advisor for deprecated and discontinued features and remediated the items found.

As the SQL Server administrator, I need to verify that all of the deprecated and discontinued features have been removed. How can I do this? I have rerun the Upgrade Advisor and assume that is good enough. My management has come back strongly, wanting to make sure that because of this upgrade we are using the latest and greatest capabilities and that nothing legacy remains.”

This was an interesting question, and I wrote back to help him. This blog was inspired by that interaction.

Thinking Simple

There are multiple options, and we need a systematic way to solve this problem. We will start with a simple query against the DMVs to understand whether we are using any deprecated features.

SELECT OBJECT_NAME,
counter_name,
instance_name AS 'Deprecated Feature',
cntr_value AS 'Number of Times Used'
FROM sys.dm_os_performance_counters
WHERE OBJECT_NAME LIKE '%:Deprecated%'
AND cntr_value > 0
ORDER BY 'Number of Times Used' DESC
GO

I told my DBA friend to run the above query to find out if anything is still around. I always suggest keeping a baseline trace that you rerun on an upgraded test environment, so you know whether there are features you still need to work on. This is not always simple, but the result should match what we get from the PerfMon counters; the two must agree. If you are not familiar with the PerfMon side, here are the steps:

Open Performance Monitor (PerfMon), and under the SQL Server counters add the “Deprecated Features / Usage” counter for all instances by selecting them all and clicking Add.

[Image: deprecated discontinued 01]

Conclusion

As I said, this was a simple solution. We also exchanged a few other emails after a week of interaction, which I will try to cover in a different blog. Having said that, I would highly recommend using these simple techniques to find out whether you are using deprecated features. I would also like to learn from my blog readers: have you ever checked for these features in your environments prior to an upgrade, or as a validation step? Do let me know what you have been doing.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Generating Row Number Without Ordering Any Columns

The ROW_NUMBER function is used to generate a serial number for a given record set. But you always need an ORDER BY clause so that the numbers are assigned in a specific order.

Let us create the following dataset

CREATE TABLE #TEST (NAMES VARCHAR(100))
INSERT INTO #TEST
SELECT 'PINAL' UNION ALL
SELECT 'MAHESH' UNION ALL
SELECT 'SUNIL' UNION ALL
SELECT 'ARVIND' UNION ALL
SELECT 'MURUGAN'

Suppose you want to generate a row number; you can use the following statement:

SELECT *,ROW_NUMBER() OVER (ORDER BY NAMES) AS SNO FROM #TEST

The result is:

[Image: orderneeded1]
The numbers are assigned based on the ascending order of the name.

But what if you want to generate row numbers in the same order the data was added?

Can you omit the ORDER BY Clause?

SELECT *,ROW_NUMBER() OVER () AS SNO FROM #TEST

The above throws the following error

Msg 4112, Level 15, State 1, Line 1
The function ‘ROW_NUMBER’ must have an OVER clause with ORDER BY.

But there is a way: just don’t ORDER BY any column; ORDER BY a constant expression instead, as shown below. (Note that without a real ordering column, SQL Server makes no guarantee that the numbers follow insertion order, although for a small table like this they usually do in practice.)

SELECT *,ROW_NUMBER() OVER (ORDER BY (SELECT 100)) AS SNO FROM #TEST

The result is

[Image: orderneeded2]
In place of SELECT 100 you can use anything: SELECT 1, SELECT ‘A’, SELECT NULL, etc.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Who ALTER’ed My Database? Catch Them Via DDL Trigger

Here is an interesting comment from one of my previous blog posts, on how to change the compatibility level of a SQL Server database. Some of these comments are interesting and can provide extensions to blog ideas, like this one. The comment was:

Under what circumstances would SQL Server automatically change the compatibility level ? I am currently working on SQL Server 2008 and there is a user database which has a compatibility level of 80. Apparently this database was moved to this server prior to my time, and the compatibility level was never changed/upgraded. When I execute the following script, it is set successfully to 100, however, the next day, it goes back to 80. Any ideas ?

USE [master]
GO
ALTER DATABASE [MyDB]
SET COMPATIBILITY_LEVEL = 100
GO

There is only a SQL Maintenance plan to perform integrity check, full database backups / transaction log backups, index rebuild/update stats on this instance.

Here is my reply:

SQL Server doesn’t do that automatically. There must be some automated activity doing it.

The reader then asked how to track such changes in SQL Server. I replied separately over email and am now sharing the answer with you. I wrote a script that creates a DDL trigger on the ALTER DATABASE command. The script below does these things:

  1. Find who fired the ALTER DATABASE command.
  2. From which server it was fired.
  3. What the exact command fired was.

Here is the code of the trigger.

-- Drop the trigger if it already exists
IF EXISTS (SELECT * FROM sys.server_triggers
           WHERE name = 'DDL_Database_Trigger')
DROP TRIGGER DDL_Database_Trigger ON ALL SERVER;
GO
CREATE TRIGGER DDL_Database_Trigger
ON ALL SERVER
FOR ALTER_DATABASE
AS
DECLARE @cmd VARCHAR(200)
DECLARE @who VARCHAR(200)
DECLARE @fromwhere VARCHAR(200)
-- Pull the command text, login, and server name out of the EVENTDATA() XML
SELECT @cmd = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
SELECT @who = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(max)')
SELECT @fromwhere = EVENTDATA().value('(/EVENT_INSTANCE/ServerName)[1]', 'nvarchar(max)')
-- Write the details to the ERRORLOG
RAISERROR (N'!!!Database ALTERED!!!
Command - %s
By - %s
From - %s.', 10, 1, @cmd, @who, @fromwhere) WITH LOG
GO

For testing purposes, I fired the below two commands:

USE [master]
GO
ALTER DATABASE [SQLAuthority] SET COMPATIBILITY_LEVEL = 100
GO
ALTER DATABASE [SQLAuthority]
ADD FILE
( NAME = N'MM',
FILENAME = N'E:\InvalidPath\MM.ndf', SIZE = 5120KB, FILEGROWTH = 1024KB)
TO FILEGROUP [PRIMARY]

And here is what I saw in the ERRORLOG:

[Image: ddl trigger 01]

If you have not pre-configured a DDL trigger to catch that information in the ERRORLOG, don’t worry. You still have some information logged in the default trace. Go through the blog below, where I explored the Standard Reports that are available out of the box with SQL Server.

SQL SERVER – SSMS: Schema Change History Report
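The default trace can also be queried directly. This is a sketch of the usual pattern, assuming the default trace is enabled (EventClass 164 is Object:Altered, which is what the Schema Change History report reads):

-- Find the default trace file, then filter for Object:Altered events
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;
SELECT StartTime, LoginName, HostName, ApplicationName, DatabaseName, ObjectName
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 164 -- Object:Altered
ORDER BY StartTime DESC;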

Have you ever used such tricks on your server? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)