Interview Question of the Week #019 – How to Reset Identity of Table

Here is the question which I received in email:

Question: I have a table which has an identity column. I have recently deleted the old data and archived it to another place. Now I want to reset the identity of the table to its original value of 1. How should I do that?

Answer: It is very easy to do so – you can use the command DBCC CHECKIDENT to do the same.

If a table has to start with an identity value of 1 on the next insert, then the table should be reseeded with the identity set to 0. If the identity seed is set below values that already exist in the table, inserts will violate the uniqueness constraint as soon as the identity values start to duplicate, and will generate an error.

DBCC CHECKIDENT ('yourtable', RESEED, 0)
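
If you want to see the whole flow end to end, here is a minimal sketch (the table and column names are purely for the demo):

CREATE TABLE dbo.ArchiveDemo (ID INT IDENTITY(1,1), Name VARCHAR(50))
INSERT INTO dbo.ArchiveDemo (Name) VALUES ('First'), ('Second'), ('Third')
-- archive/delete the old rows
DELETE FROM dbo.ArchiveDemo
-- reseed to 0 so that the next insert receives identity value 1
DBCC CHECKIDENT ('dbo.ArchiveDemo', RESEED, 0)
INSERT INTO dbo.ArchiveDemo (Name) VALUES ('Fresh Start')
SELECT * FROM dbo.ArchiveDemo -- returns a single row with ID = 1
GO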

Here is a quick video I have created to demonstrate the same example.

http://www.youtube.com/watch?v=V6yRvhwUs9Y

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Script: Current IO Related Waits on SQL Server

Playing around with DMVs is so much fun. Many a time all we need is a little inspiration and a problem in hand to solve. Out of the many things I monitor, I have personally seen folks having trouble finding what is going wrong when the server turns slow. In a recent conversation with a friend who was troubleshooting a SQL Server instance running inside a VM on Azure, he was not sure what had suddenly gone wrong and was curious to know what was going on LIVE on his SQL Server.

This was an interesting question, and I asked him what he wanted to monitor. From his experience he was quick to respond that it was around IO. I went back into my handy little scripts folder to find what I had. Below is the script I shared for this problem.

/* sys.dm_io_pending_io_requests: it is important to recognize that these are current, active waits that are I/O related. */
SELECT f.database_id,
f.[file_id],
DB_NAME(f.database_id) AS database_name,
f.name AS logical_file_name,
f.type_desc,
CAST (CASE
-- Handle UNC paths (e.g. '\\fileserver\DBs\readonly_dw.ndf')
WHEN LEFT (LTRIM (f.physical_name), 2) = '\\'
THEN LEFT (LTRIM (f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), 3) + 1) - 1)
-- Handle local paths (e.g. 'C:\Program Files\...\master.mdf')
WHEN CHARINDEX('\', LTRIM(f.physical_name), 3) > 0
THEN UPPER(LEFT(LTRIM(f.physical_name), CHARINDEX ('\', LTRIM(f.physical_name), 3) - 1))
ELSE f.physical_name
END AS NVARCHAR(255)) AS logical_disk,
fs.io_stall/1000/60 AS io_stall_min,
fs.io_stall_read_ms/1000/60 AS io_stall_read_min,
fs.io_stall_write_ms/1000/60 AS io_stall_write_min,
(fs.io_stall_read_ms / (1.0 + fs.num_of_reads)) AS avg_read_latency_ms,
(fs.io_stall_write_ms / (1.0 + fs.num_of_writes)) AS avg_write_latency_ms,
((fs.io_stall_read_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_read_pct,
((fs.io_stall_write_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_write_pct,
ABS((fs.sample_ms/1000)/60/60) AS [sample_HH],
((fs.io_stall/1000/60)*100) / (ABS((fs.sample_ms/1000)/60)) AS [io_stall_pct_of_overall_sample],
PIO.io_pending_ms_ticks,
PIO.scheduler_address
FROM sys.dm_io_pending_io_requests AS PIO
INNER JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
ON fs.file_handle = PIO.io_handle
INNER JOIN sys.master_files AS f
ON fs.database_id = f.database_id AND fs.[file_id] = f.[file_id]


A small extension to this script adds sys.dm_os_schedulers so we can see whether there is any skew in scheduler usage. The same query is modified below to include it. I know it is just an extension, but I thought I would add it here in a single location for completeness.

/* The query below can be used to determine if pending IO is causing work to be queued on the scheduler. */
SELECT f.database_id,
f.[file_id],
DB_NAME(f.database_id) AS database_name,
f.name AS logical_file_name,
f.type_desc,
CAST (CASE
-- Handle UNC paths (e.g. '\\fileserver\DBs\readonly_dw.ndf')
WHEN LEFT (LTRIM (f.physical_name), 2) = '\\'
THEN LEFT (LTRIM (f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), CHARINDEX('\', LTRIM(f.physical_name), 3) + 1) - 1)
-- Handle local paths (e.g. 'C:\Program Files\...\master.mdf')
WHEN CHARINDEX('\', LTRIM(f.physical_name), 3) > 0
THEN UPPER(LEFT(LTRIM(f.physical_name), CHARINDEX ('\', LTRIM(f.physical_name), 3) - 1))
ELSE f.physical_name
END AS NVARCHAR(255)) AS logical_disk,
fs.io_stall/1000/60 AS io_stall_min,
fs.io_stall_read_ms/1000/60 AS io_stall_read_min,
fs.io_stall_write_ms/1000/60 AS io_stall_write_min,
(fs.io_stall_read_ms / (1.0 + fs.num_of_reads)) AS avg_read_latency_ms,
(fs.io_stall_write_ms / (1.0 + fs.num_of_writes)) AS avg_write_latency_ms,
((fs.io_stall_read_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_read_pct,
((fs.io_stall_write_ms/1000/60)*100) / (CASE WHEN fs.io_stall/1000/60 = 0 THEN 1 ELSE fs.io_stall/1000/60 END) AS io_stall_write_pct,
ABS((fs.sample_ms/1000)/60/60) AS [sample_HH],
((fs.io_stall/1000/60)*100) / (ABS((fs.sample_ms/1000)/60)) AS [io_stall_pct_of_overall_sample],
PIO.io_pending_ms_ticks,
PIO.scheduler_address,
os.scheduler_id,
os.pending_disk_io_count,
os.work_queue_count
FROM sys.dm_io_pending_io_requests AS PIO
INNER JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
ON fs.file_handle = PIO.io_handle
INNER JOIN sys.master_files AS f
ON fs.database_id = f.database_id AND fs.[file_id] = f.[file_id]
INNER JOIN sys.dm_os_schedulers AS os
ON PIO.scheduler_address = os.scheduler_address
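
As a companion to the file-level view above, I sometimes also glance at the cumulative IO-related wait types from sys.dm_os_wait_stats. This is my own small addition rather than part of the script discussed above:

SELECT wait_type,
waiting_tasks_count,
wait_time_ms,
signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
OR wait_type IN ('WRITELOG', 'IO_COMPLETION', 'ASYNC_IO_COMPLETION')
ORDER BY wait_time_ms DESC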

These are simple scripts that I have used. Do let me know: which component of SQL Server is of most interest to you? Has IO ever been a pain point for you? Have you used scripts similar to these in your environments? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Backup Failure – Error: 3203, Severity: 16, State: 1

In the recent past, I have been writing about different error messages in SQL Server that one encounters. Thanks to my blog readers who have been sharing a lot of errors with me, either by email or by blog comments, and asking for solutions. If I find something interesting and helpful to others, I write a blog about it. This is one such blog.

2015-05-05 10:00:00.440 Backup       Error: 3203, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       Read on “M:\MSSQL\TLog\Production_1.ldf” failed: 1(Incorrect function.)
2015-05-05 10:00:00.440 Backup       Error: 3041, Severity: 16, State: 1.
2015-05-05 10:00:00.440 Backup       BACKUP failed to complete the command BACKUP LOG ProductionDB. Check the backup application log for detailed messages.

When I was searching through other emails with this error message, I found another one as below.

2014-01-05 16:07:38.19 Backup   Error: 3203, Severity: 16, State: 1.
2014-01-05 16:07:38.19 Backup   Read on “M:\DatabaseName_DefaultData.ndf” failed: 1117(failed to retrieve text for this error. Reason: 15100)

This error would vary based on the actual cause. In the first error message it is operating system error 1, and in the second case it is operating system error 1117. To convert any operating system error number to text, we can use the Windows net command (net helpmsg) from a command prompt, as shown below.

net helpmsg 1
Incorrect function.

If you notice the second message, we are not seeing the text of message 1117; instead we are seeing “failed to retrieve text for this error. Reason: 15100”.
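
If you would rather decode such operating system error numbers without leaving SSMS, one option is to wrap the same net helpmsg call in xp_cmdshell. This is just a convenience sketch and assumes xp_cmdshell is enabled on your instance:

-- Decode OS error 1117 from within SSMS (requires xp_cmdshell to be enabled)
EXEC master..xp_cmdshell 'net helpmsg 1117'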

What should you do in case of such an error?

You should start looking at the system and application event logs based on the operating system error. The example errors which I have shown above are due to hardware level issues. Here is what we found in the event log:

Log Name: System
Source: Disk
Date: 5/5/2015 09:32:11 AM
Event ID: 51
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: SQLServerMachine
Description: An error was detected on device \Device\Harddisk4\DR4 during a paging operation.

Here is an article which explains event ID 51: https://support.microsoft.com/en-us/kb/244780/

Have you ever seen any similar errors? Put them into the comment section to help others and share your knowledge. The best thing about these errors is that we get to learn interesting concepts from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Using Package Configurations in SSIS 2012 and Beyond – Notes from the Field #079

[Notes from Pinal]: I know quite a lot of things about SSIS, but every single time I read notes from the field, I realize that there are so many small but very important features. One such feature is using package configurations in SSIS 2012 and beyond. Packages are the most critical part of SSIS, and configuring them correctly is extremely important.

Linchpin People are database coaches and wellness experts for a data driven world. In this 79th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation related to using package configurations in SSIS 2012 and beyond.


If you are considering upgrading from an older version of SSIS to version 2012 or 2014 but are worried that you’ll lose the ability to use those package configurations you spent so much time developing, there is good news. Although it is not a heavily advertised feature in later versions, the classic package configuration option is still alive and well in SSIS 2012 and 2014.

The Configuration Design Pattern

Storing runtime configuration data outside of SSIS packages is a critical feature of a mature ETL process. Building a design pattern that externalizes values such as database connection strings, file paths, and other data that may change over time can reduce the amount of maintenance effort required later when those values need to be updated.

In versions of SSIS before 2012, the most common way to externalize connection strings and other runtime values was to use one or more SSIS package configurations. Although package configurations could be a little clunky at times, they provided a very effective means through which the ETL developer could avoid hard-coding variable data in packages.

This configuration pattern evolved significantly in 2012.  For new development in SSIS 2012 and later, the typical setup now involves using the SSIS catalog (which was first released with version 2012) to store and execute packages. Similarly, those designs usually include the use of package parameters and SSIS environments to supply runtime values for said parameters. As a result, the package configuration option is no longer the preferred method for variable externalization in new package development.

However, there are many organizations with significant investments in the old-style package configurations. One of the more common questions I’m asked about upgrading SSIS is whether package configurations can still be used in newer versions of SSIS. I’m happy to report that package configurations are still around (albeit a bit harder to find) and are just as usable in later versions of SSIS as they were in prior versions.

Using Package Configurations in SSIS 2012 and Later

In SSIS 2005 and 2008, you could access package configurations by simply right-clicking on an empty space in the package and selecting Package Configurations similar to what is shown below.

[Screenshot: the Package Configurations option in the SSIS 2005/2008 package context menu]

However, if you’re using SSIS 2012 or 2014 in project deployment mode (the default setting for new projects), this option no longer exists.

[Screenshot: the package context menu in SSIS 2012/2014 project deployment mode, with no Package Configurations option]

Even though the option no longer appears in this shortcut menu, it can still be accessed directly by using the package properties. In the package properties window, there is a collection called Configurations that will allow you to set one or more package configurations.

[Screenshot: the Configurations collection in the package properties window]

Clicking on the ellipsis next to this collection brings up the familiar package configurations menu, in which you can create XML, environment variable, or table storage configurations.

[Screenshot: the package configurations dialog with XML, environment variable, and table storage options]

There are a couple of things to keep in mind when using package configurations in SSIS 2012 and beyond. First of all, you can use package configurations in addition to newer configuration methods (including package parameterization and SSIS environments). However, my recommendation is that you choose just one configuration method per project to avoid confusion or conflicting values. Also, be aware that the way package configuration values are logged differs from the way package parameter and SSIS environment values are logged in the SSIS catalog. If you do use the classic package configuration design pattern, be sure to review your execution logs to confirm that you're getting all of the information you need to test, troubleshoot, and audit your package executions.
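
If you want to spot-check what actually gets captured for a given run, a minimal sketch against the SSISDB catalog views is a reasonable starting point. This is an illustration of my own rather than part of the original write-up, so adjust it to your catalog:

USE SSISDB
GO
SELECT TOP (50)
e.execution_id,
e.folder_name,
e.project_name,
e.package_name,
e.status,
em.message_time,
em.message
FROM catalog.executions AS e
INNER JOIN catalog.event_messages AS em
ON em.operation_id = e.execution_id
ORDER BY em.message_time DESC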

Conclusion

The old-style SSIS package configurations have largely yielded to the newer and more popular package parameters and SSIS environments. However, package configurations are still around and are fully accessible in later versions of the product.

If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Server – Knowing the Use of Deprecated or Discontinued Features

Upgrading your SQL Server environment is inevitable, and I know many are doing this given that SQL Server 2005 support will end in the first half of next year. Whenever I get in front of customers, I always have questions coming my way around upgrades. In a recent session, one of the attendees sent me a mail stating I had got him into trouble. I was taken aback and wrote back to find out what was going wrong and how I could help him get out of trouble. This is what he wrote back:

“ Hi Pinal,

It was great to be part of your session the other day, but after the session my management had an ask which I was not sure about, and I want your guidance.

As a DBA, I have just upgraded from a previous version of SQL Server on one of our production environments. Prior to upgrading, I utilized the SQL Server 2014 Upgrade Advisor to identify deprecated and discontinued features in SQL Server 2014. Our database and development teams have regression tested the changes identified in the Upgrade Advisor for deprecated and discontinued features and remediated the items found.

As the SQL Server administrator, I need to verify that all of the deprecated and discontinued features have been removed. How can I do this? I have rerun the Upgrade Advisor and assume that is good enough. My management has come back strongly with an ask to make sure we are using the latest and greatest capabilities because of this upgrade and that there is no legacy usage around.”

This was an interesting question and I wrote back to help him. This blog has been inspired by this interaction.

Thinking Simple

There are multiple options, and we need a systematic way to solve this problem. We will start with some simple queries against the DMVs to understand whether we are using any deprecated features.

SELECT [object_name],
counter_name,
instance_name AS [Deprecated Feature],
cntr_value AS [Number of Times Used]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%:Deprecated%'
AND cntr_value > 0
ORDER BY [Number of Times Used] DESC
GO

I told my DBA friend to run the above query to find out if anything was still around. I always suggest keeping a baseline trace that can be rerun on an upgraded test environment to know if there are still features we need to work on. This is not always simple, but the trace should tell the same story as the Perfmon counters; the two must match. If you are not aware, here are the steps:

Open up Performance Monitor (Perfmon) and, under the SQL Server counters, add the Deprecated Features / Usage counter for all instances by selecting them all and clicking Add.

[Screenshot: adding the SQL Server Deprecated Features counters in Performance Monitor]
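
If you also want to capture the exact statements that bump these counters, an Extended Events session listening to the deprecation events is one way to do it. Here is a rough sketch; the session name and file target are my own choices, so adapt them to your environment:

-- Capture statements that use deprecated features
CREATE EVENT SESSION [DeprecatedFeatureUsage] ON SERVER
ADD EVENT sqlserver.deprecation_announcement (ACTION (sqlserver.sql_text, sqlserver.client_app_name)),
ADD EVENT sqlserver.deprecation_final_support (ACTION (sqlserver.sql_text, sqlserver.client_app_name))
ADD TARGET package0.event_file (SET filename = N'DeprecatedFeatureUsage.xel')
WITH (STARTUP_STATE = OFF)
GO
ALTER EVENT SESSION [DeprecatedFeatureUsage] ON SERVER STATE = START
GO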

Conclusion

As I said, this was a simple solution. We also exchanged a few other emails after a week of interaction; I will try to cover those in a different blog. Having said that, I would highly recommend using these simple techniques to know whether we are using deprecated features. I would also like to learn from my blog readers: have you ever checked for these features in your environments prior to an upgrade, and as a validation process? Do let me know what you have been doing.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Generating Row Number Without Ordering Any Columns

The ROW_NUMBER function is used to generate a serial number for a given result set. But you always need to use an ORDER BY clause so that the numbers are assigned in a specific order.

Let us create the following dataset

CREATE TABLE #TEST (NAMES VARCHAR(100))
INSERT INTO #TEST
SELECT 'PINAL' UNION ALL
SELECT 'MAHESH' UNION ALL
SELECT 'SUNIL' UNION ALL
SELECT 'ARVIND' UNION ALL
SELECT 'MURUGAN'

Suppose you want to generate a row number; you can use the following statement

SELECT *,ROW_NUMBER() OVER (ORDER BY NAMES) AS SNO FROM #TEST

The result is

NAMES     SNO
ARVIND    1
MAHESH    2
MURUGAN   3
PINAL     4
SUNIL     5
The numbers are assigned based on the ascending order of the NAMES column.

But what if you want to generate row numbers in the same order the data were added?

Can you omit the ORDER BY Clause?

SELECT *,ROW_NUMBER() OVER () AS SNO FROM #TEST

The above throws the following error

Msg 4112, Level 15, State 1, Line 1
The function ‘ROW_NUMBER’ must have an OVER clause with ORDER BY.

But there is a way. Just do not ORDER BY any column; instead, ORDER BY a constant expression (a subquery that returns a constant) as shown below

SELECT *,ROW_NUMBER() OVER (ORDER BY (SELECT 100)) AS SNO FROM #TEST

The result is

NAMES     SNO
PINAL     1
MAHESH    2
SUNIL     3
ARVIND    4
MURUGAN   5
In place of SELECT 100, you can use anything like SELECT 1, SELECT 'A', SELECT NULL, etc. Keep in mind that with a constant expression SQL Server makes no guarantee about the order in which the numbers are assigned; for a small table like this you will typically see them follow the insert order, but it is not something you can rely on.
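
If you genuinely need the numbers to follow the insertion order in a reliable way, the safer pattern (my own addition, not part of the original trick) is to persist the arrival order in a column, such as an identity, and order by that:

CREATE TABLE #TEST2 (ID INT IDENTITY(1,1), NAMES VARCHAR(100))
INSERT INTO #TEST2 (NAMES)
SELECT 'PINAL' UNION ALL
SELECT 'MAHESH' UNION ALL
SELECT 'SUNIL'
SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS SNO FROM #TEST2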

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Who ALTER’ed My Database? Catch Them Via DDL Trigger

Here is an interesting comment from one of my previous blogs, posted earlier, on how to change the compatibility level of SQL Server. Some of these comments are interesting and can provide extensions to blog ideas like this one. The comment was:

Under what circumstances would SQL Server automatically change the compatibility level? I am currently working on SQL Server 2008 and there is a user database which has a compatibility level of 80. Apparently this database was moved to this server prior to my time, and the compatibility level was never changed/upgraded. When I execute the following script, it is set successfully to 100; however, the next day it goes back to 80. Any ideas?

USE [master]
GO
ALTER DATABASE [MyDB]
SET COMPATIBILITY_LEVEL = 100
GO

There is only a SQL maintenance plan on this instance, which performs integrity checks, full database backups / transaction log backups, and index rebuilds / statistics updates.

Here is my reply:

SQL Server doesn’t do that automatically. There must be some automated activity doing it.

When the reader asked how to track such changes in SQL Server, I replied separately over email and am now sharing it with you. I wrote a script which creates a DDL trigger on the ALTER DATABASE command. The script below does the following things:

  1. Find who fired ALTER DATABASE Command.
  2. From which Application it was fired.
  3. What was the exact command fired.

Here is the code of the trigger.

IF EXISTS (SELECT * FROM sys.server_triggers
           WHERE name = 'DDL_Database_Trigger')
    DROP TRIGGER DDL_Database_Trigger ON ALL SERVER;
GO
CREATE TRIGGER DDL_Database_Trigger
ON ALL SERVER
FOR ALTER_DATABASE
AS
DECLARE @cmd VARCHAR(200)
DECLARE @who VARCHAR(200)
DECLARE @fromwhere VARCHAR(200)
SELECT @cmd = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
SELECT @who = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(MAX)')
SELECT @fromwhere = EVENTDATA().value('(/EVENT_INSTANCE/ServerName)[1]', 'NVARCHAR(MAX)')
RAISERROR (N'!!!Database ALTERED!!!
Command - %s
By - %s
From - %s.', 10, 1, @cmd, @who, @fromwhere) WITH LOG
GO

For testing purposes, I fired the below two commands:

USE [master]
GO
ALTER DATABASE [SQLAuthority] SET COMPATIBILITY_LEVEL = 100
GO
ALTER DATABASE [SQLAuthority]
ADD FILE
( NAME = N'MM',
FILENAME = N'E:\InvalidPath\MM.ndf', SIZE = 5120KB, FILEGROWTH = 1024KB)
TO FILEGROUP [PRIMARY]

And here is what I saw in the ERRORLOG:

[Screenshot: ERRORLOG entries showing the '!!!Database ALTERED!!!' messages with the command, login, and server captured by the trigger]

If you have not pre-configured a DDL trigger to catch that information in the ERRORLOG, don’t worry. You still have some information logged in the default trace. Go through the below blog where I explored the Standard Reports which are available out of the box with SQL Server.

SQL SERVER – SSMS: Schema Change History Report
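
If you prefer to query the default trace directly instead of going through the report, a rough sketch along these lines works. This is my own addition; it assumes the default trace is enabled, and 'MyDB' is a placeholder database name:

-- Read ALTER events for a database from the default trace
DECLARE @path NVARCHAR(260)
SELECT @path = [path] FROM sys.traces WHERE is_default = 1
SELECT te.name AS event_name,
t.DatabaseName,
t.LoginName,
t.ApplicationName,
t.HostName,
t.StartTime
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
WHERE te.name = 'Object:Altered'
AND t.DatabaseName = 'MyDB'
ORDER BY t.StartTime DESC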

Have you ever used such tricks on your server? Do let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Interview Question of the Week #018 – Script to Remove Special Characters – Script to Parse Alpha Numerics

If you ask me, there are certain questions not appropriate for an interview as they are often very subjective. There are some questions which really do not help to identify whether the candidate has the necessary skills or not. Here is one such question; if you asked me this in an interview, I might not get it right either, as it requires some experience in writing scripts as well as trial-and-error mechanics to get it right.

Question: How do you write a script that removes all the special characters and parses alphanumeric values only?

Answer: Honestly, it is not that easy to write this script. However, if your stars are not bright, you may end up with an interviewer who believes writing this script is the ultimate test. Do not get me wrong, writing this script is not a bad test; I just believe it is not the best question for an interview.

CREATE FUNCTION dbo.UDF_ParseAlphaChars
(
    @string VARCHAR(8000)
)
RETURNS VARCHAR(8000)
AS
BEGIN
    DECLARE @IncorrectCharLoc SMALLINT
    SET @IncorrectCharLoc = PATINDEX('%[^0-9A-Za-z]%', @string)
    WHILE @IncorrectCharLoc > 0
    BEGIN
        SET @string = STUFF(@string, @IncorrectCharLoc, 1, '')
        SET @IncorrectCharLoc = PATINDEX('%[^0-9A-Za-z]%', @string)
    END
    RETURN @string
END
GO

You can test the above function with the following test:

-- Test
SELECT dbo.UDF_ParseAlphaChars('ABC"_I+{D[]}4|:e;"5,<.F>/?6')
GO

The above query will return the following result set:

ABCID4e5F6

You can read about this function and additional comments in my earlier blog post over here: Function to Parse AlphaNumeric Characters from String

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQLAuthority News – Presented 3 Technical Session at Great Indian Developer Summit 2015

Great Indian Developer Summit is my favorite technology event, and I have been extremely happy to present technology sessions here for over 5 years. Just like every year, this year I presented three technology sessions on SQL Server 2014. This time the event was at two locations: the first one in Bangalore and the second one in Hyderabad.

If you did not attend the event, I suggest you sign up for my newsletter, as I will be sending all the scripts and demos for this event over email. The event organizer is not planning to record the sessions.


Performance in 60 Seconds – SQL Tricks Everybody MUST Know

Date and Time: APRIL 21, 2015 14:00-15:00
Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Sir C.V.Raman Avenue, Bangalore, India

Data and databases are a very important aspect of application development for businesses. Developers often come across situations where they face a slow server response, even though their hardware specifications are above par. This session is for all the Developers who want their server to perform at blazing fast speed, but want to invest very little time to make it happen. We will go over various database tricks which require absolutely no time to master and require practically no SQL coding at all. After attending this session, Developers will only need 60 seconds to improve performance of their database server in their implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to learning material presented in the session.


Troubleshooting CPU Performance Issue for SQL Developers

Date and Time: APRIL 21, 2015 17:30-18:30
Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Sir C.V.Raman Avenue, Bangalore, India

Developers are in the most challenging situations when they see CPU running at 100%. There are many solutions to this situation, but there is very little time to implement them. In this critical situation developers need a sure solution which gives stability to their system and buys more time to troubleshoot the problem. Many believe Performance Tuning and Troubleshooting is an art which has been lost in time. The truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources which, when bottlenecked, create performance problems: CPU, IO, and Memory. In this session we will focus on some of the common performance issues and their resolution. If time permits we will cover other performance related tips and tricks. At the end of the session attendees will have clear ideas and action items regarding what to do when facing any of the above resource intensive scenarios.


SQL Server 2014 – A 60 Minutes Crash Course

Date and Time: APRIL 25, 2015 09:55-10:55
Hyderabad Marriott Hotel & Convention Centre, Tank Bund Road, Opposite Hussain Sagar Lake, Hyderabad, India

Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2014 has introduced many new features. In this 60 minute session we will be learning quite a few of the new features of SQL Server 2014. Here is a glimpse of the features we will cover in this 60 minute session.


  • Live plans for long running queries
  • Transaction durability and its impact on queries
  • New cardinality estimate for optimal performance
  • In-memory OLTP optimization for superior query performance
  • Resource governor and IO enhancements
  • Columnstore indexes and performance tuning
  • And many more tricks and tips


Developers will walk out with scripts and knowledge that can be applied to their servers, immediately post the session. Additionally, all attendees of the session will have access to learning material presented in the session. This one hour will be the most productive one hour for any developer who wants to quickly jump start with SQL Server 2014 and its new features.


Remember – do not worry if you can’t attend the event. Just subscribe to the newsletter and I will share all scripts and slides over email right after the event.


Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL AZURE – How to Disable and Enable All Constraint for Table and Database

Recently I wrote a blog about how to enable and disable all constraints in a database: SQL SERVER – How to Disable and Enable All Constraint for Table and Database.

One of my readers told me that this trick of sp_msforeachtable doesn’t work in Azure SQL Database (WASD). It is interesting that a lot of things we assume when working with on-premises SQL Server are not completely available when working on SQL Azure, because it is a database as a service. Though some of the system commands are not exposed, we can always write something using the system catalog views to achieve a similar effect.

Here is the modified version of the script which can be used on a SQL Azure database. You can check the same on your servers and let me know.

-- Get ALTER TABLE script for all tables to disable all constraints for the database
SELECT 'ALTER TABLE [' + s.name + '].[' + o.name + '] NOCHECK CONSTRAINT all'
FROM sys.foreign_keys i
INNER JOIN sys.objects o ON i.parent_object_id = o.OBJECT_ID
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
GO
-- Get ALTER TABLE script for all tables to enable all constraints for the database
SELECT 'ALTER TABLE [' + s.name + '].[' + o.name + '] WITH CHECK CHECK CONSTRAINT [' + i.name + ']'
FROM sys.foreign_keys i
INNER JOIN sys.objects o ON i.parent_object_id = o.OBJECT_ID
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
Note that executing the above does NOT make any change by itself. Running the script simply produces output in Management Studio; that output is the set of ALTER TABLE commands which you then need to run explicitly on the server.
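
If you prefer to execute the generated commands in one go instead of copying them out of the results grid, a concatenation approach along these lines should also work on SQL Azure. This is my own variation of the script above, so test it before relying on it:

-- Build and run all the "disable constraint" statements in one batch
DECLARE @sql NVARCHAR(MAX) = N''
SELECT @sql = @sql + N'ALTER TABLE [' + s.name + N'].[' + o.name + N'] NOCHECK CONSTRAINT [' + i.name + N'];' + CHAR(13)
FROM sys.foreign_keys i
INNER JOIN sys.objects o ON i.parent_object_id = o.OBJECT_ID
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
EXEC sys.sp_executesql @sql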

Let me know if you have ever required such a capability when working with SQL Azure, and what you have been doing to achieve the same.

Reference: Pinal Dave (http://blog.sqlauthority.com)