SQL SERVER – 3 Challenges for DBA and Smart Solutions

Developer’s life is never easy. DBA’s life is even crazier.

DBA’s Life

When developers wake up in the morning, most of the time they have no idea what challenges they are going to face that day. Of course, most developers know the project and the roadmap they are working on. However, they have no clue what coding challenges that particular day will bring.

A DBA’s life is even crazier. When DBAs wake up in the morning, they are often simply thankful that they were not disturbed during the night by server issues. The very next thing they wish for is a day without a challenge they cannot solve. The problems DBAs face every single day are mostly unpredictable, and they just have to solve them as they come.

Though the life of a DBA is not all bad: there are always ways to overcome the various challenges. Let us look at three of these challenges and the tools a DBA can use to overcome them.

Challenge #1 Synchronize Data Across Servers

A very common challenge DBAs receive is having to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task, and doing it with hand-written T-SQL is a daunting job. Thankfully, there are tools like dbForge Studio that can save the day and synchronize data across servers. Read my detailed blog post about it here: SQL SERVER – Synchronize Data Exclusively with T-SQL.

Challenge #2 SQL Report Builder

DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares. No matter how busy a DBA is, they are called upon to build reports on very short notice. I personally like to avoid tasks that land on me accidentally, and building reports can be boring; I would rather spend time on high availability, disaster recovery, or performance tuning than on building reports. When I have to work with SQL reports, I use a third-party tool: the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.

Challenge #3 Work with the OTHER Database

Managers often do not understand that MySQL is different from SQL Server and SQL Server is different from Oracle; for them, everything is the same. Hundreds of times in my career I have been handed a database to manage, or some task to do, while the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and that I can do anything. Honestly, I can’t, but I hardly dare to argue. I fall back on third-party tools to manage databases that are outside my comfort zone. For example, I was once given a MySQL performance tuning task (at the time, I did not know MySQL very well). To simplify the search for the problem query, I used the MySQL Profiler in dbForge Studio for MySQL, which provides commands such as Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing it: MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries.

Well, that’s it! There have been many other occasions when a tool has saved me. Maybe some other day I will write part 2 of this blog post.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – Synchronize Data Exclusively with T-SQL

UPDATE: This review and exercise are based on dbForge Studio.

Database developers often face the task of synchronizing users’ data. Quite a number of utilities have been developed to solve this task. In this article, we will review the process of creating such functionality ourselves, synchronizing data exclusively with T-SQL.

Let’s split the T-SQL synchronization process into sub-tasks:

  1. Retrieving the table structure of the master database
  2. Preparing the data stored in the tables of the master database and saving it in a public location
  3. Disabling relations and triggers between the tables of the target database for the duration of synchronization
  4. Synchronizing the data between tables
  5. Re-enabling relations and triggers between the tables of the target database

To keep the process of creating the synchronization functionality comparatively easy to understand, let’s accept some restrictions:

  1. Synchronization will take place within one server; the syntax will be supported at the level of MS SQL Server 2005 or higher
  2. The schemas of the synchronized databases are identical
  3. Only system data types are supported
  4. Any number of tables can be synchronized at a time
  5. The structure of the tables is not essential

1) Retrieving the table structure of the master database

The list of tables can be obtained with the following query:
SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM sys.tables t
JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]


It’s important to remember that some system views are defined through an excessive number of joins and calls to system functions, which negatively affects query run-time performance. sys.tables, for example, expands to:

SELECT *
FROM sys.objects$ o
LEFT JOIN sys.sysidxstats lob ON lob.id = o.[object_id] AND lob.indid <= 1
LEFT JOIN sys.syssingleobjrefs ds ON ds.depid = o.[object_id] AND ds.class = 8 AND ds.depsubid <= 1
LEFT JOIN sys.syssingleobjrefs rfs ON rfs.depid = o.[object_id] AND rfs.class = 42 AND rfs.depsubid = 0
LEFT JOIN sys.syspalvalues ts ON ts.class = 'LEOP' AND ts.value = o.lock_escalation_option
WHERE o.[type] = 'U '

Therefore, you should use the simplest system views possible. In our case, sys.objects is sufficient:

SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U '

To keep the query from returning unnecessary rows, we first filter out the tables that contain no data:

SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM (
    SELECT o.[object_id], o.[schema_id], o.name
    FROM sys.objects o
    WHERE o.[type] = 'U '
        AND EXISTS (
            SELECT 1
            FROM sys.partitions p
            WHERE p.[object_id] = o.[object_id]
                AND p.[rows] > 0
        )
) t
JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]

The list of columns containing only system data types can be obtained in the following way:

SELECT
table_name = '[' + s.name + '].[' + o.name + ']'
, column_name = '[' + c.name + ']'
FROM sys.columns c
JOIN sys.objects o ON o.[object_id] = c.[object_id]
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
WHERE o.[type] = 'U '

Next, to minimize the amount of data passed for comparison, we skip column values that equal their default and cannot be NULL: they are wrapped in NULLIF, so the default value is simply restored during the insert. The query then looks as follows:

SELECT
      table_name = '[' + s.name + '].[' + o.name + ']'
    , column_name =
          CASE WHEN c2.[object_id] IS NULL
              THEN '[' + c.name + ']'
              ELSE '[' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')'
          END
FROM sys.columns c
JOIN sys.objects o ON o.[object_id] = c.[object_id]
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
    AND b.name IN ('int', 'bit', 'char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
    AND c.is_nullable = 0
WHERE o.[type] = 'U '
    AND c.is_computed = 0

2) Retrieving the data stored in the tables of the master database and saving it in a public location

First of all, we need an interim buffer into which we will load the data from the master database. The tempdb system database is accessible from any user database on the server, therefore we can choose it.

Every time synchronization runs, we will check whether the buffer table exists, and create it if needed, with the following query:

IF OBJECT_ID('tempdb.dbo.MergeTemp', 'U') IS NULL BEGIN
    CREATE TABLE tempdb.dbo.MergeTemp
    (
          [object_name] NVARCHAR(300) NOT NULL
        , create_date DATETIME NOT NULL DEFAULT GETDATE()
        , data XML NULL
        , CONSTRAINT [PK_MergeTemp] PRIMARY KEY CLUSTERED
          (
              [object_name] ASC
          ) ON [PRIMARY]
    )
END

Then it’s necessary to decide how to store the data from the master database. The main difficulty within this task is that tables may have different structures. The most obvious option is to create a temporary table for each synchronized table and upload the data into it; however, such an option may not be optimal, because we would have to execute many DDL commands.

Therefore, we suggest solving this task by means of XML, which can describe almost any database entity. The disadvantage of the suggested method is that a not-quite-correct XML data structure may be generated for some data types. You can retrieve the list of such data types with the following query:

SELECT t.name
FROM sys.types t
WHERE t.user_type_id != t.system_type_id


Let’s write a query that creates XML with a dynamic structure:

DECLARE @Data NVARCHAR(4000)
      , @ObjectName NVARCHAR(300)

DECLARE [tables] CURSOR READ_ONLY FAST_FORWARD LOCAL FOR
SELECT
      [object_name]
    , [sql] = '
        INSERT INTO tempdb.dbo.MergeTemp ([object_name], data)
        SELECT ''' + quota_object_name + ''', (
            SELECT ' + object_columns + '
            FROM ' + quota_object_name + ' t
            FOR XML AUTO, ROOT(''' + [object_name] + ''')
        )'
FROM (
    SELECT
          [object_name] = s.name + '.' + t.name
        , quota_object_name = '[' + s.name + '].[' + t.name + ']'
        , object_columns = STUFF((
              SELECT ISNULL(', [' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')',
                            ', [' + c.name + ']')
                   + CASE WHEN b.name = 'varbinary'
                         THEN ' = CAST('''' AS XML).value(''xs:base64Binary(sql:column("' + c.name + '"))'', ''VARCHAR(MAX)'')'
                         ELSE ''
                     END
              FROM sys.columns c WITH (NOLOCK)
              JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
              LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
                  AND b.name IN ('char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
              WHERE t.[object_id] = c.[object_id]
                  AND c.is_computed = 0
              ORDER BY c.column_id
              FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '')
    FROM (
        SELECT o.[object_id], o.[schema_id], o.name
        FROM sys.objects o WITH (NOLOCK)
        WHERE o.[type] = 'U'
            AND EXISTS (
                SELECT 1
                FROM sys.partitions p WITH (NOLOCK)
                WHERE p.[object_id] = o.[object_id]
                    AND p.[rows] > 0
            )
            AND NOT EXISTS (
                SELECT 1
                FROM sys.columns c WITH (NOLOCK)
                JOIN sys.types t WITH (NOLOCK) ON c.system_type_id = t.system_type_id
                WHERE (c.user_type_id != c.system_type_id OR t.name = 'xml')
                    AND c.[object_id] = o.[object_id]
            )
    ) t
    JOIN sys.schemas s WITH (NOLOCK) ON t.[schema_id] = s.[schema_id]
) data

OPEN [tables]
FETCH NEXT FROM [tables] INTO @ObjectName, @Data

IF (@@FETCH_STATUS = -1)
    RAISERROR('Have nothing to export...', 16, 1)

WHILE @@FETCH_STATUS = 0 BEGIN
    EXEC sys.sp_executesql @Data
    FETCH NEXT FROM [tables] INTO @ObjectName, @Data
END

CLOSE [tables]
DEALLOCATE [tables]

For each table, the following query will be generated:

INSERT INTO tempdb.dbo.MergeTemp ([object_name], data)
SELECT '<table name>',
(
    SELECT <columns list>
    FROM <table name> t
    FOR XML AUTO, ROOT('<table name>')
)

After such a query is executed, XML with the table data is generated and written into the MergeTemp table.
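For illustration, assuming a hypothetical table Sales.Currency with columns CurrencyCode and Name (illustrative names, not taken from the script), the XML stored in MergeTemp would look roughly like this; FOR XML AUTO with the alias t produces one t element per row, with the columns rendered as attributes:

<Sales.Currency>
  <t CurrencyCode="USD" Name="US Dollar" />
  <t CurrencyCode="EUR" Name="Euro" />
</Sales.Currency>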

3) Disabling relations and triggers between the tables of the target database for the duration of synchronization

At the next stage, we will insert data into the tables, so it is preferable to disable foreign keys and triggers first. Otherwise, various errors may appear at the insert stage.

Here is a script that builds the commands to disable and re-enable foreign keys for the affected tables:

DECLARE @FKeysOff NVARCHAR(MAX)
      , @FKeysOn NVARCHAR(MAX)

;WITH fkeys AS
(
    SELECT DISTINCT o.[object_name]
    FROM tempdb.dbo.MergeTemp mt
    JOIN (
        SELECT
              o.[object_id]
            , [object_name] = '[' + s.name + '].[' + o.name + ']'
        FROM sys.objects o
        JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        JOIN sys.foreign_keys fk ON o.[object_id] = fk.parent_object_id
        WHERE o.[type] = 'U '
    ) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
)
SELECT
      @FKeysOff = (
          SELECT 'ALTER TABLE ' + fk.[object_name] + ' NOCHECK CONSTRAINT ALL;' + CHAR(13)
          FROM fkeys fk
          FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
    , @FKeysOn = (
          SELECT 'ALTER TABLE ' + fk.[object_name] + ' CHECK CONSTRAINT ALL;' + CHAR(13)
          FROM fkeys fk
          FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')

-- Execute @FKeysOff before the synchronization and @FKeysOn after it
EXEC sys.sp_executesql @FKeysOff
EXEC sys.sp_executesql @FKeysOn

This query seems cumbersome, and it does not take into account more complicated relations between the tables. If the database contains relatively few tables, the following command (disabling all triggers and foreign keys in the database) may be used instead:

EXEC sys.sp_msforeachtable '
ALTER TABLE ? DISABLE TRIGGER ALL
ALTER TABLE ? NOCHECK CONSTRAINT ALL '

4) Data synchronization between tables

We now have access to the master database data stored as XML.

With the following query, we can find the object identifier of each database object that we are going to synchronize:

SELECT
      o.[object_id]
    , mt.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
    SELECT
          o.[object_id]
        , [object_name] = '[' + s.name + '].[' + o.name + ']'
    FROM sys.objects o
    JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT

It’s important to note that the object identifier (object_id) is unique only within a database and may change when the schema is modified (e.g., when a new column is added). That is why we initially stored the name of the object instead of its identifier. To synchronize the tables, we need to parse the data from the XML.

As an option, you can use the OPENXML construction for this:

DECLARE @IDoc INT
EXEC sys.sp_xml_preparedocument @IDoc OUTPUT, @XML
SELECT [<column name>]
FROM OPENXML(@IDoc, '<table name>/t')
WITH ([<column name>] [<column datatype>])
EXEC sys.sp_xml_removedocument @IDoc
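For the hypothetical Sales.Currency table used above (again, illustrative names only), the filled-in template would read:

DECLARE @IDoc INT
EXEC sys.sp_xml_preparedocument @IDoc OUTPUT, @XML
SELECT [CurrencyCode], [Name]
FROM OPENXML(@IDoc, 'Sales.Currency/t')
WITH ([CurrencyCode] NCHAR(3), [Name] NVARCHAR(50))
EXEC sys.sp_xml_removedocument @IDoc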


However, this approach is less effective for parsing significantly large XML structures, so the construction described below was chosen instead.

Here is an example of a query that generates dynamic SQL, parsing the XML and synchronizing it with the target table:

DECLARE @ObjectID INT
      , @ObjName NVARCHAR(300)
      , @SQL NVARCHAR(MAX)

DECLARE package CURSOR FAST_FORWARD READ_ONLY LOCAL FOR
SELECT
      o.[object_id]
    , mt.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
    SELECT
          o.[object_id]
        , [object_name] = '[' + s.name + '].[' + o.name + ']'
    FROM sys.objects o
    JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT

OPEN package
FETCH NEXT FROM package INTO @ObjectID, @ObjName

WHILE @@FETCH_STATUS = 0 BEGIN
    SELECT @SQL = '
        DECLARE @XML XML
        SELECT @XML = (
            SELECT data
            FROM tempdb.dbo.MergeTemp
            WHERE [object_name] = ''' + @ObjName + '''
        )
        IF EXISTS(
            SELECT 1
            FROM sys.columns c WITH (NOLOCK)
            WHERE c.[object_id] = ' + CAST(@ObjectID AS VARCHAR(10)) + '
                AND c.is_identity = 1
        ) SET IDENTITY_INSERT ' + @ObjName + ' ON;
        DELETE FROM ' + @ObjName + '
        INSERT INTO ' + @ObjName + ' (' +
            STUFF((
                SELECT ', [' + c.name + ']'
                FROM sys.columns c
                WHERE c.[object_id] = @ObjectID
                    AND c.is_computed = 0
                ORDER BY c.column_id
                FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + ')
        SELECT ' +
            STUFF((
                SELECT
                    ', [' + c.name + '] = ' + CASE WHEN c2.name IS NOT NULL THEN 'ISNULL(' ELSE '' END + 'r.c.value(''@' + c.name + ''', ''' + b.name +
                    CASE
                        WHEN b.name IN ('char', 'varchar', 'nchar', 'nvarchar', 'varbinary') AND c.max_length = -1 THEN '(max)'
                        WHEN b.name IN ('char', 'varchar') AND c.max_length != -1 THEN '(' + CAST(c.max_length AS VARCHAR(10)) + ')'
                        WHEN b.name IN ('nchar', 'nvarchar') AND c.max_length != -1 THEN '(' + CAST(c.max_length / 2 AS VARCHAR(10)) + ')'
                        WHEN b.name = 'decimal' THEN '(' + CAST(c.[precision] AS NVARCHAR(10)) + ',' + CAST(c.[scale] AS VARCHAR(10)) + ')'
                        ELSE ''
                    END + ''')' + CASE WHEN c2.name IS NOT NULL THEN ', ' + c2.[definition] + ')' ELSE '' END
                FROM sys.columns c
                JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id AND b.name != 'xml'
                LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
                    AND b.name IN ('int', 'bit', 'char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
                    AND c.is_nullable = 0
                WHERE c.[object_id] = @ObjectID
                    AND c.is_computed = 0
                ORDER BY c.column_id
                FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + '
        FROM @XML.nodes(''' + s.name + '.' + o.name + '/t'') r(c)'
    FROM sys.objects o
    JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    WHERE o.[object_id] = @ObjectID

    EXEC sys.sp_executesql @SQL

    FETCH NEXT FROM package INTO @ObjectID, @ObjName
END

CLOSE package
DEALLOCATE package

For each synchronized table, the script above builds a query of the following shape:

DECLARE @XML XML
SELECT @XML = (
    SELECT data
    FROM tempdb.dbo.MergeTemp
    WHERE [object_name] = '<table name>'
)
IF EXISTS(
    SELECT 1
    FROM sys.columns c
    WHERE c.[object_id] = <table_id>
        AND c.is_identity = 1
) SET IDENTITY_INSERT <table name> ON;
DELETE FROM <table name>
INSERT INTO <table name> (<columns list>)
SELECT [<column name>] = r.c.value('@<column name>', '<column datatype>')
FROM @XML.nodes('<table name>/t') r(c)

Synchronization can be performed in three steps:

  1. Inserting new data
  2. Updating existing data by a key field
  3. Deleting data that no longer exists in the master database

This sequence of actions can be organized through separate INSERT/UPDATE/DELETE statements, or using the MERGE statement introduced in MS SQL Server 2008 (a minimal sketch follows below). In either case, the SQL code will have quite a complicated structure, since we have to take into account many factors (e.g., inserting data in accordance with the order of relations between tables). If the database has a simpler structure, an easier approach can be used.
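For illustration only, here is a minimal MERGE sketch for a single hypothetical table dbo.Currency keyed by CurrencyCode (the database, table, and column names are assumptions, not part of the article’s script):

MERGE INTO TargetDB.dbo.Currency AS t
USING SourceDB.dbo.Currency AS s
    ON t.CurrencyCode = s.CurrencyCode
WHEN MATCHED THEN                -- 2) update existing data by the key field
    UPDATE SET t.Name = s.Name
WHEN NOT MATCHED BY TARGET THEN  -- 1) insert new data
    INSERT (CurrencyCode, Name) VALUES (s.CurrencyCode, s.Name)
WHEN NOT MATCHED BY SOURCE THEN  -- 3) delete data missing from the master database
    DELETE;

In a real multi-table scenario, one such statement per table would be needed, executed in dependency order.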

If all the tables of the database, or a specific isolated schema, are synchronized as a whole, it is possible to simply remove all the records from the target tables and insert the data from the master database. On the one hand, such an approach can be the most cost-effective in terms of implementation when a database with simple logic is synchronized. On the other hand, this way of synchronization may damage the integrity and validity of the data in the target database.

5) Re-enabling relations and triggers between the tables of the target database

After inserting the data into the required tables, it is necessary to re-enable the foreign keys and triggers in order not to damage cascade integrity and internal business logic. This can be performed as follows:

EXEC sys.sp_msforeachtable '
ALTER TABLE ? ENABLE TRIGGER ALL
ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL
'

When executing this command, the server checks whether the inserted data is valid. Therefore, if an error arises at this step, the problem is most likely a violation of cascade integrity introduced by the inserted data.
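As a quick sanity check (a sketch based on the standard catalog views, not part of the original script), you can list the foreign keys that stayed disabled or remained untrusted after the round trip:

-- Foreign keys that are disabled, or enabled without re-validation (not trusted)
SELECT name, is_disabled, is_not_trusted
FROM sys.foreign_keys
WHERE is_disabled = 1 OR is_not_trusted = 1;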

Here is the resulting version of the script:

USE AdventureWorks2008R2
SET NOCOUNT ON;
SET XACT_ABORT ON;
RAISERROR('Start...', 0, 1) WITH NOWAIT
DECLARE @SQL NVARCHAR(MAX)
      , @Time DATETIME
      , @Text NVARCHAR(300)
      , @SessionUID UNIQUEIDENTIFIER

SELECT @SQL = ''
     , @Time = GETDATE()
     , @SessionUID = NEWID()
SELECT @Text = CONVERT(NVARCHAR(25), @Time, 114) + ' Creating temporary table...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
IF OBJECT_ID('tempdb.dbo.MergeTemp', 'U') IS NULL BEGIN
    CREATE TABLE tempdb.dbo.MergeTemp
    (
          id UNIQUEIDENTIFIER NOT NULL
        , [object_name] NVARCHAR(300) NOT NULL
        , create_date DATETIME NOT NULL DEFAULT GETDATE()
        , data XML NULL
        , CONSTRAINT [PK_MergeTemp] PRIMARY KEY CLUSTERED
          (
              id ASC, [object_name] ASC
          ) ON [PRIMARY]
    )
END
SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Generate SQL queries...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
DECLARE @Data NVARCHAR(4000)
      , @ObjectName NVARCHAR(300)
DECLARE [tables] CURSOR READ_ONLY FAST_FORWARD LOCAL FOR
SELECT
      [object_name]
    , [sql] = '
        INSERT INTO tempdb.dbo.MergeTemp (id, [object_name], data)
        SELECT ''' + CAST(@SessionUID AS VARCHAR(36)) + ''', ''' + quota_object_name + ''', (
            SELECT ' + object_columns + '
            FROM ' + quota_object_name + ' t
            FOR XML AUTO, ROOT(''' + [object_name] + ''')
        )'
FROM (
    SELECT
          [object_name] = s.name + '.' + t.name
        , quota_object_name = '[' + s.name + '].[' + t.name + ']'
        , object_columns = STUFF((
              SELECT ISNULL(', [' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')',
                            ', [' + c.name + ']')
                   + CASE WHEN b.name = 'varbinary'
                         THEN ' = CAST('''' AS XML).value(''xs:base64Binary(sql:column("' + c.name + '"))'', ''VARCHAR(MAX)'')'
                         ELSE ''
                     END
              FROM sys.columns c WITH (NOLOCK)
              JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
              LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
                  AND b.name IN ('char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
              WHERE t.[object_id] = c.[object_id]
                  AND c.is_computed = 0
              ORDER BY c.column_id
              FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '')
    FROM (
        SELECT o.[object_id], o.[schema_id], o.name
        FROM sys.objects o WITH (NOLOCK)
        WHERE o.[type] = 'U'
            AND EXISTS (
                SELECT 1
                FROM sys.partitions p WITH (NOLOCK)
                WHERE p.[object_id] = o.[object_id]
                    AND p.[rows] > 0
            )
            AND NOT EXISTS (
                SELECT 1
                FROM sys.columns c WITH (NOLOCK)
                JOIN sys.types t WITH (NOLOCK) ON c.system_type_id = t.system_type_id
                WHERE (c.user_type_id != c.system_type_id OR t.name = 'xml')
                    AND c.[object_id] = o.[object_id]
            )
    ) t
    JOIN sys.schemas s WITH (NOLOCK) ON t.[schema_id] = s.[schema_id]
    -- You can sync all database tables or only a specified schema/tables, e.g.:
    -- WHERE s.name = 'Sales'
    --     OR t.name IN ('Address', 'AddressType')
) data
OPEN [tables]
FETCH NEXT FROM [tables] INTO @ObjectName, @Data

IF (@@FETCH_STATUS = -1)
    RAISERROR('Have nothing to export...', 16, 1)

WHILE @@FETCH_STATUS = 0 BEGIN
    SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Generate XML for ' + @ObjectName
    RAISERROR(@Text, 0, 1) WITH NOWAIT
    EXEC sys.sp_executesql @Data
    FETCH NEXT FROM [tables] INTO @ObjectName, @Data
END

CLOSE [tables]
DEALLOCATE [tables]

PRINT 'ELAPSED TIME: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS NVARCHAR(15)) + 'ms'
RAISERROR('Finish...', 0, 1) WITH NOWAIT
PRINT REPLICATE('-', 80)
USE AdventureWorks2008R2_Live
RAISERROR('Start...', 0, 1) WITH NOWAIT
SELECT @Time = GETDATE()
SELECT @Text = CONVERT(NVARCHAR(25), @Time, 114) + ' Get similar objects in both databases...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
DECLARE @MergeTemp TABLE
(
      [object_name] NVARCHAR(300)
    , [object_id] INT
    , data XML
)

INSERT INTO @MergeTemp
(
      [object_name]
    , [object_id]
    , data
)
SELECT
      mt.[object_name]
    , o.[object_id]
    , mt.data
FROM tempdb.dbo.MergeTemp mt
JOIN (
    SELECT
          o.[object_id]
        , [object_name] = '[' + s.name + '].[' + o.name + ']'
    FROM sys.objects o WITH (NOLOCK)
    JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id]
    WHERE o.[type] = 'U'
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
WHERE mt.id = @SessionUID
IF NOT EXISTS(
SELECT 1
FROM @MergeTemp
) RAISERROR('Have nothing to import...', 16, 1)
SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Turn off foreign keys and triggers...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_msforeachtable '
ALTER TABLE ? DISABLE TRIGGER ALL
ALTER TABLE ? NOCHECK CONSTRAINT ALL
'
DECLARE @ObjectID INT
      , @ObjName NVARCHAR(300)

DECLARE package CURSOR FAST_FORWARD READ_ONLY LOCAL FOR
SELECT
      mt.[object_id]
    , mt.[object_name]
FROM @MergeTemp mt

OPEN package
FETCH NEXT FROM package INTO @ObjectID, @ObjName
WHILE @@FETCH_STATUS = 0 BEGIN
    SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Processing ' + @ObjName + '...'
    RAISERROR(@Text, 0, 1) WITH NOWAIT

    SELECT @SQL = '
        DECLARE @XML XML
        SELECT @XML = (
            SELECT data
            FROM tempdb.dbo.MergeTemp
            WHERE [object_name] = ''' + @ObjName + ''' AND id = ''' + CAST(@SessionUID AS VARCHAR(36)) + '''
        )
        IF EXISTS(
            SELECT 1
            FROM sys.columns c WITH (NOLOCK)
            WHERE c.[object_id] = ' + CAST(@ObjectID AS VARCHAR(10)) + '
                AND c.is_identity = 1
        ) SET IDENTITY_INSERT ' + @ObjName + ' ON;
        DELETE FROM ' + @ObjName + '
        INSERT INTO ' + @ObjName + ' (' +
            STUFF((
                SELECT ', [' + c.name + ']'
                FROM sys.columns c WITH (NOLOCK)
                WHERE c.[object_id] = @ObjectID
                    AND c.is_computed = 0
                ORDER BY c.column_id
                FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + ')
        SELECT ' +
            STUFF((
                SELECT
                    ', [' + c.name + '] = ' + CASE WHEN c2.name IS NOT NULL THEN 'ISNULL(' ELSE '' END + 'r.c.value(''@' + c.name + ''', ''' + b.name +
                    CASE
                        WHEN b.name IN ('char', 'varchar', 'nchar', 'nvarchar', 'varbinary') AND c.max_length = -1 THEN '(max)'
                        WHEN b.name IN ('char', 'varchar') AND c.max_length != -1 THEN '(' + CAST(c.max_length AS VARCHAR(10)) + ')'
                        WHEN b.name IN ('nchar', 'nvarchar') AND c.max_length != -1 THEN '(' + CAST(c.max_length / 2 AS VARCHAR(10)) + ')'
                        WHEN b.name = 'decimal' THEN '(' + CAST(c.[precision] AS NVARCHAR(10)) + ',' + CAST(c.[scale] AS VARCHAR(10)) + ')'
                        ELSE ''
                    END + ''')' + CASE WHEN c2.name IS NOT NULL THEN ', ' + c2.[definition] + ')' ELSE '' END
                FROM sys.columns c WITH (NOLOCK)
                JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id AND b.name != 'xml'
                LEFT JOIN sys.default_constraints c2 WITH (NOLOCK) ON c.default_object_id = c2.[object_id]
                    AND b.name IN ('bit', 'char', 'varchar', 'nchar', 'nvarchar')
                WHERE c.[object_id] = @ObjectID
                    AND c.is_computed = 0
                ORDER BY c.column_id
                FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + '
        FROM @XML.nodes(''' + s.name + '.' + o.name + '/t'') r(c)'
    FROM sys.objects o WITH (NOLOCK)
    JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id]
    WHERE o.[object_id] = @ObjectID

    EXEC sys.sp_executesql @SQL

    FETCH NEXT FROM package INTO @ObjectID, @ObjName
END

CLOSE package
DEALLOCATE package
SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Turn on foreign keys and triggers...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_msforeachtable '
ALTER TABLE ? ENABLE TRIGGER ALL
ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL
'
PRINT 'ELAPSED TIME: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS NVARCHAR(15)) + 'ms'
RAISERROR('Finish...', 0, 1) WITH NOWAIT

Conclusion:

As we can see, even for databases with a simple structure, the synchronization process is quite a big task that includes many different actions.

As you may have noticed earlier in the article, manual data synchronization is not an easy task. Thus, to save time and effort, it is highly recommended to use specialized synchronization tools. For instance, the following screenshot demonstrates how you can visualize the compared data before synchronization by using the comparison feature of dbForge Studio for SQL Server.

The upper grid shows the differences between database objects, and the bottom grid displays the data differences. To synchronize the source and target databases, you can automatically generate a synchronization script; you can then view this script, save it, or execute it immediately to update the target database. This way, the whole process can be handled much more easily.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server

Modern electronic document management systems can significantly simplify the process of decision-making at an enterprise.

Along with the opportunities they provide, traditional printed documentation still plays an important role, allowing users to conveniently work with the required information by means of reports.

Creating such reports can, depending on their complexity, take a long time, so visual editors are traditionally used to accelerate the process.

Some of these editors have only basic functionality. Others have extended reporting capabilities. The latter group of products includes the SQL report builder built into dbForge Studio for SQL Server.

The editor’s capabilities allow you to build a report based on your SQL query, literally, within a few minutes. It is enough to call the Report Designer command from the main menu.

Select a report type in the opened dialog.

Select a connection and a data source.

Here, both tables and views can be selected as a source.

If the report needs data from several tables, a custom query can be written.

After selecting a data source, specify the columns on which the report will be built.

You should also specify how this data will be presented in the report.

At the final stage, it only remains to specify a name for the report and click the Finish button.

After the wizard finishes, the report editor opens, where you can fine-tune the appearance of the report.

To preview the report, use the Preview button.

Once the report is generated, it can be printed or exported to one of the supported formats.

As you can see, creating and editing a report in the SQL report builder does not take much time, allowing you to streamline the process and reduce development time.

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries

When working with highly loaded databases, any badly designed query can cause significant performance degradation. Depending on the DBMS features used, various methods are applied for query optimization.

However, a general rule holds for all DBMSs: the fewer server resources a query consumes when it executes, the more effective it is.

To detect bottlenecks, MySQL provides the EXPLAIN command. It tells you exactly what the database server does when you execute the query.

To obtain data on how a query executes, it is enough to put EXPLAIN before the query. However, prefixing a large number of statements with EXPLAIN can become rather tedious.
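For instance, with the sample tables created below, it is enough to prefix the query (a minimal illustration):

-- Show how the server plans to execute the query, without actually running it
EXPLAIN
SELECT p.maker_id
FROM products AS p
JOIN models AS m ON p.model_id = m.model_id
WHERE m.model_type = 'PC';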

To simplify the search for problem queries, let us use the MySQL Profiler in dbForge Studio for MySQL. It provides such commands as Query Profiling Mode and Generate Execution Plan.

To demonstrate working with the execution plan, let’s consider a small example. Suppose we need to find vendors that make PCs, but not laptops, based on the following data:

CREATE TABLE models (
    model_id INT(11) NOT NULL PRIMARY KEY,
    model_type VARCHAR(10) NOT NULL
) ENGINE = INNODB;

CREATE TABLE products (
    maker_id INT(11) NOT NULL,
    model_id INT(11) NOT NULL,
    PRIMARY KEY (maker_id, model_id),
    CONSTRAINT FK_products_models_model_id
        FOREIGN KEY (model_id)
        REFERENCES models (model_id)
        ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE = INNODB;

INSERT INTO models (model_id, model_type)
VALUES
    (1, 'PC'), (2, 'Laptop'), (3, 'Tablet'),
    (4, 'Phone'), (5, 'PC'), (6, 'Laptop');

INSERT INTO products (maker_id, model_id)
VALUES
    (1, 1), (1, 2), (1, 3), (2, 4),
    (4, 4), (2, 5), (3, 5), (3, 6);

This task can be solved quite trivially:

SELECT DISTINCT p.maker_id
FROM products AS p
JOIN models m ON p.model_id = m.model_id
WHERE m.model_type = 'PC'
AND NOT EXISTS (
SELECT p2.maker_id
FROM products AS p2
JOIN models m2 ON p2.model_id = m2.model_id
WHERE m2.model_type = 'Laptop'
AND p2.maker_id = p.maker_id
);

When designing a query, one always ought to pay attention to repeated reads from the same tables, since they put an additional load on the database server’s disk subsystem.

The query can be rewritten with no re-reading, using aggregate functions:

SELECT p.maker_id
FROM products AS p
JOIN models AS m ON p.model_id = m.model_id
WHERE m.model_type IN ('PC', 'Laptop')
GROUP BY p.maker_id
HAVING COUNT(CASE WHEN m.model_type = 'PC' THEN 1 END) > 0
AND COUNT(CASE WHEN m.model_type = 'Laptop' THEN 1 END) = 0;

However, with a large number of records, the JOIN to the models table can still affect performance.

To speed up the query, we can denormalize the schema by duplicating the model_type column from the models table into the products table (the column is added first, then populated):

-- The products table does not have this column yet, so add it before copying the data
ALTER TABLE products ADD COLUMN model_type VARCHAR(10) NOT NULL DEFAULT '';

UPDATE products AS p
JOIN models AS m ON p.model_id = m.model_id
SET p.model_type = m.model_type;

Also, note that the model_type column is now used for filtering, so it is recommended to index it.
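A minimal sketch (the index name is illustrative):

CREATE INDEX idx_products_model_type ON products (model_type);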

The set of columns included in the primary key should also be chosen carefully: since InnoDB stores the primary key columns in every secondary (nonclustered) index, a wide primary key can increase the database size.

As a result, the select is significantly simplified:

SELECT maker_id
FROM products
GROUP BY maker_id
HAVING COUNT(CASE WHEN model_type = 'PC' THEN 1 END) > 0
AND COUNT(CASE WHEN model_type = 'Laptop' THEN 1 END) = 0;

With this, the plan becomes trivially simple.

However, the root of performance problems should not always be sought in queries; it is also very important to configure the database server correctly. For this, the Server Variables tab can be used.

Here, for convenience, all the settings can be grouped according to their target facilities.

There is no universal advice on how to optimize a query. However, using the profiler embedded in dbForge Studio for MySQL, you have a good chance of detecting all the performance bottlenecks.

Optimal MySQL server settings can also vary depending on the specifics of the workload; therefore, a detailed look at the server configuration process will be covered in a separate article.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Writing SQL Queries Made Easy with dbForge SQL Complete

Down with the shackles of convention! dbForge SQL Complete 4.5 has come into the world.

What does SQL mean for each of us? For some, it is the only language for working with relational databases. For others, it is a way of life…

However, when working with SQL queries, both categories of people face the same difficulties.

Eventually, writing queries becomes routine, largely because of the imperfections of the built-in IntelliSense in SSMS.

The new dbForge SQL Complete 4.5 is called upon to release everyone from the shackles of convention in SSMS IntelliSense and to provide more opportunities when working with T-SQL code.

For this, SQL Complete has several key features in its arsenal:

1. Code auto-completion

Thanks to contextual suggestions of objects and syntactic elements of the T-SQL language, productivity with SQL Complete does not just rise; it moves to a qualitatively new level.

It ensures not just the trite suggestion of tables and columns, but also the ability to generate foreign key relationships, aliases, and many other things…

Object suggestions are provided in so many constructs that even an incomplete list of them takes a separate page.

2. SQL Formatter

The SQL Formatter embedded in dbForge SQL Complete will let you forget about manual query formatting forever.

With the SQL Formatter, your code will always be perfect thanks to a huge number of settings that allow you to customize the formatting of all SQL constructs according to your personal preferences.

Moreover, it is possible to format not only the current document or a selected code fragment, but also groups of files on the disk.

3. Snippets

Code reuse through snippets is a perfect way to accelerate the process of writing code.

As the Devart developers said, their analysis of more than 5,000 topical posts from authoritative resources such as Stack Overflow showed which T-SQL statements are typed most frequently, and the list of basic snippets was significantly extended to save users from typing them over and over.

Besides, owing to the Snippet Manager, users can easily extend the set of existing snippets.

When typing code, the required snippet can be selected both by its capital letters and by a match with the snippet’s name.

4. Auto-refresh of suggestions after schema changes

When a schema is actively changing, many people encounter a situation where new or modified objects are not suggested by SSMS IntelliSense, and the object list has to be refreshed manually.

But all that is in the past: version 4.5 introduces a smart refresh of suggestions for modified objects only, at specified time intervals.

Also within the dbForge SQL Complete 4.5 release, memory consumption has been reduced and object description is now much faster.

All these improvements are aimed at ensuring that users can enjoy their work and perform it without delays.

To back these words with deeds, I recommend users re-discover the wonderful world of SQL with dbForge SQL Complete 4.5.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Query Plan Analysis of Pivot and Unpivot with Profiler of dbForge Studio for SQL Server

Probably everyone who writes SQL code has used the INSERT statement, which, alongside SELECT, UPDATE, and DELETE, is part of the basic set of data manipulation operations.

At first sight, the syntax of the INSERT statement may seem very trivial, especially when inserting one record: INSERT INTO … VALUES … .

That is not surprising: in the basic syntax of SQL Server 2005, the VALUES keyword was applied only in the context of inserting records with the INSERT statement. After the release of SQL Server 2008, however, the basic T-SQL syntax was considerably expanded, and the multi-row VALUES clause became usable beyond record insertion, as shown below.
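For instance, VALUES can now act as an inline table constructor, building a derived table from constants (a minimal illustration):

-- A derived table built from constants, with no real table involved
SELECT v.name, v.val
FROM (VALUES ('A', 1), ('B', 2)) AS v(name, val);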

This article will focus on several possible uses of the VALUES statement. To present an unbiased assessment of its advantages, we will use the built-in Profiler of dbForge Studio for SQL Server to look at the query plan for each particular example.

We have a table with the following structure, and we need to find the maximum number of badges (disregarding their value) for each player:

IF OBJECT_ID('dbo.Players') IS NOT NULL
    DROP TABLE dbo.Players;

CREATE TABLE dbo.Players
(
      PlayerID INT
    , GoldBadge SMALLINT
    , SilverBadge SMALLINT
    , BronzeBadge SMALLINT
);

INSERT INTO dbo.Players (PlayerID, GoldBadge, SilverBadge, BronzeBadge)
VALUES
    (1, 5, 4, 0),
    (2, 0, 9, 1),
    (3, 2, 4, 10);

To accomplish the task, we need to convert the columns into rows, so that afterwards we can apply the MAX aggregate function.

At first, let’s look at the approach that is very common among beginners:
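The original post showed this query only as a screenshot; a reconstruction matching its description (reading the table three times via UNION ALL) might look like this:

SELECT PlayerID, MAX(BadgeCount) AS MaxBadge
FROM (
    SELECT PlayerID, GoldBadge AS BadgeCount FROM dbo.Players
    UNION ALL
    SELECT PlayerID, SilverBadge FROM dbo.Players
    UNION ALL
    SELECT PlayerID, BronzeBadge FROM dbo.Players
) t
GROUP BY PlayerID;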

Note that instead of reading the data once, we are reading it from the table three times.

Starting from SQL Server 2005, the UNPIVOT operator is generally used to convert columns into rows:
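The query itself was again shown as a screenshot; an UNPIVOT version consistent with the text might be:

SELECT PlayerID, MAX(BadgeCount) AS MaxBadge
FROM dbo.Players
UNPIVOT (
    BadgeCount FOR BadgeType IN (GoldBadge, SilverBadge, BronzeBadge)
) AS u
GROUP BY PlayerID;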

Let’s look at the plan. Now the data from the table is read only once, which is what we have been trying to achieve. However, there is still one bottleneck left in the query (its most resource-consuming operation): the Sort operator, which lets the server detect the element with the maximum value.

Let’s try to avoid the row sorting by using the VALUES statement.
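Reconstructing the screenshot once more, a VALUES-based variant that avoids the sort could look like this:

SELECT p.PlayerID,
       (
           SELECT MAX(BadgeCount)
           FROM (VALUES (p.GoldBadge), (p.SilverBadge), (p.BronzeBadge)) AS t(BadgeCount)
       ) AS MaxBadge
FROM dbo.Players p;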

As we can see, sorting is not applied here.

Let’s compare our queries and the results they return.

Let’s examine the next example, where we need to format row-wise data output in a grouped table.

We will get the following result:

We can solve this task by reading the data several times, using UNION ALL with sorting:

We can also go back to the UNPIVOT statement, checking row numbers:

However, the VALUES statement allows creating a more sophisticated query:

Let’s take a look at the plan and compare the results:

The VALUES statement cannot be considered a full substitute for the UNPIVOT statement, nor a savior in every other case. It all comes down to the query plan, and analyzing it with the Profiler available in dbForge Studio for SQL Server allows detecting potential bottlenecks in query performance. Additionally, while we are on the subject, it does make sense to mention the Database Diagram tool from dbForge. It allows us to visualize a database structure for further analysis, as well as helping us create and edit database objects on a diagram. You can get the Database Diagram tool from here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Contest – USD 100 Gift Card and Attractive Discount from Devart

Quick Update: To take part in the contest, click here.

No matter how old we get, we all want Santa to give us gifts during the festive season. As the festive season approached, I talked to a few developer tools vendors to see if they could come up with something cool for blog readers. If you are familiar with this blog, you must know about Devart. They are a fantastic team of people who make beautiful, easy-to-use tools.

Here are a few of the blog posts which I have written about them:

If you look back, I have used Devart tools for over 2 years now. I am a big fan of their dbForge Studio for SQL Server. It is like one mega pack that contains the multiple products a DBA or developer will need for their daily work: a complete set of database development tools for SQL Server, united into a single IDE, that helps developers and administrators increase productivity and tackle routine database development tasks.

The main dbForge products for SQL Server are:

  • dbForge SQL Complete. Type T-SQL queries faster and format code automatically in Microsoft SQL Server Management Studio and Visual Studio, making your work more effective and productive with the help of this SQL formatter tool.
  • dbForge Schema Compare for SQL Server. Compare and synchronize schemas safely, analyze database structure differences, and propagate changes to the required SQL Server database.
  • dbForge Data Compare for SQL Server. Compare and synchronize data safely and effectively, with a convenient way of managing data differences in a well-designed interface and the ability to customize synchronization.
  • dbForge Query Builder for SQL Server. Create complex queries without having to type the entire code. Query Builder provides a wide range of options for visual query results management.
  • dbForge Fusion for SQL Server. A set of SQL GUI tools for Microsoft Visual Studio to advance all SQL database development and administration tasks in your favorite IDE.
  • And certainly dbForge Studio for SQL Server, which incorporates most of the functions of the tools mentioned above.

Christmas Offer

Team Devart has come up with a fantastic Christmas offer that celebrates Christmas every day from today to January 15th, 2013. You can get anywhere between a 20% and a 35% discount on their products by taking part in the contest they are running on G+. The contest is very simple, and there will be one winner every day, which means everybody has a great chance of winning every day. Here is something I am going to do: once you take part in Devart’s contest, come here and let me know in a comment that you have purchased the product, and I will enter you in my own contest, where you can win a USD 100 gift card from ThinkGeek. If you are familiar with ThinkGeek, you know those interesting gadgets you want but do not want to spend your own money on. Well, now you have the opportunity to get them. I am going to give one of you a USD 100 gift card for participating in the Devart contest. I will announce the winner on January 16th, 2013.

Click here to learn how to take part in the Devart Christmas Offer contest.

Reference: Pinal Dave (http://blog.sqlauthority.com)