SQL SERVER – Generating Meaningful Test Data with dbForge Data Generator for SQL Server

Recently I faced the necessity of generating meaningful SQL data for testing purposes. During my search for a proper application for the task, I found out that Devart, a recognized developer of database solutions, released dbForge Data Generator for SQL Server, a GUI tool for fast generation of meaningful SQL data for development and testing purposes. Devart stated that with the tool, I would be able to:

  • Populate SQL Server tables with millions of rows of test data
  • Generate meaningful data that looks just like real data
  • Use 100+ predefined generators with sensible configuration options
  • Customize built-in generators
  • Create user-defined generators
  • Preview generation results
  • Save data population script for future reference
  • Automate data generation with the command-line interface

It sounded great, so I decided to give the tool a quick test. Moreover, Devart offers a 30-day free trial for most of its products, including Data Generator for SQL Server.

The first thing I noticed after running Data Generator was its clear, user-friendly interface. The start page of the tool includes quick access to the main features and allows you to quickly run recent projects, if any.

[Screenshot: dbforge 1]

After starting a new data generation, I was prompted to select a connection and database, as well as to set default data generation options. I decided to generate meaningful data for a few columns of the AdventureWorksTest database.

[Screenshot: dbforge 2]

On the Options page, I chose to generate 2000 rows and cleared the Include Null values option.

[Screenshot: dbforge 3]

After that, Data Generator showed the generation document. I selected the following columns of the Person.Contact table for data population:

  • ContactID
  • Title
  • FirstName
  • LastName
  • EmailAddress

Now, the time has come for selecting generators for each column. In dbForge Data Generator for SQL Server, all generators are split into several categories for easier look-up.

[Screenshot: dbforge 4]

I was really impressed with the collection of generators the application offers. The tool automatically detects and offers appropriate generators for a given column. Also, Data Generator for SQL Server includes multiple ways to customize the generated data. For instance, you can:

  • Set the range of generated values for different SQL data types
  • Define the percentage of NULLs to be generated
  • Set the value distribution mode
  • Immediately preview data that will be generated

Data Generator suggested the following generators:

  • IDs for the ContactID column. Additionally, I selected the Random by timestamp value distribution mode for getting more complex numbers.
  • Honorific Prefix (Title) for the Title column
  • First Name (male) for the FirstName column
  • Last Name for the LastName column
  • Email for the EmailAddress column. This generator is based on the regular expression \c{3,8}(\d{0,3}|.\c{3,10})@nowhere.com, which can be customized right in the Fill settings section of the Column generation settings. For instance, you can modify the regular expression so that the domain name would be “mailservice.com” instead of “nowhere.com”, as shown below.
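As an illustration only (this is my own edit of the pattern, not a screenshot from the tool), swapping the domain part of the expression gives:

\c{3,8}(\d{0,3}|.\c{3,10})@mailservice.com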

[Screenshot: dbforge 5]

Data Generator also allows you to modify existing generators and create your own, so in the future you won’t need to modify the same generator over and over again. I opened the Email.xml file located at C:\Users\Public\Documents\Devart\dbForge Data Generator for SQL Server\Data Generators, modified the regular expression so that the domain name would consist of a combination of letters, updated the description and name, and saved the file as MyEmail.xml:

[Screenshot: dbforge 6]

The following screenshot shows the data generation document with the result of my manipulations:

I was quite satisfied with the results shown in Preview, so I was ready to populate the target database with data. Data Generator offers several ways to do that, namely:

  • Open the data population script in the internal editor
  • Save the SQL script to a file
  • Run the script automatically

[Screenshot: dbforge 7]

I chose the first option, and in a few seconds I got the following population script:

[Screenshot: dbforge 8]

To sum up, I would like to say that the tool has done a fantastic job for me. In my opinion, the strongest points of Devart’s Data Generator are:

  1. Simple workflow in a convenient interface. With dbForge Data Generator, the process takes just a couple of minutes and is basically simplified to three steps:
  • Selecting the required tables
  • Selecting and customizing a required generator
  • Populating tables with test data
  2. Impressive collection of generators. The tool includes over 100 generators that allow you to populate tables with realistic-looking test data.
  3. Flexible customization options. The developers put a lot of effort into making the generated data customizable to suit specific needs.

I strongly recommend giving dbForge Data Generator for SQL Server a try and evaluating its functionality on your own by downloading the 30-day free trial.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Using Power of Web to Query Databases / Sources With Skyvia

Web technologies are moving at a pace that we can hardly keep up with. More and more investment is going into the web space, so that even database developers like me find it interesting. In that context, I was recently exploring Skyvia (SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution), from ETL to how one might query with it, and I was surprised to see how useful some of these capabilities can be.

This blog stems from a simple idea that was in my mind for a long time, and I was struggling for a suitable solution till I saw what was available here. Imagine that you are a DBA on a vacation to a far-away land. You didn’t take your laptop along because you wanted to really enjoy your vacation. Everything looks normal till you get an SMS from your management that something is wrong on one of your servers and they need 10-15 minutes of your time to quickly resolve it. Now, with no laptop and none of the cool things you would normally have installed on your machine, how can you help your management with a few queries? Is it even possible? Assume you get to an internet kiosk and you want to do some amount of troubleshooting. How will you do it? Well, there are a number of solutions.

One of them is to use Skyvia to query your servers once you have completed a simple setup. In this example, I have used a SQL Server 2014 instance available on a cloud provider so that we can see what is going wrong.

Let us start by connecting to Skyvia and setting up our connections.

[Screenshot]

In my example, I am going to use the “SQL Server” source and will provide all the credentials. Once done and tested, we should see it in our connections tab.

[Screenshot]

Once the connection settings are in place, we are all set to run our queries on the backend. We can then head to the Query tab as shown below:

[Screenshot]

At this stage, I just linked the Query window to the connection that we created in the previous step. This listed all the tables that were available on my server. This was really a cool moment, because I was able to work on my database in less than 5 minutes via the web.

[Screenshot]

Obviously, the tables and objects displayed are based on the permissions of the account I used to connect. If there are hundreds of tables to work with, we can use the search to find the objects.

The next stage is our awesome query window. It is here that we can type in any SQL-92 query to be fired against our database. Since I wanted to know what was happening in my SQL Server system and which operators a few long-running queries were executing, I used a DMV from SQL Server 2014 to check if it works. To my surprise, I was not disappointed.

[Screenshot]

This is the query result output based on the query that I was running in the backend.

SELECT
node_id,physical_operator_name, SUM(row_count) row_count, SUM(estimate_row_count) AS estimate_row_count,
CAST(SUM(row_count)*100 AS FLOAT)/SUM(estimate_row_count)
FROM sys.dm_exec_query_profiles
GROUP BY node_id,physical_operator_name
ORDER BY node_id;

To know more about sys.dm_exec_query_profiles you can read from MSDN. I also talk about this DMV in my course at Pluralsight – SQL Server 2014 Administration New Features. The course is exhaustive in understanding some of these new capabilities of SQL 2014.

Coming back to our query: based on the output, I was able to find out which query was running behind the scenes. As you can see, it is a dummy query without any filters or restrictions on the columns selected. Hence we see a number of Clustered Index Scans in our query output via the web.

SELECT      *
FROM DimProduct
INNER JOIN DimProductSubcategory
ON DimProduct.ProductSubcategoryKey = DimProductSubcategory.ProductSubcategoryKey
INNER JOIN FactInternetSales
ON DimProduct.ProductKey = FactInternetSales.ProductKey

Once the query was identified, you can start looking at what is firing it up and take corrective actions. Any normal SQL syntax can be fired like this via the web. It is simple and quite powerful, to say the least.

The results from Skyvia are powerful in that you can also export them to CSV format for later offline consumption. As part of simple testing, I also fired a few DDL statements and was surprised that I could do almost everything I normally do in a SQL Server Management Studio query window here via the web, anytime and anywhere I like.

Another thing I’d like to mention is that Skyvia Query is not just for SQL Server and other databases. Skyvia Query supports SQL for cloud CRMs, such as Salesforce, Dynamics CRM, etc., and allows using SELECT, INSERT, UPDATE and DELETE statements against them. This enables admins who know SQL to work with cloud data in the same way as with SQL Server data.
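To give a feel for this, here is a minimal sketch of the kind of statement you could run against Salesforce data. Contact and its fields are standard Salesforce names, but treat the query itself as an illustrative assumption rather than Skyvia documentation:

-- list recently created Salesforce contacts
SELECT Id, FirstName, LastName, Email
FROM Contact
WHERE CreatedDate > '2014-01-01'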

Currently Skyvia is free for use. You can find more information on how to use their online SQL editor on the following page: Online SQL Query Editor for Cloud and Relational Data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Collect and Analyze SQL Server Data Efficiently

dbForge Event Profiler is one of the most useful free tools for SQL Server. The Profiler records data associated with various SQL Server events for further analysis. This data is stored in a trace file that can later be analyzed or used to replay a specific series of steps when trying to diagnose SQL Server-related problems. The tool allows you to view communications between a client and SQL Server, and gives you an insight into its internal performance. To take full advantage of its potential, download dbForge Event Profiler for SQL Server for free now.

The tool offers a large spectrum of features that can assist you in:

  • analyzing trace results by grouping or aggregating them;
  • auditing user activity;
  • creating your own custom traces and saving them for future use;
  • debugging T-SQL code and stored procedures;
  • executing quality assurance checks;
  • identifying performance-related problems with front-end applications, queries, T-SQL, transactions, and so forth;
  • performing stress testing;
  • performing query analysis of execution plans;
  • viewing SQL Server performance when interacting with a client.

Essentially, the Event Profiler is designed to quickly and efficiently track down and fix many SQL Server-related problems, such as poorly performing queries, locking and blocking, excessive table/index scanning, and a lot more. For example, you can monitor the execution of a stored procedure to see whether it hampers SQL Server performance.

Using the Profiler, you can monitor the events that you are interested in. For example, you may want to capture events from a specific user or a given database. Smart filters allow you to collect only the events that you want, filtering out those of no interest. This reduces the amount of data that is stored in your trace.

dbForge Event Profiler provides a rich graphical user interface that can be used to create, analyze, and replay trace results. As the trace data is being collected, you can stop or pause the trace at a certain point and store the trace results to a physical file on a local hard disk. The saved SQL Server Profiler document has the *.ssp extension. This file may then be viewed to analyze the captured data, share it with others, or compare the trace results to traces performed later.

Powerful Tool in Action

Below is an example of how you create a new trace using dbForge Event Profiler.

To create a new trace, you follow these steps:

  1. On the Start page, click Profile Server Events. The Profile Server Events wizard appears.
  2. Specify the connection.
  3. Optionally, select a profiling template, modify the trace file settings and the data storage settings. Click Next.

[Screenshot: devarteventprofiler1]

  4. On the Events to Capture page, select the events you want to capture. Click Next.
  5. Optionally, on the Actions page, select the actions you want to capture in the current events session. Click Next.
  6. Optionally, on the Event Filters page, specify filter options to limit the tracing data.
  7. Click Execute. The trace will start and the Server Event Profiler document opens.

[Screenshot: devarteventprofiler2]

Once you have collected enough data, stop the trace by clicking the Stop Trace button on the toolbar. At this point, you may review the collected data or save it to a file for future use.

Now that you have learned about many advantages of this smart tool, you can start mastering and making practical use of the dbForge Event Profiler for SQL Server by downloading it now for FREE.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Cloud Based Data Integration Made Easy – A Real World Scenario

If you are a DBA, once in a while you will end up with tasks that are boring and annoying. Trust me, I come across similar scenarios quite often. Here is one such task I came across a few days ago.

A Task, I would like to Automate

Just the other day I was assigned a task where I had to take a CSV file from my network and insert it into a SQL Server instance hosted in a remote location. Well, you may think of it as a pretty easy task, and I agree with you: it is a very easy task. The challenge was not about its difficulty; the challenge was rather about the entire process and my interest. The part that annoyed me the most was that I had to do this every day at 4 PM.

This means that every day I must be at my desk at 4 PM to take a file from the network and upload it to the remote SQL Server. What about weekends? What about when I have to step away from my desk at 4 PM? What about the situation when I am busy doing something much more important than this task? Well, as I said, it was not the task itself but the routine associated with it that bothered me. In simple words, this was an ETL task that needed to be automated, but I could not always depend on my machine. I had to find a solution that was cloud-based and ran efficiently.

Skyvia at Rescue

I was sitting miffed in the office and suddenly remembered that last year I blogged about the tool Skyvia. Here is the blog post: Integrate Your Data with Skyvia – Cloud ETL Solution. I quickly referred to my own blog post and realized I should give Skyvia a try.

What is Skyvia?

Skyvia is a powerful solution for cloud data integration (ETL) and backup. It is a complete online solution and does not require any local software installed except for a web browser. In Skyvia we can create integration packages that define the operations, and then we can run them or schedule them for automatic execution. An integration package is a set of data ETL operations that can be stored for future use, executed, modified, or deleted. Skyvia provides several kinds of packages for different integration scenarios: Import, Export, Replication, and Synchronization packages.

How did I do it?

Well, here are few of the screenshots of the task which I was assigned.

First, I checked whether the table into which I had to load data existed. As the table was already created, I quickly checked whether it contained data. The table contained no data.
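The check itself is trivial T-SQL. A minimal sketch, assuming the target is the actor table used later in this example:

-- verify the target table exists and see how many rows it holds
IF OBJECT_ID('dbo.actor', 'U') IS NOT NULL
    SELECT COUNT(*) AS row_count FROM dbo.actor;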

[Screenshot 1]

Next we will open the Skyvia web interface. It is pretty simple and it will list three options on the left bar. We will click Integration there.

[Screenshot 2]

In the Integration section, click Create Now under Data Import.

[Screenshot 3]

In the data integration screen, we are presented with various options. We will load the CSV file from an FTP server, so we select CSV from FTP as the source and SQL Server as the target.

[Screenshot 4]

As we are connecting to SQL Server for the very first time, we will create a new connection, which is a pretty straightforward procedure.

[Screenshot 5]

Then we will configure an FTP connection.

[Screenshot 6]

Next, we will configure the CSV options. Various options can be set here, but in our case all the defaults were good enough for us to move on.

[Screenshot 7]

Right after that we will select the target table. In our case, the target table is the actor table.

[Screenshot 8]

The next screen presents the mapping, where we review the various mapping options one more time and make sure that all the source and target columns map correctly.

[Screenshot 9]

When we click Finish, it brings up the following screen.

[Screenshot 10]

Click Save, and we are back on the following screen. Here we can execute our task and see whether it works. Click the RUN button on the right side of the screen.

[Screenshot 11]

In my case, the task ran successfully and showed that it inserted 200 rows. The entire task took 35 seconds, which depends on my network connection to the destination server.

[Screenshot 12]

We can execute the same SELECT statement that we executed earlier and see whether the table now contains valid data.

[Screenshot 13]

Once we confirm that our task has worked successfully, we can create a daily schedule.

[Screenshot 14]

That’s it! I am done.

Now, every day at the specified time, the task will execute automatically and log its history.

Next Action Items

Team Devart has made Skyvia a feature-rich service. One has to experiment with its various options to fully appreciate the vast capability of this product. Here are a few things you can consider doing. Here is the link where you can sign up for Skyvia for totally FREE. Next, I will be trying out Skyvia with Salesforce. Skyvia is an all-in-one cloud solution for various Salesforce data integration scenarios. In addition to the standard Salesforce data loader functionality – data import and data export – it offers powerful data replication and synchronization tools and backup functionality with preview and export of backed-up data, data search, viewing data changes between backups, and one-click restore.

Sign up for Skyvia for totally FREE.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – I am Participating in Winter Discounts and Christmas Photo Fest

I often get asked which is my favorite tool from Devart. They have many great tools, but I am really a fan of the developer productivity tool – dbForge Studio for SQL Server. Devart’s dbForge Studio for SQL Server is a fantastic tool for SQL Server development; as soon as I installed it, I found so many things to learn. First off, it will speed up SQL coding almost instantly. There is very little to learn, and you are not just memorizing code to “cheat” off of. dbForge Studio provides code completion options and automatic SQL formatting, so that you know your code will work. One of my favorite features is “snippets,” which stores parts of code that you use over and over to cut down on typing and searching – because there are always a few commands you use again and again! Here is another blog post I wrote about Query Plan Analysis of Pivot and Unpivot with Profiler of dbForge Studio for SQL Server.

Every year in December the team at Devart announces discounts and some fun contests. This year they have announced similar fun contests as well as a Christmas photo fest. The Christmas season is a great time to get 20% off all Devart licenses until January 15th, 2015. If you want to get more discounts, take part in the Devart Christmas Photo Fest!

Share your Christmas desktop, workspace, office, laptop, tablet, etc. You can take a selfie, photograph your pets, or take any other kind of photo you wish. Share your Christmas mood and you will get a chance to win! The key condition is that any installed Devart product must get into the shot. Be creative and funny.

Please go to this blog post, where the conditions and an example shot are given. I am going to take part in the contest and I encourage you to take part as well.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Story of Temporary Objects

SQL Server’s version of Transact SQL provides the ability to create and leverage temporary objects for use within the scope of your query session or batch. There are many reasons why you may decide to use temporary objects and we will explore them later in this article. In addition to meeting various data-tier requirements, temporary objects can also be beneficial from a performance and scalability perspective.

SQL Server provides a few temporary object options:

  • Temporary tables both local and global
  • Temporary stored procedures (not commonly used)
  • Table variables

Why use temporary objects?

Some reasons for using temporary objects:

  • No requirement for persistent data storage for a result set.
  • Intermediate state needed only temporarily, for example when performing data transformation or aggregation.
  • Use of temporary storage only visible to the current session.
  • Reduce query plan complexity.
  • Portability of lookup-data by the containing modules, for example stored procedures.

This article will make heavy use of demos. I will use demos to show syntax, overall capabilities, and usage scenarios. To write T-SQL code, I opt for dbForge Studio for SQL Server. It is a powerful IDE for SQL Server management, administration, development, data reporting and analysis. The tool contains an excellent SQL code editor that provides SQL syntax check, snippets, automatic code formatting, and code completion.

Creating a temporary table

The following example demonstrates how to create a temporary table, insert some data from a permanent table into the temporary table, and select data from the temporary table.

[Screenshot: create table 1]
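The script in the screenshot is not reproduced here, so below is a minimal sketch of what such a script typically looks like. The #contact name and the Person.Contact source table (from the AdventureWorks sample used earlier in this series) are assumptions for illustration:

-- create a local temporary table
CREATE TABLE #contact
(
    ContactID INT NOT NULL,
    FirstName NVARCHAR(50) NOT NULL,
    LastName  NVARCHAR(50) NOT NULL
);

-- copy some rows from a permanent table into the temporary table
INSERT INTO #contact (ContactID, FirstName, LastName)
SELECT ContactID, FirstName, LastName
FROM Person.Contact;

-- read the data back from the temporary table
SELECT * FROM #contact;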

The query returns the following result

[Screenshot: temp table query results 2]

Note that the # pound sign indicates that we are working with a temporary object. To remove the temporary table explicitly, we can execute the following query:

[Screenshot: removing temporary table 3]
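A minimal sketch of the explicit cleanup, again using the assumed #contact table:

-- drop the temporary table if it still exists in this session
IF OBJECT_ID('tempdb..#contact') IS NOT NULL
    DROP TABLE #contact;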

SELECT INTO a temporary table

You can use the SELECT INTO statement to create a temporary table implicitly. The following example creates a temporary table and populates it with data from a permanent table.

[Screenshot: select into temp table 4]
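As a sketch of the same idea (the #temp_contact name matches the table referenced in the error message quoted below; Person.Contact is assumed):

-- create and populate the temporary table in a single statement
SELECT ContactID, FirstName, LastName
INTO #temp_contact
FROM Person.Contact;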

If you need an empty temporary table based on the schema of an existing table, and you do not want to populate it with rows, you can do the following:

[Screenshot: empty temporary table 5]
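A common way to do this is to copy the structure with a predicate that matches no rows; a minimal sketch:

-- create an empty temporary table with the same schema as the source
SELECT ContactID, FirstName, LastName
INTO #empty_contact
FROM Person.Contact
WHERE 1 = 0;   -- no rows are copied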

In this case, zero rows will be affected. If you try to select from the temporary table, no rows will be returned.

[Screenshot: empty table 6]

Temporary table scope and lifecycle

Let us get back to the SELECT INTO example discussed above. If you disconnect and reconnect to the SQL Server instance, the temporary table will no longer exist. If you then try to query the temporary table, you will get an error message – Invalid object name ‘#temp_contact’.

So why explicitly drop the temporary table? Think about tempdb usage for larger modules with longer session durations. You can also think about numerous concurrent module executions, as well as about modules with many temporary objects.

Viewing temporary table metadata

Once we have created a temporary table, we can execute a system stored procedure that reports information about a database object, a user-defined data type, or a data type.

[Screenshot: temp table metadata 7]

However, in this case we get an error. We can go ahead and try this in tempdb.

[Screenshot: temp db temp table metadata 8]

We will see the table name with a value appended to the end. We can use this name to find an entry for this table in sys.tables.

[Screenshot: sys table query 9]

This query returns full information about our table.
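The screenshots are not reproduced here; a minimal sketch of the idea, assuming the system procedure used is sp_help and the table is the #temp_contact created above:

-- running sp_help in the user database fails for a temporary table,
-- but running it in the context of tempdb works
EXEC tempdb..sp_help '#temp_contact';

-- the temporary table appears in tempdb.sys.tables under a padded name
SELECT name, object_id, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#temp\_contact%' ESCAPE '\';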

Temporary table data types

In this example, we will create a temporary table with a list of supported data types. The following query executes successfully, which means all these data types are supported.

[Screenshot: example data types 10]

However, there is an exception. Let’s try to create a new temporary table with a user-defined data type. The following query will return an exception.

[Screenshot: user data type 11]

If you need to use a user-defined data type in a temporary table, you need to create that type in tempdb. The following script executes correctly.

[Screenshot: temp db for user data type 12]

Then you can use the DROP TYPE statement in tempdb to clean up.
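A minimal sketch of the workaround, with a hypothetical SSN type used purely for illustration:

-- the type must exist in tempdb to be usable in a temporary table
USE tempdb;
GO
CREATE TYPE dbo.SSN FROM VARCHAR(11) NOT NULL;
GO

-- now the temporary table can reference the user-defined type
CREATE TABLE #person_ssn (PersonID INT NOT NULL, PersonSSN dbo.SSN);

-- cleanup once finished
DROP TABLE #person_ssn;
DROP TYPE dbo.SSN;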

Temporary tables and dynamic SQL

You can use dynamic SQL to create a temporary table. The following example demonstrates how to do it.

[Screenshot: temp table via dynamic SQL 13]

However, if you try to execute the INSERT and then SELECT statements, you get an error.

[Screenshot: insert into temp table via dynamic SQL 14]

The above-mentioned query returns “Invalid object name ‘#category’”. To avoid this, you need to include the INSERT and SELECT within the dynamic SQL, as shown in the following example.

[Screenshot: include insert select in dynamic SQL 15]

In this case, you are able to create the temporary table, insert rows, and select from that table.
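A minimal sketch of both behaviors, using the #category name from the error message above (the column list is assumed):

-- the temporary table created inside EXEC disappears when the dynamic batch ends
EXEC ('CREATE TABLE #category (CategoryID INT, CategoryName NVARCHAR(50));');
-- this fails with "Invalid object name '#category'"
-- INSERT INTO #category VALUES (1, N'Bikes');

-- keeping all statements inside one dynamic batch works
EXEC ('
    CREATE TABLE #category (CategoryID INT, CategoryName NVARCHAR(50));
    INSERT INTO #category VALUES (1, N''Bikes'');
    SELECT * FROM #category;
');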

Local and global temporary tables

As you already know, a single pound sign is used to create a local temporary table (e.g. #localtemptable). If you try to insert some data into a local temporary table from within another session, you will get an “Invalid object name” error. In order to create a global temporary table, you need to add an additional pound sign to the table name (e.g. ##globaltemptable). In this case, you are allowed to access the global table from a different session.
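A minimal sketch of the naming difference:

-- visible only to the session that created it
CREATE TABLE #localtemptable (id INT);

-- visible to all sessions until the creating session ends and no one references it
CREATE TABLE ##globaltemptable (id INT);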

Table variables

SQL Server provides table variables, starting with SQL Server 2000, as an alternative to temporary tables. In some cases, a table variable can be more useful than a temporary table.

Table variables store a set of records, so naturally the declaration syntax looks very similar to a CREATE TABLE statement, as you can see in the following example.

[Screenshot: table variable 16]

So here, we declared a local table variable. Note that we have no GO in between the declaration statement and the INSERT statement. We inserted some data from the permanent table and then selected data from the variable. There is no need to explicitly DROP the table variable.
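A minimal sketch, reusing the @category name from the error message quoted below and an assumed column list:

-- declare, populate, and read a table variable in a single batch
DECLARE @category TABLE (CategoryID INT, CategoryName NVARCHAR(50));

INSERT INTO @category VALUES (1, N'Bikes'), (2, N'Components');

SELECT * FROM @category;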

Table variable scope and lifecycle

Let’s break the previous example into several steps. First, declare a table variable and execute the query.

[Screenshot: declare table var 17]

It executes successfully. Now if we try to insert data into the variable

[Screenshot: insert into table var 18]

We get an error message: “Must declare the table variable ‘@category’”. This is because the scope of a table variable is the duration of the batch execution. So one of the main advantages is that there is no need to clean up at batch completion, which implies a shorter lifecycle. The main con is that it cannot cross batch boundaries.
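A minimal sketch of the failure described above:

DECLARE @category TABLE (CategoryID INT, CategoryName NVARCHAR(50));
GO

-- the batch above has ended, so the variable is out of scope;
-- this fails with "Must declare the table variable '@category'"
INSERT INTO @category VALUES (1, N'Bikes');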

Advantages and disadvantages of using temporary objects

Temporary tables advantages

  • Provide intermediate result-sets that can reduce query plan complexity for large or complex workloads. It is not a guaranteed solution, but a valid tool for query plan quality issues.
  • Ability to isolate a table’s data ‘per-user’ execution.
  • Provide column-level statistics like permanent tables.
  • You can use TRUNCATE, SELECT INTO, CREATE INDEX, ALTER TABLE, IDENTITY_INSERT, ROLLBACK TRANSACTION.

Temporary tables disadvantages

  • Heavy tempdb usage via object creates/drops can lead to latch contention.
  • Not always optimal compared to inline, single-statement alternative.
  • Not supported in user-defined functions.
  • Inherit tempdb collation for non-contained databases.
  • System caching of temporary tables and associated statistics may cause unexpected query plan shapes and incorrect statistics.

Table variables advantages

  • Can be used within scalar and multi-statement table-valued functions.
  • Automatically inherit the current database’s collation.
  • Will not directly affect recompilations.
  • Can be passed as input parameters to stored procedures.

Table variables disadvantages

  • Heavy tempdb usage via object creates/drops can lead to latch contention.
  • No column-level statistics.
  • Cannot use TRUNCATE, SELECT INTO, CREATE INDEX, ALTER TABLE, IDENTITY_INSERT, or ROLLBACK TRANSACTION.

Summary

One of the key factors influencing temporary object usage is performance and scalability. For complex queries, breaking a query down into steps with temporary objects can sometimes improve query plan quality and increase performance.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Beginning with SQL Server Security Aspects

SQL Server Security Aspects

This article is a high-level overview of SQL Server security best practices and aspects. Ensuring SQL Server security is an important task that may be successfully accomplished by applying the best practices and proven solutions described further in this article.

Physical Security

SQL Server physical security aspects are often overlooked. Frequently, DB admins focus on software security and network security while completely forgetting about the physical server itself. However, if someone can physically reach the server, all the other security configurations applied to the software can be defeated. SQL Server is installed on a physical device, whether it is server hardware, a desktop PC, a laptop, or another PC in a data center. The main goal of physical security is to prevent someone from gaining access to the machine.

Shared data centers or server rooms provide a wide range of physical and environmental security to ensure that secure areas are protected by appropriate entry controls, so that only authorized personnel are allowed access. Appropriate physical protection should be provided against damage from natural or man-made disasters, such as fire, flood, explosion, etc. All users are required to ensure that systems are not left open to access by intruders to buildings, or by unauthorized colleagues.

If the personnel in the data center are unable to physically lock away the server, you must ensure that all USB ports on the server are disabled. Keep in mind that security is not complete if physical security is overlooked.

Network Security

The network is the way that external threats may come to attack your SQL Server.

Firewalls provide an efficient means to implement security. A firewall is a separator or restrictor of network traffic, which can be configured to enforce your data security policy. If you use a firewall, you will increase security at the operating system level.

SQL Server Instance Security

Every SQL Server instance that is installed on a PC can be considered to be a security domain. Thus, at the instance level you could apply separate security considerations.

I will use dbForge Studio for SQL Server v4.0 for demonstration purposes, as it has an excellent tool called Security Manager.

The following picture demonstrates two separate instances of SQL Server on the same computer.  Each has its own specific name, the first one is MSSQLSERVER\Instance1, the second one is MSSQLSERVER\Instance2.

[Screenshot: instances horizontal]

This picture gives us the ability to see how security applies at the instance level. SQL Server Security uses the concept of securables that will be discussed later in this article.

Securables are the objects to which the SQL Server authorization system controls access. For instance, a table is a securable. Some securables can be contained within others, creating nested hierarchies called “scopes”. The securable scopes are server, database, and schema.

An object that receives permission to a securable is called a principal. The most common principals are logins and users.

A login is a security principal, or an entity that can be authenticated by a secure system. Users need a login to connect to SQL Server. You can create a login based on a Windows principal (such as a domain user or a Windows domain group) or you can create a login that is not based on a Windows principal (such as an SQL Server login).

To control access to securables, a DB admin can grant or deny permissions, or add logins and users to roles that have access.
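As a minimal sketch of these concepts in T-SQL (all database, table, and login names here are hypothetical):

-- a login based on a Windows principal and a SQL Server login
CREATE LOGIN [CONTOSO\AppAdmins] FROM WINDOWS;
CREATE LOGIN ReportUser WITH PASSWORD = 'Use_A_Strong_Password_1!';

-- map the SQL login to a database user and control access to a securable
USE SalesDb;
CREATE USER ReportUser FOR LOGIN ReportUser;
GRANT SELECT ON dbo.Orders TO ReportUser;
DENY DELETE ON dbo.Orders TO ReportUser;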

It is important to note that every individual instance has its own set of security considerations. The Security folders on each instance contain the same Users, Application Roles, and other folders. However, if we expand the Logins folder, we can see a different number of accounts.

[Screenshot: logins]

This proves that we can have different security settings for individual users across different instances.

It is also important to note that when you set up a SQL Server instance, you should install only the features that are necessary for that instance to operate. This reduces the attack surface of your specific SQL Server instance by reducing the number of services and features that are available for malicious users to exploit.

SQL Server Securables

Securables are the resources to which the SQL Server Database Engine authorization system regulates access. Essentially these are the database objects to which you can apply security on your instance of SQL Server. Some securables are standalone and others may be contained within another securable.

Each database also has its own security folder, where we can focus on users, which again, become the logins that we assign the permissions to for our database.

[Screenshot: db level security]

We also have database-level roles and application roles. Currently, there are no application roles assigned on this server. There are other components that we do not see in the Database Explorer; however, they still exist at the database level. These components are assemblies, which are typically DLL files used in the SQL Server instance for deploying functions, stored procedures, or triggers. These components are managed by the CLR.

SQL Server Principals

There are three high level categories of SQL Server security principals:

  1. Windows level (domain and local logins)
  2. SQL Server level (SQL Logins and server roles)
  3. Database level (database users, DB and application roles)

There are two SQL Server authentication types:

  • by using Windows account security token
  • by using an account defined inside SQL Server

You can choose the authentication type during installation. Windows mode is the most secure, as it relies on Windows accounts as opposed to SQL accounts. The recommended setting is Windows Authentication, and it is selected by default. This authentication type simplifies administration and is more secure, because it uses Windows security tokens. There is no need to remember another password, and no password transits through the local network.

You cannot disable Windows authentication, however you can extend it with SQL Server authentication. Thus, in case you need to allow access to the users outside the local network, you should use SQL Server authentication. In this case, logins are defined and managed inside SQL Server. The authentication and password validation is managed by SQL Server also.

There are also default roles created during installation as well. One of which is known as the public role.

[Screenshot: public role]

SQL Server creates the public role as a way of assigning permissions to users who have accounts on the database but do not have any other currently assigned permissions. The public role serves the purpose of providing limited permissions to user accounts on the database until you have had the opportunity to assign the necessary permissions for those user accounts. SQL Server also includes a guest user account. This account is not a server-level account, but exists at the database level.

If we expand the Users folder within the database, we will notice a guest account that is available in each database. The guest account is used to grant permissions to users who can access the database but do not have a user account assigned or created in the database itself.

[Screenshot: guest user]

The guest account cannot be dropped, but it can be disabled through the revocation of its CONNECT permission. User and application access to your SQL Server instance will be controlled using these security principals.
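A minimal sketch of disabling guest in a given database (run it in the database in question):

-- disable the guest user by revoking its permission to connect
REVOKE CONNECT FROM GUEST;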

Manage Logins and Users

As mentioned above in this article, SQL Server implements security through the use of securables, which are the database objects, and security principals, which are the users and applications that will access the database.

Knowing how to create logins and user accounts is the first step in creating the authentication and authorization model for your SQL Server.

Before creating logins, you must know which authentication mode the SQL Server instance is configured to use. In the case of mixed-mode authentication, we can create Windows accounts or local SQL Server accounts for authentication on the instance of SQL Server.

Keep in mind that SQL Server allows you to create logins through Windows accounts or SQL Server accounts at the server level. Also SQL Server permits the creation of local user accounts at the database level. If we expand the database, and expand the Security folder, we notice that SQL Server includes a Users folder at the database level.

Simply right-click the Users folder, and then click New User. This allows us to choose a user type at the database level, as opposed to the server level. We can create a SQL user with or without a login, which means this would be a user name we create specifically in this database.

[Screenshot: CreateLogin]
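A minimal sketch of both options (the names are hypothetical):

-- database user mapped to an existing server login
CREATE USER AppUser FOR LOGIN AppLogin;

-- database user that is not mapped to any server login
CREATE USER LocalReportUser WITHOUT LOGIN;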

A good example of such an account is the guest account, which is created at the database level, but does not exist at the logins level for the server itself.

You can set the authentication mode to Mixed Mode at any time after the installation. Then you can create SQL Server accounts or use Windows accounts, depending on your choice.

Understanding each type of account and how security is applied to these accounts will help you set the proper security for user access.

Understanding and using roles

SQL Server roles allow you to group users or logins into a specific container for assigning permissions to securables on your SQL Server instance. Permissions assigned to a role are applied to any user or login that is associated with that role. SQL Server provides nine fixed server roles. You can find these server roles in the Security folder of the SQL Server instance. Outside of the nine fixed server roles, SQL Server also provides database level roles.

[Screenshot: db level roles]

SQL Server Permissions

Accessing any of the securables in SQL Server requires that the login or user have permissions. The permissions in SQL Server follow certain naming conventions. The security check algorithm used by SQL Server is complex, and the complexity comes from the need to evaluate group or role membership, as well as explicit and implicit permissions.

There are three core elements of SQL Server permission checking:

  1. Security context. This is related to the user, the login or the role.
  2. Permission space. The permission space focuses on the securable, such as the database object that the user is attempting to access. SQL Server checks the permissions that the principal has been assigned for that securable.
  3. Required permissions. This is an important step, because some tasks require more than one permission. For instance, if a user attempts to execute a stored procedure, the EXECUTE permission is required on the stored procedure itself (see the sketch below).
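A minimal sketch of granting that kind of permission (procedure and user names are hypothetical):

-- allow a user to run a specific stored procedure
GRANT EXECUTE ON dbo.usp_GetOrders TO ReportUser;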

Summary

SQL Server security is too huge a topic to be covered in a single article. However, this article provides an overview of the core principles and concepts of SQL Server security.

SQL SERVER – A Practical Use of Backup Encryption

Backup is extremely important for any DBA. Think of any disaster, and a backup will come to rescue users in an adverse situation. Similarly, it is very critical that we keep our backups safe as well. If your backup falls into the hands of bad people, it is quite possible that it will be misused and become a serious data integrity issue. Well, in this blog post we will see a practical scenario in which we use Backup Encryption to improve the security of the backup.

Feature description

Database Backup Encryption is a brand new and long expected feature that is available now in SQL Server 2014. You can create an encrypted backup file by specifying the encryption algorithm and the encryptor (either a Certificate or Asymmetric Key).

The ability to protect a backup file with a password has existed for many years. If you have used SQL Server for a long time, you might remember the WITH PASSWORD option for the BACKUP command. The option prevented unauthorized access to the backup file.

However, this approach did not provide reliable protection. In that regard, Greg Robidoux noted on MSSQLTIPS: “Although this does add a level of security, if someone really wants to crack the passwords they will find a way, so look for additional ways to secure your data.”

To protect a backup file, SQL Server 2008 introduced the transparent data encryption (TDE) feature. Thus, a database had to be transparently encrypted before backup. Consequently, starting with SQL Server 2012, the PASSWORD and MEDIAPASSWORD parameters are no longer used while creating backups. Even so, data encryption and backup file encryption are two different scenarios.

In case a database is stored locally, there is no need to encrypt it before backup. Fortunately, in SQL Server 2014 these are two independent processes. Along with data encryption, it is possible to encrypt a backup file based on a certificate or an asymmetric key. The supported encryption algorithms are:

  • AES 128
  • AES 192
  • AES 256
  • Triple DES

Practical use

To illustrate the above, I will create an encrypted backup of the AdventureWorks database. Also, you can back up directly to Azure. If needed, you may restore the encrypted backup file on Azure.

I will use dbForge Studio for SQL Server to create the encrypted backup file.

To protect the backup file we need to create an encryptor: either a Certificate or Asymmetric Key. Then, we need to pass this encryptor to the target SQL Server to restore the backup. For this, the encryptor must be exported from the source SQL Server and imported to the target SQL Server. There are no problems with the certificate in this regard. It is more complicated with asymmetric keys.

Taking into account that the BACKUP ASYMMETRIC KEY command is not available, and we cannot just create a duplicate of an asymmetric key (as we can for a symmetric key), the only approach is to create the asymmetric key outside of SQL Server. Then we can use the sn.exe utility to transfer it inside SQL Server (CREATE ASYMMETRIC KEY keyname FROM FILE = ‘filename.snk‘). After that we can use this asymmetric key to encrypt the backup file on the source instance. Further, we need to use the same *.snk file to create the asymmetric key on the target instance (and restore the backup file).

In our example we will not use asymmetric keys. We will use a certificate. Moreover, the certificate (behind the scenes) is a pair of public/private keys.

Let’s create the server certificate and use it to encrypt the backup file.

[Screenshot: devbackup1]
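The script behind this screenshot is not reproduced here; a minimal sketch of the approach, with hypothetical names and password:

-- the database master key protects the certificate when no ENCRYPTION BY clause is given
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongMasterKeyPassword1!';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup encryption certificate';

-- create the encrypted backup using the certificate
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);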

The certificate will be protected with the database master key, because we didn’t specify the ENCRYPTION BY clause.

This is exactly what we need. Only certificates protected with the database master key can be used for encryption purposes. If we, for instance, protect the certificate with a password (ENCRYPTION BY PASSWORD = ‘strongpassword‘), the following error appears while attempting to encrypt the backup file:

“Cannot use certificate ‘CertName’, because its private key is not present or it is not protected by the database master key.”

Encrypted backups (along with usual backups) can be traditionally created locally on the hard drive and in Azure Storage.

Instead of writing tons of SQL code, I will use the convenient dbForge Studio for SQL Server Back Up wizard. The wizard allows you to create the database backup in several clicks.

Step 1: Set up the DB connection and the backup file location.

[Screenshot: devbackup2]

Step 2: Set up the media set.

[Screenshot: devbackup3]

Step 3: Select the encryption algorithm and certificate.

[Screenshot: devbackup4]

In case you don’t want to pay extra attention to transferring the backup file to Windows Azure, you can back up directly to Azure.

[Screenshot: devbackup5]

After the script execution, the blob (with the backup) appears in the required container.

[Screenshot: devbackup6]

In case you had already created a backup with the same name in the same container, you can get the following error: There is currently a lease on the blob and no lease ID was specified in the request.

Further, you can restore the backup file on Windows Azure.

Summary: 

Obviously, it is a good practice to encrypt a backup file while transferring it. This, for instance, allows you to avoid a data leak while transferring backups from one data processing center (DPC) to another.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 3 Challenges for DBA and Smart Solutions

Developer’s life is never easy. DBA’s life is even crazier.

DBA’s Life

When developers wake up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and the roadmap they are working on. However, developers have no clue what coding challenges they are going to face that day.

A DBA’s life is even crazier. When DBAs wake up in the morning, they are often thankful that they were not disturbed during the night due to server issues. The very next thing they wish is that they do not face a challenge they can’t solve that day. The problems DBAs face every single day are mostly unpredictable, and they just have to solve them as they come during the day.

Though the life of a DBA is not always bad. There are always ways and methods by which one can overcome various challenges. Let us see three such challenges and how a DBA can use various tools to overcome them.

Challenge #1 Synchronize Data Across Server

A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task; it is nearly impossible to do the same with plain T-SQL alone. However, thankfully there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.

[Screenshot: devartsync]

Challenge #2 SQL Report Builder

DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares about it. No matter how busy a DBA is, they are just called upon to build reports on things on very short notice. I personally like to avoid any task which is given to me unexpectedly, and building reports can be boring. I would rather spend time on high availability, disaster recovery, and performance tuning than on building reports. I use a third-party tool when I have to work with SQL reports; some of these products have extended reporting capabilities. The latter group includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.

[Screenshot: i10]

Challenge #3 Work with the OTHER Database

The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them, everything is the same. Hundreds of times in my career, I have faced a situation where I am given a database to manage or some task to do when the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and I can do anything. Honestly, I can’t, but I hardly dare to argue. I fall back on a third-party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL so well). To simplify the search for a problem query, I used the MySQL Profiler in dbForge Studio for MySQL. It provides such commands as Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler: A Simple and Convenient Tool for Profiling SQL Queries.

[Screenshot: MySQL Profiler 06]

Well, that’s it! There were many other such occasions when I have been saved by a tool. Maybe some other day I will write part 2 of this blog post.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Synchronize Data Exclusively with T-SQL

UPDATE: This review and exercise is based on dbForge Studio.

Database developers often face the task of synchronizing users’ data. Quite a number of utilities have been developed to solve this task. In this article, we will review the process of creating new functionality to synchronize data exclusively with T-SQL.

Let’s split the T-SQL synchronization process into sub-tasks:

  1. Retrieving the table structure of a master database
  2. Preparing data stored in the tables of the master database and saving it at the public locations
  3. Turning off relations and triggers between the tables of a target database for the duration of the synchronization
  4. Synchronization of data between tables
  5. Turning on relations and triggers between the tables of the target database

To make the process of creating the synchronization functionality comparatively easy to understand, let’s discuss some restrictions:

  1. Synchronization will take place within one server; the syntax will be supported at the level of MS SQL Server 2005 or higher
  2. The schemes between the synchronized databases are identical
  3. Only system data types are supported
  4. It’s possible to synchronize any number of tables at a time
  5. The structure of tables is not essential

1) Retrieving the table structure of the master database

The list of tables can be obtained with the following query:
SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM sys.tables t
JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]

 

It’s important to remember that some system views, such as sys.tables, may contain an excessive number of joins and calls to system functions, which will negatively affect the run-time performance of the query:

SELECT *
FROM sys.objects$ o
LEFT JOIN sys.sysidxstats lob ON lob.id = o.[object_id] AND lob.indid <= 1
LEFT JOIN sys.syssingleobjrefs ds ON ds.depid = o.[object_id] AND ds.class = 8 AND ds.depsubid <= 1
LEFT JOIN sys.syssingleobjrefs rfs ON rfs.depid = o.[object_id] AND rfs.class = 42 AND rfs.depsubid = 0
LEFT JOIN sys.syspalvalues ts ON ts.class = 'LEOP' AND ts.value = o.lock_escalation_option
WHERE o.[type] = 'U '

Therefore, you should use the simplest system views possible. In our case, you can use sys.objects:

SELECT table_name = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U '

To keep the query from returning excessive rows, we first need to filter out the tables containing no data:

SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM (
SELECT
o.[object_id]
, o.[schema_id]
, o.name
FROM sys.objects o
WHERE o.[type] = 'U '
AND EXISTS(
SELECT 1
FROM sys.partitions p
WHERE p.[object_id] = o.[object_id]
AND p.[rows] > 0
)
)
t
JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]

The list of columns containing only system data types can be obtained in the following way:

SELECT
table_name = '[' + s.name + '].[' + o.name + ']'
, column_name = '[' + c.name + ']'
FROM sys.columns c
JOIN sys.objects o ON o.[object_id] = c.[object_id]
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
WHERE o.[type] = 'U '

Next, to minimize the amount of data that we pass for comparison, we suggest not taking into account column values that have default values and cannot be empty. So, the query will look as follows:

SELECT
table_name = '[' + s.name + '].[' + o.name + ']'
, column_name =
CASE WHEN c2.[object_id] IS NULL
THEN '[' + c.name + ']'
ELSE '[' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')'
END
FROM
sys.columns c
JOIN sys.objects o ON o.[object_id] = c.[object_id]
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('int', 'bit', 'char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
AND
c.is_nullable = 0
WHERE o.[type] = 'U '
AND c.is_computed = 0

2) Retrieving the data stored in the tables of the master database and saving it in a public location

First of all, we need an interim buffer into which we will load data from the master database. The tempdb system database is accessible from any user database on the server, so we will use it.

Every time synchronization runs, we check whether the buffer table exists and create it if needed with the following query:

IF OBJECT_ID('tempdb.dbo.MergeTemp', 'U') IS NULL BEGIN
CREATE TABLE tempdb.dbo.MergeTemp
(
  id UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() -- synchronization session identifier, referenced by the primary key
, [object_name] NVARCHAR(300) NOT NULL
, create_date DATETIME NOT NULL DEFAULT GETDATE()
, data XML NULL
, CONSTRAINT [PK_MergeTemp] PRIMARY KEY CLUSTERED
  (
    id ASC, [object_name] ASC
  ) ON [PRIMARY]
)
END

Then, we need to decide how to store the data from the master database. The main difficulty here is that the tables may have different structures. The most obvious option is to create a temporary table for each synchronized table and load data into it, but this is not optimal because it requires executing a large number of DDL commands.

Therefore, we suggest solving this task by means of XML, which can describe almost any database entity. The disadvantage of this method is that an incorrect XML structure may be generated for some data types. The list of such data types can be retrieved with the following query:

SELECT t.name
FROM sys.types t
WHERE t.user_type_id != t.system_type_id

 

Let’s write a query to create XML with dynamic structure:

DECLARE
  @Data NVARCHAR(4000)
, @ObjectName NVARCHAR(300)
DECLARE [tables] CURSOR READ_ONLY FAST_FORWARD LOCAL FOR
SELECT
[object_name]
, [sql] = '
INSERT INTO tempdb.dbo.MergeTemp([object_name], data)
SELECT '''
+ quota_object_name + ''', (SELECT ' + object_columns + '
FROM '
+ quota_object_name + ' t
FOR XML AUTO, ROOT('''
+ [object_name] + '''))'
FROM (
SELECT
[object_name] = s.name + '.' + t.name
, quota_object_name = '[' + s.name + '].[' + t.name + ']'
, object_columns = STUFF((
SELECT ISNULL(', [' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')',
', [' + c.name + ']')
+
CASE WHEN b.name = 'varbinary'
THEN ' = CAST('''' AS XML).value(''xs:base64Binary(sql:column("'+ c.name +'"))'',''VARCHAR(MAX)'')'
ELSE ''
END
FROM
sys.columns c WITH (NOLOCK)
JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
WHERE t.[object_id] = c.[object_id]
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '')
FROM (
SELECT
o.[object_id]
, o.[schema_id]
, o.name
FROM sys.objects o WITH (NOLOCK)
WHERE o.[type] = 'U'
AND EXISTS(
SELECT 1
FROM sys.partitions p WITH (NOLOCK)
WHERE p.[object_id] = o.[object_id]
AND p.[rows] > 0
)
AND NOT EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
JOIN sys.types t WITH (NOLOCK) ON c.system_type_id = t.system_type_id
WHERE (c.user_type_id != c.system_type_id OR t.name = 'xml')
AND
c.[object_id] = o.[object_id]
)
)
t
JOIN sys.schemas s WITH (NOLOCK) ON t.[schema_id] = s.[schema_id]
) data
OPEN [tables]
FETCH NEXT FROM [tables] INTO @ObjectName, @Data
IF (@@FETCH_STATUS = -1)
  RAISERROR('Have nothing to export...', 16, 1)
WHILE @@FETCH_STATUS = 0 BEGIN
  EXEC sys.sp_executesql @Data
  FETCH NEXT FROM [tables] INTO @ObjectName, @Data
END
CLOSE [tables]
DEALLOCATE [tables]

As a result, the following query will be generated for each table:

INSERT INTO tempdb.dbo.MergeTemp ([object_name], data)
SELECT '<table name>',
(
  SELECT <columns list>
  FROM <table name> t
  FOR XML AUTO, ROOT('<table name>')
)

After such a query is executed, XML containing the table data is generated and written to the MergeTemp table.
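
To verify what has been captured, you can query the buffer table directly (a minimal sanity check against the MergeTemp table created above):

SELECT
  [object_name]
, create_date
, data             -- the table contents serialized as XML
FROM tempdb.dbo.MergeTemp
ORDER BY create_date DESC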

3) Disabling relations and triggers between the tables of the target database for the duration of synchronization

At the next stage we will be inserting data into the tables, so it is preferable to turn off foreign keys and triggers beforehand. Otherwise, various errors may appear during the insert.

Here is a script that generates the statements to turn foreign keys off and back on for the synchronized tables:

DECLARE
  @FKeysOff NVARCHAR(MAX)
, @FKeysOn NVARCHAR(MAX)
;
WITH fkeys AS
(
SELECT DISTINCT o.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.foreign_keys fk ON o.[object_id] = fk.parent_object_id
WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
)
SELECT
@FKeysOff = (
SELECT 'ALTER TABLE ' + fk.[object_name] + ' NOCHECK CONSTRAINT ALL;' + CHAR(13)
FROM fkeys fk
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
@FKeysOn = (
SELECT 'ALTER TABLE ' + fk.[object_name] + ' CHECK CONSTRAINT ALL;' + CHAR(13)
FROM fkeys fk
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
EXEC sys.sp_executesql @FKeysOff -- disable the foreign keys before loading data
-- ... load the data (step 4) ...
EXEC sys.sp_executesql @FKeysOn  -- re-enable the foreign keys afterwards

This query looks cumbersome and it does not take more complicated relations between tables into account. If the database contains relatively few tables, the following command (which disables all table triggers and foreign keys in the database) may be used instead:

EXEC sys.sp_msforeachtable '
ALTER TABLE ? DISABLE TRIGGER ALL
ALTER TABLE ? NOCHECK CONSTRAINT ALL '

4) Data synchronization between tables

We now have access to the master database data stored as XML.

With the following query, we can find the unique identifier of each database object that we are going to synchronize:

SELECT
o.[object_id]
, mt.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT

It’s important to note that the object identifier (object_id) is unique only within a database and may change when the object is dropped and re-created during schema modifications. That is why we initially stored the name of the object rather than its identifier. To synchronize the tables, we need to parse the data back out of the XML.

One option is to use the OPENXML construction:

DECLARE @IDoc INT
EXEC sys.sp_xml_preparedocument @IDoc OUTPUT, @XML
SELECT [<column name>]
FROM OPENXML(@IDoc, '<table name>/t')
WITH ([<column name>] <column datatype>)
EXEC sys.sp_xml_removedocument @IDoc

 

However, this approach is less efficient for parsing large XML structures, so we use the construction described below instead.

Here is an example of a query that generates dynamic SQL to parse the XML and synchronize it with the target table:

DECLARE
  @ObjectID INT
, @ObjName NVARCHAR(300)
, @SQL NVARCHAR(MAX)
DECLARE package CURSOR FAST_FORWARD READ_ONLY LOCAL FOR
SELECT
o.[object_id]
, mt.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
OPEN package
FETCH NEXT FROM package INTO @ObjectID, @ObjName
WHILE @@FETCH_STATUS = 0 BEGIN
SELECT
@SQL = '
DECLARE @XML XML
SELECT @XML = (
SELECT data
FROM tempdb.dbo.MergeTemp
WHERE [object_name] = '''
+ @ObjName + '''
)
IF EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
WHERE c.[object_id] = '
+ CAST(@ObjectID AS VARCHAR(10)) + '
AND c.is_identity = 1
) SET IDENTITY_INSERT '
+ @ObjName + ' ON;
DELETE FROM '
+ @ObjName + '
INSERT INTO '
+ @ObjName + '(' +
STUFF((
SELECT ', [' + c.name + ']'
FROM sys.columns c
WHERE c.[object_id] = @ObjectID
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + ')
SELECT '
+ STUFF((
SELECT
', [' + c.name + '] = ' + CASE WHEN c2.name IS NOT NULL THEN 'ISNULL(' ELSE '' END + 'r.c.value(''@' + c.name + ''', ''' + b.name +
CASE
WHEN b.name IN ('char', 'varchar', 'nchar', 'nvarchar', 'varbinary') AND c.max_length = -1 THEN '(max)'
WHEN b.name IN ('char', 'varchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length AS VARCHAR(10)) + ')'
WHEN b.name IN ('nchar', 'nvarchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length / 2 AS VARCHAR(10)) + ')'
WHEN b.name = 'decimal' THEN '(' + CAST(c.[precision] AS NVARCHAR(10)) + ',' + CAST(c.[scale] AS VARCHAR(10)) + ')'
ELSE ''
END + ''')' + CASE WHEN c2.name IS NOT NULL THEN ', ' + c2.[definition] + ')' ELSE '' END
FROM
sys.columns c
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id AND b.name!= 'xml'
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('int', 'bit', 'char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
AND
c.is_nullable = 0
WHERE c.[object_id] = @ObjectID
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '') + '
FROM @XML.nodes('''
+ s.name + '.' + o.name + '/t'') r(c)
'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[object_id] = @ObjectID
EXEC sys.sp_executesql @SQL
FETCH NEXT FROM package INTO @ObjectID, @ObjName
END
CLOSE package
DEALLOCATE package

For each synchronized table, the script above generates a query of the following form:

DECLARE @XML XML
SELECT @XML = (
  SELECT data
  FROM tempdb.dbo.MergeTemp
  WHERE [object_name] = '<table name>'
)
IF EXISTS(
  SELECT 1
  FROM sys.columns c
  WHERE c.[object_id] = <table_id>
  AND c.is_identity = 1
) SET IDENTITY_INSERT <table name> ON;
DELETE FROM <table name>
INSERT INTO <table name> (<columns list>)
SELECT [<column name>] = r.c.value('@<column name>', '<column datatype>')
FROM @XML.nodes('<table name>/t') r(c)
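
For illustration only, here is what such a generated statement could look like for a hypothetical table dbo.Currency with an identity column CurrencyID and an NVARCHAR(50) column Name (the script above embeds the numeric object_id of the table; OBJECT_ID is used here just for readability):

DECLARE @XML XML
SELECT @XML = (
  SELECT data
  FROM tempdb.dbo.MergeTemp
  WHERE [object_name] = '[dbo].[Currency]'
)
IF EXISTS(
  SELECT 1
  FROM sys.columns c
  WHERE c.[object_id] = OBJECT_ID('dbo.Currency') -- hypothetical table
  AND c.is_identity = 1
) SET IDENTITY_INSERT [dbo].[Currency] ON;
DELETE FROM [dbo].[Currency]
INSERT INTO [dbo].[Currency] ([CurrencyID], [Name])
SELECT
  [CurrencyID] = r.c.value('@CurrencyID', 'int')
, [Name] = r.c.value('@Name', 'nvarchar(50)')
FROM @XML.nodes('dbo.Currency/t') r(c)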

Synchronization can be performed in three steps:

  1. Inserting new data
  2. Updating existing data by a key field
  3. Deleting data that no longer exists in the master database

The described sequence of actions can be implemented either with separate INSERT/UPDATE/DELETE statements or with the MERGE statement introduced in MS SQL Server 2008 (a minimal sketch follows). In either case the SQL code gets quite complicated, since many factors have to be taken into account (e.g., inserting data in the order dictated by the relations between tables). If the database has a simpler structure, an easier approach can be used.
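
For a single table, the three steps can be sketched with one MERGE statement. The names here are hypothetical: #src stands for the rows already parsed out of the XML, dbo.Customer is the target table, and CustomerID is its key:

MERGE dbo.Customer AS trg                -- target table
USING #src AS src                        -- rows extracted from the master XML
ON trg.CustomerID = src.CustomerID
WHEN MATCHED THEN                        -- step 2: update existing rows
  UPDATE SET
    trg.Name = src.Name
  , trg.Email = src.Email
WHEN NOT MATCHED BY TARGET THEN          -- step 1: insert new rows
  INSERT (CustomerID, Name, Email)
  VALUES (src.CustomerID, src.Name, src.Email)
WHEN NOT MATCHED BY SOURCE THEN          -- step 3: delete rows missing in the master
  DELETE;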

If all the tables of the database, or a specific isolated schema, are synchronized, it is possible to simply remove all records from the target tables and re-insert the data from the master database. On the one hand, this is the cheapest approach to implement when the database logic is simple. On the other hand, it may compromise the integrity and validity of the data in the target database.
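
A minimal sketch of this wipe-and-reload approach for one hypothetical table dbo.Customer, relying on the fact that both databases live on the same server (restriction 1) and assuming that foreign keys and triggers on the target are already disabled as in step 3:

BEGIN TRANSACTION;                        -- keep the target consistent if the insert fails

DELETE FROM AdventureWorks2008R2_Live.dbo.Customer;

SET IDENTITY_INSERT AdventureWorks2008R2_Live.dbo.Customer ON;

INSERT INTO AdventureWorks2008R2_Live.dbo.Customer (CustomerID, Name, Email)
SELECT CustomerID, Name, Email
FROM AdventureWorks2008R2.dbo.Customer;   -- read directly from the master database

SET IDENTITY_INSERT AdventureWorks2008R2_Live.dbo.Customer OFF;

COMMIT TRANSACTION;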

5) Enabling relations and triggers between the tables of the target database

After inserting data into the required tables, it is necessary to re-enable the foreign keys and triggers so that referential integrity and the internal business logic are preserved. This can be done as follows:

EXEC sys.sp_msforeachtable '
ALTER TABLE ? ENABLE TRIGGER ALL
ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL
'

When executing this command, the server verifies that the inserted data is valid. Therefore, if an error arises at this step, the most likely cause is that the inserted data violates referential integrity.
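
If such errors do occur, one way to locate the problem is to list the foreign keys that remain disabled or could not be validated (and are therefore marked as not trusted):

SELECT
  fk_name    = fk.name
, table_name = OBJECT_SCHEMA_NAME(fk.parent_object_id) + '.' + OBJECT_NAME(fk.parent_object_id)
FROM sys.foreign_keys fk
WHERE fk.is_not_trusted = 1
  OR fk.is_disabled = 1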

The resulting version of the script:

USE AdventureWorks2008R2
SET NOCOUNT ON;
SET XACT_ABORT ON;
RAISERROR('Start...', 0, 1) WITH NOWAIT
DECLARE
@SQL NVARCHAR(MAX)
,
@Time DATETIME
, @Text NVARCHAR(300)
,
@SessionUID UNIQUEIDENTIFIER
SELECT
@SQL = ''
, @Time = GETDATE()
,
@SessionUID = NEWID()
SELECT @Text = CONVERT(NVARCHAR(25), @Time, 114) + ' Creating temporary table...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
IF OBJECT_ID('tempdb.dbo.MergeTemp', 'U') IS NULL BEGIN
CREATE TABLE
tempdb.dbo.MergeTemp
(
id UNIQUEIDENTIFIER NOT NULL
,
[object_name] NVARCHAR(300) NOT NULL
,
create_date DATETIME NOT NULL DEFAULT GETDATE()
,
data XML NULL
,
CONSTRAINT [PK_MergeTemp] PRIMARY KEY CLUSTERED
(
id ASC, [object_name] ASC
) ON [PRIMARY]
)
END
SELECT
@Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Generate SQL queries...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
DECLARE
@Data NVARCHAR(4000)
,
@ObjectName NVARCHAR(300)
DECLARE [tables] CURSOR READ_ONLY FAST_FORWARD LOCAL FOR
SELECT
[object_name]
, [sql] = '
INSERT INTO tempdb.dbo.MergeTemp(id, [object_name], data)
SELECT '''
+ CAST(@SessionUID AS VARCHAR(36)) + ''', ''' + quota_object_name + ''', (SELECT ' + object_columns + '
FROM '
+ quota_object_name + ' t
FOR XML AUTO, ROOT('''
+ [object_name] + '''))'
FROM (
SELECT
[object_name] = s.name + '.' + t.name
, quota_object_name = '[' + s.name + '].[' + t.name + ']'
, object_columns = STUFF((
SELECT ISNULL(', [' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')',
', [' + c.name + ']')
+
CASE WHEN b.name = 'varbinary'
THEN ' = CAST('''' AS XML).value(''xs:base64Binary(sql:column("'+ c.name +'"))'',''VARCHAR(MAX)'')'
ELSE ''
END
FROM
sys.columns c WITH (NOLOCK)
JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
WHERE t.[object_id] = c.[object_id]
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '')
FROM (
SELECT
o.[object_id]
, o.[schema_id]
, o.name
FROM sys.objects o WITH (NOLOCK)
WHERE o.[type] = 'U'
AND EXISTS(
SELECT 1
FROM sys.partitions p WITH (NOLOCK)
WHERE p.[object_id] = o.[object_id]
AND p.[rows] > 0
)
AND NOT EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
JOIN sys.types t WITH (NOLOCK) ON c.system_type_id = t.system_type_id
WHERE (c.user_type_id!= c.system_type_id OR t.name = 'xml')
AND
c.[object_id] = o.[object_id]
)
)
t
JOIN sys.schemas s WITH (NOLOCK) ON t.[schema_id] = s.[schema_id]
-- You can sync all the tables of the database or only specific schemas/tables
-- WHERE s.name = 'Sales'
-- OR t.name IN ('Address', 'AddressType')
) data
OPEN [tables]
FETCH NEXT FROM [tables] INTO @ObjectName, @Data
IF (@@FETCH_STATUS = -1)
RAISERROR('Have nothing to export...', 16, 1)
WHILE @@FETCH_STATUS = 0 BEGIN
SELECT
@Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Generate XML for ' + @ObjectName
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_executesql @Data
FETCH NEXT FROM [tables] INTO @ObjectName, @Data
END
CLOSE
[tables]
DEALLOCATE [tables]
PRINT 'ELAPSED TIME: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS NVARCHAR(15)) + 'ms'
RAISERROR('Finish...', 0, 1) WITH NOWAIT
PRINT REPLICATE('-', 80)
USE AdventureWorks2008R2_Live
RAISERROR('Start...', 0, 1) WITH NOWAIT
SELECT @Time = GETDATE()
SELECT @Text = CONVERT(NVARCHAR(25), @Time, 114) + ' Get similar objects in both databases...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
DECLARE @MergeTemp TABLE
(
[object_name] NVARCHAR(300)
,
[object_id] INT
, data XML
)
INSERT INTO @MergeTemp
(
[object_name]
, [object_id]
, data
)
SELECT
mt.[object_name]
, o.[object_id]
, mt.data
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o WITH (NOLOCK)
JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U'
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
WHERE mt.id = @SessionUID
IF NOT EXISTS(
SELECT 1
FROM @MergeTemp
) RAISERROR('Have nothing to import...', 16, 1)
SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Turn off foreign keys and triggers...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_msforeachtable '
ALTER TABLE ? DISABLE TRIGGER ALL
ALTER TABLE ? NOCHECK CONSTRAINT ALL
'
DECLARE
@ObjectID INT
, @ObjName NVARCHAR(300)
DECLARE package CURSOR FAST_FORWARD READ_ONLY LOCAL FOR
SELECT
mt.[object_id]
, mt.[object_name]
FROM @MergeTemp mt
OPEN package
FETCH NEXT FROM package INTO
@ObjectID
, @ObjName
WHILE @@FETCH_STATUS = 0 BEGIN
SELECT
@Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Processing ' + @ObjName + '...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
SELECT @SQL = '
DECLARE @XML XML
SELECT @XML = (
SELECT data
FROM tempdb.dbo.MergeTemp
WHERE [object_name] = '''
+ @ObjName + ''' AND id = ''' + CAST(@SessionUID AS VARCHAR(36)) + '''
)
IF EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
WHERE c.[object_id] = '
+ CAST(@ObjectID AS VARCHAR(10)) + '
AND c.is_identity = 1
) SET IDENTITY_INSERT '
+ @ObjName + ' ON;
DELETE FROM '
+ @ObjName + '
INSERT INTO '
+ @ObjName + '(' +
STUFF((
SELECT ', [' + c.name + ']'
FROM sys.columns c WITH (NOLOCK)
WHERE c.[object_id] = @ObjectID
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + ')
SELECT '
+ STUFF((
SELECT
', [' + c.name + '] = ' + CASE WHEN c2.name IS NOT NULL THEN 'ISNULL(' ELSE '' END + 'r.c.value(''@' + c.name + ''', ''' + b.name +
CASE
WHEN b.name IN ('char', 'varchar', 'nchar', 'nvarchar', 'varbinary') AND c.max_length = -1 THEN '(max)'
WHEN b.name IN ('char', 'varchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length AS VARCHAR(10)) + ')'
WHEN b.name IN ('nchar', 'nvarchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length / 2 AS VARCHAR(10)) + ')'
WHEN b.name = 'decimal' THEN '(' + CAST(c.[precision] AS NVARCHAR(10)) + ',' + CAST(c.[scale] AS VARCHAR(10)) + ')'
ELSE ''
END + ''')' + CASE WHEN c2.name IS NOT NULL THEN ', ' + c2.[definition] + ')' ELSE '' END
FROM
sys.columns c WITH (NOLOCK)
JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id AND b.name!= 'xml'
LEFT JOIN sys.default_constraints c2 WITH (NOLOCK) ON c.default_object_id = c2.[object_id]
AND b.name IN ('bit', 'char', 'varchar', 'nchar', 'nvarchar')
WHERE c.[object_id] = @ObjectID
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '') + '
FROM @XML.nodes('''
+ s.name + '.' + o.name + '/t'') r(c)
'
FROM sys.objects o WITH (NOLOCK)
JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id]
WHERE o.[object_id] = @ObjectID
EXEC sys.sp_executesql @SQL
FETCH NEXT FROM package INTO
@ObjectID
, @ObjName
END
CLOSE
package
DEALLOCATE package
SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Turn on foreign keys and triggers...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_msforeachtable '
ALTER TABLE ? ENABLE TRIGGER ALL
ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL
'
PRINT 'ELAPSED TIME: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS NVARCHAR(15)) + 'ms'
RAISERROR('Finish...', 0, 1) WITH NOWAIT

Conclusion:

As we can see, even for databases with a simple structure, the synchronization process is quite a big task that includes many different actions.

As you may have noticed throughout the article, manual data synchronization is not an easy task. Thus, to save time and effort, it is highly recommended to use specialized synchronization tools. For instance, the following screenshot demonstrates how you can visualize compared data before synchronization using the data comparison feature of dbForge Studio for SQL Server.

devartsync SQL SERVER   Synchronize Data Exclusively with T SQL

The upper grid shows the differences between database objects, and the bottom grid displays the data differences. To synchronize the source and target databases, you can automatically generate a synchronization script, then view it, save it, or execute it immediately to update the target database. This makes the whole process much easier to handle.

Reference: Pinal Dave (http://blog.SQLAuthority.com)