SQL SERVER – Beginning with SQL Server Security Aspects

SQL Server Security Aspects

This article provides a high-level overview of SQL Server security aspects and best practices. Ensuring SQL Server security is an important task that can be successfully addressed by applying the best practices and proven solutions described in this article.

Physical Security

The physical security aspects of SQL Server are often overlooked. DB admins frequently focus on software and network security while completely forgetting about the physical server itself. However, if someone can physically reach the server, every other security configuration applied to the software can be defeated. SQL Server is always installed on a physical device, whether it is server hardware, a desktop PC, a laptop, or another machine in a data center. The main goal of physical security is to prevent someone from gaining access to that machine.

Shared data centers or server rooms provide a wide range of physical and environmental controls: secure areas are protected by appropriate entry controls so that only authorized personnel are allowed access. Appropriate physical protection should also be provided against damage from natural or man-made disasters, such as fire, flood, or explosion. All users are required to ensure that systems are not left open to access by intruders to buildings, or by unauthorized colleagues.

If the personnel in the data center are unable to physically lock away the server, you must ensure that all USB ports on the server are disabled. Keep in mind that security is not complete if physical security is overlooked.

Network Security

The network is the path by which external threats can reach and attack your SQL Server.

Firewalls provide an efficient means to implement security. A firewall is a separator or restrictor of network traffic, which can be configured to enforce your data security policy. Using a firewall increases security at the operating system level.

SQL Server Instance Security

Every SQL Server instance installed on a machine can be considered a separate security domain. Thus, at the instance level you can apply separate security considerations.

I will use dbForge Studio for SQL Server v4.0 for demonstration purposes, as it has an excellent tool called Security Manager.

The following picture demonstrates two separate instances of SQL Server on the same computer.  Each has its own specific name, the first one is MSSQLSERVER\Instance1, the second one is MSSQLSERVER\Instance2.

This picture shows how security applies at the instance level. SQL Server security uses the concept of securables, which will be discussed later in this article.

Securables are the objects to which the SQL Server authorization system controls access. For instance, a table is a securable. Some securables can be contained within others, creating nested hierarchies called “scopes”. The securable scopes are server, database, and schema.

An entity that can be granted permission to a securable is called a principal. The most common principals are logins and users.

A login is a security principal, or an entity that can be authenticated by a secure system. Users need a login to connect to SQL Server. You can create a login based on a Windows principal (such as a domain user or a Windows domain group) or you can create a login that is not based on a Windows principal (such as an SQL Server login).
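For example, here is a minimal sketch of both kinds of logins in T-SQL (the domain, login, and password names are hypothetical):

-- Login based on a Windows principal (hypothetical domain account)
CREATE LOGIN [CONTOSO\JohnDoe] FROM WINDOWS;
-- SQL Server login with its own password
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd';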

To control access to securables, a DB admin can grant or deny permissions, or add logins and users to roles that have access.
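As a simple illustration (the database, user, and object names are made up), granting, denying, and role membership look like this:

USE SampleDB;
-- Map the server login to a database user
CREATE USER AppUser FOR LOGIN AppLogin;
-- Grant and deny object-level permissions
GRANT SELECT ON dbo.Orders TO AppUser;
DENY DELETE ON dbo.Orders TO AppUser;
-- Add the user to a fixed database role
ALTER ROLE db_datareader ADD MEMBER AppUser;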

It is important to note that every individual instance has its own set of security considerations. The security folders on each instance contain the same users, application roles, and other folders. However, if we expand the Logins folder, we can see a different number of accounts.

This proves that we can have different security settings for individual users across different instances.

It is also important to note that when you set up a SQL Server instance, you should install only the features that are necessary for that instance to operate. This reduces the attack surface of your SQL Server instance by reducing the number of services and features that malicious users could exploit to gain access.
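For instance, optional features that are not needed can be kept turned off with sp_configure; a minimal sketch for the xp_cmdshell extended procedure:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Keep xp_cmdshell disabled to reduce the attack surface
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;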

SQL Server Securables

Securables are the resources to which the SQL Server Database Engine authorization system regulates access. Essentially these are the database objects to which you can apply security on your instance of SQL Server. Some securables are standalone and others may be contained within another securable.

Each database also has its own Security folder, where we can focus on users, the database-level principals to which we assign permissions within that database.

We also have database-level roles and application roles. Currently, there are no application roles assigned on this server. There are other components that we do not see in the Database Explorer; however, they still exist at the database level. These components are assemblies, which are typically DLL files used in an instance of SQL Server for deploying functions, stored procedures, or triggers. These components are managed by the CLR.

SQL Server Principals

There are three high level categories of SQL Server security principals:

  1. Windows level (domain and local logins)
  2. SQL Server level (SQL Logins and server roles)
  3. Database level (database users, DB and application roles)

There are two SQL Server authentication types:

  • by using a Windows account security token
  • by using an account defined inside SQL Server

You choose the type of authentication during installation. Windows mode is the most secure, as it relies on Windows accounts as opposed to SQL accounts. The recommended setting is Windows Authentication, and it is selected by default. This authentication type simplifies administration and is more secure, because it uses Windows security tokens. There is no need to remember another password, and no password travels through the local network.

You cannot disable Windows authentication; however, you can extend it with SQL Server authentication. Thus, if you need to allow access to users outside the local network, you should use SQL Server authentication. In this case, logins are defined and managed inside SQL Server, and authentication and password validation are also handled by SQL Server.
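You can check which mode an instance is currently running in, for example, with SERVERPROPERTY:

-- Returns 1 for Windows Authentication only, 0 for mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;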

There are also default roles created during installation, one of which is known as the public role.

SQL Server creates the public role as a way of assigning permissions to users who have accounts on the database but do not have any other permissions assigned yet. The public role serves the purpose of providing limited permissions to user accounts on the database until you have had the opportunity to assign the necessary permissions to those accounts. SQL Server also includes a guest user account. This account is not a server-level account, but exists at the database level.

If we expand the Users folder within the database, we will notice a guest account that is available in each database. The guest account is used to grant permissions to users who might need to access the database but do not have a user account assigned or created in the database itself.

The guest account cannot be dropped, but it can be disabled through the revocation of the CONNECT permission. User and application access to your SQL Server instance is controlled using these security principals.
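A minimal sketch of disabling guest in a user database (the database name is hypothetical; do not revoke CONNECT from guest in the system databases):

USE SampleDB;
REVOKE CONNECT FROM guest;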

Manage Logins and Users

As mentioned above, SQL Server implements security through the use of securables, which are the database objects, and security principals, which are the users and applications that access the database.

Knowing how to create logins and user accounts is the first step in creating the authentication and authorization model for your SQL Server.

Before creating logins, you must know which authentication mode your SQL Server instance is configured to use. In the case of mixed-mode authentication, we can create Windows accounts or local SQL Server accounts for authentication on the instance.

Keep in mind that SQL Server allows you to create logins through Windows accounts or SQL Server accounts at the server level. Also SQL Server permits the creation of local user accounts at the database level. If we expand the database, and expand the Security folder, we notice that SQL Server includes a Users folder at the database level.

Simply right-click the Users folder, and then click New User. This allows us to choose a user type at the database level, as opposed to the server level. We can create a SQL user with or without a login, which means this would be a user name we create specifically in this database.
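In T-SQL the same looks roughly like this (the names are hypothetical):

USE SampleDB;
-- Database user mapped to an existing server login
CREATE USER ReportUser FOR LOGIN ReportLogin;
-- User that exists only in this database, with no server login
CREATE USER ServiceUser WITHOUT LOGIN;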

A good example of such an account is the guest account, which is created at the database level, but does not exist at the logins level for the server itself.

You can set the authentication mode to Mixed Mode at any time after the installation. Then you can create SQL Server accounts or use Windows accounts, depending on your choice.

Understanding each type of account and how security is applied to these accounts will help you set the proper security for user access.

Understanding and using roles

SQL Server roles allow you to group users or logins into a specific container for assigning permissions to securables on your SQL Server instance. Permissions assigned to a role are applied to any user or login that is associated with that role. SQL Server provides nine fixed server roles. You can find these server roles in the Security folder of the SQL Server instance. Outside of the nine fixed server roles, SQL Server also provides database level roles.
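For example, a login can be added to a fixed server role with a single statement (a sketch; the login name is made up, and ALTER SERVER ROLE requires SQL Server 2012 or later):

-- Allow the login to create databases on this instance
ALTER SERVER ROLE dbcreator ADD MEMBER AppLogin;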

SQL Server Permissions

Accessing any of the securables in SQL Server requires that the login or user have permissions. The permissions in SQL Server follow certain naming conventions. The security check algorithm used by SQL Server is complex, and the complexity comes from the need to evaluate group or role membership, as well as explicit and implicit permissions.

There are three core elements of SQL Server permission checking:

  1. Security context. This is related to the user, the login or the role.
  2. Permission space. The permission space focuses on the securable, such as the database object that the user is attempting to access. SQL Server checks the permissions that the principal has been assigned on that securable.
  3. Required permissions. This is an important step because some tasks require more than one permission. For instance, if a user attempts to execute a stored procedure, the EXECUTE permission would be required on the stored procedure itself, as shown in the example below.
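For instance (the procedure and user names are hypothetical):

GRANT EXECUTE ON dbo.usp_GetOrders TO AppUser;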

Summary

SQL Server security is too large a topic to be covered in a single article. However, this article provides an overview of the core principles and concepts of SQL Server security.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – A Practical Use of Backup Encryption

Backup is extremely important for any DBA. Think of any disaster, and a backup will come to the rescue in an adverse situation. Similarly, it is very critical that we keep our backups safe as well. If your backup falls into the hands of bad people, it is quite possible that it will be misused and become a serious data security issue. In this blog post we will see a practical scenario showing how we can use Backup Encryption to improve the security of a backup.

Feature description

Database backup encryption is a brand-new, long-awaited feature that is now available in SQL Server 2014. You can create an encrypted backup file by specifying the encryption algorithm and the encryptor (either a certificate or an asymmetric key).

The ability to protect a backup file with a password has existed for many years. If you have used SQL Server for a long time, you might remember the WITH PASSWORD option of the BACKUP command. The option prevented unauthorized access to the backup file.

However, this approach did not provide reliable protection. In that regard, Greg Robidoux noted on MSSQLTIPS: “Although this does add a level of security if someone really wants to crack the passwords they will find a way, so look for additional ways to secure your data.”

To protect a backup file, SQL Server 2008 introduced the transparent data encryption (TDE) feature, so a database had to be transparently encrypted before backup. Accordingly, starting with SQL Server 2012, the PASSWORD and MEDIAPASSWORD parameters are no longer used when creating backups. Even so, data encryption and backup file encryption are two different scenarios.

If a database is stored locally, there is no need to encrypt it before backup. Fortunately, in SQL Server 2014 these are two independent processes. Along with data encryption, it is possible to encrypt a backup file based on a certificate or an asymmetric key (a sample BACKUP command follows the list below). Supported encryption algorithms are:

  • AES 128
  • AES 192
  • AES 256
  • Triple DES
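For reference, a minimal sketch of an encrypted backup command (the database name, path, and certificate name are placeholders):

BACKUP DATABASE AdventureWorks2012
TO DISK = 'D:\Backup\AdventureWorks2012.bak'
WITH
COMPRESSION,
ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);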

Practical use

To illustrate the above, I will create an encrypted backup of the AdventureWorks database. You can also back up directly to Azure and, if needed, restore the encrypted backup file on Azure.

I will use dbForge Studio for SQL Server to create the encrypted backup file.

To protect the backup file we need to create an encryptor: either a Certificate or Asymmetric Key. Then, we need to pass this encryptor to the target SQL Server to restore the backup. For this, the encryptor must be exported from the source SQL Server and imported to the target SQL Server. There are no problems with the certificate in this regard. It is more complicated with asymmetric keys.

Taking into account that the BACKUP ASYMMETRIC KEY command is not available, and we cannot simply create a duplicate of an asymmetric key (as we can with a symmetric key), the only approach is to create the asymmetric key outside SQL Server. Then we can use the sn.exe utility to transfer it into SQL Server (CREATE ASYMMETRIC KEY keyname FROM FILE = 'filename.snk'). After that, we can use this asymmetric key to encrypt the backup file on the source instance. Further, we need to use the same *.snk file to create the asymmetric key on the target instance (and restore the backup file).

In our example we will not use asymmetric keys; we will use a certificate. Behind the scenes, a certificate is a pair of public/private keys.

Let’s create the server certificate and use it to encrypt the backup file.
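A minimal sketch of what this can look like (the names and password are hypothetical; the master key only needs to be created if it does not already exist):

USE master;
-- Database master key (skip if it already exists)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Very$trongPassw0rd';
-- Certificate that will protect the backup
CREATE CERTIFICATE BackupCert
WITH SUBJECT = 'Backup encryption certificate';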

The certificate will be protected with the database master key, because we didn’t specify the ENCRYPTION BY statement.

This is exactly what we need. Only certificates protected with the database master key can be used for encryption purposes. Otherwise, if we, for instance, protect the certificate with a password (ENCRYPTION BY PASSWORD = 'strongpassword'), the following error appears while attempting to encrypt the backup file:

“Cannot use certificate ‘CertName’, because its private key is not present or it is not protected by the database master key.”

Encrypted backups (like usual backups) can be created either locally on the hard drive or in Azure Storage.

Instead of writing tons of SQL code, I will use the convenient Back Up wizard in dbForge Studio for SQL Server. The wizard allows you to create a database backup in several clicks.

Step 1: Set up the DB connection and the backup file location.

Step 2: Set up the media set.

Step 3: Select the encryption algorithm and certificate.

If you don't want to pay extra attention to transferring the backup file to Windows Azure, you can back up directly to Azure.
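A hedged sketch of backing up straight to Azure Blob Storage (the storage account URL, credential, and certificate names are placeholders; the credential must be created beforehand):

BACKUP DATABASE AdventureWorks2012
TO URL = 'https://myaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
WITH
CREDENTIAL = 'AzureBackupCredential',
ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);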

After the script execution, the blob (with the backup) appears in the required container.

In case you had already created a backup with the same name in the same container, you can get the following error: There is currently a lease on the blob and no lease ID was specified in the request.

Further, you can restore the backup file on Windows Azure.

Summary: 

Obviously, it is a good practice to encrypt a backup file while transferring it. This, for instance, helps avoid data leaks while transferring backups from one data center to another.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – 3 Challenges for DBA and Smart Solutions

Developer’s life is never easy. DBA’s life is even crazier.

DBA’s Life

When developers wake up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and roadmap they are working on. However, they have no clue what coding challenges they are going to face that day.

A DBA's life is even crazier. When DBAs wake up in the morning, they are often thankful that they were not disturbed during the night due to server issues. The very next thing they wish for is not to face a challenge they can't solve that day. The problems DBAs face every single day are mostly unpredictable, and they just have to solve them as they come.

The life of a DBA is not always bad, though. There are always ways and methods to overcome various challenges. Let us see three of these challenges and how a DBA can use various tools to overcome them.

Challenge #1 Synchronize Data Across Server

A very common challenge DBAs receive is having to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task, and it is nearly impossible to do with plain T-SQL alone. Thankfully, there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.

Challenge #2 SQL Report Builder

DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares. No matter how busy a DBA is, they are just called upon to build reports on very short notice. I personally like to avoid any task that is handed to me incidentally, and building reports can be boring. I would rather spend time on high availability, disaster recovery, or performance tuning than on building reports. I use a third-party tool when I have to work with SQL reports, such as the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.

Challenge #3 Work with the OTHER Database

Managers do not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them, everything is the same. In my career, I have faced hundreds of situations where I am given a database to manage or some task to do while the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and I can do anything. Honestly, I can't, but I hardly dare to argue. I fall back on a third-party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL very well). To simplify the search for problem queries, I used the MySQL Profiler in dbForge Studio for MySQL. It provides such commands as Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries.

Well, that’s it! There have been many other occasions when I have been saved by a tool. Maybe some other day I will write part 2 of this blog post.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Synchronize Data Exclusively with T-SQL

UPDATE: This review and exercise is based on dbForge Studio.

Database developers often face the task of synchronizing users' data. Quite a large number of utilities have been developed to solve this task. In this article, we will review the process of creating new functionality to synchronize data exclusively with T-SQL.

Let's split the T-SQL synchronization process into the following sub-tasks:

  1. Retrieving the table structure of the master database
  2. Preparing the data stored in the tables of the master database and saving it in a public location
  3. Turning off relations and triggers between the tables of the target database for the duration of synchronization
  4. Synchronizing data between tables
  5. Turning on relations and triggers between the tables of the target database

To make the process of creating the synchronization functionality comparatively easy to understand, let's agree on some restrictions:

  1. Synchronization will take place within one server; the syntax will be supported at the level of MS SQL Server 2005 or higher
  2. The schemas of the synchronized databases are identical
  3. Only system data types are supported
  4. It is possible to synchronize any number of tables at a time
  5. The structure of the tables is not essential

1) Retrieving the table structure of the master database

The list of tables can be obtained with the following query:
SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM sys.tables t
JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]

 

It's important to remember that some system views may contain an excessive number of joins and calls to system functions, which will negatively affect the run-time performance of the query. For instance, here is what sys.tables expands into:

SELECT *
FROM sys.objects$ o
LEFT JOIN sys.sysidxstats lob ON lob.id = o.[object_id] AND lob.indid <= 1
LEFT JOIN sys.syssingleobjrefs ds ON ds.depid = o.[object_id] AND ds.class = 8 AND ds.depsubid <= 1
LEFT JOIN sys.syssingleobjrefs rfs ON rfs.depid = o.[object_id] AND rfs.class = 42 AND rfs.depsubid = 0
LEFT JOIN sys.syspalvalues ts ON ts.class = 'LEOP' AND ts.value = o.lock_escalation_option
WHERE o.[type] = 'U '

Therefore, you have to use system views that are as simple as possible. In our case, you can use sys.objects:

SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U '

To prevent the query from returning excessive rows, we need to preliminarily filter out the tables containing no data:

SELECT table_name = '[' + s.name + '].[' + t.name + ']'
FROM (
SELECT
o.[object_id]
, o.[schema_id]
, o.name
FROM sys.objects o
WHERE o.[type] = 'U '
AND EXISTS(
SELECT 1
FROM sys.partitions p
WHERE p.[object_id] = o.[object_id]
AND p.[rows] > 0
)
)
t
JOIN sys.schemas s ON t.[schema_id] = s.[schema_id]

The list of columns containing only system data types can be obtained in the following way:

SELECT
table_name = '[' + s.name + '].[' + o.name + ']'
, column_name = '[' + c.name + ']'
FROM sys.columns c
JOIN sys.objects o ON o.[object_id] = c.[object_id]
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
WHERE o.[type] = 'U '

Next, to minimize the amount of data that we pass for comparison, it is suggested not to take into account column values that have default values and cannot be empty. So, the query will look as follows:

SELECT
table_name = '[' + s.name + '].[' + o.name + ']'
, column_name =
CASE WHEN c2.[object_id] IS NULL
THEN '[' + c.name + ']'
ELSE '[' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')'
END
FROM
sys.columns c
JOIN sys.objects o ON o.[object_id] = c.[object_id]
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('int', 'bit', 'char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
AND
c.is_nullable = 0
WHERE o.[type] = 'U '
AND c.is_computed = 0

2) Retrieving the data stored in the tables of the master database and saving it in public locations

First of all, we need some interim buffer into which we will load data from the master database. The tempdb system database is accessible to any user database on the server; therefore, we can choose it.

Every time synchronization runs, we will check whether the table exists and create it if needed with the following query:

IF OBJECT_ID('tempdb.dbo.MergeTemp', 'U') IS NULL BEGIN
CREATE TABLE tempdb.dbo.MergeTemp
(
id UNIQUEIDENTIFIER NOT NULL
, [object_name] NVARCHAR(300) NOT NULL
, create_date DATETIME NOT NULL DEFAULT GETDATE()
, data XML NULL
, CONSTRAINT [PK_MergeTemp] PRIMARY KEY CLUSTERED
(
id ASC, [object_name] ASC
) ON [PRIMARY]
)
END

Then, it is necessary to decide how to store data from the master database. The main difficulty in this task is that tables may have different structures. The most obvious option is to create a temporary table for each synchronized table and upload data into it. However, such an option may not be optimal because we would have to execute many DDL commands.

Therefore, we suggest solving this task by means of XML, which can describe almost any database entity. The disadvantage of the suggested method is that a not quite correct XML data structure may be generated for some data types. You can retrieve a list of such data types with the following query:

SELECT t.name
FROM sys.types t
WHERE t.user_type_id != t.system_type_id

 

Let’s write a query to create XML with dynamic structure:

DECLARE
@Data NVARCHAR(4000)
,
@ObjectName NVARCHAR(300)
DECLARE [tables] CURSOR READ_ONLY FAST_FORWARD LOCAL FOR
SELECT
[object_name]
, [sql] = '
INSERT INTO tempdb.dbo.MergeTemp([object_name], data)
SELECT '''
+ quota_object_name + ''', (SELECT ' + object_columns + '
FROM '
+ quota_object_name + ' t
FOR XML AUTO, ROOT('''
+ [object_name] + '''))'
FROM (
SELECT
[object_name] = s.name + '.' + t.name
, quota_object_name = '[' + s.name + '].[' + t.name + ']'
, object_columns = STUFF((
SELECT ISNULL(', [' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')',
', [' + c.name + ']')
+
CASE WHEN b.name = 'varbinary'
THEN ' = CAST('''' AS XML).value(''xs:base64Binary(sql:column("'+ c.name +'"))'',''VARCHAR(MAX)'')'
ELSE ''
END
FROM
sys.columns c WITH (NOLOCK)
JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
WHERE t.[object_id] = c.[object_id]
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '')
FROM (
SELECT
o.[object_id]
, o.[schema_id]
, o.name
FROM sys.objects o WITH (NOLOCK)
WHERE o.[type] = 'U'
AND EXISTS(
SELECT 1
FROM sys.partitions p WITH (NOLOCK)
WHERE p.[object_id] = o.[object_id]
AND p.[rows] > 0
)
AND NOT EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
JOIN sys.types t WITH (NOLOCK) ON c.system_type_id = t.system_type_id
WHERE (c.user_type_id != c.system_type_id OR t.name = 'xml')
AND
c.[object_id] = o.[object_id]
)
)
t
JOIN sys.schemas s WITH (NOLOCK) ON t.[schema_id] = s.[schema_id]
) data
OPEN [tables]
FETCH NEXT FROM [tables] INTO @ObjectName, @Data
IF (@@FETCH_STATUS = -1)
RAISERROR('Have nothing to export...', 16, 1)
WHILE @@FETCH_STATUS = 0 BEGIN
EXEC
sys.sp_executesql @Data
FETCH NEXT FROM [tables] INTO @ObjectName, @Data
END
CLOSE
[tables]
DEALLOCATE [tables]

As a result, the following query will be generated for each table:

INSERT INTO tempdb.dbo.MergeTemp ([object_name], data)
SELECT <TABLE name>,
(
SELECT <columns list>
FROM <TABLE name> t
FOR XML AUTO, ROOT('<table name>')
)

After such query is executed, an XML with table data will be generated and written into the MergeTemp table.

3) Disabling relations and triggers between the tables of the target database for the duration of synchronization

At the next stage, we will insert data into the tables; therefore, it is preferable to turn off foreign keys and triggers. If you do not, various errors may appear at the insert stage.

Here is a script to turn off/on foreign keys for tables:

DECLARE
@FKeysOff NVARCHAR(MAX)
,
@FKeysOn NVARCHAR(MAX)
;
WITH fkeys AS
(
SELECT DISTINCT o.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
JOIN sys.foreign_keys fk ON o.[object_id] = fk.parent_object_id
WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
)
SELECT
@FKeysOff = (
SELECT 'ALTER TABLE ' + fk.[object_name] + ' NOCHECK CONSTRAINT ALL;' + CHAR(13)
FROM fkeys fk
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
@FKeysOn = (
SELECT 'ALTER TABLE ' + fk.[object_name] + ' CHECK CONSTRAINT ALL;' + CHAR(13)
FROM fkeys fk
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
EXEC sys.sp_executesql @FKeysOff
EXEC sys.sp_executesql @FKeysOn

This query seems cumbersome, and it does not take into account more complicated relations between tables. If the database contains relatively few tables, the following query (turning off all table triggers and foreign keys in the database) may be used:

EXEC sys.sp_msforeachtable '
ALTER TABLE ? DISABLE TRIGGER ALL
ALTER TABLE ? NOCHECK CONSTRAINT ALL '

4) Data synchronization between tables

We already have access to the data from the master database stored as XML.

With the following query we can find the unique identifier of the database object we are going to synchronize:

SELECT
o.[object_id]
, mt.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT

It's important to note that the object identifier (object_id) is unique in the database and may change when the schema is modified (e.g., when adding a new column). That is why, initially, we stored the name of the object instead of its identifier. To synchronize the tables, we need to parse data from the XML.

As an option, for this, you can use the OPENXML construction:

DECLARE @IDoc INT
EXEC sys.sp_xml_preparedocument @IDoc OUTPUT, @XML
SELECT [<column name>]
FROM OPENXML(@IDoc, '<table name>/t')
WITH ([<column name>] [<column datatype>])
EXEC sys.sp_xml_removedocument @IDoc

 

However, this approach is less effective when parsing significantly large XML structures, therefore it was decided to use the construction described below.

Here is an example of a query that will generate dynamic SQL by parsing the XML and synchronizing it with the target table:

DECLARE
@ObjectID INT
, @ObjName NVARCHAR(300)
,
@SQL NVARCHAR(MAX)
DECLARE package CURSOR FAST_FORWARD READ_ONLY LOCAL FOR
SELECT
o.[object_id]
, mt.[object_name]
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U '
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
OPEN package
FETCH NEXT FROM package INTO @ObjectID, @ObjName
WHILE @@FETCH_STATUS = 0 BEGIN
SELECT
@SQL = '
DECLARE @XML XML
SELECT @XML = (
SELECT data
FROM tempdb.dbo.MergeTemp
WHERE [object_name] = '''
+ @ObjName + '''
)
IF EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
WHERE c.[object_id] = '
+ CAST(@ObjectID AS VARCHAR(10)) + '
AND c.is_identity = 1
) SET IDENTITY_INSERT '
+ @ObjName + ' ON;
DELETE FROM '
+ @ObjName + '
INSERT INTO '
+ @ObjName + '(' +
STUFF((
SELECT ', [' + c.name + ']'
FROM sys.columns c
WHERE c.[object_id] = @ObjectID
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + ')
SELECT '
+ STUFF((
SELECT
', [' + c.name + '] = ' + CASE WHEN c2.name IS NOT NULL THEN 'ISNULL(' ELSE '' END + 'r.c.value(''@' + c.name + ''', ''' + b.name +
CASE
WHEN b.name IN ('char', 'varchar', 'nchar', 'nvarchar', 'varbinary') AND c.max_length = -1 THEN '(max)'
WHEN b.name IN ('char', 'varchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length AS VARCHAR(10)) + ')'
WHEN b.name IN ('nchar', 'nvarchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length / 2 AS VARCHAR(10)) + ')'
WHEN b.name = 'decimal' THEN '(' + CAST(c.[precision] AS NVARCHAR(10)) + ',' + CAST(c.[scale] AS VARCHAR(10)) + ')'
ELSE ''
END + ''')' + CASE WHEN c2.name IS NOT NULL THEN ', ' + c2.[definition] + ')' ELSE '' END
FROM
sys.columns c
JOIN sys.types b ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id AND b.name!= 'xml'
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('int', 'bit', 'char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
AND
c.is_nullable = 0
WHERE c.[object_id] = @ObjectID
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '') + '
FROM @XML.nodes('''
+ s.name + '.' + o.name + '/t'') r(c)
'
FROM sys.objects o
JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.[object_id] = @ObjectID
EXEC sys.sp_executesql @SQL
FETCH NEXT FROM package INTO @ObjectID, @ObjName
END
CLOSE
package
DEALLOCATE package

For each synchronized table, a query of the following form will be generated:

DECLARE @XML XML
SELECT
@XML = (
SELECT data
FROM tempdb.dbo.MergeTemp
WHERE [object_name] = '<table name>'
)
IF EXISTS(
SELECT 1
FROM sys.columns c
WHERE c.[object_id] = <table_id>
AND
c.is_identity = 1
) SET IDENTITY_INSERT <TABLE name> ON;
DELETE FROM <TABLE name>
INSERT INTO <TABLE name> (<columns list>)
SELECT [<column name>] = r.c.value('@<column name>', '<column datatype>')
FROM @XML.nodes('<table name>/t') r(c)

Synchronization can be performed in three steps:

  1. Inserting new data
  2. Updating existing data by a key field
  3. Deleting non-existing data in the master database

The provided sequence of actions can be implemented with separate INSERT/UPDATE/DELETE statements, or using the MERGE construction that was introduced in MS SQL Server 2008 (a sketch is shown below). However, in this case the SQL code will have quite a complicated structure, since we will have to take many factors into account (e.g., inserting data in accordance with the order of relations between tables). If the database has a simpler structure, an easier approach can be used.
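For reference, here is a hedged sketch of what step-wise synchronization could look like for a single table using MERGE (the table and column names are made up):

MERGE INTO dbo.Customers AS target
USING dbo.Customers_Staging AS source
ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
UPDATE SET target.Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
INSERT (CustomerID, Name) VALUES (source.CustomerID, source.Name)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;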

If all the tables of the database, or a specific isolated schema, are synchronized, it is possible to simply remove all the records from the tables and insert the data from the master database. On the one hand, such an approach can be the most cost-effective one in terms of implementation if a database with simple logic is synchronized. On the other hand, this way of synchronization may damage the integrity and validity of the data in the database.

5) Enabling relations and triggers between the tables of the target database

After inserting data into the required tables, it is necessary to enable foreign keys and triggers in order not to damage the cascade integrity and internal business logic. This can be performed as follows:

EXEC sys.sp_msforeachtable '
ALTER TABLE ? ENABLE TRIGGER ALL
ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL
'

When executing this command, the server will check whether the inserted data is correct. Therefore, if an error arises at this step, the problem may be due to a violation of the cascade integrity.

The resulting version of the script:

USE AdventureWorks2008R2
SET NOCOUNT ON;
SET XACT_ABORT ON;
RAISERROR('Start...', 0, 1) WITH NOWAIT
DECLARE
@SQL NVARCHAR(MAX)
,
@Time DATETIME
, @Text NVARCHAR(300)
,
@SessionUID UNIQUEIDENTIFIER
SELECT
@SQL = ''
, @Time = GETDATE()
,
@SessionUID = NEWID()
SELECT @Text = CONVERT(NVARCHAR(25), @Time, 114) + ' Creating temporary table...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
IF OBJECT_ID('tempdb.dbo.MergeTemp', 'U') IS NULL BEGIN
CREATE TABLE
tempdb.dbo.MergeTemp
(
id UNIQUEIDENTIFIER NOT NULL
,
[object_name] NVARCHAR(300) NOT NULL
,
create_date DATETIME NOT NULL DEFAULT GETDATE()
,
data XML NULL
,
CONSTRAINT [PK_MergeTemp] PRIMARY KEY CLUSTERED
(
id ASC, [object_name] ASC
) ON [PRIMARY]
)
END
SELECT
@Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Generate SQL queries...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
DECLARE
@Data NVARCHAR(4000)
,
@ObjectName NVARCHAR(300)
DECLARE [tables] CURSOR READ_ONLY FAST_FORWARD LOCAL FOR
SELECT
[object_name]
, [sql] = '
INSERT INTO tempdb.dbo.MergeTemp(id, [object_name], data)
SELECT '''
+ CAST(@SessionUID AS VARCHAR(36)) + ''', ''' + quota_object_name + ''', (SELECT ' + object_columns + '
FROM '
+ quota_object_name + ' t
FOR XML AUTO, ROOT('''
+ [object_name] + '''))'
FROM (
SELECT
[object_name] = s.name + '.' + t.name
, quota_object_name = '[' + s.name + '].[' + t.name + ']'
, object_columns = STUFF((
SELECT ISNULL(', [' + c.name + '] = NULLIF([' + c.name + '], ' + c2.[definition] + ')',
', [' + c.name + ']')
+
CASE WHEN b.name = 'varbinary'
THEN ' = CAST('''' AS XML).value(''xs:base64Binary(sql:column("'+ c.name +'"))'',''VARCHAR(MAX)'')'
ELSE ''
END
FROM
sys.columns c WITH (NOLOCK)
JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id
LEFT JOIN sys.default_constraints c2 ON c.default_object_id = c2.[object_id]
AND b.name IN ('char', 'nchar', 'ntext', 'nvarchar', 'text', 'varchar')
WHERE t.[object_id] = c.[object_id]
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '')
FROM (
SELECT
o.[object_id]
, o.[schema_id]
, o.name
FROM sys.objects o WITH (NOLOCK)
WHERE o.[type] = 'U'
AND EXISTS(
SELECT 1
FROM sys.partitions p WITH (NOLOCK)
WHERE p.[object_id] = o.[object_id]
AND p.[rows] > 0
)
AND NOT EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
JOIN sys.types t WITH (NOLOCK) ON c.system_type_id = t.system_type_id
WHERE (c.user_type_id!= c.system_type_id OR t.name = 'xml')
AND
c.[object_id] = o.[object_id]
)
)
t
JOIN sys.schemas s WITH (NOLOCK) ON t.[schema_id] = s.[schema_id]
-- You can specify sync all db tables or specified schema/tables
-- WHERE s.name = 'Sales'
-- OR o.name IN ('Address', 'AddressType')
) data
OPEN [tables]
FETCH NEXT FROM [tables] INTO @ObjectName, @Data
IF (@@FETCH_STATUS = -1)
RAISERROR('Have nothing to export...', 16, 1)
WHILE @@FETCH_STATUS = 0 BEGIN
SELECT
@Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Generate XML for ' + @ObjectName
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_executesql @Data
FETCH NEXT FROM [tables] INTO @ObjectName, @Data
END
CLOSE
[tables]
DEALLOCATE [tables]
PRINT 'ELAPSED TIME: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS NVARCHAR(15)) + 'ms'
RAISERROR('Finish...', 0, 1) WITH NOWAIT
PRINT REPLICATE('-', 80)
USE AdventureWorks2008R2_Live
RAISERROR('Start...', 0, 1) WITH NOWAIT
SELECT @Time = GETDATE()
SELECT @Text = CONVERT(NVARCHAR(25), @Time, 114) + ' Get similar objects in both databases...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
DECLARE @MergeTemp TABLE
(
[object_name] NVARCHAR(300)
,
[object_id] INT
, data XML
)
INSERT INTO @MergeTemp
(
[object_name]
, [object_id]
, data
)
SELECT
mt.[object_name]
, o.[object_id]
, mt.data
FROM tempdb.dbo.MergeTemp mt
JOIN (
SELECT
o.[object_id]
, [object_name] = '[' + s.name + '].[' + o.name + ']'
FROM sys.objects o WITH (NOLOCK)
JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id]
WHERE o.[type] = 'U'
) o ON mt.[object_name] COLLATE DATABASE_DEFAULT = o.[object_name] COLLATE DATABASE_DEFAULT
WHERE mt.id = @SessionUID
IF NOT EXISTS(
SELECT 1
FROM @MergeTemp
) RAISERROR('Have nothing to import...', 16, 1)
SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Turn off foreign keys and triggers...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_msforeachtable '
ALTER TABLE ? DISABLE TRIGGER ALL
ALTER TABLE ? NOCHECK CONSTRAINT ALL
'
DECLARE
@ObjectID INT
, @ObjName NVARCHAR(300)
DECLARE package CURSOR FAST_FORWARD READ_ONLY LOCAL FOR
SELECT
mt.[object_id]
, mt.[object_name]
FROM @MergeTemp mt
OPEN package
FETCH NEXT FROM package INTO
@ObjectID
, @ObjName
WHILE @@FETCH_STATUS = 0 BEGIN
SELECT
@Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Processing ' + @ObjName + '...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
SELECT @SQL = '
DECLARE @XML XML
SELECT @XML = (
SELECT data
FROM tempdb.dbo.MergeTemp
WHERE [object_name] = '''
+ @ObjName + ''' AND id = ''' + CAST(@SessionUID AS VARCHAR(36)) + '''
)
IF EXISTS(
SELECT 1
FROM sys.columns c WITH (NOLOCK)
WHERE c.[object_id] = '
+ CAST(@ObjectID AS VARCHAR(10)) + '
AND c.is_identity = 1
) SET IDENTITY_INSERT '
+ @ObjName + ' ON;
DELETE FROM '
+ @ObjName + '
INSERT INTO '
+ @ObjName + '(' +
STUFF((
SELECT ', [' + c.name + ']'
FROM sys.columns c WITH (NOLOCK)
WHERE c.[object_id] = @ObjectID
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)'), 1, 2, '') + ')
SELECT '
+ STUFF((
SELECT
', [' + c.name + '] = ' + CASE WHEN c2.name IS NOT NULL THEN 'ISNULL(' ELSE '' END + 'r.c.value(''@' + c.name + ''', ''' + b.name +
CASE
WHEN b.name IN ('char', 'varchar', 'nchar', 'nvarchar', 'varbinary') AND c.max_length = -1 THEN '(max)'
WHEN b.name IN ('char', 'varchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length AS VARCHAR(10)) + ')'
WHEN b.name IN ('nchar', 'nvarchar') AND c.max_length!= -1 THEN '(' + CAST(c.max_length / 2 AS VARCHAR(10)) + ')'
WHEN b.name = 'decimal' THEN '(' + CAST(c.[precision] AS NVARCHAR(10)) + ',' + CAST(c.[scale] AS VARCHAR(10)) + ')'
ELSE ''
END + ''')' + CASE WHEN c2.name IS NOT NULL THEN ', ' + c2.[definition] + ')' ELSE '' END
FROM
sys.columns c WITH (NOLOCK)
JOIN sys.types b WITH (NOLOCK) ON b.user_type_id = c.system_type_id AND b.user_type_id = b.system_type_id AND b.name!= 'xml'
LEFT JOIN sys.default_constraints c2 WITH (NOLOCK) ON c.default_object_id = c2.[object_id]
AND b.name IN ('bit', 'char', 'varchar', 'nchar', 'nvarchar')
WHERE c.[object_id] = @ObjectID
AND c.is_computed = 0
ORDER BY c.column_id
FOR XML PATH(''), TYPE, ROOT).value('root[1]', 'NVARCHAR(MAX)')
,
1, 2, '') + '
FROM @XML.nodes('''
+ s.name + '.' + o.name + '/t'') r(c)
'
FROM sys.objects o WITH (NOLOCK)
JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id]
WHERE o.[object_id] = @ObjectID
EXEC sys.sp_executesql @SQL
FETCH NEXT FROM package INTO
@ObjectID
, @ObjName
END
CLOSE
package
DEALLOCATE package
SELECT @Text = CONVERT(NVARCHAR(25), GETDATE(), 114) + ' Turn on foreign keys and triggers...'
RAISERROR(@Text, 0, 1) WITH NOWAIT
EXEC sys.sp_msforeachtable '
ALTER TABLE ? ENABLE TRIGGER ALL
ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL
'
PRINT 'ELAPSED TIME: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS NVARCHAR(15)) + 'ms'
RAISERROR('Finish...', 0, 1) WITH NOWAIT

Conclusion:

As we can see, even for databases with a simple structure, the synchronization process is quite a big task that includes many different actions.

As you may have noticed earlier in the article, manual data synchronization is not an easy task. Thus, to save time and effort, it is highly recommended to use special tools for synchronization. For instance, the following screenshot demonstrates how you can visualize compared data before synchronization by using the data comparison feature of dbForge Studio for SQL Server.

The upper grid shows the differences in database objects, and the bottom grid displays the data differences. To synchronize the source and target databases, you can automatically generate the synchronization script. Then you can view this script, save it, or execute it immediately to update the target database. So the process can be handled in an easier way.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server

Modern electronic document management systems can significantly simplify the decision-making process at an enterprise.

Along with the opportunities they provide, traditional printed documentation still plays an important role by allowing users to conveniently work with the required information by means of reports.

Creation of such reports, depending on the degree of complexity, can take a long time; therefore, visual editors are traditionally used to accelerate their creation.

Some of these editors have only basic functionality. Others have extended reporting capabilities. The latter group of products includes the SQL report builder built into dbForge Studio for SQL Server.

The editor's capabilities allow you to build a report according to your SQL query, literally, within a few minutes. To do this, it is enough to call the Report Designer command from the main menu.

Select a report type in the opened dialog.

Select a connection and a data source.

Herewith, tables and views can be selected as a source.

If the query should return data from several tables, for example, a user-defined query can be written.

After selecting a data source, specify the columns, by which the report will be built.

You should also specify how this data will be presented in the report.

At the final stage, it remains just to specify the name for the report and click the Finish button.

After the wizard is finished, the report editor opens, in which it is possible to fine-tune the appearance of the report.

To preview the report, there is the Preview button provided.

After the report is generated, it can be both printed and exported to one of the supported formats.

As you can see, creating and editing a report in the SQL report builder does not take a lot of time, allowing you to streamline the process and reduce development time.

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – Profiler : A Simple and Convenient Tool for Profiling SQL Queries

When working with highly loaded databases, any badly designed query can cause significant performance degradation. Depending on the DBMS features used, various methods exist for query optimization.

However, a general rule can be pointed out for all types of DBMS: the fewer server resources are expended when executing a query, the more effective the query will be.

To detect bottlenecks, MySQL provides the EXPLAIN command. It will tell you what exactly the database server does when you execute the query.

To obtain data about a query's execution, it is enough to put EXPLAIN before the targeted query, as in the example below. However, prepending EXPLAIN to a large number of statements can become rather tedious.
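For example (the table is hypothetical):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;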

To simplify search for a problem query let us use MySQL Profiler in dbForge Studio for MySQL. It provides such commands as a Query Profiling Mode and Generate Execution Plan.

To demonstrate working with the execution plan, let's consider a small example. Suppose we need to find vendors of PCs, but not laptops, based on the following data:

CREATE TABLE models (
model_id INT(11) NOT NULL PRIMARY KEY,
model_type VARCHAR(10) NOT NULL
) ENGINE = INNODB;

CREATE TABLE products (
maker_id INT(11) NOT NULL,
model_id INT(11) NOT NULL,
PRIMARY KEY (maker_id, model_id),
CONSTRAINT FK_products_models_model_id
FOREIGN KEY (model_id)
REFERENCES models (model_id)
ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE = INNODB;

INSERT INTO models(model_id, model_type)
VALUES
(1, 'PC'), (2, 'Laptop'), (3, 'Tablet'),
(4, 'Phone'), (5, 'PC'), (6, 'Laptop');

INSERT INTO products(maker_id, model_id)
VALUES
(1, 1), (1, 2), (1, 3), (2, 4),
(4, 4), (2, 5), (3, 5), (3, 6);

This task can be solved quite trivially:

SELECT DISTINCT p.maker_id
FROM products AS p
JOIN models m ON p.model_id = m.model_id
WHERE m.model_type = 'PC'
AND NOT EXISTS (
SELECT p2.maker_id
FROM products AS p2
JOIN models m2 ON p2.model_id = m2.model_id
WHERE m2.model_type = 'Laptop'
AND p2.maker_id = p.maker_id
);

When designing a query, one should always pay attention to repeated reads from the same tables, since they place an additional load on the database server's disk subsystem:

The query can be rewritten with no re-readings, using aggregating functions:

SELECT p.maker_id
FROM products AS p
JOIN models AS m ON p.model_id = m.model_id
WHERE m.model_type IN ('PC', 'Laptop')
GROUP BY p.maker_id
HAVING COUNT(CASE WHEN m.model_type = 'PC' THEN 1 END) > 0
AND COUNT(CASE WHEN m.model_type = 'Laptop' THEN 1 END) = 0;

However, having a large number of records, JOIN to the models table can still affect performance.

To speed up the query, denormalization of the schema can be performed by duplicating the model_type column from the models table into the products table.
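Note that this assumes the products table first gets its own model_type column; a minimal sketch:

ALTER TABLE products
ADD COLUMN model_type VARCHAR(10) NOT NULL DEFAULT '';

After that, the duplicated values can be populated: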

UPDATE products AS p
JOIN models AS m ON p.model_id = m.model_id
SET p.model_type = m.model_type;

Also, pay attention to the fact that the query filters by the model_type column, so it is recommended to index it.
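For instance:

CREATE INDEX idx_products_model_type ON products (model_type);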

At that, the selection of columns to be included in the primary key should be approached carefully. Since primary key columns are included in every nonclustered (secondary) index, this can increase the database size.

As a result, selection is significantly simplified:

SELECT maker_id
FROM products
GROUP BY maker_id
HAVING COUNT(CASE WHEN model_type = 'PC' THEN 1 END) > 0
AND COUNT(CASE WHEN model_type = 'Laptop' THEN 1 END) = 0;

With this, the plan becomes quite simple:

However, the root of performance problems should not always be searched for in queries – it is very important to correctly configure database server settings. For this, the Server Variables tab can be used.

In this case, all the settings, for convenience, can be grouped according to their target facilities:

There is no perfect advice on how to optimize a query. However, using the profiler embedded in dbForge Studio for MySQL, you have a good chance of detecting all the performance bottlenecks.

Optimal settings for a MySQL server can also vary depending on the specifics of the performed tasks. Therefore, detailed consideration of the server configuration process will be covered in a separate article.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Writing SQL Queries Made Easy with dbForge SQL Complete

Down with the shackles of convention! dbForge SQL Complete 4.5 has come into the world.

What does SQL mean for each of us? For some – the only language for working with relational databases. Yet, for others – it is a way of life…

However, when working with SQL queries, both categories of people face the same difficulties.

Eventually, writing queries becomes routine, and all because of the imperfection of the built-in IntelliSense in SSMS.

The new dbForge SQL Complete 4.5 is designed to free everyone from the shackles of convention in SSMS IntelliSense and provide more opportunities when working with T-SQL code.

For this, SQL Complete has several key features in its arsenal:

1. Code auto-completion

Thanks to contextual hints for objects and syntactic elements of the T-SQL language, the productivity of working with SQL Complete does not just rise – it moves to a qualitatively new level, providing greater opportunities.

It offers not just trite hints for tables and columns, but also the ability to generate foreign key relationships, aliases, and many other things…

Object hints are provided in such a large number of constructs that even an incomplete list of them requires a separate page.

2. SQL Formatter

The features of the SQL Formatter embedded in dbForge SQL Complete will allow you to forget about manual query formatting forever.

Using the SQL Formatter, your code will always be perfect thanks to a huge number of settings that allow you to customize the formatting of all SQL constructs according to your personal preferences.

Moreover, it is possible to format not only the current document or selected code fragment, but also groups of files on disk.

3. Snippets

Code reuse through snippets is a perfect opportunity to accelerate the process of writing code.

As the Devart developers said, their analysis of more than 5,000 topical posts from authoritative resources such as Stack Overflow showed that the list of basic snippets had to be significantly extended to save users from typing frequently used T-SQL statements.

Besides, owing to the Snippet Manager, users can easily extend the set of existing snippets.

When typing code, a required snippet can be selected by capital letters:

and by match with the snippet’s name:

4. Auto-refresh of suggestions after schema changes

When the schema changes actively, many people encounter a situation where new or modified objects are not suggested in SSMS IntelliSense, and the object list has to be refreshed manually.

But all that is in the past, since version 4.5 provides smart refreshing of hints for modified objects only, at specified time intervals.

As part of the dbForge SQL Complete 4.5 release, I would also like to bring to your attention that memory consumption has been reduced and object description is now much faster.

All these improvements are aimed at ensuring that users are able to enjoy their work and perform it without delays.

To back words with deeds, I recommend users re-discover the wonderful world of SQL with dbForge SQL Complete 4.5.

Reference: Pinal Dave (http://blog.sqlauthority.com)