I just returned from a successful road trip for TechEd India. The trip went extremely well, and I got a great chance to engage with the community and friends. One of the most frequently asked questions during the trip was what kind of efficiency tools I use while working with SQL Server. I use many different tools, and here is a list of my favorite tools from Devart. If you are using them, do let me know, as I would like to get your feedback about them.
Developers are often very busy writing code and building applications rapidly while racing against strict deadlines. They want a tool that helps them write code faster, along with one that auto-formats the code so it is readable when they revisit it. I came across a tool that helps achieve both goals.
Microsoft SQL Server Data Tools provides an integrated environment for database developers to carry out all their database design work for any SQL Server platform within Visual Studio. The SQL Server Object Explorer in Visual Studio offers a view of your database objects similar to SQL Server Management Studio, and allows you to do light-duty database administration and design work. You can easily create, edit, rename, and delete tables, stored procedures, types, and functions. You can also edit table data, compare schemas, or execute queries by using contextual menus right from the SQL Server Object Explorer. The Microsoft SQL Server 2012 Data-Tier Application Framework (DACFx) is a component which provides application lifecycle services for database development and management for Microsoft SQL Server and Windows Azure SQL Databases.
I have talked a little bit about dbForge before, but I would like to do a full product review now. Devart's dbForge Studio for SQL Server is a fantastic tool for SQL Server development; as soon as I installed it, I found many things to learn.
First off, it speeds up SQL coding almost instantly. There is very little to learn, and you are not just memorizing code to "cheat" off of. dbForge Studio provides code completion options and automatic SQL formatting, so you know your code will work. One of my favorite features is "snippets," which stores parts of code that you use over and over to cut down on typing and searching – because there are always a few commands you use again and again! Another time saver is the hints option, which shows you information about objects, and the navigation tool that allows toggling between items using only the F12 key.
Of course, all these features would simply be interesting trivia if you couldn’t design and alter tables, too. But dbForge Studio has that covered as well. Tables are always the hardest part, but with Table Designer, you can quickly set the properties using the visual table editors, then edit the script and rebuild as needed, previewing changes before going live. It really takes a lot of the stress out of altering tables.
Changing your database and the information stored there can also be a stress-filled event. But with the Database Diagram tool, you can edit and manipulate your database quickly and preview changes. Scaling, grouping, and printing are no problem with Database Diagram. You can also build virtual connections to help visualize your data.
Of course, coding and modifications can seem simple in comparison to tackling the data within the database, especially when there is a complex structure and multiple dependencies. With Schema Compare, you can synchronize your databases, even when they are extremely complex. You will also be alerted to problems so you can solve them early, before they become catastrophes. Schema Compare takes snapshots of your SQL Server database for easy comparison later, which heads off drift. And if you are thinking that this is too good to be true, Schema Compare also generates deployment scripts that can target different SQL Server editions, so no matter what version you use, you can still use this tool.
dbForge Studio also lets you automate deployment of the database to the production server. There are multiple tools available to help with this, including T-SQL Debugger, Query Profiler, Query Builder, and Data Import & Export. All of these tools can be scheduled and automated, and use easy "point and click" options rather than tedious coding.
While all this automating sounds great, anyone who has had to create and deal with reports is probably wondering if this can be automated, as well. The answer is yes! The Data Report Designer will compile your data and create a report in any of nine formats, so you have the perfect feedback. You can also schedule automatic report delivery.
And last, but certainly not least, let's deal with the security features in dbForge Studio. I cannot say enough about how important security is, and how well Devart's tool deals with this subject. The Security Manager feature provides visual editors for users, logins, and roles, and supports batch editing of objects. Of course, one of the most difficult tasks in security is fine-tuning privileges for each individual user. You probably see where I am heading with this – the Security Manager allows setting up different privileges per schema object.
I hope this quick review has convinced some DBAs (and non DBAs!) to give dbForge Studio for SQL Server a try. I am sure you will not be disappointed. I certainly was not! Check out – dbForge Studio for SQL Server.
Balmukund Lakhani (Blog | Twitter | Site) is currently working as Technical Lead in the SQL Support team with Microsoft India GTSC. In his 7+ years with Microsoft, he was also part of the Premier Field Engineering team for 18 months, during which time he was part of the rapid on-site support (ROSS) team. Prior to joining Microsoft in 2005, he worked as a SQL developer and SQL DBA, and also got a chance to wear another hat as an ERP consultant.
Balmukund is a great friend and one of the finest SQL Server experts I know. When I requested a guest post from him, he indeed came up with a fantastic blog post with a very interesting title. I am sure many want to learn the trick of hiding from SQL Server, so here it is in his own words.
Important: It is Balmukund's birthday today – please wish him all the best for an exciting, SQL-filled year ahead! Wish him on Twitter or leave a comment on his Blog.
I am sure that all of us have used SQL Server Management Studio countless times. It's the common tool used by all DBAs and developers. Most of the time we give the server name, user name, and password (if needed) and hit Connect. Right? Have you ever clicked on the "Options >>" button and made changes in the connection properties? Pinal wrote about color coding of servers using the "Use custom color" option here, and posted a video here; the option is available in the Connection Properties tab.
In this blog, we are going to talk about the third tab, "Additional Connection Parameters". When we want to test connection string options that are not available in the earlier two tabs, we can provide them in this tab (available in SQL Server 2008 onwards). The parameters should be separated by semicolons (;). There are a few parameters in the connection string which can be spoofed. Let us start with hiding from SQL Server.
Attempt 1: Hide from SQL Server
Enter the following parameters in the Additional Connection Parameters tab and click Connect.
Application Name=Pinal-SSMS;Workstation ID=Pinal-PC
Once we connect and try to check our connection properties, the result of this would be as follows:
SELECT host_name, program_name
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID
If any DBA sees this, they would think that Pinal is working, but in reality it's Balmukund. I have hidden myself.
Attempt 2: Hide SQL Server from User (Deceive User)
Now that we have hidden ourselves from SQL Server, let us do something crazy: hide SQL Server from the user. This can be extremely confusing, so I suggest using it carefully or never using it at all. If I could, I would put red flashing lights around the previous statement.
Now let us change the Data Source property in the Additional Connection Parameters tab. Let us make the user believe they are connecting to instance SQLServer2000 even though they are really connecting to instance BIG. My localhost is named BIG, and in the following connection I am providing an incorrect server name, Big\SQLServer2000, in the Server Name box. Normally this would give us an error; in this case, however, we deliberately provide the incorrect name in the Login >> Server Name field.
We will right away click on "Additional Connection Parameters" and enter the following text in the parameters box: Data Source=localhost
Now let us hit connect and you will see following situation.
Well, as warned earlier, SSMS overrides the value, and hence we are connected to the server specified in the additional connection parameters. Everywhere in SSMS we see instance SQLServer2000, which is what we entered in the login tab, but the actual server is SQL Server 2012. Now imagine a DROP DATABASE executed in SSMS in the belief that it was running on instance SQLServer2000, when it actually ends up on instance BIG. It is also quite possible that some commands will now return errors if they are deprecated on the real instance. This would be a fun prank – which you should not play on anyone.
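If you suspect a connection has been spoofed this way, a quick sanity check is to ask the server itself which instance you are really connected to; @@SERVERNAME and SERVERPROPERTY report the real values regardless of what the dialog showed:

```sql
-- The spoofed Data Source fools only the connection dialog;
-- the server still reports its real identity.
SELECT @@SERVERNAME                     AS real_server_name,
       SERVERPROPERTY('MachineName')    AS machine_name,
       SERVERPROPERTY('ProductVersion') AS product_version;
```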
Moral of the story: Be careful while using this tab. Use it only for those settings which are not available in Management Studio by default.
Hope you have learned something new today.
Thanks, Balmukund (Twitter), for this wonderful guest post.
Some people call it laziness, some call it efficiency, and some think it is the right thing to do. At any rate, tools are meant to make a job easier, and I like to use various tools. If we consider the history of the world: had we all wanted to keep traditional practices, we would never have invented the wheel. As time progressed, people wanted convenience and efficiency. Wanting a more efficient way to do something is not inherently lazy. That's how I see any efficiency tools.
A few days ago I found Devart SQL Complete. It took less than a minute to install, and after installation it just worked without needing any tweaking. Once I started using it I was impressed with how fast it formats SQL code – you can type any terms or even copy and paste. You can start typing right away, and it will complete keywords, object names, and code fragments. It completes statement expressions. How many times do we write INSERT, UPDATE, DELETE? Take this example: to alter a stored procedure, if we don't remember the code written in it, we have to write it over again or go back to SQL Server Management Studio to script it out, which is tedious. With SQL Complete, you can write "alter stored procedure," and it will finish it for you, and you can modify it as needed.
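To illustrate, the kind of skeleton such a completion saves you from retyping might look like this (a sketch with a hypothetical procedure name; the exact expansion depends on the tool and the object being altered):

```sql
-- Hypothetical procedure: the body is what a completion tool scripts out for you
ALTER PROCEDURE dbo.uspGetCustomerOrders
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT SalesOrderID, OrderDate
    FROM Sales.SalesOrderHeader
    WHERE CustomerID = @CustomerID;
END;
```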
I love to write code, and I love well-written code. When I am working with clients and I find people whose code has not been written properly, I feel a little uncomfortable. It is difficult to deal with code that is in the wrong case, with no line breaks, no white space, improper indents, and no text wrapping. The worst thing to encounter is code that goes all the way to the right side, and you have to scroll a million times because there are no breaks or indents. SQL Complete will take care of this for you – if a developer is too lazy for proper formatting, then Devart's SQL formatter tool will make them better, not lazier.
SQL Server Management Studio gives information about your code when you hover your mouse over it; SQL Complete goes further, showing more information about objects and their parameters, and, last but not least, it helps with code navigation. It will open objects from the editor in a document viewer, so you can go through the various properties of your code – a very important thing to do.
Here are some interesting IntelliSense examples:
1) We are often too lazy to expand *; however, when using SQL Complete we can just hover the mouse over the * and it will show all the column names so we can select the appropriate columns.
2) We can put the cursor after * and it gives us the option to expand it to all the column names by pressing the Tab key.
3) Here is one more IntelliSense feature I really liked. I always alias my tables, and I always select the alias with special logic. When I was using SQL Complete, I selected just a table name (without the schema name), just like in the image below…
… and it autocompleted the schema and alias name (the way I needed it).
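As a sketch of what the wildcard expansion in points 1 and 2 produces (column names from the AdventureWorks Sales.Customer table; the exact formatting depends on the tool):

```sql
-- Before: the wildcard form you type
SELECT * FROM Sales.Customer;

-- After expanding the *: the full column list, ready for trimming
SELECT CustomerID, PersonID, StoreID, TerritoryID,
       AccountNumber, rowguid, ModifiedDate
FROM Sales.Customer;
```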
I believe using SQL Complete we can work faster. It supports all versions of SQL Server, and it handles SQL formatting. Many businesses perform code reviews and have code standards, so why not put an efficiency tool on everyone's computer and make sure the code is written correctly the first time? If you're interested in this tool, there are free editions available. If you like it, you can buy it. I bought it because it works. I love it, and I want to hear all your opinions on it, too.
How do I find all the tables used in a particular stored procedure?
How do I know which stored procedures are using a particular table?
Both are valid questions, but before we see the answers, let us understand two small concepts: referenced and referencing.
Here is the sample stored procedure.
CREATE PROCEDURE mySP AS
SELECT * FROM Sales.Customer
Referenced: The table Sales.Customer is the referenced object, as it is being referenced in the stored procedure mySP.
Referencing: The stored procedure mySP is the referencing object as it is referencing Sales.Customer table.
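This relationship is recorded in the sys.sql_expression_dependencies catalog view. A minimal way to see the raw dependency row for our sample procedure (assuming it was created in the current database as dbo.mySP):

```sql
-- One row per dependency: mySP (referencing) -> Sales.Customer (referenced)
SELECT OBJECT_NAME(referencing_id) AS referencing_object,
       referenced_schema_name,
       referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referencing_id = OBJECT_ID('dbo.mySP');
```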
Now we know what referencing and referenced objects are. Let us run the following queries. I am using AdventureWorks2012 as a sample database. If you do not have SQL Server 2012, here is how to get the SQL Server 2012 AdventureWorks database.
Find Referencing Objects of a particular object
Here we are finding all the objects which are using table Customer in their object definitions (regardless of the schema).
GO
SELECT referencing_schema_name = SCHEMA_NAME(o.schema_id),
       referencing_object_name = o.name,
       referencing_object_type_desc = o.type_desc,
       referenced_schema_name,
       referenced_object_name = referenced_entity_name,
       referenced_object_type_desc = o1.type_desc,
       referenced_server_name,
       referenced_database_name
       --,sed.* -- Uncomment for all the columns
FROM sys.sql_expression_dependencies sed
INNER JOIN sys.objects o ON sed.referencing_id = o.[object_id]
LEFT OUTER JOIN sys.objects o1 ON sed.referenced_id = o1.[object_id]
WHERE referenced_entity_name = 'Customer'
The above query will return all the objects which are referencing the table Customer.
Find Referenced Objects of a particular object
Here we are finding all the objects which are used in the view vIndividualCustomer.
GO
SELECT referencing_schema_name = SCHEMA_NAME(o.schema_id),
       referencing_object_name = o.name,
       referencing_object_type_desc = o.type_desc,
       referenced_schema_name,
       referenced_object_name = referenced_entity_name,
       referenced_object_type_desc = o1.type_desc,
       referenced_server_name,
       referenced_database_name
       --,sed.* -- Uncomment for all the columns
FROM sys.sql_expression_dependencies sed
INNER JOIN sys.objects o ON sed.referencing_id = o.[object_id]
LEFT OUTER JOIN sys.objects o1 ON sed.referenced_id = o1.[object_id]
WHERE o.name = 'vIndividualCustomer'
The above query will return all the objects which are referenced by the view vIndividualCustomer.
I am just glad to have written the above queries. There is more to write on this subject. In a future blog post I will go deeper into other DMVs which also aid in finding dependency data.
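As a preview of those DMVs, two dynamic management functions cover the same ground from each direction (both available from SQL Server 2008 onwards):

```sql
-- Objects referenced BY the stored procedure mySP
SELECT referenced_schema_name, referenced_entity_name
FROM sys.dm_sql_referenced_entities('dbo.mySP', 'OBJECT');

-- Objects that reference the table Sales.Customer
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('Sales.Customer', 'OBJECT');
```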
Let me tell you about one of the most useful SQL tools that every DBA should use – SQLBackupAndFTP. I have been using this tool since 2009, and it is the first program I install on a SQL Server machine. Download the free version; after a one-minute configuration, your daily backups are safe in the cloud.
In summary, SQLBackupAndFTP
Creates SQL Server database and file backups on schedule
Compresses and encrypts the backups
Sends backups to a network folder, FTP Server, Dropbox, Google Drive or Amazon S3
Sends email notifications of job’s success or failure
SQLBackupAndFTP comes in Free and Paid versions (starting from $29) – see the version comparison. The Free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases, which will be sufficient for many small customers.
What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1 – Main form above):
Connect to your SQL Server and select the databases to be backed up
Click “Add backup destination” to configure where backups should go to (network, FTP Server, Dropbox, Google Drive or Amazon S3)
Enter your email to receive email confirmations
Set the time to start daily full backups (or go to Settings if you need Differential or Transaction Log backups on a flexible schedule)
Press “Run Now” button to test
You can get to this form by clicking the "Settings" button in the "Schedule" section. Select what types of backups you want and how often to run them, and you will see the scheduled backups in the "Estimated backup plan" list.
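Under the hood, a schedule like this boils down to the standard BACKUP statements that any such tool issues against the server (a sketch with a hypothetical database name and path; note that the disk path is resolved on the server, not on the client machine):

```sql
-- Nightly full backup (hypothetical database and path)
BACKUP DATABASE MyAppDb
TO DISK = 'D:\Backups\MyAppDb_full.bak'
WITH INIT;

-- Differential backup between full backups
BACKUP DATABASE MyAppDb
TO DISK = 'D:\Backups\MyAppDb_diff.bak'
WITH DIFFERENTIAL;

-- Frequent transaction log backups (requires the FULL recovery model)
BACKUP LOG MyAppDb
TO DISK = 'D:\Backups\MyAppDb_log.trn';
```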
The SQLBackupAndFTP setup also gives you the option to install "One-Click SQL Restore" (you can install it stand-alone too) – a basic tool for restoring full backups only.
However basic, you can drag and drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server, selects the right database, and is even smart enough to restart the server to drop open connections if necessary – very handy for developers who need to restore databases often.
You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still far easier and integrates compression, encryption, FTP, cloud storage, and email into one solution, which makes it superior to maintenance tasks in every aspect.
On the flip side, SQLBackupAndFTP is not the fanciest tool for managing backups or checking their health. It only works reliably on local SQL Server instances; in other words, it has to be installed on the SQL Server machine itself. For remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client but on the server.
This tool is compatible with almost all the known SQL Server versions. It works with SQL Server 2008 (all editions) and many of the previous versions. It is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for backup.
I strongly recommend this tool to all DBAs. They must absolutely try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here.
Please share your experience about using this tool. I am eager to receive your feedback regarding this article.
There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace. In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability.
For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third-party knowledge base by the acknowledged USPS-certified address accuracy experts at Melissa Data.
Reference Data Services
Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend the data quality that DQS has natively by quite a bit.
For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them.
Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area.
Step-by-Step Guide
That was all it took to hook the subscribed provider – Melissa Data – directly into my DQS Client. The next step was to attach and map the reference data from the newly acquired provider to a domain in my knowledge base.
On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen.
Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen.
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data:
I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data.
Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform.
I then selected my newly created composite domain and, under the Reference Data tab, added my third-party reference data provider – Melissa Data's Address Check – and mapped each of my domains to the provider's schema.
Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data.
My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes with some progress indicators indicating that it was working.
When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.
As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data.
It is not a SQL Server issue or an SSMS issue; it is how things work. There is a simple trick to resolve it.
It is very common that when users copy a resultset to Excel, the trailing zeros of floating point or decimal values are lost.
The solution is very simple and requires a small adjustment in Excel. By default, Excel is smart: when it detects that a pasted value is numeric, it changes the column format to accommodate it. Since trailing zeros after the decimal point carry no numeric value, Excel automatically hides them. To prevent this, the user has to convert the columns to Text format so the formatting is preserved.
Here is how you can do it.
Select the corner between A and 1 and right-click on it.
This will select the complete spreadsheet. If you want to change the format of only one column, you can select an individual column the same way. In the menu, click Format Cells…
This will bring up the following menu. By default the format of the selected cells will be General; change it to Text. This changes the format of all the selected cells to Text.
Now once again paste the values from SSMS to the Excel. This time it will preserve the decimal values from SSMS.
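A query-side alternative some people use is to wrap each value in an ="…" expression, which Excel parses as a formula returning text, so the trailing zeros survive even in General-formatted cells (a sketch with a hypothetical table and column):

```sql
-- Excel treats ="12.50" as a text formula, preserving the trailing zero
SELECT '="' + CAST(Price AS VARCHAR(32)) + '"' AS PriceForExcel
FROM dbo.Products; -- hypothetical table and column
```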
Do you know any other trick to preserve the decimal values? Please leave a comment.
Data Quality Services is a very interesting enhancement in SQL Server 2012. My friend and SQL Server expert Govind Kanshi has written an excellent article on this subject earlier on his blog. Yesterday I stumbled upon his blog one more time and decided to experiment with DQS myself. I have a basic understanding of DQS and MDS, so I knew I needed to start with the DQS Client. However, when I tried to find the DQS Client, I was not able to find it under the SQL Server 2012 installation.
I quickly realized that I needed to install the DQS client separately. You will find the DQS installer under the SQL Server 2012 >> Data Quality Services directory.
The pre-requisites of DQS are Master Data Services (MDS) and IIS. If you have not installed IIS, you can follow these simple steps to install IIS on your machine.
Once the pre-requisites are installed, click on MDS installer once again and it will install DQS just fine.
Be patient with the installer, as it can take a bit longer if your machine is low on resources.
Once the installation is over, you will be able to expand the SQL Server 2012 >> Data Quality Services directory, and you will notice a new item called Data Quality Client. Click on it to open the client.
Well, in a future blog post we will go over more details about DQS with detailed practical examples.