Developer – Cross-Platform: Fact or Fiction?

This is a guest blog post by Jeff McVeigh. Jeff McVeigh is the general manager of Performance Client and Visual Computing within Intel's Developer Products Division. His team is responsible for the development and delivery of leading software products for performance-centric application developers spanning the Android*, Windows*, and OS X* operating systems. During his 17-year career at Intel, Jeff has held various technical and management positions in the fields of media, graphics, and validation. He also served as the technical assistant to Intel's CTO. He holds 20 patents and a Ph.D. in electrical and computer engineering from Carnegie Mellon University.


It's not a homogenous world. We all know it. I have a Windows* desktop, a MacBook Air*, an Android phone, and my kids are 100% Apple. We used to have 2.5 kids; now we have 2.5 devices. And we all agree that diversity is great, unless you're a developer trying to prioritize the limited hours in the day. Then it's a series of trade-offs. Do we become brand loyalists for Google or Apple or Microsoft? Do we specialize in phones and tablets, or still consider the 300M+ PC shipments a year when we decide where to spend our time and resources?

We weigh the platform options, monetization opportunities, APIs, and distribution models. Too often, I see developers choose one platform, or write to the lowest common denominator, which limits their reach and market success. But who wants to be "me too"? Cross-platform coding is possible in some environments, for some applications, for some level of innovation—but it's not all-inclusive, yet.

There are some tricks of the trade to develop cross-platform, including using languages and environments that "run everywhere." HTML5 is today's answer for web-enabled platforms. However, it's not a panacea, especially if your app requires the ultimate performance or native UI look and feel. There are other cross-platform frameworks that address the presentation layer of your application. But for those apps that have a preponderance of native code (e.g., highly-tuned C/C++ loops), there aren't tons of solutions today to help with code reuse across these platforms using consistent tools and libraries.

As we move forward with interim solutions, they’ll improve and become more robust, based, in no small part, on our input.

What’s your answer to the cross-platform challenge? Are you fully invested in HTML5 now? What are your barriers? What’s your vision to navigate the cross-platform landscape? 

Here is where you can head next to learn more about the questions I have asked: https://software.intel.com/en-us

Republished with permission from here.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – SSMS: Top Queries by CPU and IO

Let me start this blog post with a personal story.

Personal Story – Dad and I

My fascination with computers started way back, when I was about to get into high school. My father once took me to his office (it was for some family day, if I remember correctly) and it was fun to watch the PC. Interestingly enough, it seems it was the only PC in that office – one of those green-phosphor CRT monitors typical of the day. I am not sure how many people even had a chance to work on those PCs. It is a lost era for this generation.

Even back then, the most discussed parts of these computers were the processor – if I remember correctly, those were 32-bit processors (the pre-Pentium era) – and the hard disks, alongside the 3½-inch floppy drives. It was an era when having a few MBs of data was a big deal.

Fast forward to today, and all these stories make for great material in my daughter's bedtime stories. Moore's Law has proved itself for more than four decades and still amuses us. The processors these days in our watches and handheld devices are more powerful (by a factor of at least 1000x) than what we used to work with on a PC 15-20 years back. The days are surely changing, and so should we! I am not sure what the innovations and technology will be when my daughter grows up.

Back to SQL Server

Even in today's context, with all these advancements in technology, we still troubleshoot the same fundamental parameters. Whenever I get involved in a performance tuning exercise, some of the first questions I ask are: "What is the CPU utilization?", "How is memory consumption?", "How are disk activity and disk queue length?" and "How is the network doing?"

So for today’s blog post, we will concentrate on 4 different reports:

  1. Top Queries by Average CPU Time
  2. Top Queries by Total CPU Time
  3. Top Queries by Average IO
  4. Top Queries by Total IO

These are the standard reports from the Server node. In SQL Server Management Studio, go to the Server Node -> Right Click -> Reports -> Standard Reports, and you will find them there.

Top Queries by Average CPU Time

When it comes to CPU, Perfmon is still my primary tool for tracking down fundamental CPU usage, and I feel it should remain so for you. However, from time to time we need to track down which process is using a physical CPU. When DBAs ask me why SQL Server is using all the CPU, the first question I ask them is: are you sure SQL Server is the process utilizing the CPU on the server?

Just for the record, I have typically used the kernel debugger to determine which process is using which exact CPU. For a primer, take a look at how to use the XPerf tool on the MSDN Blogs.
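
If you want to answer that question from within SQL Server itself, here is a minimal sketch based on the widely shared scheduler monitor ring buffer query (an approximation for illustration – verify the XML paths against your version's documentation for RING_BUFFER_SCHEDULER_MONITOR):

DECLARE @ts_now BIGINT;
SELECT @ts_now = cpu_ticks / (cpu_ticks / ms_ticks) FROM sys.dm_os_sys_info;

SELECT TOP (10)
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct,
       DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS event_time
FROM (SELECT [timestamp], CONVERT(XML, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY event_time DESC;

If sql_cpu_pct is low while the box is pegged, something other than SQL Server is eating the CPU.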

Though these tools are powerful, there is a considerable learning curve for a novice DBA. So let me come back to my favorite tool of choice – SQL Server Management Studio. Let us start by looking at the Top Queries by Average CPU Time report. The output has two sections: a) the Graph section and b) the Details section.

The first, Graph, section is a color palette showing, in descending order, the top queries by Average CPU Time and Total CPU Time. For me, the next section has the more interesting details to analyze. Before I jump there, there is an important note at the top of the report that is worth a look. It reads:

Note: This report identifies the queries currently residing in the plan cache that have consumed the most total CPU time over the course of all their executions.  This data is aggregated over the lifetime of the plan in the cache and is available only for plans currently in the cache.

This means that the values get reset if SQL Server has been restarted, because the plan cache gets flushed and emptied. In an active transactional system this will rarely be the case, and I am sure you will know where to start your tuning exercise when SQL Server is utilizing high CPU.
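
As a quick sanity check before trusting the numbers, you can see how long the cache has had a chance to accumulate since the last restart (sqlserver_start_time is available from SQL Server 2008 R2 onwards):

SELECT sqlserver_start_time FROM sys.dm_os_sys_info;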

As I mentioned, the second section has the more interesting values to look at. As you can see, the top 10 CPU-consuming queries are listed, and we can start investigating the resource utilization of each of these queries and how we can tune them. The color labels (1, 2, 3, etc.) correspond to the "Query No." column in the table. It is quite possible that the order in the second graph is not the same as in the first graph.

In the above picture we can see that the most expensive query took close to 1.774 seconds of CPU in total and, averaged over its 2 executions, close to 0.887 seconds per execution. The query in question is also highlighted under the Query Text column.

Top Queries by Total CPU Time

The output is almost identical to what we discussed before; the queries are now sorted by total CPU time, and that is the only difference. The rough DMV query used for this report would be:

SELECT TOP (10)
        creation_time
,       last_execution_time
,       (total_worker_time + 0.0) / 1000 AS total_worker_time          -- total CPU in milliseconds
,       (total_worker_time + 0.0) / (execution_count * 1000) AS [AvgCPUTime]
,       execution_count
,       st.text                                                        -- query text (what the CROSS APPLY is for)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE total_worker_time > 0
ORDER BY total_worker_time DESC

Before I sign off on CPU utilization: I have always relied on this report for CPU workloads. But, as a primer, there are obvious candidates I always check first when CPU is high, such as:

  1. Query execution and parallelism
  2. Compiles and recompiles on the server
  3. Any tracing enabled in the system – including Activity Monitor running somewhere, or Profiler (we can use sys.traces to check which traces are configured to run; see the sketch after this list)
  4. Any anti-virus running on the system
  5. Invalid or outdated drivers for the SAN, BIOS, or other components
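
For the tracing point above, here is a minimal sketch of the sys.traces check (status = 1 filters to running traces):

SELECT id,
       path,             -- NULL for rowset-based traces
       is_rowset,        -- 1 usually indicates a Profiler-style rowset trace
       is_default,       -- 1 marks the default trace
       start_time,
       last_event_time
FROM sys.traces
WHERE status = 1;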

These are some great starting points to eliminate before going into tuning queries for a CPU-deprived system.

Performance – Top Queries by Average IO

As described for the CPU output, the output here is almost the same, with two sections. The difference is that it is sorted by average IO utilization.

The meat of the information is available in section 2, as usual.

Apart from the CPU values, this section also has the values for Logical Reads and Logical Writes. As you can see, Total Logical IO = Logical Reads + Logical Writes.

The query to get this information would be:

SELECT TOP (10)
        creation_time
,       last_execution_time
,       total_logical_reads AS [LogicalReads]
,       total_logical_writes AS [LogicalWrites]
,       execution_count
,       total_logical_reads + total_logical_writes AS [AggIO]
,       (total_logical_reads + total_logical_writes) / (execution_count + 0.0) AS [AvgIO]
,       st.text
,       DB_NAME(st.dbid) AS database_name
,       st.objectid AS object_id
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE total_logical_reads + total_logical_writes > 0
AND sql_handle IS NOT NULL
ORDER BY [AggIO] DESC

It is important to understand that the IO for a query is the SUM of its logical IO and physical IO. Irrespective of where the data resides, the IO is calculated as the sum of these two values.
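
If you want to see the two components separately for the cached plans, here is a minimal sketch against the same DMV (my approximation, not the report's exact query):

SELECT TOP (10)
       total_logical_reads,   -- page reads served from the buffer pool
       total_physical_reads,  -- page reads that had to touch disk
       total_logical_writes,
       execution_count,
       st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY total_physical_reads DESC;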

Performance – Top Queries by Total IO

This output is identical to the previous one, the only difference being that it is sorted by the Total IO parameter. The sections and output are exactly the same as in the above reports.

As I sign off this post, I want to know: have you used any of these reports, or do you rely on DMVs and other tools to troubleshoot CPU and/or IO related problems inside SQL Server? Do share your experience.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Top Transaction Reports

Let us start with a personal story.

Personal Story – My Daughter and I

My day has its own share of long meetings, and my daughter tries to pull a string here and there to get my attention almost every other day. Recently I was made to watch one of her school drawing assignments. As a parent, beauty is in the eye of the beholder – so I can never complain about what my daughter drew for me. Since this was some sort of school competition, I wanted to see what the teacher's remark was – it said "Very Good". Intrigued by this comment, I started to quiz my daughter about who got the best painting marks and what they drew. I am sure the parent inside me was taking a sneak peek into how my daughter performed compared to others in the class. She was quick to respond with a few names of her friends and said they also drew well. The honesty moved me, because as children we are unbiased and true to our very selves. It moved me enough that I thought it was a great time to take my daughter to the park and spend some quality father-daughter time.

Getting Back to SQL Server

Returning from the park, this incident was on top of my mind, and I thought that being a class topper, ahead of the crowd, is an intrinsic quality we all pursue as human beings. It is so strange that today's post is all about the "Top" nature. In life it is great to be a top performer, but in SQL Server parlance it is not a great thing to be on these Top reports – you are surely going to become a subject for tuning next. This surely is a great boon for administrators, though.

This blog will call out 3 different reports from the Server Node -> Reports -> Standard Reports:

  1. Top Transactions by Age
  2. Top Transactions by Locks Count
  3. Top Transactions by Blocked Transactions Count

Since all these reports are from the Top category, ranking transactions by various factors, I thought to have them all covered in one post.

Top Transactions by Age

This is one of the simplest of these reports. It shows when a transaction was started on the server and how long some of the transactions have been running on the instance. It is a good starting point for knowing which session ID and which database are holding how many locks, and for how long. For all practical purposes, this is, in my opinion, a very good starting point for looking at long-running transactions.

Some of the DMVs that work behind the scenes to generate the report are listed below; a rough sketch of how they fit together follows the list:

  • sys.dm_tran_active_transactions – information about transactions in a given instance.
  • sys.dm_tran_session_transactions – information about transactions for a given session.
  • sys.dm_tran_database_transactions – transactions at a database level.
  • sys.dm_exec_sessions – information about the active sessions currently on the server.
  • sys.dm_exec_requests – information about each request currently executing on the server.
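
Here is a rough sketch of how these DMVs join together to approximate the report (my approximation, not the report's exact query; dm_tran_database_transactions can return one row per database touched by a transaction):

SELECT ses.session_id,
       act.transaction_id,
       act.name AS transaction_name,
       act.transaction_begin_time,
       DATEDIFF(SECOND, act.transaction_begin_time, GETDATE()) AS age_in_seconds,
       DB_NAME(dbt.database_id) AS database_name,
       es.login_name
FROM sys.dm_tran_active_transactions act
INNER JOIN sys.dm_tran_session_transactions ses
        ON ses.transaction_id = act.transaction_id
LEFT JOIN sys.dm_tran_database_transactions dbt
       ON dbt.transaction_id = act.transaction_id
LEFT JOIN sys.dm_exec_sessions es
       ON es.session_id = ses.session_id
ORDER BY act.transaction_begin_time;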

From this report, the DBA can decide which process is causing these locks, and why they have been held for such a long time.

Top Transactions by Lock Count

I would say this is a continuation of the previous report. In the previous report, I was able to find the number of locks for a given session ID and database, but the specifics of the type of locks were not clear.

This report is all about clearing up that part of the uncertainty: it shows the types of locks held by a specific session. In our report below we can see that session IDs 52 and 53 are holding Object, Page, and Key locks respectively. While 52 has an Exclusive lock already taken, 53 has an Update lock from a statement fired on the dbo.test table.
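
If you want the same breakdown without the report, a minimal sketch over sys.dm_tran_locks looks like this (an approximation of what the report shows):

SELECT request_session_id AS session_id,
       resource_type,    -- OBJECT, PAGE, KEY, and so on
       request_mode,     -- S, U, X, IX, and so on
       request_status,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY request_session_id, resource_type, request_mode, request_status
ORDER BY lock_count DESC;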

I am sure on a highly transactional production server this will surely be a busy report to view. Do let me know how many nodes you see on your servers.

The transaction state can be one of the following:

  • Uninitialized
  • Initialized
  • Active
  • Prepared
  • Committed
  • Rolled Back
  • Committing

These values can be obtained from the sys.dm_tran_database_transactions DMV, while the active transaction state can take the below values, as defined for the sys.dm_tran_active_transactions DMV (a decode sketch follows the list):

  • Invalid
  • Initialized
  • Active
  • Ended
  • Commit Started
  • Prepared
  • Committed
  • Rolling Back
  • Rolled Back
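
Since the numeric-to-text mapping for these states is documented, a simple CASE can decode them (a sketch; descriptions paraphrased from Books Online):

SELECT transaction_id,
       name,
       CASE transaction_state
            WHEN 0 THEN 'Invalid (not completely initialized)'
            WHEN 1 THEN 'Initialized (not started)'
            WHEN 2 THEN 'Active'
            WHEN 3 THEN 'Ended (read-only)'
            WHEN 4 THEN 'Commit started (distributed)'
            WHEN 5 THEN 'Prepared'
            WHEN 6 THEN 'Committed'
            WHEN 7 THEN 'Rolling back'
            WHEN 8 THEN 'Rolled back'
       END AS transaction_state_desc
FROM sys.dm_tran_active_transactions;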


Top Transactions by Blocked Transactions Count

The last report under the Top category is the Top Transactions by Blocked Transactions Count report. It is almost identical to the report we saw in our previous post on the Blocking Transactions report. Since we have already explained it there, I will refrain from getting into the details here.


These reports can help in finding the cause of the following errors:

The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions. (Msg 1204, Level 19, State 4)

Action: Find the session_id that is taking the most locks and tune it.

Lock request time out period exceeded. (Msg 1222, Level 16, State 45)

Action: Find the session_id that is holding locks and see why it has a long-running transaction.
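
For either error, a quick first step is to see who is blocking whom right now (a minimal sketch; sys.dm_exec_requests only shows currently executing requests):

SELECT r.session_id,
       r.blocking_session_id,  -- the session holding the resource
       r.wait_type,
       r.wait_time,            -- milliseconds spent waiting so far
       r.wait_resource
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;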

I am curious to know: how many of you have ever used the Top reports mentioned here in your environments to debug something? Do let me know how you used them effectively, so that we can learn from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Developer’s Life – Every Developer is the Incredible Hulk

The Incredible Hulk is possibly one of the scariest superheroes out there. All superheroes are meant to be "out of this world" and awe-inspiring, but I think most people will agree when I say The Hulk takes this to the next level. He is the result of an industrial accident, which is scary enough in its own right. Plus, when mild-mannered Bruce Banner is angered, he goes completely out of control and transforms into a destructive monster that he cannot control and has no memories of.

With that said, The Incredible Hulk might also be the most powerful superhero.  While I don’t think that developers should get raging mad and turn into giant green destroyers, I think there are still great lessons we can learn from The Hulk.

So how are developers like the Incredible Hulk?


Well, read on for my list of reasons.

Keep Calm

"Don't make me angry. You won't like me when I'm angry." Bruce Banner warns everyone he comes in contact with, and this can be a lesson to take into the real world. Don't think of it as fair warning that you are now allowed to yell at people; think of it as advice to yourself. Very few people (including developers) do good work when they have steam coming out of their ears.

A Little Intensity is a Good Thing

A little intensity is a good thing.  In the movie “The Avengers,” Bruce Banner says he is always angry.  Instead of letting it get out of control and turn him into The Hulk, he harnesses that energy to try to find a solution to their problems.  As a developer, when something is frustrating you, try to channel that energy into beating the problem, not beating up your co-workers.

It is NOT all about You!

It’s not all about you.  In another scene in “The Avengers,” Bruce Banner’s tendency to transform into The Hulk becomes a liability to the rest of the team.  He has to rely on his team members to save themselves (and him) from himself.  Developers work as a team as well.

Constructive Anger

So far, we have discussed The Hulk’s anger like it is a bad thing.  But anger can be constructive, too.  When there is a huge evil to be fought, The Incredible Hulk is a fighting machine.  When there is a huge network or server problem, incredible developers join together and do amazing work.

The Incredible Hulk can be scary – no, let me put it another way – he can be intimidating. But that just means that the lessons he can teach us all, including developers, are about controlling ourselves in tough situations and rising to the occasion.

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – UDF – Validate Integer Function

There is a built-in function in SQL Server called ISNUMERIC to determine whether a string is numeric. In SQL Server, ISNUMERIC returns 1 for some non-numerals too; for example, ISNUMERIC(',') returns 1. To avoid this you can write a user-defined function as shown over here. In MySQL, there is no built-in function like SQL Server's ISNUMERIC, so you need to create a similar user-defined function, as shown below.

DELIMITER $$

-- Returns 1 when every character of the input is a digit, otherwise 0.
-- Note: an empty string contains no non-digit, so it also returns 1.
CREATE FUNCTION udf_IsNumeric(number_in VARCHAR(100)) RETURNS BIT
DETERMINISTIC
BEGIN
    DECLARE Ret BIT;
    IF number_in NOT REGEXP '[^0123456789]' THEN
        SET Ret := 1;
    ELSE
        SET Ret := 0;
    END IF;
    RETURN Ret;
END$$

DELIMITER ;

The above function uses a regular expression. The expression REGEXP '[^0123456789]' checks whether the string has at least one character that is not a digit. The expression NOT REGEXP '[^0123456789]' simply negates that condition, so the function returns 1 only for strings in which every character is a digit.
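
You can also verify the pattern directly, without the UDF (REGEXP returns 1 when the pattern matches):

SELECT '999' REGEXP '[^0123456789]';  -- 0: no non-digit character found
SELECT '9+9' REGEXP '[^0123456789]';  -- 1: '+' is a non-digit character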

You can test this using the following example.

SELECT '999' TestValue,udf_IsNumeric('999') NumericTest;
SELECT 'abc' TestValue,udf_IsNumeric('abc') NumericTest;
SELECT '9+9' TestValue,udf_IsNumeric('9+9') NumericTest;
SELECT '$9.9' TestValue,udf_IsNumeric('$9.9') NumericTest;
SELECT 'SQLAuthority' TestValue,udf_IsNumeric('SQLAuthority') NumericTest;

The result is (each SELECT returns one row; consolidated here):

TestValue     NumericTest
------------  -----------
999           1
abc           0
9+9           0
$9.9          0
SQLAuthority  0

You can find more examples of regular expressions in MySQL over here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Transaction Log Shipping Status Report

History has its own way of defining how civilizations thrived. Most cities flourished on riversides, and transporting lumber was one of their key activities. Cities like Seattle and many others have had this boom-and-bust life. The idea was to cut the timber upstream and use the natural flow of the river to transport it to the factories downstream. This is a classic and wonderful parallel to how we typically work with Log Shipping in SQL Server too. This blog is about the Log Shipping Status report.

Ensuring the availability of databases, meeting SLAs, and performance tuning are some of the top priorities for today's database administrators (DBAs). One of the important jobs of a DBA is to monitor the database servers and make sure the applications are working fine. The monitoring might involve automatic alerts, running scripts, or looking at a dashboard. Even for high availability solutions, we need some kind of monitoring mechanism. One of the traditional high availability solutions is Log Shipping.

As the name suggests, log shipping is based on transaction log backups getting shipped from one server to one or more servers on the other side. To understand it, you need to know the basics of transaction log backups. First, log backups can be taken only from a database in the full or bulk-logged recovery model. In the simple recovery model, transaction log backups are not allowed, because every checkpoint truncates the transaction log; in the other two recovery models, it is the log backup that truncates the log. Another basic of log shipping is that all log backups form a chain: T1, T2, and T3 must be restored in sequence, and missing any one file would cause an error message during the restore.

In log shipping, the backup, copy, and restore are done automatically; SQL Server Agent jobs do that for us. Since we can ship to multiple servers, the backup location is shared, so that the other servers can get a copy of each file to perform the restore. The source server is, in technical terms, called the primary server. All the servers at the receiving end are called secondary servers. You will also hear of the monitor server, which is responsible for checking the health of the backup, copy, and restore jobs. If the jobs are not running properly, a secondary will fall behind the primary and defeat the purpose of high availability. Based on the thresholds defined, the monitor server can raise alerts so that corrective action can be taken.

This is the last report in the list under the Server node. Based on the name of the report, you might have already guessed that it can be used to "see" the status of log shipping.

An important note about this report is that the data shown in the columns depends on the server from which we launch the report. Here is the report when launched from the primary server.

Notice that only the backup section is populated. This is because the report doesn't make a remote connection to check the secondary server's status. If the report is launched from a secondary server, the output would be as below:

The copy- and restore-related information is populated, because that information is available on the secondary server.

If we configure a monitor server in log shipping (which I have not done) and launch the report there, we can see information about all three steps (i.e., backup, copy, and restore).

The good part about the report is that it shows the alarming pairs in red. To demonstrate, I have configured log shipping for two databases, and for one of them I have disabled the backup, copy, and restore jobs, so that alerts are raised and we can see the impact on the report.

You may wonder how this information is fetched. The report has the simplest possible query behind the scenes.

EXEC sp_help_log_shipping_monitor

As per Books Online, it "Returns a result set containing status and other information for registered primary and secondary databases on a primary, secondary, or monitor server."
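
If you prefer to query the underlying data yourself, the monitor tables in msdb hold the same status information (a sketch, assuming log shipping is configured; on a primary server only the first table is populated locally):

SELECT primary_server, primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;

SELECT secondary_server, secondary_database, last_copied_date, last_restored_date, last_restored_latency
FROM msdb.dbo.log_shipping_monitor_secondary;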

If you see anything in red, you need to investigate further to find the cause of the delay. What is the most common cause of log shipping delays you have observed – networking, disk slowness, or something else? Please comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Format and Refactor Your SQL Code Directly in SSMS and Visual Studio

This article shows how to use ApexSQL Refactor, a free SQL code formatting and refactoring tool. You can download ApexSQL Refactor and explore it through the article.

ApexSQL Refactor is a free tool for SQL code formatting and refactoring directly from SSMS or Visual Studio. You can qualify SQL Server object names, expand wildcards, or encapsulate SQL code. The add-in has nearly 200 formatting options and 11 code refactors. Using this tool, you can locate and highlight unused variables and parameters. In addition, you can update all dependent database objects when renaming or changing columns and parameters. Besides SQL code in SSMS or Visual Studio, you can format SQL code from external SQL scripts. The add-in integrates under the ApexSQL menu in SSMS or Visual Studio. To format SQL code inside SSMS or Visual Studio, select it in the query window and choose the ApexSQL default option, or another user-defined template, from the ApexSQL Refactor menu. In the same menu, you can find the Formatting options option:

Format SQL code

In the Formatting options section, you can modify the ApexSQL default formatting or create your own formatting templates. Click the New button in the upper part of the window and a new formatting template will be created. In the General tab, you can set the indentation, including whether to use spaces or tabs. You can set wrapping to be applied to lines longer than a specified number of characters, or add spacing inside/outside parentheses, around operators (assignment, arithmetic, and comparison), and before/after commas.

Here you can manage empty lines and set the placement of the opening and closing brackets:

In the Capitalization tab, you can enforce capitalization for SQL keywords, data types, identifiers, system functions, and variables. For each of these, you can choose from the drop-down list whether it is to be capitalized in upper case, proper case, or lower case.

In the Comments tab, you can manage comments, adding an empty line or border before/after a block comment. In addition, you can switch all comments to block/line comments, or remove all block/line comments:



Under the Expressions tab, you can set the formatting options for operators (arithmetic, comparison, logical). If you enable formatting for an operator, you will be able to set the minimum number of characters an operation must have for the formatting to apply. In addition, you can set the parenthesis placement, move the operation to a new line, or show the operations on multiple lines:

In the Schema statements tab, you can set up formatting for object definitions and parameters. For object definitions, you can choose to place the body on a new line and set the indentation. Parameters can be placed on a new line (aligned with the keyword, or indented by a defined number of spaces). If there is more than one parameter, each one can be placed on a new line with a comma before/after the row:



In the Data statements tab, you can set the options for column lists, data statements, nested SELECT statements, and aliases. A column list can be placed on a new line, aligned with a keyword or with defined indentation. Each column can be placed on a new line, with a comma before/after it. A minimum number of characters can be defined for a data statement to be formatted. The SQL keywords FROM and WHERE can be placed on a new line, aligned with the keyword or indented.

For each nested SELECT statement, the first SELECT can be placed on a new line at the same position, aligned with the SQL keyword, or indented. In addition, a subsequent nested SELECT can be aligned with the previous SELECT.

All alias names can be aligned. The AS keyword can be used in all aliases in SELECT statements and placed on a new line (aligned with the keyword or indented):

In the Joins tab, you can set the minimum number of characters a JOIN statement must have to be formatted. The first table can be placed on a new line (at the same position, aligned with the previous keyword, or indented). The join keyword can be placed at the start/end of the line, or on a separate line with indentation. The ON keyword can be placed on a new line, aligned with the JOIN keyword or indented. Nested join operations can be placed on separate lines, aligned with the previous JOIN keyword, or indented:

In the Value lists tab, you can set the value list to be placed on a new line, aligned or indented. Each row from the list can be placed on a separate line, with a comma before/after each row. Row values can be placed on a new line, aligned or indented. Finally, each value can be placed on a separate line, with a comma before/after it:

In the Variables tab, you can set the variables to be placed on a new line, aligned with keyword, or indented. Each variable can be placed on a new line with a comma before/after each row:

In the Flow control tab, you can set the condition keywords (WHEN, THEN, and ELSE) to be placed on a new line and indented. In addition, you can enforce BEGIN and END keywords to be used in IF statements and in stored procedures.

To format SQL code outside SSMS or Visual Studio, click the Format SQL scripts option in the ApexSQL Refactor menu, and the Format SQL files dialog opens. Here you can browse your computer for SQL files and apply the specified formatting options (ApexSQL default, or a user-created template). You can overwrite an old SQL file with a new one, or create a new SQL file and keep the old one.

Split table

This option is used to split a SQL database table into two tables by copying or moving columns from the original table to a new one. It is useful when a table contains rarely used columns: they can be moved to another table, so that the original table keeps only the frequently used columns. To split a table, right-click on it in Object Explorer in SSMS or Visual Studio, and choose the Split table option from the ApexSQL Refactor menu to open the Split table dialog. You can copy/move columns from the original table to the new one. For the new table you can define the name and schema. When you set the columns for the new table, you can preview the generated script and see the impact of the changes and the affected dependent objects:

Safe rename

Using this feature allows you to rename database objects without breaking database dependencies, as it generates a SQL script that changes the object name and updates all dependent objects. The Safe rename option can be applied to database tables, views, procedures, and functions, as well as to table/view columns and function/procedure parameters. To rename any of these objects, right-click on it in the Object Explorer, or select it, and choose the Safe rename option from the ApexSQL Refactor menu. This opens the Safe rename dialog, where you can enter a new name for the selected object and, by clicking the Generate preview option, preview the script used to change the object name. If an error would occur on renaming, it is shown under the Warnings tab. The Sequence tab shows the process of renaming the object, listing the steps that will be executed in order to rename the selected object:

Add surrogate key

When a primary key contains many columns, or needs to be changed for any reason, a surrogate key is considered. Changing a primary key in a database table requires updating all dependent objects in order to keep the database functional. To add a surrogate key, select the table in Object Explorer and choose the Add surrogate key option from the ApexSQL Refactor menu. This opens the Add surrogate key dialog, where you can choose one of the existing keys and specify the Surrogate column name value. The Generate preview option shows the generated SQL script in the preview section. All dependent objects, sequences, and warnings (if any exist) will be shown under the appropriate tabs:

Change parameters

Stored procedure or function parameters can be changed by deleting and recreating them, or by using the ALTER statement. To change parameters safely, use the Change parameters option in ApexSQL Refactor. Select the stored procedure or function parameter and choose the Safe rename option from the ApexSQL Refactor menu. In the Safe rename dialog, change the parameter name and generate a preview of the SQL script. Under the appropriate tabs, you can review all dependent objects, warnings (if any exist), and the sequences that will be executed on renaming the parameter:

Replace one-to-many relationship

To use this option, select the table in the Object Explorer and choose the Replace one-to-many relationship option from the ApexSQL Refactor menu. This opens a dialog where you can specify the name of the associative table and choose the dependent table and a relationship. The Generate preview option generates a SQL script that replaces the relationship. Under the appropriate tabs, you can review warnings, sequences, and dependent objects:

Copy code as

This option converts SQL code into a supported programming language. Supported languages are Java, VB.NET, C#, Perl, PHP, Delphi, PowerBuilder, Ruby, and C++. You can add additional templates for other programming languages by choosing the Customize languages option from the Copy code as submenu. To convert SQL code into any of the listed programming languages, point to a query window with the SQL code you want to convert, and choose the language from the list. Once you click the language in the list, open a new query window and paste the created code:

The Customize language template dialog allows you to edit templates for natively supported languages, or add new templates. Here you can enter a code that will be inserted before/after SQL code, define escape character for quotes, and preview the defined settings:

Unused variables and parameters

Parameters or variables that are declared, or assigned a value, but never used or queried in any statement such as UPDATE, EXECUTE, WHERE, INSERT, or PRINT, are unused. ApexSQL Refactor can highlight such unused SQL objects and help clean up SQL code. You can find unused objects inline, while typing SQL code. To find unused parameters and variables, run the Unused parameters and variables command from the ApexSQL Refactor menu:

If there are any declared but unused variables or parameters, ApexSQL Refactor will find them. To confirm that a highlighted parameter/variable is unused, bring the mouse pointer over it, and a tooltip comes up:

Object name qualifying

The Object name qualifying feature refactors SQL code to add the owner (schema/user) of an object, the object name, or an alias name. When an object name is qualified, SQL Server does not have to check whether the object belongs to the current user's default schema, which saves name-resolution work; as a result, the query can execute faster. The result of using this option can easily be reverted by clicking Undo in SSMS or Visual Studio. To apply the Object name qualifying option, choose it from the ApexSQL Refactor menu.

Encapsulate code as

To encapsulate SQL code means to turn the selected code into a database object, which makes it easier to reuse. ApexSQL Refactor allows you to encapsulate SQL code as a stored procedure, a view, a scalar inline function, or a table inline function. To encapsulate SQL code as one of these objects, select it in the query window and choose the Encapsulate code as option from the ApexSQL Refactor menu. Select the appropriate object type, depending on what you want to encapsulate, and a new window opens. Give a name and assign a schema to the selected object. If you click the Generate preview button, a SQL script will be created and the parameters will be listed in the Parameters section. When everything is set, click the Encapsulate button:

Expand wildcards

This will allow you to expand wildcards, e.g. "*", used in SQL into column names. It can also benefit the performance of a SQL query, since SQL Server no longer has to expand the wildcard itself before query execution. If you type in the query:

SELECT * FROM Person.Address

After applying the Expand wildcard option, the above query will be transformed as follows:

SELECT Address.AddressID,
Address.AddressLine1,
Address.AddressLine2,
Address.City,
Address.StateProvinceID,
Address.PostalCode,
Address.SpatialLocation,
Address.rowguid,
Address.ModifiedDate
FROM Person.Address;

Reference: Pinal Dave (http://blog.sqlauthority.com)