SQL SERVER – SSMS: Transaction Log Shipping Status Report

History has its own way of defining how civilizations thrived. Most cities flourished along rivers, and transporting lumber was one of their key activities. Cities like Seattle and many others went through this boom-and-bust cycle. The idea was to cut timber upstream and use the natural flow of the river to carry it to factories downstream. This is a classic and wonderful analogy for how we typically work with Log Shipping in SQL Server too. This blog is about the Log Shipping Status report.

Ensuring the availability of databases, meeting SLAs and performance tuning are some of the top priorities for today’s database administrators (DBAs). One of the important jobs of a DBA is to monitor the database servers and make sure the application is working fine. The monitoring might involve automatic alerts, running scripts or looking at a dashboard. Even for high availability solutions, we need some kind of monitoring mechanism. One of the traditional high availability solutions is Log Shipping.

As the name suggests, log shipping is based on transaction log backups being shipped from one server to one or more servers on the other side. To understand this, you need to know the basics of transaction log backups. First, log backups can be taken only from a database that is in the full or bulk-logged recovery model. In the simple recovery model, transaction log backups are not allowed because every checkpoint truncates the transaction log; in the other two recovery models, it is the log backup that truncates it. Another basic point is that all log backups form a chain: T1, T2 and T3 must be restored in sequence, and missing any one file causes an error during restore. In log shipping, the backup, copy and restore are done automatically by SQL Agent jobs. Since we can ship to multiple servers, the backup location is a shared folder so that the other servers can copy the file and perform the restore. In technical terms the source server is called the primary server, and the servers at the receiving end are called secondary servers. You will also hear about the monitor server, which is responsible for checking the health of the backup, copy and restore jobs. If the jobs are not running properly, the secondary falls behind the primary, which defeats the purpose of high availability. Based on the thresholds defined, the monitor server can raise alerts so that corrective action can be taken.
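
To make the chain idea concrete, here is a minimal sketch (the database name, share path and file names are hypothetical) of restoring a full backup and then the log backups in sequence, the same way the restore job on a secondary works:

RESTORE DATABASE SalesDB FROM DISK = N'\\BackupShare\SalesDB_full.bak' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'\\BackupShare\SalesDB_T1.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'\\BackupShare\SalesDB_T2.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'\\BackupShare\SalesDB_T3.trn' WITH NORECOVERY;
-- Skipping T2 and restoring T3 directly would fail, because the log chain must be unbroken.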

This is the last report in the list under the server node. From the name, you might have already guessed that it can be used to “see” the status of log shipping.

The important note about this report is that the data shown in the columns depends on the server from which we launch the report. Here is the report when launched from the primary server.

If we notice, only the backup section is populated. This is because the report does not make a remote connection to check the secondary server's status. If the report is launched from a secondary server, the output would be as below:

The copy and restore related information is populated because that information is available on the secondary server.

If we configure a monitor server in log shipping (which I have not done) and launch the report there, we can see information about all three steps (i.e. backup, copy and restore).

The good part about the report is that it shows the alarming pair in red. To demonstrate, I have configured log shipping for two databases, and for one of them I have disabled the backup, copy and restore jobs so that alerts are raised and we can see the impact on the report.

You may wonder how this information is fetched. The report has the simplest possible query behind the scenes.

EXEC sp_help_log_shipping_monitor

As per Books online – “Returns a result set containing status and other information for registered primary and secondary databases on a primary, secondary, or monitor server.”
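
If you want to see the raw data the procedure reads, the log shipping status is kept in monitor tables in msdb. The queries below are a hedged sketch (column names quoted from memory; verify them against your build) of pulling the same information directly:

-- On the primary (or monitor): last backup taken for each primary database
SELECT primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;

-- On the secondary (or monitor): last copy and restore for each secondary database
SELECT secondary_database, last_copied_date, last_restored_date, last_restored_latency
FROM msdb.dbo.log_shipping_monitor_secondary;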

If you see anything in red, you need to investigate further to find the cause of the delay. What is the most common cause of log shipping delay you have observed? Networking, disk slowness or something else? Please comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – How to Format and Refactor Your SQL Code Directly in SSMS and Visual Studio

This article shows how to use ApexSQL Refactor, a free SQL code formatting and refactoring tool. You can download ApexSQL Refactor and explore it through the article.

ApexSQL Refactor is a free tool for SQL code formatting and refactoring directly from SSMS or Visual Studio. You can qualify SQL Server object names, expand wildcards, or encapsulate SQL code. The add-in has nearly 200 formatting options and 11 code refactors. It also lets you locate and highlight unused variables and parameters, and update all dependent database objects when renaming or changing columns and parameters. Besides SQL code in SSMS or Visual Studio, you can format SQL code from external SQL scripts. The add-in integrates under the ApexSQL menu in SSMS or Visual Studio. To format SQL code inside SSMS or Visual Studio, select it in the query window and choose the ApexSQL default option, or another user-defined template, from the ApexSQL Refactor menu. In the same menu, you can find the Formatting options option:

Format SQL code

In the Formatting options section you can modify the ApexSQL default formatting or create your own formatting templates. Click the New button at the top of the window, and a new formatting template is created. In the General tab, you can set indentation, choosing whether to use spaces or tabs. You can set wrapping to be applied to lines longer than a specified number of characters, or add spacing inside/outside parentheses, around operators (assignment, arithmetic, and comparison), and before/after commas.

Here you can manage empty lines, and set the placement for the opening and closing brackets:

In the Capitalization tab, you can enforce capitalization for SQL keywords, data types, identifiers, system functions, and variables. For each of these, you can choose from the drop-down list whether it will be in upper case, proper case, or lower case.

In the Comments tab you can manage comments, adding an empty line or border before/after a block comment. In addition, you can switch all comments to block/line comments, or remove all block/line comments:



Under the Expressions tab, you can set the formatting options for operators (arithmetic, comparison, logical). If you enable formatting for an operator, you can set the minimum number of characters for the operator to apply formatting on. In addition, you can set the parenthesis placement, move the operation to a new line, or show the operations on multiple lines:

In the Schema statements tab, you can set up formatting for object definitions and parameters. For object definitions, you can choose to place the body on a new line and set the indentation. Parameters can be placed on a new line (aligned with the keyword, or indented by a defined number of spaces). If there is more than one parameter, each one can be placed on a new line with a comma before/after the row:



In the Data statements tab, you can set the options for the column list, data statements, nested SELECT statements, and aliases. A column list can be placed on a new line, aligned with a keyword, or with the defined indentation. Each column can be placed on a new line, with a comma before/after it. A minimum number of characters can be defined for data statements to be formatted. The SQL keywords FROM and WHERE can be placed on a new line, aligned with the keyword or indented.

For each nested SELECT statement, the first SELECT can be placed on a new line at the same position, aligned with the SQL keyword, or indented. In addition, subsequent nested SELECTs can be aligned with the previous SELECT.

All alias names can be aligned. The AS keyword can be used in all aliases in SELECT statements, placed on a new line (aligned with the keyword or indented):

In the Joins tab, you can set the minimum number of characters for a JOIN statement to be formatted. The first table can be placed on a new line (at the same position, aligned with the previous keyword, or indented). The JOIN keyword can be placed at the start/end of the line, or on a separate line with indentation. The ON keyword can be placed on a new line aligned with the JOIN keyword, or indented. Nested join operations can be placed on separate lines, aligned with the previous JOIN keyword, or indented:

In the Value lists tab you can set the value list to be placed on a new line, aligned or indented. Each row from the list can be placed on a separate line with a comma before/after each row. Row values can be placed on a new line, aligned or indented. Finally, each value can be placed on a separate line, with a comma before/after each row:

In the Variables tab, you can set variables to be placed on a new line, aligned with the keyword, or indented. Each variable can be placed on a new line with a comma before/after each row:

In the Flow control tab, you can set the condition keywords (WHEN, THEN, and ELSE) to be placed on a new line and indented. In addition, you can enforce the use of BEGIN and END keywords in IF statements and in stored procedures.

To format SQL code outside SSMS or Visual Studio, click the Format SQL scripts option from the ApexSQL Refactor menu, and the Format SQL files dialog opens. Here you can browse your computer for SQL files and apply the specified formatting option (ApexSQL default, or a user-created template). You can overwrite an old SQL file with a new one, or create a new SQL file and keep the old one.

Split table

This option is used to split a SQL database table into two tables by copying or moving columns from the original table to a new one. It is useful when a table contains rarely used columns; they can be moved to another table so that the original table keeps only the frequently used columns. To split a table, right-click it in Object Explorer in SSMS or Visual Studio, and choose the Split table option from the ApexSQL Refactor menu to open the Split table dialog. You can copy/move columns from the original table to a new one. For the new table you can define the name and schema. When you set the columns for the new table, you can preview the generated script and see the impact of the changes and the affected dependent objects:
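
At its core, the generated script amounts to creating the new table, copying the selected columns over, and dropping them from the original. A rough, hand-written equivalent (hypothetical table and column names, not the tool's actual output) looks like this:

-- New table keyed by the original primary key, holding the rarely used column
CREATE TABLE dbo.CustomerNotes
(
    CustomerID INT NOT NULL PRIMARY KEY
        REFERENCES dbo.Customer (CustomerID),
    Notes NVARCHAR(MAX) NULL
);

INSERT INTO dbo.CustomerNotes (CustomerID, Notes)
SELECT CustomerID, Notes FROM dbo.Customer;

ALTER TABLE dbo.Customer DROP COLUMN Notes;  -- the original table keeps only the frequently used columns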

Safe rename

Using this feature allows you to rename database objects without breaking database dependencies, as it generates a SQL script that changes the object name and updates all dependent objects. The Safe rename option can be applied to database tables, views, procedures, and functions. It also applies to table/view columns and function/procedure parameters. To rename any of these objects, right-click it in Object Explorer, or select it, and choose the Safe rename option from the ApexSQL Refactor menu. This opens the Safe rename dialog where you can enter a new name for the selected object and, by clicking the Generate preview option, preview the script used to change the object name. If an error would appear when renaming, it is shown under the Warnings tab. The Sequence tab shows the process of renaming the object, listing the sequences that will be executed in order to rename the selected object:

Add surrogate key

When a primary key contains many columns, or it needs to be changed for any reason, a surrogate key is considered. Changing a primary key in a database table requires updating all dependent objects in order to keep the database functional. To add a surrogate key, select the table in Object Explorer and choose the Add surrogate key option from the ApexSQL Refactor menu. This opens the Add surrogate key dialog where you can choose one of the existing keys and specify the Surrogate column name value. The Generate preview option shows the generated SQL script in the preview section. All dependent objects, sequences, and warnings (if any exist) will be shown under the appropriate tabs:

Change parameters

Stored procedure or function parameters can be changed by deleting and recreating the object, or by using the ALTER statement. To change parameters safely, use the Change parameters option in ApexSQL Refactor. Select the stored procedure or function parameter and choose the Safe rename option from the ApexSQL Refactor menu. In the Safe rename dialog, change the parameter name and generate the preview of the SQL script. Under the appropriate tabs, you can review all dependent objects, warnings (if any exist), and the sequences that will be executed when renaming the parameter:

Replace one-to-many relationship

To use this option, select the table from Object Explorer and choose the Replace one-to-many relationship option from the ApexSQL Refactor menu. This opens a dialog where you can specify the name of the associative table, and choose the dependent table and a relationship. The Generate preview option generates a SQL script which replaces the relationship. Under the appropriate tabs, you can review warnings, sequences, and dependent objects:

Copy code as

This option converts SQL code into supported programming languages: Java, VB.NET, C#, Perl, PHP, Delphi, Power Builder, Ruby, and C++. You can add additional templates for other programming languages by choosing the Customize languages option from the Copy code as submenu. To convert SQL code into any of the listed languages, point to a query window with the SQL code you want to convert, and choose the language from the list. Once you click the language in the list, open a new query window and paste the created code:

The Customize language template dialog allows you to edit templates for natively supported languages, or add new templates. Here you can enter code that will be inserted before/after the SQL code, define the escape character for quotes, and preview the defined settings:

Unused variables and parameters

A parameter or variable that is declared or assigned a value, but never used or queried in any statement such as UPDATE, EXECUTE, WHERE, INSERT or PRINT, is unused. ApexSQL Refactor can highlight unused SQL objects and help clean up SQL code. You can find unused objects inline, while typing SQL code. To find unused parameters and variables, run the Unused parameters and variables command from the ApexSQL Refactor menu:

If there are any declared but unused variables or parameters, ApexSQL Refactor will find them. To confirm that a highlighted parameter/variable is unused, bring the mouse pointer to it, and a tooltip comes up:
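
For illustration, here is a small hand-made snippet (the AdventureWorks table is used only as an example) of the kind of thing the add-in flags: @Discount is declared and assigned, but never read afterwards.

DECLARE @Total MONEY, @Discount MONEY;
SET @Discount = 0.10;   -- assigned but never used again: a candidate for cleanup
SELECT @Total = SUM(TotalDue) FROM Sales.SalesOrderHeader;
SELECT @Total AS GrandTotal;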

Object name qualifying

The Object name qualifying feature refactors SQL code by adding the owner (schema/user) of objects, the object name, or an alias name. When an object name is schema-qualified, SQL Server does not have to resolve which schema the reference belongs to for the current user, which saves a little work, so the query can compile and execute slightly faster. The result of using this option can easily be reverted by clicking Undo in SSMS or Visual Studio. To apply the Object name qualifying option, choose it from the ApexSQL Refactor menu.
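
As a simple illustration (an assumed example, not the add-in's actual output), qualifying a name means turning the first form below into the second:

SELECT Name FROM Product;                      -- unqualified: SQL Server must resolve the schema
SELECT p.Name FROM Production.Product AS p;    -- schema-qualified, with an alias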

Encapsulate code as

Encapsulating SQL code means turning selected code into a database object, which makes it easier to reuse. ApexSQL Refactor allows you to encapsulate SQL code as a stored procedure, view, scalar inline function, or table inline function. To encapsulate SQL code as one of these objects, select it in the query window and choose the Encapsulate code as option from the ApexSQL Refactor menu. Select the appropriate object, depending on what you want to encapsulate, and a new window opens. Give a name and assign a schema to the new object. If you click the Generate preview button, a SQL script is created and the parameters are listed in the Parameters section. When everything is set, click the Encapsulate button:
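
For example, encapsulating a simple SELECT as a view would produce something along these lines (a hand-written sketch with an assumed object name, not the exact script the add-in generates):

CREATE VIEW dbo.vAddressList
AS
SELECT AddressID, AddressLine1, City
FROM Person.Address;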

Expand wildcards

This allows you to expand wildcards, e.g. “*”, used in SQL into column names. It can help the performance of a SQL query, as SQL Server will not have to expand them itself before query execution. If you type in the query:

SELECT * FROM Person.Address

After applying the Expand wildcard option, the above query will be transformed as follows:

SELECT Address.AddressID,
Address.AddressLine1,
Address.AddressLine2,
Address.City,
Address.StateProvinceID,
Address.PostalCode,
Address.SpatialLocation,
Address.rowguid,
Address.ModifiedDate
FROM Person.Address;

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Andy Defines Basic RDBMS: Isolation in Processes – Notes from the Field #038

[Note from Pinal]: This is a new episode of the Notes from the Field series. Every time I give an introductory note; however, this time there is no need for one. This note is from Andy and, as we all know, he is an amazing person when we have to understand the fundamentals. He has written this blog post in such an interesting way that you must read it to understand the very basic fundamentals of Isolation.


When I think about SQL Server isolation in processes, it reminds me of eggs. Our family raises chickens for their eggs. Fresh eggs are very tasty, but there’s always the risk of a bad egg.

In the image above, I am preparing to scramble five eggs. I have cracked and opened five eggs (you can tell by the number of eggshells), but only four egg yolks are shown. “Why are there only four yolks, Andy?” I’m glad you asked.

My process for opening the eggs involves first dropping the contents of the egg into the mug, examining the contents, then – if satisfactory – adding them to the bowl for mixing. You don’t have to do this to make five scrambled eggs; you can crack the eggs right over the bowl.

But what happens when you open a bad egg? You risk ruining the entire batch of mostly good eggs. If you crack the eggs over the bowl, you have to pick five good eggs in a row to get a batch that’s ready to scramble. You may get lucky and it may only require five eggs. But the risk of a bad egg is ever present. You could get one good egg, followed by a bad egg. Now you have to throw both the good and bad egg out and begin again. The next time you may get three good eggs and then one bad egg. Now you’ve wasted three good eggs.

Isolating the eggs is a good practice. If I first empty the egg into a mug, I have constrained the process so that I only risk one egg at a time. In so doing, I have successfully mitigated risk to the least possible unit. I will only throw out the bad eggs without risking contamination of the good eggs in the bowl.

Isolation is generically defined as, “the process of separating somebody or something from others, or the fact of being alone and separated from others.”

In database terms, isolation is one of the four defining properties (i.e., atomic, consistent, isolated, durable—remember these by using the acronym ACID) of a Relational Database Management System (RDBMS). Similar to isolating bad eggs from the good, RDBMS isolation keeps individual database transactions from intermingling with each other during execution. It’s not that other transactions are bad, we just want to keep them separated so that data from one transaction doesn’t corrupt data from another transaction.

Are isolated transactions completely unaffected by each other? No, unlike the example of completely isolating a bad egg from the mix of good eggs, RDBMS isolation doesn’t prevent one transaction from influencing or impeding another transaction. An example of influence is resource contention; an example of impedance is locking. Isolation simply guarantees the results of the transaction will not be affected by concurrently executing transactions.
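
A small T-SQL illustration (the table name is hypothetical): the isolation level a session chooses controls how strongly it is separated from concurrent transactions.

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- the default: we never read uncommitted changes
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;              -- may wait briefly if a writer holds incompatible locks
COMMIT;

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;     -- strongest isolation: the rows we read cannot change under us
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;              -- range locks keep concurrent inserts out until we commit
COMMIT;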

You can learn more about these properties from this awesome post by my friend, Pinal Dave: SQL Server – ACID (Atomicity, Consistency, Isolation, Durability).

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – Webinar: SQL Server 2014 In-Memory OLTP and SafePeak In-Memory Caching

In the past couple of months many of us have heard about the new SQL Server 2014 and its in-memory technologies, such as In-Memory OLTP for high-transaction applications and columnstore indexes for BI apps. This comes in addition to various in-memory technologies outside of the SQL Server world, such as those from Oracle, IBM or a few NewSQL startups.

There are plenty of benefits for in-memory solutions:

  • Offloading expensive repeated database requests
  • Acceleration of Database speed and application performance
  • Scalability of applications
  • Reducing database resources and licensing costs

The level of interest in in-memory performance technologies is clearly on the rise, so we’ll try to provide you with interesting news, investigation summaries and webcasts, especially when it comes to SQL Server databases, applications and users.

Choosing the right solution for you can be based on many factors:

  • Application stage: a Greenfield new application, a Live production homegrown application or Commercial off-the-shelf application
  • Application characteristics: Operational vs. Analytical BI application
  • Effort level and time available to resolve a challenge: the level of effort, resources and costs you want to apply in setup, development, testing and deployment

Personally, I love webinars, as they give an opportunity to learn one or two technologies very quickly, often combining a view from 10,000 feet down to a practical 100-foot level: learning various aspects of the technologies, how to get started, or how to define a realistic plan.

My friends at SafePeak, the software vendor of In-Memory Dynamic Caching for performance acceleration of SQL Server applications, are running a webinar next week (Wednesday the 30th) under the title: SQL Server 2014 In-Memory OLTP and SafePeak In-Memory Caching.

The webinar plans to review SQL Server 2014 In-Memory OLTP: how it works, various scenarios where it fits more or less, key benefits, key limitations (and how to overcome some of them), and things to consider when planning to use it. Additionally, the webinar will review SafePeak’s automated in-memory caching for SQL Server applications: its internals and how it works, where it fits more and where it fits less, and a comparison of both SQL 2014 In-Memory OLTP and SafePeak Caching – and probably (as usually happens) a live demo of both technologies to get a better grip on the content.

Should be interesting…

You can register for the webinar here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Finding Object Dependencies in SSMS – SQL in Sixty Seconds #071

While we are doing development, we create and drop objects. We build new things, and we need to understand the relationships between database objects when we are doing various activities in SQL Server. It is indeed very hard to know all the relationships between various objects in SQL Server. However, with the help of SQL Server 2014 Management Studio, you can do this task very easily.

You have to go to the object whose properties you want to see and right-click it.

Now click the option “View Dependencies”. It will bring up a screen listing the various dependencies.
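
If you prefer T-SQL over the dialog, a rough equivalent (the view name here is just an AdventureWorks example) is to query the dependency functions:

-- Objects that depend on the view
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'HumanResources.vEmployee', 'OBJECT');

-- Objects the view depends on
SELECT referenced_schema_name, referenced_entity_name
FROM sys.dm_sql_referenced_entities(N'HumanResources.vEmployee', 'OBJECT');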

I hope this is clear enough. If not, I strongly suggest you watch the following quick video where I explain this concept in extremely simple words.

Action Item

Here are the blog posts I have previously written on SSMS. You can read them over here:

You can subscribe to my YouTube Channel for frequent updates.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Free Entry to SQLPASS 2014 is Possible

The other day, I received an email from Dell and I was extremely delighted to read it. It contained details of how one can go to SQLPASS for FREE.

What is SQLPASS?

The three-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees. If I am a consultant or vendor looking for better career opportunities, PASS Summit is the perfect platform to meet and show my skills to new potential customers and employers. Further, breakfasts, lunches, and evening receptions, which are included with registration, are meant to provide more and more networking opportunities.

At PASS Summit, I not only gain new ideas but also draw inspiration from top professionals and experts. Learning new things about SQL Server, interacting with different kinds of professionals, and sharing issues and solutions will definitely improve my understanding and turn me into a better SQL Server professional who can leverage and optimize SQL Server to improve business.

I am once again going to SQLPASS. You can join me as well for FREE.

How to Get FREE Entry?

Dell is running a contest. The contest begins July 17 and runs until August 18. To enter, all you have to do is download a free 30-day trial of Spotlight on SQL Server Enterprise or LiteSpeed for SQL Server from this download page. Regular SQLPASS entry costs USD 1800, but you can get in for free if you take part in the contest.

About Products

Spotlight on SQL Server Enterprise delivers simplicity without compromise by providing unmatched monitoring, diagnosis and optimization of your SQL Server environments, ensuring peak performance around the clock. With intuitive overviews of enterprise health and performance, automated alerts and actions, an integrated tuning module and mobile device support, Spotlight makes it easier than ever to obtain the data you need to ensure the health of your SQL Server databases.

LiteSpeed for SQL Server ensures high-speed, storage-efficient backup and restore for SQL Server databases, with up to 85 percent savings in backup size, storage costs, and restore times, and with minimal effort and risk. Delivering a wide variety of recovery options, LiteSpeed ensures the right SQL Server data is restored and available as quickly as possible.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Activity – All Blocking Transactions

Working out of India has its own challenges, and I enjoy it here despite those challenges. One of the biggest advantages of working with Pluralsight is that I can still get my job done by working from home occasionally. This is one of the perks I wish more companies gave their employees. You might be wondering why I am mentioning this; the obvious answer lies in how the previous day went. If it rained heavily, which it does in Bengaluru in July, then chances are the roads will have a build-up of traffic the next morning. Taking traffic out of your life is never easy, but with technology improvements like maps on the phone, I can still manage to find an alternate route to reach my destination. This is what makes life interesting, and the exploration of new places is always fun.

I just wish SQL Server had some way of achieving the same. Blocking and locking are fundamental to keeping databases consistent and in sync. This blog is all about the Blocking Transactions report at the instance level.

To access the report, go to the Server node -> Reports -> Standard Reports -> Activity – All Blocking Transactions.

From this node, if there is no blocking happening in the system at the point the report is run, we will be presented with a “blank” output as shown below.

The ideal situation would be to stay in this state even for a transactional system, but that will never be the case in reality. For a highly transactional system where sessions try to modify or insert data in the same table, SQL Server respects the order in which the requests came and will not allow incompatible locks to exist at the same time. This behaviour automatically creates a queue, and this is what we call blocking.

This brings us to the next output, where we have multiple transactions running. To show some data in the report from my non-production workload system, I have simulated a blocking scenario using two statements. In this scenario you can see there are two regions to look at: session IDs 52, 53 and 54. From the hierarchy, we know that 52 is blocking both 53 and 54. We can also see there are 2 “# Directly Blocked Transactions” in the system currently, from the top row for SPID 52. If there were additional transactions trying to insert or delete, this would show the complete chain of transactions currently blocked.

We also get to see the type of statement that is waiting in this blocking scenario. In the diagram below we see the two statements involved are INSERT and DELETE.

The various DMVs used to get this information are sys.dm_tran_locks, sys.dm_tran_active_transactions, sys.dm_tran_session_transactions, sys.dm_tran_database_transactions and sys.dm_exec_requests. Along with the above, the report also uses the DMF sys.dm_exec_sql_text to convert the SQL handle into more meaningful text.

If that is not enough, we can also head to Activity Monitor and expand the Processes tab to get similar information. It is evident that the head of blocking is 52, whereas 53 and 54 are waiting on 52. It is completely up to us to decide what to do next. We can kill process 52 and the other transactions will go through.
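
As a hedged sketch (not the report's exact query), the same idea can be expressed with a couple of the DMVs listed above, returning every blocked session, who is blocking it, and the statement it is waiting to run:

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS waiting_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;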

As a small note, the task states can give us vital information about what is happening in the system. Some of the states are worth mentioning:

Sleeping – The SPID is waiting for a command; nothing is currently executing.
Running – The SPID is currently running.
Suspended – The SPID is waiting for locks or a latch.
Rollback – The connection is in the rollback state of a transaction.

You can use the state information to make an informed decision about killing a process if required.

At this moment, yet another blog post worth a mention is the Blocked Process Threshold post. That option makes sure a profiler event is raised when a request is blocked beyond a predefined period of time. Do take a look at it if you are interested in that behaviour.
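
For reference, the threshold is a server-level setting; a minimal sketch of enabling it (20 seconds here is an arbitrary value, and the option name may appear as 'blocked process threshold (s)' on newer builds) is:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 20;  -- raise the blocked process report event after 20 seconds
RECONFIGURE;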

The report series is catching up, and the learnings are multi-fold for me personally. In subsequent posts I will get into the other reports and share my learnings.

Reference: Pinal Dave (http://blog.sqlauthority.com)