SQL SERVER – SSMS: Top Transaction Reports

Let us start with a personal story.

Personal Story – My Daughter and I

My day has its own share of long meetings, and my daughter tries to pull a string here and there to get my attention almost every other day. Recently I was made to watch one of her school drawing assignments. For a parent, beauty is in the eye of the beholder – so I can never complain about what my daughter drew for me. Since this was some sort of school competition, I wanted to see what the teacher’s remark was – it said “Very Good”. Intrigued by this comment, I started to quiz my daughter about who got the best painting marks and what they drew. I am sure the parent inside me was taking a sneak peek into how my daughter performed compared to others in the class. She was quick to respond with a few names of her friends and said they also drew best. The honesty moved me, because as children we are unbiased and true to our very selves. I thought it was a great time to take my daughter to the park and spend some quality father-daughter time.

Getting Back to SQL Server

Returning from the park, this incident was still on top of my mind, and I thought about how being a class topper or ahead of the crowd is an intrinsic quality we all chase as human beings. It is a strange coincidence that today’s post is all about the “Top” nature. In life it is great to be a top performer, but in SQL Server parlance it is not a great thing to appear on these Top reports – you are surely going to become a subject for tuning next. For administrators, though, these reports are a great boon.

This blog will call out three different reports from the Server Node -> Reports -> Standard Reports:

  1. Top Transactions by Age
  2. Top Transactions by Locks Count
  3. Top Transactions by Blocked Transactions Count

Since all these reports fall under the Top category and rank transactions by various factors, I thought I would cover them all in one post.

Top Transactions by Age

This is one of the simplest of the reports. Based on when the query was submitted to the server, it shows how long transactions have been active on the instance. It is a good starting point to know which Session ID and database have been holding how many locks, and for how long. For all practical purposes, this is in my opinion a very good starting point for looking at long-running transactions.

Some of the DMVs that work behind the scenes to generate the report are listed below; a simplified sketch of how they fit together follows the list:

  • sys.dm_tran_active_transactions – Has information about transactions on the instance.
  • sys.dm_tran_session_transactions – Has information about transactions for a given session.
  • sys.dm_tran_database_transactions – Gives transactions at a database level.
  • sys.dm_exec_sessions – Has information about the active sessions currently on the server.
  • sys.dm_exec_requests – Has information about each request currently executing on the server.
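
As a hedged sketch (a simplified approximation, not the exact query the report executes), these DMVs can be joined along the following lines to list open transactions by age:

-- One row per database a transaction touches; oldest transactions first
SELECT s.session_id,
       DB_NAME(dt.database_id) AS database_name,
       tat.name AS transaction_name,
       tat.transaction_begin_time,
       DATEDIFF(SECOND, tat.transaction_begin_time, GETDATE()) AS age_in_seconds
FROM sys.dm_tran_active_transactions AS tat
INNER JOIN sys.dm_tran_session_transactions AS st
        ON tat.transaction_id = st.transaction_id
INNER JOIN sys.dm_tran_database_transactions AS dt
        ON tat.transaction_id = dt.transaction_id
INNER JOIN sys.dm_exec_sessions AS s
        ON st.session_id = s.session_id
ORDER BY tat.transaction_begin_time;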

From this report, the DBA can decide which process is causing these locks and why they have been held for such a long time.

Top Transactions by Locks Count

I would say this is a continuation of the previous report. In the previous report, I was able to find the number of locks for a given Session ID and database, but the specifics on the types of locks were not clear.

This report is all about clearing up that part of the uncertainty: it shows the types of locks held by a specific session. In our report below we can see that session IDs 52 and 53 are holding Object, Page, and Key locks respectively. While 52 has already taken an Exclusive lock, 53 has fired an Update on the dbo.test table.

I am sure that on a highly transactional production server this will be a busy report to view. Do let me know how many nodes you see on your servers.
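
If you want to approximate this report with a query, here is a hedged sketch based on sys.dm_tran_locks (the DMV that exposes granted and waiting lock requests); it counts locks per session, resource type, and lock mode:

-- Sessions holding or requesting the most locks, by resource type and mode
SELECT request_session_id,
       resource_type,
       request_mode,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY request_session_id, resource_type, request_mode
ORDER BY lock_count DESC;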

The transaction state can be one of the following:

  • Uninitialized
  • Initialized
  • Active
  • Prepared
  • Committed
  • Rolled Back
  • Committing

These values come from the sys.dm_tran_database_transactions DMV, while the active transaction state can take the values below, as defined in the sys.dm_tran_active_transactions DMV:

  • Invalid
  • Initialized
  • Active
  • Ended
  • Commit Started
  • Prepared
  • Committed
  • Rolling Back
  • Rolled Back


Top Transactions by Blocked Transactions Count

The last report under the Top category is the Top Transactions by Blocked Transactions Count report. It is very similar to the report we saw in our previous post on the Blocking Transactions report. Since we have already explained it there, I will refrain from getting into the details here.


These reports can help in finding the cause of a few errors, such as the ones below:

The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions. (Msg 1204, Level 19, State 4)

Action: Find the session_ids that are taking the most locks and tune them.

Lock request time out period exceeded. (Msg 1222, Level 16, State 45)

Action: Find the session_id which is holding locks and see why it has a long-running transaction.

I am curious to know how many of you have ever used the Top reports mentioned here in your environments to debug something. Do let me know how you used them effectively so that we can learn from each other.

Reference: Pinal Dave (http://blog.sqlauthority.com)


Developer’s Life – Every Developer is the Incredible Hulk

The Incredible Hulk is possibly one of the scariest superheroes out there.  All superheroes are meant to be “out of this world” and awe-inspiring, but I think most people will agree when I say The Hulk takes this to the next level.  He is the result of an industrial accident, which is scary enough in its own right.  Plus, when mild-mannered Bruce Banner is angered, he goes completely out of control and transforms into a destructive monster that he cannot control and has no memories of.

With that said, The Incredible Hulk might also be the most powerful superhero.  While I don’t think that developers should get raging mad and turn into giant green destroyers, I think there are still great lessons we can learn from The Hulk.

So how are developers like the Incredible Hulk?


Well, read on for my list of reasons.

Keep Calm

“Don’t make me angry.  You won’t like me when I’m angry.”  Bruce Banner warns everyone he comes in contact with, and this can be a lesson to take into the real world.  Don’t think of it as fair warning that you are now allowed to yell at people.  Think of it as advice to yourself.  Very few people (including developers) do good work when they have steam coming out of their ears.

A Little Intensity is a Good Thing

A little intensity is a good thing.  In the movie “The Avengers,” Bruce Banner says he is always angry.  Instead of letting it get out of control and turn him into The Hulk, he harnesses that energy to try to find a solution to their problems.  As a developer, when something is frustrating you, try to channel that energy into beating the problem, not beating up your co-workers.

It is NOT all about You!

It’s not all about you.  In another scene in “The Avengers,” Bruce Banner’s tendency to transform into The Hulk becomes a liability to the rest of the team.  He has to rely on his team members to save themselves (and him) from himself.  Developers work as a team as well.

Constructive Anger

So far, we have discussed The Hulk’s anger like it is a bad thing.  But anger can be constructive, too.  When there is a huge evil to be fought, The Incredible Hulk is a fighting machine.  When there is a huge network or server problem, incredible developers join together and do amazing work.

The Incredible Hulk can be scary – no, let me put it another way – he can be intimidating.  But that just means the lessons he can teach us all, including developers, are about controlling ourselves in tough situations and rising to the occasion.

Reference: Pinal Dave (http://blog.sqlauthority.com)

MySQL – UDF – Validate Integer Function

There is a built-in function in SQL Server called ISNUMERIC to determine if a string is numeric. However, ISNUMERIC returns 1 for some non-numeric values too; for example, ISNUMERIC(‘,’) returns 1. To avoid this you can write a user defined function as shown over here. In MySQL, there is no built-in function like SQL Server’s ISNUMERIC, so you need to create a similar user defined function, as shown below.

DELIMITER $$

CREATE FUNCTION `udf_IsNumeric`(number_in VARCHAR(100)) RETURNS BIT
DETERMINISTIC
BEGIN
  DECLARE Ret BIT;
  -- Numeric only if the string contains no character outside 0-9
  IF number_in NOT REGEXP '[^0123456789]' THEN
    SET Ret := 1;
  ELSE
    SET Ret := 0;
  END IF;
  RETURN Ret;
END $$

DELIMITER ;

The above function uses a regular expression. The expression REGEXP '[^0123456789]' finds out whether the string has at least one character that is not a digit. The expression NOT REGEXP '[^0123456789]' simply negates that condition, so it returns 1 only when every character in the string is a digit.
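
You can also run the expression on its own to see the behavior. Note one edge case: the empty string contains no non-digit characters, so by this definition the function above reports it as numeric.

SELECT ',' REGEXP '[^0123456789]';         -- 1: contains a non-digit character
SELECT '999' REGEXP '[^0123456789]';       -- 0: contains only digits
SELECT '999' NOT REGEXP '[^0123456789]';   -- 1: every character is a digit
SELECT '' NOT REGEXP '[^0123456789]';      -- 1: the empty-string edge case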

You can test this using the following example.

SELECT '999' TestValue,udf_IsNumeric('999') NumericTest;
SELECT 'abc' TestValue,udf_IsNumeric('abc') NumericTest;
SELECT '9+9' TestValue,udf_IsNumeric('9+9') NumericTest;
SELECT '$9.9' TestValue,udf_IsNumeric('$9.9') NumericTest;
SELECT 'SQLAuthority' TestValue,udf_IsNumeric('SQLAuthority') NumericTest;

The results are:

TestValue     NumericTest
------------  -----------
999           1
abc           0
9+9           0
$9.9          0
SQLAuthority  0

You can find more examples about Regular expressions in MySQL over here.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SSMS: Transaction Log Shipping Status Report

History has its own way of defining how civilizations thrived. Most cities flourished on riversides, and transporting lumber was one of the key activities. Cities like Seattle and many others have had this boom-and-bust life. The idea was to cut the timber upstream and use the natural flow of the river to transport it to factories downstream. This is a classic and wonderful analogy for how we work with log shipping in SQL Server too. This blog is about the Log Shipping Status report.

Ensuring the availability of databases, meeting SLAs, and performance tuning are some of the top priorities for today’s database administrators (DBAs). One of the important tasks of a DBA is to monitor the database servers and make sure the application is working fine. The monitoring might involve automatic alerts, running scripts, or looking at some dashboard. Even for high availability solutions, we need some kind of monitoring mechanism, and one of the traditional high availability solutions is log shipping.

As the name suggests, log shipping is based on transaction log backups getting shipped from one server to one or more servers on the other side. To understand this, you need to know the basics of transaction log backups. First, log backups can be taken only from a database that is in the full or bulk-logged recovery model. In the simple recovery model, transaction log backups are not allowed because every checkpoint truncates the transaction log; in the other two recovery models, only a log backup does that. Second, all log backups form a chain: T1, T2, and T3 must be restored in sequence, and missing any one of the files causes an error during restore.

In log shipping, the backup, copy, and restore are done automatically by SQL Server Agent jobs. Since we can ship to multiple servers, the backup location is a shared folder so that the other servers can get a copy of each file to perform the restore. The source server is technically called the primary server, and all the servers at the receiving end are called secondary servers. You will also hear of a monitor server, which is responsible for checking the health of the backup, copy, and restore jobs. If the jobs are not running properly, the secondary falls behind the primary, defeating the purpose of high availability. Based on the thresholds defined, the monitor server can raise alerts so that corrective action can be taken.
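
As a hedged illustration of the chain (the database name and file paths here are hypothetical), the restore sequence on a secondary looks like this:

-- WITH NORECOVERY keeps the database ready to accept the next log backup
RESTORE DATABASE MyDB FROM DISK = N'\\backupshare\MyDB_full.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'\\backupshare\MyDB_T1.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'\\backupshare\MyDB_T2.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'\\backupshare\MyDB_T3.trn' WITH NORECOVERY;
-- Skipping T2 and restoring T3 directly would fail with a broken-chain error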

This is the last report in the list under the server node. Based on the name of the report, you might have already guessed that it can be used to “see” the status of log shipping.

The important note about this report is that the data shown in the columns depends on the server from which we launch the report. Here is the report when launched from the primary server.

Notice that only the backup section is populated. This is because the report doesn’t make a remote connection to check the secondary server’s status. If the report is launched from a secondary server, the output would be as below:

The copy- and restore-related information is populated here because it is available on the secondary server.

If we configure a monitor server in log shipping (which I have not done here) and launch the report there, we can see information about all three steps (i.e., backup, copy, and restore).

The good part about the report is that it shows any alarming pair in red. To demonstrate, I have configured log shipping for two databases, and for one of them I have disabled the backup, copy, and restore jobs so that alerts are raised and we can see the impact on the report.

You may wonder how this information is fetched. The report has the simplest possible query behind the scenes:

EXEC sp_help_log_shipping_monitor

As per Books Online: “Returns a result set containing status and other information for registered primary and secondary databases on a primary, secondary, or monitor server.”
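
If you prefer raw tables over the report, the same health data lives in msdb. For example, on a secondary server you could run something like this hedged sketch against the monitor table that log shipping maintains:

SELECT secondary_server,
       secondary_database,
       last_copied_date,
       last_restored_date,
       last_restored_latency,
       restore_threshold
FROM msdb.dbo.log_shipping_monitor_secondary;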

If you see anything in red, you need to investigate further to find the cause of the delay. What is the most common cause of log shipping delay you have observed – networking, disk slowness, or something else? Please comment and let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – How to Format and Refactor Your SQL Code Directly in SSMS and Visual Studio

This article shows how to use ApexSQL Refactor, a free SQL code formatting and refactoring tool. You can download ApexSQL Refactor and explore it through the article.

ApexSQL Refactor is a free tool for SQL code formatting and refactoring directly from SSMS or Visual Studio. You can qualify SQL Server object names, expand wildcards, or encapsulate SQL code. The add-in has nearly 200 formatting options and 11 code refactors. Using this tool allows you to locate and highlight unused variables and parameters. In addition, you can update all dependent database objects on renaming or changing columns and parameters. Besides SQL code in SSMS or Visual Studio, you can format SQL code from external SQL scripts. The add-in integrates under the ApexSQL menu in SSMS or Visual Studio. To format SQL code inside SSMS or Visual Studio, select it in the query window and choose the ApexSQL default option, or another user-defined template, from the ApexSQL Refactor menu. In the same menu, you can find the Formatting options option:

Format SQL code

In the Formatting options section you can modify the ApexSQL default formatting, or create your own formatting templates. Click the New button at the top of the window, and a new formatting template will be created. In the General tab, you can set indentation, choosing whether to use spaces or tabs. You can set wrapping to be applied to lines longer than a specified number of characters, or add spacing inside/outside parentheses, around operators (assignment, arithmetic, and comparison), and before/after commas.

Here you can manage empty lines, and set the placement for the opening and closing brackets:

In the Capitalization tab, you can enforce capitalization for SQL keywords, data types, identifiers, system functions, and variables. For each of these, you can choose from the drop-down list whether it is going to be capitalized in upper case, proper case, or lower case.

In the Comments tab you can manage comments, adding an empty line or border before/after the block comment. In addition, you can switch all comments to block/line comments, or remove all block/line comments:



Under the Expressions tab, you can set the formatting options for operators (arithmetic, comparison, and logical). If you enable formatting for an operator, you will be able to set the minimum number of characters an operation must have for formatting to apply. In addition, you can set the parenthesis placement, move the operation to a new line, or show the operations on multiple lines:

In the Schema statements tab, you can set up formatting for object definitions and parameters. For object definitions, you can choose to place the body on a new line and set the indentation. Parameters can be placed on a new line (aligned with the keyword, or indented by a defined number of spaces). If there is more than one parameter, each one can be placed on a new line with a comma before/after the row:



In the Data statements tab, you can set the options for the column list, data statements, nested SELECT statements, and aliases. A column list can be placed on a new line, aligned with a keyword, or with the defined indentation. Each column can be placed on a new line, with a comma before/after it. A minimum number of characters can be defined for data statements to be formatted. The SQL keywords FROM and WHERE can be placed on a new line, aligned with the keyword or indented.

For each nested SELECT statement, the first SELECT can be placed on a new line at the same position, aligned with SQL keyword, or indented. In addition, subsequent nested SELECT can be aligned with the previous SELECT.

All alias names can be aligned. The AS keyword can be used in all aliases in the SELECT statements, placed on a new line (aligned with keyword or indented):

In the Joins tab, you can set the minimum number of characters for a JOIN statement to be formatted. The first table can be placed on a new line (at the same position, aligned with the previous keyword, or indented). The join keyword can be placed at the start/end of the line, or on a separate line with indentation. The ON keyword can be placed on a new line aligned with the JOIN keyword, or indented. Nested join operations can be placed on separate lines, aligned with the previous JOIN keyword, or indented:

In the Value lists tab you can set the value list to be placed on a new line, aligned or indented. Each row from the list can be placed on a separate line with a comma before/after each row. Row values can be placed on a new line, aligned or indented. Finally, each value can be placed on a separate line, with a comma before/after each row:

In the Variables tab, you can set the variables to be placed on a new line, aligned with keyword, or indented. Each variable can be placed on a new line with a comma before/after each row:

In the Flow control tab, you can set the condition keywords (WHEN, THEN, and ELSE) to be placed on a new line and indented. In addition, you can enforce BEGIN and END keywords to be used in IF statements and in stored procedures.

To format SQL code outside SSMS or Visual Studio, click the Format SQL scripts option from the ApexSQL Refactor menu, and the Format SQL files dialog opens. Here you can browse your computer for SQL files and apply a specified formatting option (ApexSQL default, or a user-created template). You can overwrite the old SQL file with the new one, or create a new SQL file and keep the old one.

Split table

This option is used to split a SQL database table into two tables by copying or moving columns from the original table to a new one. It is useful when a table contains rarely used columns; they can be moved to another table, so that the original table keeps only the frequently used columns. To split a table, right-click it in Object Explorer in SSMS or Visual Studio, and choose the Split table option from the ApexSQL Refactor menu to open the Split table dialog. You can copy/move columns from the original table to the new one. For the new table, you can define the name and schema. When you set the columns for the new table, you can preview the generated script and see the impact of the changes and the affected dependent objects:

Safe rename

Using this feature allows you to rename database objects without breaking database dependencies, as it generates a SQL script that changes the object name and updates all dependent objects. The Safe rename option can be applied to database tables, views, procedures, and functions, as well as to table/view columns and function/procedure parameters. To rename any of the mentioned objects, right-click it in the Object Explorer, or select it, and choose the Safe rename option from the ApexSQL Refactor menu. This opens the Safe rename dialog where you can enter a new name for the selected object and, by clicking the Generate preview option, preview the script used to change the object name. If an error would appear on renaming, it will be shown under the Warnings tab. The Sequence tab shows the process of renaming the object, listing the sequences that will be executed in order to rename the selected object:

Add surrogate key

When a primary key contains many columns, or needs to be changed for any reason, a surrogate key is worth considering. Changing a primary key in a database table requires updating all dependent objects in order to keep the database functional. To add a surrogate key, select the table in Object Explorer and choose the Add surrogate key option from the ApexSQL Refactor menu. This opens the Add surrogate key dialog where you can choose one of the existing keys and specify the Surrogate column name value. The Generate preview option shows the generated SQL script in the preview section. All dependent objects, sequences, and warnings (if any exist) will be shown under the appropriate tabs:

Change parameters

Stored procedure or function parameters can be changed by dropping and recreating them, or by using the ALTER statement. To change parameters safely, use the Change parameters option in ApexSQL Refactor. Select the stored procedure or function parameter and choose the Safe rename option from the ApexSQL Refactor menu. In the Safe rename dialog, change the parameter name and generate the preview of the SQL script. Under the appropriate tabs, you can review all dependent objects, warnings (if any exist), and the sequences that will be executed on renaming the parameter:

Replace one-to-many relationship

To use this option select the table from the Object Explorer and choose the Replace one-to-many relationship option from the ApexSQL Refactor menu. This will open a dialog where you can specify the name of the associative table, choose the dependent table, and a relationship. The Generate preview option generates a SQL script, which replaces a relationship. Under the appropriate tabs, you can review warnings, sequences, and dependent objects:

Copy code as

This option converts SQL code into supported programming languages: Java, VB.NET, C#, Perl, PHP, Delphi, Power Builder, Ruby, and C++. You can add additional templates for other programming languages by choosing the Customize languages option from the Copy code as submenu. To convert SQL code into any of the listed languages, point to a query window with the SQL code you want to convert, and choose the language from the list. Once you click the language in the list, open a new query window and paste the created code:

The Customize language template dialog allows you to edit templates for natively supported languages, or add new templates. Here you can enter a code that will be inserted before/after SQL code, define escape character for quotes, and preview the defined settings:

Unused variables and parameters

Parameters or variables that are declared, or assigned a value, but never used or queried in any statement such as UPDATE, EXECUTE, WHERE, INSERT, or PRINT are unused. ApexSQL Refactor can highlight unused SQL objects and help clean up SQL code. You can find unused objects inline, while typing SQL code. To find unused parameters and variables, run the Unused parameters and variables command from the ApexSQL Refactor menu:

If there are any declared but unused variables or parameters, ApexSQL Refactor will find them. To confirm that a highlighted parameter/variable is unused, bring the mouse pointer over it, and the tooltip comes up:

Object name qualifying

The Object name qualifying feature enables you to refactor SQL code so that the owner (schema/user) of objects, the object name, or the alias name is added. When an object name is qualified, SQL Server does not have to check whether the current user is the owner of the object, which means fewer resources used and, as a result, a faster-executing query. The result of using this option can easily be reverted by clicking Undo in SSMS or Visual Studio. To apply the Object name qualifying option, choose it from the ApexSQL Refactor menu.
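
As a small hypothetical illustration (the table and column names are mine, not from the tool’s documentation), qualifying turns the first query below into something like the second:

-- Before: unqualified object name (hypothetical example)
SELECT Name FROM Product;

-- After: schema-qualified object and column names
SELECT Product.Name FROM dbo.Product;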

Encapsulate code as

Encapsulating SQL code means turning the selected code into a database object, which makes it easier to reuse. ApexSQL Refactor allows you to encapsulate SQL code as a stored procedure, a view, a scalar inline function, or a table inline function. To encapsulate SQL code as one of the mentioned objects, select it in the query window and choose the Encapsulate code as option from the ApexSQL Refactor menu. Select the appropriate object, depending on what you want to encapsulate, and a new window opens. Give the object a name and assign it a schema. If you click the Generate preview button, a SQL script will be created and the parameters will be listed in the Parameters section. When everything is set, click the Encapsulate button:
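
For instance, encapsulating a simple SELECT as a view might produce a script along these lines (all object names here are hypothetical):

-- Hypothetical result of encapsulating a query as a view
CREATE VIEW dbo.vw_ActiveCustomers
AS
SELECT CustomerID, FirstName, LastName
FROM dbo.Customer
WHERE IsActive = 1;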

Expand wildcards

This allows you to expand wildcards, e.g. “*”, used in SQL into the full column list. This can benefit the performance of a SQL query, as SQL Server will no longer have to expand the wildcard itself before the query execution. If you type in the query:

SELECT * FROM Person.Address

After applying the Expand wildcard option, the above query will be transformed as follows:

SELECT Address.AddressID,
       Address.AddressLine1,
       Address.AddressLine2,
       Address.City,
       Address.StateProvinceID,
       Address.PostalCode,
       Address.SpatialLocation,
       Address.rowguid,
       Address.ModifiedDate
FROM Person.Address;

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Andy Defines Basic RDBMS: Isolation in Processes – Notes from the Field #038

[Note from Pinal]: This is a new episode of the Notes from the Field series. Usually I give an introductory note; however, this time there is no need for one. This note is from Andy, and as we all know, he is an amazing person when it comes to understanding the fundamentals. He has written this blog post in such an interesting way that you must read it to understand the very basic fundamentals of isolation.


When I think about SQL Server isolation in processes, it reminds me of eggs. Our family raises chickens for their eggs. Fresh eggs are very tasty, but there’s always the risk of a bad egg.

In the image above, I am preparing to scramble five eggs. I have cracked and opened five eggs (you can tell by the number of eggshells), but only four egg yolks are shown. “Why are there only four yolks, Andy?” I’m glad you asked.

My process for opening the eggs involves first dropping the contents of the egg into the mug, examining the contents, then – if satisfactory – adding them to the bowl for mixing. You don’t have to do this to make five scrambled eggs; you can crack the eggs right over the bowl.

But what happens when you open a bad egg? You risk ruining the entire batch of mostly good eggs. If you crack the eggs over the bowl, you have to pick five good eggs in a row to get a batch that’s ready to scramble. You may get lucky and it may only require five eggs. But the risk of a bad egg is ever present. You could get one good egg, followed by a bad egg. Now you have to throw both the good and bad egg out and begin again. The next time you may get three good eggs and then one bad egg. Now you’ve wasted three good eggs.

Isolating the eggs is a good practice. If I first empty the egg into a mug, I have constrained the process so that I only risk one egg at a time. In so doing, I have successfully mitigated risk to the least possible unit. I will only throw out the bad eggs without risking contamination of the good eggs in the bowl.

Isolation is generically defined as, “the process of separating somebody or something from others, or the fact of being alone and separated from others.”

In database terms, isolation is one of the four defining properties (i.e., atomic, consistent, isolated, durable—remember these by using the acronym ACID) of a Relational Database Management System (RDBMS). Similar to isolating bad eggs from the good, RDBMS isolation keeps individual database transactions from intermingling with each other during execution. It’s not that other transactions are bad; we just want to keep them separated so that data from one transaction doesn’t corrupt data from another transaction.

Are isolated transactions completely unaffected by each other? No, unlike the example of completely isolating a bad egg from the mix of good eggs, RDBMS isolation doesn’t prevent one transaction from influencing or impeding another transaction. An example of influence is resource contention; an example of impedance is locking. Isolation simply guarantees the results of the transaction will not be affected by concurrently executing transactions.
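
To tie the analogy back to SQL Server, here is a minimal sketch (the dbo.Accounts table is hypothetical) of a transaction whose intermediate state stays isolated from concurrent readers under the default READ COMMITTED isolation level:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
    -- The uncommitted change below is invisible to other READ COMMITTED
    -- transactions until we commit or roll back
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
COMMIT TRANSACTION;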

You can learn more about these properties from this awesome post by my friend, Pinal Dave: SQL Server – ACID (Atomicity, Consistency, Isolation, Durability).

If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL Authority News – Webinar: SQL Server 2014 In-Memory OLTP and SafePeak In-Memory Caching

In the past couple of months many of us have heard about the new SQL Server 2014 and its in-memory technologies, such as In-Memory OLTP for high-transaction applications and columnstore indexes for BI apps. This comes in addition to various in-memory technologies outside of the SQL Server world, from the likes of Oracle, IBM, or a few NewSQL startups.

There are plenty of benefits of in-memory solutions:

  • Offloading expensive repeated database requests
  • Acceleration of Database speed and application performance
  • Scalability of applications
  • Reducing database resources and licensing costs

The level of interest in in-memory performance technologies is clearly on the rise, so we’ll try to provide you with news, investigation summaries, and interesting webcasts, especially when it comes to SQL Server databases, applications, and users.

Choosing the right solution for you can be based on many factors:

  • Application stage: a Greenfield new application, a Live production homegrown application or Commercial off-the-shelf application
  • Application characteristics: Operational vs. Analytical BI application
  • Effort level and time available to resolve a challenge: the level of effort, resources and costs you want to apply in setup, development, testing and deployment

Personally, I love webinars, as they give an opportunity to learn one or two technologies very quickly, many times combining a view from 10,000 feet with a practical 100-foot level; you learn various aspects of the technologies, whether how to get started or how to define a realistic plan.

My friends at SafePeak, the software vendor of in-memory dynamic caching for performance acceleration of SQL Server applications, are running a webinar next week (Wednesday the 30th) under the title: SQL Server 2014 In-Memory OLTP and SafePeak In-Memory Caching.

The webinar plans to review SQL Server 2014 In-Memory OLTP: how it works, the scenarios where it fits more and less, key benefits, key limitations (and how to overcome some of them), and things to consider when planning to use it. Additionally, the webinar will review SafePeak’s automated in-memory caching for SQL Server applications – its internals and how it works, where it fits more and where it fits less – and run a comparison of SQL 2014 In-Memory OLTP and SafePeak caching, probably (as usually happens) with a live demo of both technologies to get a better grip on the content.

Should be interesting…

You can register for the webinar here.

Reference: Pinal Dave (http://blog.sqlauthority.com)