SQL SERVER – UPDATE From SELECT Statement with Condition

An email from an old college friend landed in my mailbox:

“Hey Pinal,

I have two tables. I want to conditionally update data in one table based on another table. How can I do that? I have included sample scripts and an image for further explanation.

Thanks!”

It is always a delight to receive an email from an old college friend, and it is even more interesting when they have a question where I can help. Here is the question and a sample script.

The user had two tables – ItemList and ItemPrice. The requirement was to update the Price column of the ItemPrice table to the US price, which required dividing the existing value by 60. Here is the sample script for the tables displayed in the image.

USE tempdb;
GO
CREATE TABLE ItemList
(ID INT, ItemDesc VARCHAR(100), Country VARCHAR(100));
INSERT INTO ItemList (ID, ItemDesc, Country)
SELECT 1, 'Car', 'USA'
UNION ALL
SELECT 2, 'Phone', 'India'
UNION ALL
SELECT 3, 'Computer', 'USA';
GO
CREATE TABLE ItemPrice
-- Price as a numeric type so the division below does not rely on implicit conversion
(ID INT, Price DECIMAL(10, 2));
INSERT INTO ItemPrice (ID, Price)
SELECT 1, 5000
UNION ALL
SELECT 2, 10000
UNION ALL
SELECT 3, 20000;
GO
-- SELECT Data
SELECT *
FROM ItemList;
SELECT *
FROM ItemPrice;

Now let us write a script which will update the table as per our expectation.

-- Update Statement
UPDATE ip
SET ip.Price = ip.Price / 60
FROM ItemPrice ip
INNER JOIN ItemList il ON il.ID = ip.ID
WHERE il.Country = 'USA';
GO

Now let us verify the result by selecting the data from the ItemPrice table.
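IDs 1 and 3 (the USA items) should now show roughly 83.33 and 333.33 respectively, while ID 2 (India) stays untouched.

-- Verify the result
SELECT *
FROM ItemPrice;
GO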

Now you can see how we can update one table from another table with a condition. You can clean up the above code by dropping the tables.

-- Clean up
DROP TABLE ItemPrice;
DROP TABLE ItemList;
GO

I hope this quick script helps. Let me know if there is any better alternative.

Reference: Pinal Dave (http://blog.sqlauthority.com)


SQL SERVER – How to Check Snapshot Isolation State of Database

It is very easy to check the snapshot isolation state of a database, and here is a quick script for the same.

SELECT s.name
, s.snapshot_isolation_state
, s.snapshot_isolation_state_desc
, s.is_read_committed_snapshot_on
FROM sys.databases s

Running the above code returns a result set describing the snapshot isolation status of each database. Here is the screenshot which describes the same.

Just on a side note, remember that READ COMMITTED SNAPSHOT does optimistic reads and pessimistic writes, whereas SNAPSHOT does optimistic reads and optimistic writes. It is recommended that you go for READ COMMITTED SNAPSHOT for most applications where you want to implement row versioning. Microsoft has a detailed article on this subject over here.
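If you decide to turn either option on, here is a minimal sketch of the ALTER DATABASE commands (YourDatabase is a placeholder name; switching READ_COMMITTED_SNAPSHOT on needs exclusive access to the database, hence the WITH ROLLBACK IMMEDIATE):

-- Allow sessions to request SNAPSHOT isolation
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Switch the default READ COMMITTED behavior to row versioning
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
GO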

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – SQL Server 2008 R2 Service Pack 3 – Download

It has been a long time since SQL Server 2008 R2 got a Service Pack update. Microsoft has finally released SQL Server 2008 R2 Service Pack 3 and its feature pack. SQL Server 2008 R2 SP3 contains fixes for reported issues as well as the hotfixes provided since SQL Server 2008 R2 Service Pack 2, up to and including Cumulative Update 13.

I personally switched to SQL Server 2014 a few months ago and I am happy with its performance and robust behavior. Many customers and clients are still using SQL Server 2012. However, if you are using SQL Server 2008 R2, I suggest that you look at upgrading to the latest version of SQL Server, or at least update your installation with the latest service pack.
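If you are not sure which build or service pack level a server is currently on, SERVERPROPERTY gives a quick answer:

-- Check the current version, service pack level and edition
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel, -- e.g. RTM, SP2, SP3
       SERVERPROPERTY('Edition') AS Edition;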

You can download SQL Server 2008 R2 Service Pack 3 from the following link:

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Extension of the SQL Server Natively Compiled Stored Procedures

Earlier I wrote a blog post about the Location of Natively Compiled Stored Procedure and Naming Convention, covering where natively compiled stored procedures live on disk.

In that blog post, I used the following image.

One of the questions I received was: what do the various extensions like .c, .dll, .obj etc. mean? My friend Balmukund Lakhani explains that very well in his blog post; however, for reference it is listed here once again.

File Extension – Usage
.c – C source file generated by the In-Memory OLTP engine
.dll – Natively compiled DLL to be loaded into the SQL Server process
.mat.xml – MAT export file
.obj – Object file generated by the C compiler
.out – Compiler output file
.pdb – Symbol file for the DLL, used for debugging code issues
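As a side note, if you are curious which of these natively compiled DLLs are currently loaded into your SQL Server process, the sys.dm_os_loaded_modules DMV can tell you; a minimal sketch (the description filter is what I have observed for these modules, so treat it as an assumption on your build):

-- List natively compiled (XTP) DLLs loaded into the SQL Server process
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL';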

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Dashboard: Historic Information

There are a lot of games that I play with my daughter in my spare time. Some of them are just for fun, and a lot of them mix fun with some learning. As she started to grow up, it was important for me to weave learning into her day-to-day activities. So as soon as she was born, one of the biggest decisions I took was to travel and show her the various places in India and even abroad. The idea is to visit these places for fun, but also for their historic importance. By experiencing different cultures, people, places, food habits and so on, she gets to learn a lot in the process. I personally feel it is up to each parent to find their own sweet spot of making learning fun for their kids while balancing everything else they might do. It is a long process, and a lot of planning goes into making them realize that what they see in the books is what they are visiting. I have huge admiration for the culture and history each country brings, and we can always learn a lot from our ancestors.

Talking about history brings us to the next set of reports from the Performance Dashboard. They are grouped under Historical Information and Miscellaneous Information, and are logically organized around waits and expensive queries. We are looking at expensive queries based on:

  1. CPU
  2. Logical Reads
  3. Logical Writes
  4. Duration
  5. Physical Reads
  6. CLR Time

Apart from this, the historical information also gives the Waits and IO Statistics across each of the databases. We can find the same in the image below.

Historical Waits

This report has two sections, and both come from the same dataset. The top section is a bar chart and the bottom section shows the details for each wait.

The dataset for this report comes from the msdb database.

EXEC msdb.MS_PerfDashboard.usp_DmOsWaitStats

Executing this gives a table of all the wait types and the amount of waiting on each of them. Since I had started the server only shortly before executing this report, it shows the server has been idle for most of the time, which is quite possible. Having said that, on a busy system these numbers can be completely different.
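Judging by its name, usp_DmOsWaitStats presumably wraps the sys.dm_os_wait_stats DMV, so you can get similar numbers directly; a minimal sketch (the TOP 10 and the ordering are my own illustrative choices):

-- Top waits by total wait time, straight from the DMV
SELECT TOP 10 wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;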

I have blogged about the Wait Stats extensively over this blog as a 28 part series – if you are interested, feel free to read the same from SQL SERVER – 28 Links for Learning SQL Wait Stats from Beginning.

Historical IO Report

This report is an aggregation of IO operations for each database since the server was started. Since this is IO, it compares the databases relative to each other by reads and writes. To get this information, the report calls the following stored procedure in the msdb database.

EXEC msdb.MS_PerfDashboard.usp_DBFileIO

As we can see in the figure below, this is a great indicator that, on my server, AdventureWorks2012 has taken the maximum amount of reads while my TempDB has taken the maximum writes.

The patterns that emerge tell stories that we would normally find difficult to piece together. Based on this analysis we can assume that a lot of reports or operations are run against AdventureWorks2012 and their results are worked into TempDB tables on our server. I am sure you will be able to do a similar analysis on your installation too.
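If you want the raw numbers behind this report, the sys.dm_io_virtual_file_stats function exposes cumulative IO per database file; here is a hedged sketch (the grouping and ordering are my own choices):

-- Cumulative reads and writes per database since the server started
SELECT DB_NAME(vfs.database_id) AS database_name,
       SUM(vfs.num_of_reads) AS total_reads,
       SUM(vfs.num_of_writes) AS total_writes,
       SUM(vfs.num_of_bytes_read) AS bytes_read,
       SUM(vfs.num_of_bytes_written) AS bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY total_reads DESC;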

The second section of the report shows the details of the Top 20 objects for each database based on IO. For simplicity's sake I have expanded the AdventureWorks2012 database for reference here.

Expensive Queries Section

The next logical section we see in the report is around expensive queries. Each report shows the same data, logically sorted based on CPU, IO, reads, writes etc. If we run Profiler, we can see the below query hitting our msdb database.

EXEC sp_executesql @stmt=N'exec msdb.MS_PerfDashboard.usp_QueryStatsTopN1 @OrderBy_Criteria',@params=N'@OrderBy_Criteria NVarChar(max)',@OrderBy_Criteria=N'CPU'

As mentioned above, the same query gets fired with a different sort order each time. The top section shows a graphical bar chart of the top CPU consuming queries.

And as usual, the second section outlines the details. In each case the details of CPU, IO, reads etc. are also shown. This is marked in the figure below.

On expanding each of the columns, we can also check the Total, Max, Min and Avg for CPU, Duration, Physical Reads, Logical Writes and so on.

Miscellaneous Reports

This contains a bunch of reports that act as an addendum to the reports already available. The one that is new and worth a mention is the XEvents Sessions report.

XEvents are a deeper topic to understand. You can get a primer from SQL SERVER – Introduction to Extended Events – Finding Long Running Queries. Fundamentally, the report shows the XEvents sessions active on the server at this moment in time. As you might have guessed by now, the report calls an SP from the msdb database to fill up the contents.

EXEC msdb.MS_PerfDashboard.usp_XEventSessions

At the end of the day, these stored procedures join various DMVs to get the output. In this case the report uses sys.dm_xe_sessions, sys.dm_xe_session_targets and sys.dm_xe_session_event_actions to get the values. As we have said a number of times before, the power of DMVs is always underestimated when working with SQL Server.
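As a rough sketch of what such a join looks like (the column selection is my own; the sessions and targets are related through the session address columns):

-- Active Extended Events sessions and their targets
SELECT s.name AS session_name,
       s.create_time,
       t.target_name
FROM sys.dm_xe_sessions AS s
INNER JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address;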

Another very important link is about Active Traces. To demonstrate the power of this report, I have started a Profiler trace on the SQL Server instance. Trace ID 1 is the default trace which runs with every default installation of SQL Server, and many standard reports depend on it.
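You can see the same list outside the report by querying the sys.traces catalog view; a minimal sketch (is_rowset = 1 flags a client/GUI rowset trace such as the one Profiler creates):

-- Active traces on the instance
SELECT id, status, path, is_rowset, is_default, event_count
FROM sys.traces;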

Trace ID 2 in the report below has some warnings:

  • Rowset trace: client/GUI traces can negatively impact server performance.
  • This trace is configured to capture one or more frequently occurring events or events which typically produce a large amount of trace data; tracing these events may negatively impact system performance. The expensive events are highlighted in yellow in the table below.

As we can see below, this report displays a warning for any trace that is configured to capture events which occur frequently or which typically produce a large amount of trace data. It is advisable to avoid capturing these events unless strictly required, to prevent potential performance problems on the system; capture them generally on a non-production server when you are troubleshooting a particular issue. If we click the (+) symbol, we can see the events captured by the trace, and the expensive events are highlighted.

Here is the list of events which are considered “expensive” and are highlighted in yellow automatically:

Lock:Released
Lock:Acquired
SQL:StmtStarting
SQL:StmtCompleted
SP:StmtStarting
SP:StmtCompleted
Scan:Started
Scan:Stopped
TransactionLog
Showplan Text (Unencoded)
Showplan Text
Showplan All
Showplan Statistics Profile
Audit Statement Permission Event
Audit Schema Object Access Event
Showplan XML
Showplan XML Statistics Profile
Audit Database Object Access Event

Ideally, we should use server side traces. I have seen a common misconception about client side and server side traces. People think that if they run the Profiler UI on the server itself, it is a server side trace, which is WRONG. A trace using profiler.exe is called a rowset-based trace, or client side trace, and it consumes more resources than tracing directly to a file. I would recommend my readers use a server side trace writing directly to a fast disk to minimize the performance impact. We can use the Profiler user interface to configure the events and columns we want to capture, and save those settings to a script by choosing Export – Script Trace Definition under the File menu.
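For reference, the scripted definition boils down to the sp_trace_* procedures. Below is a minimal, hedged sketch of a server side trace (the file path, event and columns are placeholder choices; the .trc extension is appended to the path automatically):

-- Minimal server side trace capturing SQL:BatchCompleted
DECLARE @TraceID INT, @maxfilesize BIGINT = 50, @on BIT = 1;

-- Create the trace, writing directly to a file on a fast disk
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyServerSideTrace', @maxfilesize, NULL;

-- Event 12 = SQL:BatchCompleted; columns 1 = TextData, 12 = SPID, 13 = Duration
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

-- Start the trace (0 = stop, 1 = start, 2 = close and delete the definition)
EXEC sp_trace_setstatus @TraceID, 1;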

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Database Taking Long Time to Create – Notes from the Field #047

[Notes from Pinal]: There are a few things which confuse us when we encounter them the first time; however, there are a few concepts which confuse us even though we face them multiple times. One such subject is a database taking a long time to create. Think about it: if you see a database which takes a long time to create, your natural reaction will be to wonder how long various other operations will take with the same database. I asked Tim if he could write further on this topic and help people understand this complex subject in simple words.

Linchpin People are database coaches and wellness experts for a data driven world. In this 47th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains it in very simple words. Please follow Tim on Twitter at @tradney.


I recently came across a support question on a forum where the user was very concerned about the health of their environment because it was taking over 10 minutes to create a 150GB database. When I read the post, two things immediately came to mind: first, that they could be creating the data file on a slow IO subsystem, and second, that they probably did not have Instant File Initialization turned on.

When data and log files are created in SQL Server, they are initialized to overwrite any existing data on the disk. This is done by filling the files with zeros, a process commonly referred to as zeroing out the file.

In SQL Server there is a setting that allows for instant initialization of the data file. This process does have a security consideration that must be understood; however, for most organizations it is not an issue. Basically, by not overwriting the free space with zeros, it could potentially allow an unauthorized user to read any data previously written to the disk.

Instant file initialization is only available if the SQL Server (MSSQLSERVER) service account has been granted SE_MANAGE_VOLUME_NAME. Members of the Windows Administrator group have this right and can grant it to other users by adding them to the Perform Volume Maintenance Tasks security policy. For more information about assigning user rights, see the Windows documentation. (Reference)
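As a side note, on newer builds (SQL Server 2012 SP4, SQL Server 2016 SP1 and later) you can verify the setting straight from T-SQL; a hedged sketch, assuming your build exposes the column:

-- 'Y' means the service account can use Instant File Initialization
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;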

In the case of the user on the forum, they were not using Instant File Initialization and decided to enable it. The user then created another database of the same size, and it was created in seconds, versus the same operation taking over 10 minutes before.

I created a blog post and video a couple of years ago walking through the process of enabling this feature. You can view it here.

If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

Note: Tim has also written an excellent book on SQL Backup and Recovery, a must have for everyone.

Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Dashboard: Current Activity Section Reports

You are driving on the highway and you want to know your speed, because there are generally cops in that area – you watch the dashboard. Your kid seems to be fiddling with your cell phone – you want to see what they are actually doing and make sure they are not accidentally calling your office colleagues. You walk into your hall and see an ad on the TV, and you want to know what channel it is. All these are classic examples of understanding the current state of the activity at hand. There are surely a number of ways to achieve the same. I am sure you get the drift.

This section of the Performance Dashboard is no different. It gives you a glimpse of the current activity happening on the system. At a high level, it is divided into User Requests and User Sessions, along with their various vital parameters. These are taken from the DMVs sys.dm_exec_requests and sys.dm_exec_sessions respectively.
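To get a feel for the underlying data, here is a hedged sketch joining those two DMVs for the current user activity (the column selection is my own):

-- Current user sessions with any request they are running
SELECT s.session_id,
       s.login_name,
       s.status AS session_status,
       r.status AS request_status,
       r.wait_type,
       r.cpu_time,
       r.logical_reads
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
WHERE s.is_user_process = 1;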

As discussed in our previous post, it is vital to track the Cache Hit Ratio, and this needs to stay in the high 90s as much as possible for an OLTP system. There are two hyperlinks that give us even more detailed information about the current system.

Clicking on “User Requests” presents the “Current Requests” report. This is a live report on the requests queued in our system at this very moment in time.

There are a number of columns that get reported in this one output. Due to image restrictions, only a part is shown above. The complete list of available columns is:

  1. Session ID: Request ID
  2. Start Time
  3. Query Text
  4. Status
  5. Wait Type
  6. Wait Time (ms)
  7. Wait Resource
  8. Blocking Session ID
  9. CPU Time (ms)
  10. Total Elapsed Time
  11. Logical Reads
  12. Physical Reads
  13. Writes
  14. Transaction Isolation Level
  15. Granted Query Memory (KB)
  16. Executing Managed Code

We have the ability to sort the result set by a column by clicking on the column name itself. Under the covers, this report is fetched using a stored procedure in the msdb database (created while deploying the Dashboard Reports). Here is what you can run:

EXEC msdb.MS_PerfDashboard.usp_RequestDetails 0

The second parameter is @include_system_processes; passing 0 means we don't want to see system processes.

Each of the queries in the “Query Text” column is a hyperlink and gives us more detail when clicked. Below is the output wherein we get the Query Text, Query Plan, Showplan XML and some vital parameters for the query. These can also be obtained from the “Execution Plan” and the “Properties” pane for a query.

We can also get this information using DMVs; for example, the plan can be obtained from sys.dm_exec_text_query_plan() by passing the plan handle.
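A minimal sketch of that DMV call, cross applied against the currently executing requests (the start and end offsets narrow the plan down to the running statement):

-- Plan XML for currently executing statements
SELECT r.session_id,
       qp.query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_text_query_plan(r.plan_handle,
                                        r.statement_start_offset,
                                        r.statement_end_offset) AS qp;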

The next section to look at is “User Sessions”. If we click on it, we are presented with all the sessions currently active on our live system. This report is fetched by running the below procedure:

EXEC msdb.MS_PerfDashboard.usp_SessionDetails @include_system_processes = 0

Remember, the Performance Dashboard reports are great for understanding what is happening in our system at that moment in time. They are not to be seen as proactive reports. If a DBA wants to quickly and interactively check the vital parameters of the system, then this is the place to go.

As you can see in the output, the “Sessions Overview” report gives us the active sessions with CPU, memory usage, logical reads, physical reads, writes, the program running on that session, NT user name and more. Part of this information is already available in Activity Monitor, but I rather like this output because it is not continuously running or refreshing.

We can click on the Session ID hyperlink here, or on the “Session ID: Request ID” hyperlink in the “User Requests” report which we discussed earlier in this blog post. Both lead us to the “Session Details” page.

For the given session and the query in question, we get to know the session's ANSI settings, login name, login time, NT user, CPU and memory usage, when the last request was fired, and much more. I am sure you will not be able to get this information so easily from the other standard reports, so try to use it wherever it might help you.

As discussed above, we can get the top section of the values using a DMV query:

SELECT session_id, login_time, host_name, program_name,
    login_name, nt_domain, nt_user_name, status, cpu_time,
    memory_usage, total_scheduled_time, total_elapsed_time,
    last_request_start_time, last_request_end_time,
    reads, writes, logical_reads, is_user_process,
    text_size, language, date_format, date_first,
    quoted_identifier, arithabort, ansi_null_dflt_on,
    ansi_defaults, ansi_warnings, ansi_padding,
    ansi_nulls, concat_null_yields_null,
    transaction_isolation_level, lock_timeout,
    deadlock_priority, row_count, prev_error
FROM sys.dm_exec_sessions

Quick Tips about Dashboard Reports

  1. We can always use the back button available at the top of each report (highlighted below).

  2. The client report viewer control used by SSMS does not support copy and paste (Ctrl+C and Ctrl+V). The workaround is to right click on the report and choose “Export to Excel”; once you open the exported file, you can copy the text.

This is just a sample of how powerful some of the DMVs are. They have tons of information that goes unnoticed by us. I hope you will start using these DMVs effectively in the future.

Reference: Pinal Dave (http://blog.sqlauthority.com)