Concurrency, simply put, is the capability of a machine to support two or more transactions working with the same data at the same time. This usually becomes an issue when data is being modified; plain retrieval of data does not cause the problem. Most concurrency problems can be avoided with SQL locks. There are four types of concurrency problems seen in normal programming.
1) Lost Update – This problem occurs when two transactions are involved and each is unaware of the other. The transaction that commits later overwrites the changes made by the earlier update.
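A lost update can be sketched in T-SQL like this (the table, column, and values are hypothetical; each session is a separate connection):

```sql
-- Hypothetical table: dbo.Products(ProductID INT, Stock INT), row 1 has Stock = 10.
-- Session 1 and Session 2 run concurrently from separate connections.

-- Session 1: reads Stock = 10, intending to subtract 2
BEGIN TRAN;
DECLARE @Stock INT;
SELECT @Stock = Stock FROM dbo.Products WHERE ProductID = 1;  -- reads 10

-- Session 2 (meanwhile): also reads 10, subtracts 5, and commits
-- UPDATE dbo.Products SET Stock = 10 - 5 WHERE ProductID = 1; -- Stock is now 5

-- Session 1: writes back based on its stale read, overwriting Session 2
UPDATE dbo.Products SET Stock = @Stock - 2 WHERE ProductID = 1;
COMMIT;
-- Session 2's change is lost: final Stock is 8 instead of the expected 3.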
2) Dirty Reads – This problem occurs when a transaction selects data that has not yet been committed by another transaction, so it may read data that no longer exists once the transactions are over.
Example: Transaction 1 changes a row. Transaction 2 reads the changed row. Transaction 1 rolls back the change. Transaction 2 has selected a row that never really existed.
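The same scenario as a T-SQL sketch (hypothetical table, two separate connections):

```sql
-- Session 1: updates a row but does not commit yet
BEGIN TRAN;
UPDATE dbo.Products SET Stock = 999 WHERE ProductID = 1;

-- Session 2: with dirty reads enabled, sees the uncommitted value
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Stock FROM dbo.Products WHERE ProductID = 1;  -- returns 999

-- Session 1: rolls back; the value Session 2 read never officially existed
ROLLBACK;
```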
3) Nonrepeatable Reads – This problem occurs when two SELECT statements over the same data return different values because another transaction has updated the data between the two SELECT statements.
Example: Transaction 1 selects a row, which is then updated by Transaction 2. When Transaction 1 later selects the same row, it gets a different value.
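Sketched in T-SQL (hypothetical table, two connections):

```sql
-- Session 1: reads the same row twice inside one transaction
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRAN;
SELECT Stock FROM dbo.Products WHERE ProductID = 1;  -- e.g. returns 10

-- Session 2 (between the two selects): commits a change
-- UPDATE dbo.Products SET Stock = 20 WHERE ProductID = 1;

SELECT Stock FROM dbo.Products WHERE ProductID = 1;  -- now returns 20
COMMIT;
-- Under REPEATABLE READ or higher, the second SELECT would still return 10.
```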
4) Phantom Reads – This problem occurs when an UPDATE/DELETE is running over one set of rows while an INSERT/UPDATE adds rows to the same set, leaving the earlier transaction with inconsistent data when both transactions are over.
Example: Transaction 1 is deleting 10 rows that are marked for deletion; at the same time, Transaction 2 inserts a new row marked as deleted. When Transaction 1 finishes deleting, there will still be a row marked for deletion.
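A phantom read sketched in T-SQL (hypothetical table with an `IsDeleted` flag column, two connections):

```sql
-- Session 1: evaluates the same predicate twice
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
SELECT COUNT(*) FROM dbo.Products WHERE IsDeleted = 1;  -- e.g. 10

-- Session 2 (in between): inserts a new row matching the predicate
-- INSERT INTO dbo.Products (ProductID, Stock, IsDeleted) VALUES (99, 0, 1);

SELECT COUNT(*) FROM dbo.Products WHERE IsDeleted = 1;  -- now 11: a phantom row
COMMIT;
-- REPEATABLE READ protects existing rows but not new ones;
-- SERIALIZABLE (or SNAPSHOT) is needed to prevent phantoms.
```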
When two or more transactions are updating the same data, concurrency is the biggest issue. I commonly see people toying with isolation levels or locking hints (e.g. NOLOCK), which can very well compromise your data integrity and lead to much larger issues in the future.
Here is a quick mapping of isolation levels to the concurrency problems they allow:
| Isolation Level | Dirty Reads | Lost Update | Nonrepeatable Reads | Phantom Reads |
|---|---|---|---|---|
| Read Uncommitted | Yes | Yes | Yes | Yes |
| Read Committed | No | Yes | Yes | Yes |
| Repeatable Read | No | No | No | Yes |
| Snapshot | No | No | No | No |
| Serializable | No | No | No | No |
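The isolation level is set per session before a transaction begins; a minimal sketch (the table is hypothetical):

```sql
-- SERIALIZABLE prevents all four anomalies, at the cost of more blocking
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT Stock FROM dbo.Products WHERE ProductID = 1;
COMMIT;
```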
I hope this small 400-word article gives a quick understanding of concurrency issues and their relation to isolation levels.
Reference: Pinal Dave (https://blog.sqlauthority.com)
I’ve been examining the database of one of our production guidance systems lately. It has been running really slowly in some operations, and the vendor seems unable to fix it.
So first I checked the top 10 slowest queries. I quickly found that the reason is missing indexes; some tables didn’t even have a clustered index. I wrote a report stating that the tables badly need indexing and that there were some quite horrific queries that need to be rethought.
Then the horror began. I realized that every SQL query was filled with NOLOCK hints! Every stored procedure started with SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. So instead of implementing a proper indexing scheme, the vendor was trying to fix performance problems by avoiding locks entirely!
We are talking about a system elementary to our business. A simple hiccup could cost tens of thousands of euros; a major hiccup halts production completely. I’m stunned that someone would do such sloppy work on such an important system.
Luckily this won’t be my headache much longer, since I’ll be switching jobs soon. I just wanted to write about this to tell you that there are things you simply must not do, like enabling dirty reads in a case like this. Maybe there are situations where dirty reads are acceptable, but I have yet to see one. If you are in doubt about what’s causing performance problems, please ask someone more experienced in the field. There’s no shame in asking; no one can know everything about everything. That’s why we build our social networks, so we can help each other out.
With the misconception that using (NOLOCK) eliminates locking in a database, I have seen many databases suffer from the problem of dirty and non-repeatable reads.
My suggestion is to acquire locks for as brief a period as possible, and then implement optimistic concurrency (I wrote about this in one of my posts – ).
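One common way to implement optimistic concurrency in SQL Server is a `rowversion` column: read the version along with the data, and make the update conditional on it. A minimal sketch (table and column names are hypothetical):

```sql
-- Hypothetical table with an automatically maintained version column
CREATE TABLE dbo.Accounts
(
    AccountID INT PRIMARY KEY,
    Balance   MONEY NOT NULL,
    RowVer    ROWVERSION
);

-- Read the row together with its current version
DECLARE @OriginalVer BINARY(8), @Balance MONEY;
SELECT @Balance = Balance, @OriginalVer = RowVer
FROM dbo.Accounts
WHERE AccountID = 1;

-- Write back only if nobody changed the row in the meantime
UPDATE dbo.Accounts
SET Balance = @Balance + 100
WHERE AccountID = 1
  AND RowVer = @OriginalVer;

IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by another transaction; retry.', 16, 1);
```

No locks are held between the read and the write; a concurrent change simply makes the conditional update affect zero rows, and the application retries.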
(NOLOCK), when used judiciously, is fine, but when used in excess it might cause unpredictable application behaviour.
Be courteous. Drive responsibly,
Superb post, Pinal.
To date I was never able to understand the difference between these 4 terms explicitly.
I am sure this post is going to help all the SQL Server knowledge seekers like me.
Hi Pinal sir,
I am a great fan of you and your blogs.
Recently I came across your 109th blog.
When I run the script in my SSMS, it gives an error,
so I have modified the script:
/* FUNCTION TO GET WEEKDAYS BETWEEN TWO GIVEN DATES */
IF EXISTS (SELECT * FROM sys.objects WHERE type = 'FN' AND name = 'Fn_GetWeekDays')
    DROP FUNCTION Fn_GetWeekDays
GO
CREATE FUNCTION Fn_GetWeekDays
( @StartDate DATETIME,
  @EndDate DATETIME )
RETURNS INT
AS
BEGIN
    DECLARE @TotalDays INT
    SET @TotalDays = 0
    WHILE (@StartDate <= @EndDate)
    BEGIN
        -- Count Monday through Friday only
        IF (SUBSTRING(DATENAME(DW, @StartDate), 1, 3) IN ('MON','TUE','WED','THU','FRI'))
            SET @TotalDays = @TotalDays + 1
        SET @StartDate = DATEADD(DAY, 1, @StartDate)
    END
    RETURN @TotalDays
END
GO
Please forgive me if I am wrong.
Process 1:
Table 1 – reads the number
Table 2 – inserts the number with details
Meanwhile, outside of that transaction, Process 2:
Table 1 – reads the number
Table 2 – inserts the number with details
Process 1 blocks Process 2 because of the READ COMMITTED transaction. It actually takes an intent exclusive (IX) lock on the entire table, so Process 2 cannot even select the old data.
Please help: how can I view/insert the data while Process 1 is running?
Putting NOLOCK everywhere they see is one of the biggest mistakes novice DBAs make. Concurrency is not a simple thing to achieve, and every responsible DBA should investigate all the isolation levels deeply to optimize production server performance.