Transaction:
Let's discuss a real-world scenario. When you give something to me and I take it, that is a transaction. When you withdraw money from an ATM and receive it, that is a transaction. But what happens when you withdraw money from an ATM and the account still shows the same balance as before the withdrawal? What happens when you and your wife both withdraw all the money from your joint account at the same time from different ATMs? There must be some methodology to keep track of all such conflicting conditions.
To achieve the above in a database system, we have the locking mechanism. It works like this: suppose there is a room that only one person can access at a time with a provided user key and password, but another person wants the same room at the same time for a different purpose. The security interface should be able to manage this, so we can conclude that the security system must provide different types of security codes or passwords to the corresponding people. Let's take a deeper look. Suppose you are doing a transaction to withdraw money from an ATM, and at the same time the bank manager is doing a routine check of your transactions, which is a totally different operation, and the bank teller is checking your account for the remaining balance. All these operations are different, but they access the same entity or resource: your account information kept inside the database. Of these operations, only yours writes to the database, since you are withdrawing money and the remaining balance has to be updated. So a proper mechanism must be implemented here to ensure these operations run smoothly without conflict. Here that safety is ensured by putting locks (and, of course, choosing the type of lock) on each type of operation, which means you isolate the resource from other transactions that may hamper its consistency. Here comes the role of isolation levels.
Let's have a look at locks and the types of locks. A lock can be treated as a policy that prevents you or a process from performing an action (one that may conflict with other actions) on an object or resource if that object or resource is already occupied by another process or user. It's like proposing to someone who is already with someone else. For such situations, we have different types of locks.
Types of Locks
Shared Locks (S): This lock is useful when you are doing read operations with no manipulation, i.e. no write operations (Update/Delete/Insert). Because writers must wait until shared locks are released, reading under a shared lock avoids dirty reads.
Exclusive Locks (X): These locks are the big possessive type. They are not compatible with any other locks, meaning an exclusive lock cannot be granted if any other lock is already held on the resource. These locks are used for data modification.
Update Locks (U): This can be seen as a combination of the two locks above (Shared and Exclusive). Let's take an example: suppose you are updating one particular record out of 100. First you search for that record under an update lock, and then, when you modify it, the update lock is converted to an exclusive lock. Because only one transaction at a time can hold an update lock on a resource, this avoids deadlocks to a great extent.
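The search-then-update pattern above can be sketched with the UPDLOCK table hint (a minimal sketch against the sample TableName table created later in this article; the Id = 1 row and the new Col1 value are illustrative):

```sql
BEGIN TRAN
-- Take an update (U) lock while locating the row instead of a plain
-- shared (S) lock; only one session can hold a U lock on the row, so
-- two readers cannot both queue up to convert S -> X and deadlock.
SELECT Col1
FROM TableName WITH (UPDLOCK)
WHERE Id = 1

-- The U lock converts to an exclusive (X) lock when the write happens.
UPDATE TableName
SET Col1 = 99
WHERE Id = 1
COMMIT
```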
Intent Locks: Also called demand locks, these are used to establish a lock hierarchy. For example, suppose we are reading some data under a shared lock and at the same time another user wants to modify that data with an exclusive lock. Shared locks are compatible with other shared locks, so any number of shared locks can be obtained on a piece of data, and the user wanting the exclusive lock could be made to wait indefinitely for all the shared-lock operations to complete. Intent locks are very useful in this situation: if the second user comes in with an intent exclusive lock, no further transaction can grab a shared lock, and the second user can claim the exclusive lock as soon as the first transaction completes.
There are three main types of intent locks:
• Intent Shared (IS)
• Intent Exclusive (IX)
• Shared with Intent Exclusive (SIX)
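One way to see intent locks in action (a sketch, assuming the sample TableName table created later in this article) is to hold a row-level lock open in one session and inspect sys.dm_tran_locks from another:

```sql
-- Session 1: acquire an exclusive (X) lock on one row and keep it open.
BEGIN TRAN
UPDATE TableName SET Col1 = 5 WHERE Id = 1
-- (do not commit yet)

-- Session 2: list session 1's locks. Expect intent exclusive (IX) locks
-- at the OBJECT and PAGE levels and an X lock on the KEY resource.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 52   -- replace 52 with session 1's actual SPID
```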
Schema Locks: These locks protect the schema of the database and deal with DDL commands. There are two types of schema locks:
• Schema Modification (Sch-M): applied only when modifying the structure of an object.
• Schema Stability (Sch-S): applied when querying data; it prevents execution of DDL commands against the table while the query runs.
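As a sketch of how the two schema locks conflict (again assuming the sample TableName table; the added column is purely illustrative):

```sql
-- A DDL change takes a schema modification (Sch-M) lock, which conflicts
-- with the schema stability (Sch-S) lock taken by every query, so even a
-- plain SELECT in another session waits until this transaction ends.
BEGIN TRAN
ALTER TABLE TableName ADD Col4 INT NULL   -- Sch-M held until commit/rollback

-- In another session, SELECT * FROM TableName now blocks.
ROLLBACK   -- releases the Sch-M lock
```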
Bulk Update Locks (BU): This lock is useful when performing bulk operations on the table. It prevents other normal T-SQL operations from being executed against the table while the bulk operation runs.
Lost Updates: A lost update generally occurs when more than one transaction tries to update the same record at the same time. For example, when one update successfully writes data and a second update then accidentally overwrites that information, the first change is called a lost update.
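A lost update can be sketched like this (assuming the sample TableName table; the timeline comments mark where a second session interleaves):

```sql
-- Session 1 reads the current value into a variable...
DECLARE @c INT
SELECT @c = Col1 FROM TableName WHERE Id = 1   -- reads, say, 1

-- ...meanwhile Session 2 runs:
--   UPDATE TableName SET Col1 = Col1 + 10 WHERE Id = 1   -- writes 11

-- ...then Session 1 writes back its stale value plus one:
UPDATE TableName SET Col1 = @c + 1 WHERE Id = 1   -- writes 2; Session 2's update is lost
-- Holding an UPDLOCK across the read and the write prevents this.
```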
Isolation Level:
Isolation levels in SQL Server control the way locking works between transactions. The isolation levels are categorized by the types of locks they use. At a lower isolation level, more users can access the same resource without conflict, but they may face concurrency-related issues such as dirty reads and data inaccuracy (described below). At a higher isolation level, these issues are eliminated, but only a limited number of users can access the resource concurrently.
SQL Server 2008 supports the following isolation levels
• Read Uncommitted
• Read Committed (The default)
• Repeatable Read
• Serializable
• Snapshot
Before I run through each of these in detail, you may want to create a new database to run the examples. Run the following script on the new database to create the sample data.
Note: We also want to drop the TableName table and re-run this script before each example to reset the data.
GO
CREATE TABLE TableName
(
Id INT IDENTITY,
Col1 INT,
Col2 INT,
Col3 INT
)
INSERT INTO TableName (Col1, Col2, Col3)
SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3

Before we look at each isolation level in detail, we need to understand two things:
Dirty Read: Quite often in database processing we come across a situation where one transaction changes a value and a second transaction reads that value before the original change has been committed or rolled back. This is known as a dirty read, because there is always the possibility that the first transaction will roll back the change, leaving the second transaction having read an invalid value.
Phantom Reads: These occur when an insert or delete is performed against a row that belongs to a range of rows being read by a transaction. Relatedly, inconsistent analysis (a non-repeatable read) occurs when a transaction accesses the same row several times and reads different data each time.
Read Uncommitted
This is the lowest isolation level there is. Read Uncommitted causes no shared locks to be requested, which allows you to read data that is currently being modified in other transactions. It also allows other transactions to modify data that you are reading. As you can probably imagine, this can cause unexpected results in a variety of ways. For example, data returned by the select could be in a halfway state if an update was running in another transaction, causing some of your rows to come back with the updated values and some not. To see Read Uncommitted in action, let's run Query1 in one tab of Management Studio and then quickly run Query2 in another tab before Query1 completes.
-- Query1
BEGIN TRAN
UPDATE TableName SET Col1 = 3
WAITFOR DELAY '00:00:10'
--ROLLBACK
--COMMIT

-- Query2
SELECT * FROM TableName
SELECT * FROM TableName WITH (NOLOCK)

Read Committed
This is the default isolation level and means selects will only return committed data. Select statements issue shared lock requests against the data you are querying; this causes you to wait if another transaction already holds an exclusive lock on that data. Once you have your shared lock, any other transaction trying to modify that data will request an exclusive lock and be made to wait until your Read Committed transaction finishes.
-- Query1
BEGIN TRAN
UPDATE TableName SET Col1 = 3
WHERE Id <= 3
WAITFOR DELAY '00:00:10'
--ROLLBACK
--COMMIT

-- Query2
--SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT * FROM TableName
DBCC useroptions

Repeatable Read
This is similar to Read Committed, but with the additional guarantee that if you issue the same select twice in a transaction you will get the same results both times.
-- Query1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM TableName
WAITFOR DELAY '00:00:10'
SELECT * FROM TableName
ROLLBACK

-- Query2
UPDATE TableName SET Col1 = 1

Serializable
This isolation level takes Repeatable Read and adds the guarantee that no new rows will be added to the ranges you have read, eliminating the chance of Phantom Reads.
-- Query1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM TableName
WAITFOR DELAY '00:00:10'
SELECT * FROM TableName
ROLLBACK

-- Query2
INSERT INTO TableName (Col1, Col2, Col3)
VALUES (100,100,100)

Snapshot
This provides the same guarantees as Serializable. So what's the difference? Under snapshot, inserting or updating data doesn't block other queries against the data touched by the snapshot transaction. But you need to enable snapshot isolation on your database first:
ALTER DATABASE databasename
SET ALLOW_SNAPSHOT_ISOLATION ON

Summary
Now we have a good idea of how each of the different isolation levels works. We can see that the higher isolation levels allow less concurrency and cause more blocking on the table. We should always try to use the lowest isolation level that meets our needs; I would recommend Read Committed, which is the default isolation level in SQL Server.