Chapter 6: Synchronization

10.28.2008

Outline

Why do we need process synchronization?

The Critical-Section Problem

 Race condition

What are the solutions?

Peterson’s Solution

Synchronization Hardware Solution: TestAndSet, Swap

Semaphores

Monitors

Dining Philosopher problem

Concurrent Transactions

Serializability

2-Phase locking

Motivation

 Sequential execution of a program runs correctly, but concurrent execution of the same program may run incorrectly.

 Concurrent access to shared data may result in data inconsistency

 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes

 Let’s look at an example: the producer-consumer problem.

Producer-Consumer Problem

count: the number of items in the buffer (initialized to 0)

Producer:

    while (true) {
        /* produce an item and put in nextProduced */
        while (count == BUFFER_SIZE)
            ; // do nothing
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }

Consumer:

    while (true) {
        while (count == 0)
            ; // do nothing
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        // consume the item in nextConsumed
    }

What can go wrong in concurrent execution?

Race Condition

 count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

 count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2

 Consider this execution interleaving with “count = 5” initially:

S0: producer execute register1 = count {register1 = 5}

S1: producer execute register1 = register1 + 1 {register1 = 6}

S2: consumer execute register2 = count {register2 = 5}

S3: consumer execute register2 = register2 - 1 {register2 = 4}

S4: producer execute count = register1 {count = 6 }

S5: consumer execute count = register2 {count = 4}

What are all possible values from concurrent execution?

How to prevent race condition?

 Define a critical section in each process.

 The critical section contains the process's reads and writes of common variables.

 Make sure that only one process can execute in the critical section at a time.

 What sync code to put into the entry & exit sections to prevent race condition?

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (TRUE);

Solution to Critical-Section Problem

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.

2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.

3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

What is the difference between

Progress and Bounded Waiting?

Peterson’s Solution

 Simple 2-process solution

 Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted.

 The two processes share two variables:

 int turn;

 boolean flag[2];

 The variable turn indicates whose turn it is to enter the critical section.

 The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready!

Algorithm for Process Pi:

    while (true) {
        // entry section
        flag[i] = TRUE;
        turn = j;
        while (flag[j] && turn == j)
            ;
        // CRITICAL SECTION
        // exit section
        flag[i] = FALSE;
        // REMAINDER SECTION
    }

Mutual exclusion

Only one process enters critical section at a time.

Proof: can both processes pass the while loop (and enter critical section) at the same time?

Progress

Selection of a waiting-to-enter-critical-section process does not block.

Proof: can Pi wait at the while loop forever (after Pj leaves critical section)?

Bounded Waiting

Limited time in waiting for other processes.

Proof: can Pj win the critical section twice while Pi waits?

Algorithm for Processes Pi and Pj (side by side):

    Pi:                                 Pj:
    while (true) {                      while (true) {
        flag[i] = TRUE;                     flag[j] = TRUE;
        turn = j;                           turn = i;
        while (flag[j] && turn == j)        while (flag[i] && turn == i)
            ;                                   ;
        // CRITICAL SECTION                 // CRITICAL SECTION
        flag[i] = FALSE;                    flag[j] = FALSE;
        // REMAINDER SECTION                // REMAINDER SECTION
    }                                   }

Synchronization Hardware

Many systems provide hardware support for critical section code

Uniprocessors – could disable interrupts

 Currently running code would execute without preemption

 Generally too inefficient on multiprocessor systems

 Operating systems that rely on disabling interrupts are not broadly scalable

Modern machines provide special atomic hardware instructions

 Atomic = non-interruptable

 TestAndSet(target): atomically test a memory word and set its value

 Swap(a,b): atomically swap the contents of two memory words

TestAndSet Instruction

 Definition:

    boolean TestAndSet (boolean *target)
    {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

Solution using TestAndSet

 Shared boolean variable lock, initialized to false.

 Solution:

    while (true) {
        // entry section
        while ( TestAndSet (&lock) )
            ; /* do nothing */
        // critical section
        // exit section
        lock = FALSE;
        // remainder section
    }

 Does it satisfy mutual exclusion?

 How about progress and bounded waiting?

 How to fix this?

Bounded-Waiting TestAndSet

 Shared variables (each process also has a local boolean key):

    boolean waiting[n];   // initialized to FALSE
    boolean lock;         // initialized to FALSE

 Solution:

    do {
        // entry section
        waiting[i] = TRUE;
        key = TRUE;
        while (waiting[i] && key)
            key = TestAndSet(&lock);
        waiting[i] = FALSE;
        // critical section
        // exit section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = FALSE;
        else
            waiting[j] = FALSE;
        // remainder section
    } while (TRUE);

Mutual exclusion

 Proof: can two processes pass the while loop (and enter critical section) at the same time?

Bounded Waiting

 Limited time in waiting for other processes.

What is waiting[] for? When is waiting[i] set to FALSE?

Proof: how long does Pi wait until waiting[i] becomes FALSE?

Progress

 Proof: the exit section either clears some waiting process's waiting[] flag or sets lock to FALSE.

Swap Instruction

 Definition:

    void Swap (boolean *a, boolean *b)
    {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

Solution using Swap

 Shared Boolean variable lock initialized to FALSE; Each process has a local Boolean variable key.

 Solution:

    while (true) {
        // entry section
        key = TRUE;
        while (key == TRUE)
            Swap (&lock, &key);
        // critical section
        // exit section
        lock = FALSE;
        // remainder section
    }

Mutual exclusion? Progress and Bounded Waiting?

Notice a performance problem with the Swap and TestAndSet solutions?

Semaphore

 We also need to consider ease of programming, not just correctness.

 Are the HW/SW solutions to the critical-section problem easy to use and program?

 Introduce a synchronization abstraction called semaphore

 easy to use and understand

Semaphore

 Semaphore S – integer variable

 Two standard operations modify S:

 wait(): wait until S > 0, then decrement S.

    wait (S) {
        while (S <= 0)
            ; // no-op
        S--;
    }

 signal(): increment S.

    signal (S) {
        S++;
    }

 S can only be accessed via these two indivisible (atomic) operations.

Semaphore as General Synchronization Tool

 Counting semaphore – integer value can range over an unrestricted domain

 Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement

 Also known as mutex locks

 Can implement a counting semaphore S as a binary semaphore

 Provides mutual exclusion

    Semaphore S;   // initialized to 1
    wait (S);
        // critical section
    signal (S);

Binary Semaphore (0/1 value)

    Semaphore S;   // initialized to 1
    wait (S);      // entry section
        // critical section
    signal (S);    // exit section

where

    wait (S) {
        while (S <= 0)
            ; // no-op
        S--;
    }

    signal (S) {
        S++;
    }

Mutual exclusion

 Only one process enters critical section at a time.

Progress

 Selection of a waiting-to-enter-critical-section process does not block.

Bounded Waiting

 Limited time in waiting for other processes.

 Problems:

Bounded waiting

Busy waiting

Semaphore Implementation with No Busy Waiting

 Re-implement wait(S) & signal(s)

 With each semaphore there is an associated waiting queue:

    typedef struct {
        int value;
        struct process *list;
    } semaphore;

Two operations:

 block - place the process invoking the operation on the appropriate waiting queue.

 wakeup - remove one of the processes in the waiting queue and place it in the ready queue.

    wait (S) {
        value--;
        if (value < 0) {
            // add this process to the waiting queue
            block();
        }
    }

    signal (S) {
        value++;
        if (value <= 0) {
            // remove a process P from the waiting queue
            wakeup(P);
        }
    }

Does it solve busy waiting and bounded waiting?
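The block()/wakeup() structure can be realized in user space with a pthread mutex and condition variable. This sketch is not from the slides, and it differs from the version above in one respect: value never goes negative, because the condition variable's own queue tracks the waiters.

```c
#include <assert.h>
#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t m;    /* protects value */
    pthread_cond_t cv;    /* stands in for the waiting queue */
} semaphore;

void sem_init_(semaphore *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

void sem_wait_(semaphore *s) {
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)                  /* "add this process to the queue" */
        pthread_cond_wait(&s->cv, &s->m);  /* block(): sleeps, no busy waiting */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void sem_signal_(semaphore *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->cv);           /* wakeup(P) */
    pthread_mutex_unlock(&s->m);
}
```

The waiter sleeps inside pthread_cond_wait rather than spinning, which answers the busy-waiting question; bounded waiting still depends on the wakeup order of the underlying queue.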

Deadlock and Starvation

 Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes

 Let S and Q be two semaphores initialized to 1:

    P0:             P1:
    wait (S);       wait (Q);
    wait (Q);       wait (S);
      ...             ...
    signal (S);     signal (Q);
    signal (Q);     signal (S);

 Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

Classical Problems of Synchronization

 Bounded-Buffer Problem

 Readers and Writers Problem

 Dining-Philosophers Problem

Bounded-Buffer Problem

N buffers, each can hold one item

 Semaphore mutex initialized to the value 1

 Semaphore full initialized to the value 0

 Semaphore empty initialized to the value N.


Bounded Buffer Problem (Cont.)

 The structure of the producer process:

    while (true) {
        // produce an item
        wait (empty);
        wait (mutex);
        // add the item to the buffer
        signal (mutex);
        signal (full);
    }

 The structure of the consumer process:

    while (true) {
        wait (full);
        wait (mutex);
        // remove an item from the buffer
        signal (mutex);
        signal (empty);
        // consume the removed item
    }

Readers-Writers Problem

 A data set is shared among many concurrent processes

 Readers – only read the data set; they never write it.

 Writers – can both read and write.

 Problem – allow multiple readers to read at the same time, but only a single writer may access the shared data at a time.

 Shared Data

 Data set

 Semaphore mutex initialized to 1.

 Semaphore wrt initialized to 1.

 Integer readcount initialized to 0.


Readers-Writers Problem (Cont.)

 The structure of a writer process:

    while (true) {
        wait (wrt);
        // writing is performed
        signal (wrt);
    }

 The structure of a reader process:

    while (true) {
        wait (mutex);
        readcount++;
        if (readcount == 1)
            wait (wrt);        // first reader locks out writers
        signal (mutex);
        // reading is performed
        wait (mutex);
        readcount--;
        if (readcount == 0)
            signal (wrt);      // last reader lets writers in
        signal (mutex);
    }

Dining-Philosophers Problem

 Philosophers think

 Philosophers become hungry

 Pick up two chopsticks to eat

 Philosophers as concurrent processes and chopsticks as shared data

 Bowl of rice (data set)

 Semaphore chopstick[5], each initialized to 1

Dining-Philosophers Problem (Cont.)

 The structure of philosopher i:

    while (true) {
        wait ( chopstick[i] );
        wait ( chopstick[(i + 1) % 5] );
        // eat
        signal ( chopstick[i] );
        signal ( chopstick[(i + 1) % 5] );
        // think
    }

What’s wrong with this?

Although this guarantees that no two neighbors can eat at the same time, deadlock may occur (e.g., all five philosophers pick up their left chopsticks simultaneously).

How to solve the deadlock problem?

 Allow at most 4 philosophers on the table at a time

 Allow a philosopher to pick up her chopsticks only if both chopsticks are available

 An odd-numbered philosopher picks up her left chopstick first and then her right; an even-numbered philosopher does the opposite.

 Others?

Problems with Semaphores

 Common wrong use of semaphore operations:

 signal (mutex) … critical section … wait (mutex)

 What would happen?

 wait (mutex) … critical section … wait (mutex)

 What would happen?

 Omitting of wait (mutex) or signal (mutex) (or both)

 What would happen?

 Sometimes, errors are not reproducible.

 This makes debugging difficult.

Monitors

 A high-level abstraction provides a convenient and effective mechanism for process synchronization

 Only one process may be active within the monitor at a time

    monitor monitor-name
    {
        // shared variable declarations

        procedure P1 (…) { …. }
        …
        procedure Pn (…) { …… }

        initialization code ( …. ) { … }
    }

Schematic view of a Monitor

Condition Variables

 condition x, y;

 Two operations on a condition variable:

 x.wait () – a process that invokes the operation is suspended.

 x.signal () – resumes one of processes (if any) that invoked x.wait ()

 Similar to semaphore signal() except x.signal() does nothing when no process is blocked.
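POSIX condition variables follow the signal-and-continue discipline discussed later in this chapter, which is why a waiter must re-check its predicate in a loop after waking. A minimal sketch of the monitor pattern (the function names here are illustrative, not from the slides):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* the monitor lock */
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;   /* condition variable */
static bool ready = false;                             /* the monitored state */

/* x.wait(): pthread_cond_wait atomically releases m and blocks; because
 * signaling merely moves us back to the ready queue (signal-and-continue),
 * we must re-test the predicate when we wake up. */
void wait_until_ready(void) {
    pthread_mutex_lock(&m);
    while (!ready)
        pthread_cond_wait(&x, &m);
    pthread_mutex_unlock(&m);
}

/* x.signal(): like the monitor semantics above, signaling with no
 * blocked thread has no effect. */
void make_ready(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&x);
    pthread_mutex_unlock(&m);
}
```

Holding m while changing ready and signaling is what gives the "only one process active in the monitor" guarantee.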

Monitor with Condition Variables

Solution to Dining Philosophers (cont)

Allow a philosopher to eat (successfully pick up her chopsticks) only when both her left and right neighbors are not eating (i.e., both chopsticks are available).

 Avoids deadlock

Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

    dp.pickup(i);
    // EAT
    dp.putdown(i);

Solution to Dining Philosophers

(explanation in the next slides)

    monitor DP
    {
        enum { THINKING, HUNGRY, EATING } state[5];
        condition self[5];

        void pickup (int i) {
            state[i] = HUNGRY;
            test(i);                  // test neighbors
            if (state[i] != EATING)
                self[i].wait();
        }

        void putdown (int i) {
            state[i] = THINKING;
            test((i + 4) % 5);        // test left neighbor
            test((i + 1) % 5);        // test right neighbor
        }

        void test (int i) {
            if ((state[(i + 4) % 5] != EATING) &&
                (state[i] == HUNGRY) &&
                (state[(i + 1) % 5] != EATING)) {
                state[i] = EATING;
                self[i].signal();
            }
        }

        initialization_code() {
            for (int i = 0; i < 5; i++)
                state[i] = THINKING;
        }
    }

How to deal with condition wait & signal?

P3 is done eating

 Calls putdown(3) -> test(2) and test(4) -> self[2].signal() / self[4].signal()

P4 is hungry

 Called pickup(4) -> self[4].wait()

 Shall P3 continue its execution or immediately switch to P4's execution?

Signal and wait: P3 (the signaler) either waits until P4 leaves the monitor, or waits on another condition.

Signal and continue: P4 (the signaled process) either waits until P3 leaves the monitor, or waits on another condition.

Monitor Implementation Using Semaphores

(based on Signal & Wait)

 Variables:

    semaphore mutex;     // initially = 1
    semaphore next;      // initially = 0; a signaling (done-eating) process
                         // waits on next so that the signaled (hungry)
                         // process can run first
    int next_count = 0;  // number of processes suspended on next

 Each procedure F is replaced by:

    wait(mutex);
    …
    body of F;
    …
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);

 Mutual exclusion within a monitor is ensured.

Monitor Implementation

 For each condition variable x, we have:

    semaphore x_sem;   // initially = 0
    int x_count = 0;   // counter for processes waiting on x (e.g., self[i])

 The operation x.wait can be implemented as:

    x_count++;
    if (next_count > 0)
        signal(next);    // unblock the done-eating (signaling) process first
    else
        signal(mutex);   // otherwise release the monitor lock
    wait(x_sem);         // block on x (e.g., self[i])
    x_count--;           // unblocked and can proceed (eat) now

Monitor Implementation

 The operation x.signal can be implemented as:

    // done eating
    if (x_count > 0) {
        next_count++;    // signal & wait
        signal(x_sem);   // enable the hungry one to eat first
        wait(next);      // suspend itself
        next_count--;
    }

Overview of Transaction Management

 Concurrency Control

 Serializability

 Locking protocol

 Crash recovery

 Log-based recovery

 Checkpointing

 Common overlaps with the Database course


Transaction

 Transaction = one execution of a user program .

 Example: transfer money from account A to account B.

 A Sequence of Read & Write Operations

 How to improve system throughput (# transactions executed per time unit)?

 Make each transaction execute faster (faster CPU? faster disks?)

 What’s the other option?


Concurrent Execution

 Why is concurrent execution needed?

 Disk accesses are slow and frequent - keep the CPU busy by running several transactions at the same time.

 What is so difficult about writing a multi-threaded program?

 Race condition

 Concurrency Control ensures that the result of concurrent execution of several transactions is the same as some serial (one at a time) execution of the same set of transactions.

 How can the results be different?


System crash

 Must also handle a system crash in the middle of a transaction (or aborted transactions).

 Crash recovery ensures that the effects of partially executed (aborted) transactions are not seen by other transactions.

 How can the results be seen by other transactions?


Non-serializable Schedule

A is the number of available copies of a book, initially 1.

T1 wants to buy one copy. T2 wants to buy one copy.

    T1                  T2
    R(A=1)
    Check if (A>0)
                        R(A=1)
                        Check if (A>0)
                        W(A=0)
                        Commit
    W(A=0)
    Commit

What is the problem here?

 The result is different from any serial schedule (T1,T2 or T2,T1).

How to prevent this problem?

 How to detect that a schedule is serializable?

Unrecoverable Schedule

T1 deducts $100 from A. T2 adds 6% interest to A. T2 commits. T1 aborts.

    T1                  T2
    R(A)
    W(A-100)
                        R(A)
                        W(A+6%)
                        Commit
    Abort

What is the problem?

Undoing T1 => T2 has read a value of A that should never have been there.

But T2 has committed! (We may not be able to undo committed actions.)

This is called an unrecoverable schedule.

Is this a serializable schedule?

How to ensure/detect that a schedule is recoverable (& serializable)?

Outline

 Four fundamental properties of transactions (ACID)

 Atomicity, (Consistency), Isolation, Durability

 Schedule actions in a set of concurrent transactions

 Problems in concurrent execution of transactions

 Lock-based concurrency control (Strict 2PL)

 Performance issues with lock-based concurrency control


ACID Properties

The DBMS’s abstract view of a transaction: a sequence of read and write actions.

Atomicity: either all actions in a transaction are carried out, or none.

Transfer $100 from account A to account B:

    R(A), A=A-100, W(A), R(B), B=B+100, W(B)

=> all actions or none (retry).

The system ensures this property.

How can a transaction be incomplete?

 Aborted by DBMS, system crash, or error in user program.

How to ensure atomicity during a crash?

 Maintain a log of write actions in partial transactions. Read the log and undo these write actions.


ACID Properties (2)

 Consistency: a transaction run by itself, with no concurrent execution, leaves the DB in a "good" state.

 This is the responsibility of the user.

 No increase in total balance during a transfer => the user makes sure the transaction credits and debits the same amount.


ACID Properties (3)

 Isolation: transactions are protected from the effects of concurrently scheduled transactions.

 The system (concurrency control) ensures this property.

 The result of concurrent execution is the same as some order of serial execution (no interleaving).

 T1 || T2 produces the same result as either T1;T2 or T2;T1.


ACID Properties (4)

 Durability : the effect of a completed transaction should persist across system crashes.

 The system (crash recovery) ensures durability property and atomicity property.

 What can a DBMS do to ensure durability?

 Maintain a log of write actions in partial transactions. If system crashes before the changes are made to disk from memory, read the log to remember and restore changes when the system restarts.


Schedules

A transaction is seen by DBMS as a list of read and write actions on DB objects (tuples or tables).

 Denote R_T(O) and W_T(O) as the read and write actions of transaction T on object O.

A transaction also needs to specify a final action:

 commit means the transaction completes successfully.

 abort means to terminate and undo all of the transaction's actions.

A schedule is an execution sequence of actions (read, write, commit, abort) from a set of transactions.

A schedule can interleave actions from different transactions.

A serial schedule has no interleaving actions from different transactions.


Examples of Schedules

Schedule with Interleaving Execution:

    T1                  T2
    R(A)
    W(A)
                        R(B)
                        W(B)
                        Commit
    R(C)
    W(C)
    Commit

Serial Schedule:

    T1                  T2
    R(A)
    W(A)
    R(C)
    W(C)
    Commit
                        R(B)
                        W(B)
                        Commit

Serializable Schedule

 A serializable schedule is a schedule that produces identical result as some serial schedule .

 A serial schedule has no interleaving actions from multiple transactions.

 We have a serializable schedule of T1 & T2:

    T1                  T2
    R(A)
    W(A)
                        R(A)
                        W(A)
    R(B)
    W(B)
    Commit
                        R(B)
                        W(B)
                        Commit

 Assume that T2:W(A) does not influence T1:W(B).

 It produces the same result as executing T1 then T2 (denoted T1;T2).

Anomalies in Interleaved Execution

 Under what situations can an arbitrary (non-serializable) schedule produce results inconsistent with every serial schedule?

 Three possible situations in interleaved execution.

 Write-read (WR) conflict

 Read-write (RW) conflict

 Write-write (WW) conflict

 Note that you can still have serializable schedules with the above conflicts.


WR Conflict (Dirty Read)

 Situation: T2 reads an object that has been modified by T1, but T1 has not committed.

 T1 transfers $100 from A to B. T2 adds 6% interest to A and B. A non-serializable schedule is:

Step 1: T1 deducts $100 from A.
Step 2: T2 adds 6% interest to A & B.
Step 3: T1 credits $100 to B.

    T1                  T2
    R(A)
    W(A-100)
                        R(A)
                        W(A+6%)
                        R(B)
                        W(B+6%)
                        Commit
    R(B)
    W(B+100)
    Commit

 What is the problem?

The result is different from any serial schedule -> the bank pays $6 less interest.

Why? Inconsistent state when T2 runs: the value of A written by T1 is read by T2 before T1 completes.

RW Conflicts (Unrepeatable Read)

Situation: T2 modifies A after A has been read by T1, while T1 is still in progress.

 When T1 tries to read A again, it will get a different result, although it has not modified A in the meantime.

What is the problem?

A is the number of available copies of a book, initially 1. T1 wants to buy one copy. T2 wants to buy one copy. T1 gets an error.

    T1                  T2
    R(A==1)
    Check if (A>0)
                        R(A==1)
                        Check if (A>0)
                        W(A=0)
                        Commit
    W(A)
    Error!
    Commit

 The result is different from any serial schedule.

WW Conflict (Overwriting Uncommitted Data)

Situation: T2 overwrites the value of an object A that has already been modified by T1, while T1 is still in progress.

T1 wants to set the salaries of Harry & Larry to ($2000, $2000). T2 wants to set them to ($1000, $1000).

    T1                  T2
    W(A=2000)
                        W(A=1000)
                        W(B=1000)
                        Commit
    W(B=2000)
    Commit

What wrong results can this schedule produce?

It can produce ($1000, $2000), which is different from any serial schedule.

This is called a lost update (T2 overwrites T1's value of A, so T1's value of A is lost).

Lock-Based Concurrency Control

 Concurrency control ensures that

 (1) Only serializable, recoverable schedules are allowed

 (2) Actions of committed transactions are not lost while undoing aborted transactions.

 How to guarantee safe interleaving of transactions’ actions (serializability)?

    T1                  T2
    A=5
    R(A)
    W(A) // A=6
                        R(A)
                        W(A) // A=7
                        R(B)
                        W(B)
                        Commit
    Abort

Lock-Based Concurrency Control

 Strict Two-Phase Locking (Strict 2PL)

 Non-strict Two-Phase Locking (2PL)

 A lock is a small bookkeeping object associated with a DB object.

 Shared lock : several transactions can have shared locks on the same DB object.

 Concurrent read? Concurrent write?

 Exclusive lock : only one transaction can have an exclusive lock on a DB object.

 Concurrent read? Concurrent write?


Strict 2PL

 Rule #1: If a transaction T wants to read an object, it requests a shared lock. Denote as S(O).

 Rule #2: If a transaction T wants to write an object, it requests an exclusive lock. Denote as X(O).

 Rule #3: When a transaction is completed (or aborted), it releases all held locks.


Strict 2PL

 What happens when a transaction cannot obtain a lock?

 It is suspended until the lock becomes available.

 Why shared lock for read?

 Avoid RW/WR conflicts

 Why exclusive lock for write?

 Avoid RW/WR/WW conflicts

 Requests to acquire or release locks can be automatically inserted into transactions.


Example of Strict 2PL

    T1                  T2
    S(A)
    R(A)
    X(C)
    R(C)
    W(C)
    Commit
                        S(A)
                        R(A)
                        X(B)
                        R(B)
                        W(B)
                        Commit

    T1                  T2
    S(A)
    R(A)
                        S(A)
                        R(A)
                        X(B)
                        R(B)
                        W(B)
                        Commit
    X(C)
    R(C)
    W(C)
    Commit


Example #2 of Strict 2PL

T1 and T2 both want to write A and B. T2 requests X(A) while T1 holds it, so T2 suspends until T1 commits and releases its locks:

    T1                          T2
    X(A)
    R(A)
    W(A)
                                X(A) -> suspend
    X(B)
    R(B)
    W(B)
    Commit // release locks
                                X(A)
                                R(A)
                                W(A)
                                X(B)
                                R(B)
                                W(B)
                                Commit // release locks

“Strict” 2PL seems too strict: is better concurrency possible?

Under Strict 2PL, T2 must wait until T1 commits. Suppose T1 instead releases each lock as soon as it is finished with the object:

    T1                          T2
    X(A)
    R(A)
    W(A)
                                X(A) -> suspend
    X(B)
    R(B)
    W(B)
    Release X(A)
                                X(A)
                                R(A)
                                W(A)
    Release X(B)
    Commit
                                X(B)
                                R(B)
                                W(B)
                                Commit // release locks

2PL: without strict

T1 acquires all of its locks (growing phase), then releases X(A) as soon as it is done with A (shrinking phase), letting T2 start before T1 commits:

    T1                          T2
    X(A)
    R(A)
    W(A)
                                X(A) -> suspend
    X(B)
    Release X(A)
                                X(A)
                                R(A)
                                W(A)
    R(B)
    W(B)
    Release X(B)
    Commit
                                X(B)
                                R(B)
                                W(B)
                                Release X(A)
                                Release X(B)
                                Commit


Two-Phase Locking (2PL)

 2PL Protocol is the same as Strict 2PL, except

 A transaction can release locks before the end (unlike Strict 2PL), i.e., after it is done with reading & writing the objects.

 However, a transaction cannot request additional locks after it releases any lock.

 2PL has a lock-growing phase followed by a shrinking phase.



2PL: what if T1 aborts?

Under non-strict 2PL, T1 releases X(A) early, and T2 reads and writes the value of A written by T1:

    T1                          T2
    X(A)
    R(A)
    W(A)
    X(B)
    Release X(A)
                                X(A)
                                R(A)
                                W(A)
    R(B)
    W(B)
    Abort! // release X(B)
                                X(B)
                                R(B)
                                W(B)
                                Commit

T2 has used a value of A that should never have existed, so T2 must be aborted too; if T2 has already committed, the schedule is unrecoverable. Strict 2PL avoids this by holding all locks until commit or abort.


Silberschatz, Galvin and Gagne ©2005

Operating System Principles

End of Chapter 6
