Lecture 28

CGS 3763 Operating Systems Concepts
Spring 2013
Dan C. Marinescu
Office: HEC 304
Office hours: M-Wd 11:30 AM - 12:30 PM
Lecture 28 – Monday, March 25, 2013

Last time:

Today:
- Monitors
- Atomicity
- Hardware support for atomicity
- Coordination with a bounded buffer

Next time:
- Deadlock detection
- Wait-for-graphs
- Semaphores
- Storage models

Reading assignments:
- Chapters 6 and 7 of the textbook
Monitors

Semaphores can be used incorrectly:
- multiple threads may be allowed to enter the critical section guarded by the semaphore
- incorrect use may cause deadlocks
- threads may access the shared data directly, without checking the semaphore
Solution → encapsulate the shared data together with the access methods that operate on it.
Monitors → an abstract data type that allows access to shared data only through specific methods that guarantee mutual exclusion.
Atomic operations

- Concurrency control requires atomic operations.
- Atomic operation → an operation consisting of multiple steps that have to be executed without any interruption; either all steps are executed or none of them.
- All-or-Nothing atomicity → a sequence of steps is an all-or-nothing action if, from the point of view of its invoker, the sequence always either
  - completes, or
  - aborts in such a way that it appears the sequence had never been undertaken in the first place; that is, it backs out.
- Before-or-After atomicity → actions whose effect, from the point of view of their invokers, is the same as if the actions occurred either completely before or completely after one another.
- Atomicity requires hardware support in the form of special instructions.
Hardware support for atomicity


It is not possible to implement atomic operations without some hardware support. All processors include in their instruction set an instruction for implementing atomic operations:
- Compare and Swap, or
- Test and Set, or
- Read and Set Memory
Compare-and-swap instruction



Compare-and-swap (CMPSWP) → an atomic instruction used in multithreading to achieve synchronization.
It compares the contents of a memory location to a given value and, only if they are the same, modifies the contents of that memory location to a given new value. This is done as a single atomic operation.
We can use CMPSWP to implement a semaphore as follows:
- read the value in the memory location;
- add one to the value;
- use compare-and-swap to write the incremented value back;
- retry if the value read by the compare-and-swap did not match the value we originally read.
Since the compare-and-swap occurs (or appears to occur) instantaneously, if another thread updates the location while we are in progress, the compare-and-swap is guaranteed to fail.
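A minimal sketch of this retry loop, assuming C11 atomics (the variable name sem_count is an illustrative choice, not from the lecture):

#include <stdatomic.h>

static _Atomic int sem_count = 0;

void increment_with_cas(void)
{
    int old = atomic_load(&sem_count);   /* read the value in the memory location */
    /* try to write old+1 back; if another thread changed sem_count in the
       meantime, the compare-and-swap fails, "old" is refreshed with the
       current value, and we retry */
    while (!atomic_compare_exchange_weak(&sem_count, &old, old + 1))
        ;
}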
Test and Set



Test-and-Set → an instruction that writes to a memory location and returns its old value as a single atomic (i.e., non-interruptible) operation.
If multiple threads may access the same memory location and one thread is currently performing a test-and-set, no other thread may begin another test-and-set until the first one is done.
A lock can be implemented using the test-and-set instruction:

function Lock(boolean *lock)
{
    while (test_and_set(lock) == 1)
        ;    /* spin until the old value is 0, i.e., the lock was free */
}
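For comparison, a minimal sketch of the same spin lock using the C11 test-and-set primitive (the names are my own, not lecture code):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void spin_lock(void)
{
    /* atomic_flag_test_and_set sets the flag and returns its old value;
       spin while the old value was "set", i.e., the lock was already held */
    while (atomic_flag_test_and_set(&lock_flag))
        ;
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock_flag);                 /* release the lock */
}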
Read and Set Memory (RSM) instruction
What if the locking is not atomic?
Thread coordination with a bounded buffer


Producer-consumer problem → two threads cooperate: the producer writes into a buffer and the consumer reads from the buffer.
Basic assumptions:
- We have only two threads.
- Threads proceed concurrently at independent speeds/rates.
- Bounded buffer → only N buffer cells.
- Messages are of fixed size and occupy only one buffer cell.
[Figure: a circular bounded buffer with cells 0, 1, ..., N-2, N-1; the sender writes to the buffer location pointed to by in, the receiver reads from the buffer location pointed to by out.]
shared structure buffer
    message instance message[N]
    integer in initially 0
    integer out initially 0

procedure SEND (buffer reference p, message instance msg)
    while p.in - p.out = N do nothing      /* if buffer full, wait */
    p.message[p.in modulo N] ← msg         /* insert message into buffer cell */
    p.in ← p.in + 1                        /* increment pointer to next free cell */

procedure RECEIVE (buffer reference p)
    while p.in = p.out do nothing          /* if buffer empty, wait for message */
    msg ← p.message[p.out modulo N]        /* copy message from buffer cell */
    p.out ← p.out + 1                      /* increment pointer to next message */
    return msg
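A runnable C sketch of the same SEND/RECEIVE logic, under the same assumptions (one sender, one receiver, 64-bit in and out); the buffer size N and the message type are illustrative choices of mine:

#include <stdatomic.h>
#include <stdint.h>

#define N 8
typedef int message;                        /* fixed-size message, one cell */

struct buffer {
    message msg[N];
    _Atomic uint64_t in;                    /* next free cell (written only by the sender)  */
    _Atomic uint64_t out;                   /* next message   (written only by the receiver) */
};

void send(struct buffer *p, message m)
{
    while (atomic_load(&p->in) - atomic_load(&p->out) == N)
        ;                                   /* buffer full: spin */
    p->msg[atomic_load(&p->in) % N] = m;    /* insert message into buffer cell */
    atomic_fetch_add(&p->in, 1);            /* increment pointer to next free cell */
}

message receive(struct buffer *p)
{
    while (atomic_load(&p->in) == atomic_load(&p->out))
        ;                                   /* buffer empty: spin */
    message m = p->msg[atomic_load(&p->out) % N];   /* copy message from buffer cell */
    atomic_fetch_add(&p->out, 1);           /* increment pointer to next message */
    return m;
}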
Implicit assumptions for the correctness of the implementation
1. One sending and one receiving thread; only one thread updates each shared variable.
2. Sender and receiver threads run on different processors, to allow spin locks.
3. in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
4. The shared memory used for the buffer provides read/write coherence.
5. The memory provides before-or-after atomicity for the shared variables in and out.
6. The result of executing a statement becomes visible to all threads in program order; no compiler optimizations are applied.
Race condition affecting the pointer in (the pointer to where data is written): two senders, thread A on processor 1 and thread B on processor 2, execute the SEND code concurrently on a buffer held in shared memory.
[Figure: timeline of the race. Starting from in = out = 0 (buffer empty), thread B fills entry 0 with item b at time t1; thread A fills entry 0 with item a at time t2 and increments the pointer at time t3 (in ← 1); thread B increments the pointer at time t4 (in ← 2). Item b is overwritten and lost.]
Storage models

- Cell storage
- Journal storage
Desirable properties of cell storage
Asynchronous events and signals


Signals, or software interrupts, were originally introduced in Unix to notify a process about the occurrence of a particular event in the system.
Signals are analogous to hardware I/O interrupts:
- When a signal arrives, control abruptly switches to the signal handler.
- When the handler finishes and returns, control goes back to where it came from.
After receiving a signal, the receiver reacts to it in a well-defined manner; that is, a process can tell the system (OS) what it wants to do when a signal arrives:
- Ignore it.
- Catch and handle it. In this case it must specify (register) the signal-handling procedure. This procedure resides in user space; the kernel calls it during signal handling, and control returns to the kernel after it is done.
- Kill the process (the default action for most signals).
Examples: a child-exit event delivers a signal to the parent; control signals come from the keyboard.
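A minimal sketch of catching a signal by registering a handler with the POSIX sigaction interface (the handler and its message are my own example, not lecture code):

#include <signal.h>
#include <unistd.h>

static void on_sigint(int sig)         /* handler runs when SIGINT arrives */
{
    /* handlers should only use async-signal-safe calls such as write() */
    write(STDOUT_FILENO, "caught SIGINT\n", 14);
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_sigint;         /* register the handling procedure */
    sigemptyset(&sa.sa_mask);          /* block no extra signals in the handler */
    sigaction(SIGINT, &sa, NULL);      /* catch Ctrl-C instead of dying */

    for (;;)
        pause();                       /* wait until a signal arrives */
}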
Solutions to thread coordination problems must satisfy a set of conditions
1. Safety: the required condition will never be violated.
2. Liveness: the system should eventually make progress, irrespective of contention.
3. Freedom from starvation: no process should be denied progress forever; that is, every process should make progress in a finite time.
4. Bounded wait: every process is assured of no more than a fixed number of overtakes by other processes in the system before it makes progress.
5. Fairness: dependent on the scheduling algorithm
   - FIFO: no process will ever overtake another process.
   - LRU: the process which received the service least recently gets the service next.
For example, for the mutual exclusion problem the solution should guarantee that:
- Safety → the mutual exclusion property is never violated.
- Liveness → a thread will access the shared resource in a finite time.
- Freedom from starvation → a thread will access the shared resource in a finite time.
- Bounded wait → a thread will access the shared resource after no more than a fixed number of accesses by other threads.
Thread coordination problems

- Dining philosophers
- Critical section
A solution to the critical section problem

- Applies only to two threads Ti and Tj with i, j = {0,1}, which share:
  - integer turn → if turn = i then it is Ti's turn to enter the critical section
  - boolean flag[2] → if flag[i] = TRUE then Ti is ready to enter the critical section
- To enter the critical section, thread Ti:
  - sets flag[i] = TRUE
  - sets turn = j
- If both threads want to enter, then turn ends up with a value of either i or j, and the corresponding thread enters the critical section.
- Ti enters the critical section only if either flag[j] = FALSE or turn = i.
- The solution is correct:
  - mutual exclusion is guaranteed
  - liveness is ensured
  - bounded waiting is met
- But this solution may not work, as load and store instructions can be interrupted on modern computer architectures.
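A sketch of this two-thread protocol (Peterson's algorithm) in C; the use of C11 atomics here is my assumption, so that the loads and stores of flag and turn behave atomically and in program order:

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool flag[2];          /* flag[i]: Ti is ready to enter */
static _Atomic int  turn;             /* whose turn it is to enter */

void enter_critical(int i)            /* i is 0 or 1; j is the other thread */
{
    int j = 1 - i;
    atomic_store(&flag[i], true);     /* Ti is ready */
    atomic_store(&turn, j);           /* give priority to the other thread */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                             /* wait while Tj is ready and has the turn */
}

void leave_critical(int i)
{
    atomic_store(&flag[i], false);    /* Ti leaves the critical section */
}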
Signal states and implementation

- A signal has the following states:
  - Signal sent - a process can send a signal to one of the processes in its group (parent, siblings, children, and further descendants).
  - Signal delivered - the signal bit is set.
  - Pending signal - delivered but not yet received (no action has been taken).
  - Signal lost - either ignored or overwritten.
- Implementation: each process has a kernel-space structure (created by default) called the signal descriptor, with a bit for each signal. Setting a bit delivers the signal; resetting the bit indicates that the signal has been received. A signal can be blocked/ignored, which requires an additional bit for each signal. Most signals are system-controlled signals.
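As an illustration of blocking (a hypothetical scenario of mine, not lecture code): with sigprocmask a process can set the "blocked" bit for SIGINT, so a Ctrl-C arriving during the sleep below stays pending and takes effect only after the signal is unblocked.

#include <signal.h>
#include <unistd.h>

int main(void)
{
    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, SIGINT);                /* we want to block SIGINT */

    sigprocmask(SIG_BLOCK, &block, NULL);     /* set the "blocked" bit */
    sleep(5);                                 /* a Ctrl-C here stays pending */
    sigprocmask(SIG_UNBLOCK, &block, NULL);   /* a pending SIGINT is delivered now */
    return 0;
}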
Locks; Before-or-After actions

- Locks → shared variables that act as flags to coordinate access to shared data. They are manipulated with two primitives:
  - ACQUIRE
  - RELEASE
- Locks support the implementation of Before-or-After actions; only one thread can acquire the lock, the others have to wait.
- All threads must obey the convention regarding the locks.
- The two operations ACQUIRE and RELEASE must be atomic.
- Hardware support for the implementation of locks:
  - RSM - Read and Set Memory
  - CMPSWP - Compare and Swap
- RSM(mem):
  - If mem = LOCKED, then RSM returns r = LOCKED and leaves mem = LOCKED.
  - If mem = UNLOCKED, then RSM returns r = UNLOCKED and sets mem = LOCKED.
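A minimal sketch of ACQUIRE/RELEASE built on an RSM-like primitive; here the C11 atomic_exchange models RSM, returning the old value of mem and setting it to LOCKED in one atomic step (an assumption of mine, not the slide's code):

#include <stdatomic.h>

enum { UNLOCKED = 0, LOCKED = 1 };

void ACQUIRE(_Atomic int *mem)
{
    /* keep retrying while the old value was LOCKED (another thread holds the lock) */
    while (atomic_exchange(mem, LOCKED) == LOCKED)
        ;
}

void RELEASE(_Atomic int *mem)
{
    atomic_store(mem, UNLOCKED);      /* let one waiting thread acquire the lock */
}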