Task Synchronization
Prepared by: Jamil Alomari
Suhaib Bani Melhem
Supervised by: Dr.Loai Tawalbeh
Outline
Background
Producer Consumer Problem
The Critical-Section Problem
Software Solutions
Hardware Solutions
Semaphore
Deadlock and starvation
Monitors
Message Passing
Concurrent Execution

Concurrent execution has to give the same results as serial execution.
Concurrent execution with shared data raises the need for synchronization.
To keep data consistent, we need mechanisms that avoid the data-inconsistency problem.
To discuss synchronization as an embedded-systems topic, we start with the producer-consumer problem.
Synchronizing Reasons
1. To get shared access to resources (variables, buffers, devices, etc.)
2. To communicate

Cases where cooperating processes need not synchronize to share resources:
- All processes only read.
- All processes only write.
- One process writes, all other processes read.
Producers Consumers Systems

One system produces items that will be used by another system.
Examples:
- A shared printer: the printer acts as the consumer, and the computers that produce the documents to be printed are the producers.
- A sensor network: the sensors are the producers, and the base stations (sinks) are the consumers.
Producer Consumer Problem

The producer-consumer problem illustrates the need for synchronization in systems where many processes share a resource. In the problem, two processes share a fixed-size buffer. One process produces items and puts them in the buffer, while the other process consumes items from the buffer. These processes do not take turns accessing the buffer; they both work concurrently.
It is also called the bounded-buffer problem.
Producer
while (Items_number == Buffer_size)
    ;   // wait: the buffer is full
Buffer[i] = next_produced_item;
i = (i + 1) % Buffer_size;
Items_number++;

Consumer
while (Items_number == 0)
    ;   // wait: the buffer is empty
Consumed_item = Buffer[j];
j = (j + 1) % Buffer_size;
Items_number--;
Producer Consumer Problem
At the register-transfer level, Items_number++ is implemented as:
    register1 = Items_number;
    register1 = register1 + 1;
    Items_number = register1;
Items_number-- is implemented in the same way using a different register.

Consider this execution interleaving with Items_number = 5 initially:
S0: producer executes register1 = Items_number   {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
S2: consumer executes register2 = Items_number   {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
S4: producer executes Items_number = register1   {Items_number = 6}
S5: consumer executes Items_number = register2   {Items_number = 4}
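To make the race concrete, here is a minimal C sketch (an added illustration, not part of the original slides) in which two POSIX threads increment and decrement an unprotected shared counter; because of interleavings like S0-S5 above, the final value is frequently not 0:

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int Items_number = 0;              /* shared counter, no protection */

void *producer(void *arg) {
    for (int k = 0; k < ITERATIONS; k++)
        Items_number++;            /* load, add, store: not atomic */
    return NULL;
}

void *consumer(void *arg) {
    for (int k = 0; k < ITERATIONS; k++)
        Items_number--;            /* races with the producer */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("Items_number = %d\n", Items_number);   /* often not 0 */
    return 0;
}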
The Critical-Section Problem
n processes compete to use some shared data.
Each process has a code segment, called the critical section, in which the shared data is accessed.
Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

Solution to Critical-Section Problem

Mutual Exclusion: only one process at a time may be in its critical section.
Progress: a process operating outside its critical section cannot prevent other processes from entering their critical sections; processes attempting to enter their critical sections simultaneously must eventually decide which process enters.
Bounded Waiting: a process attempting to enter its critical section will be able to do so eventually.
Structure of process Pi
repeat
critical section
remainder section
until false
Types of solution

Software solutions
- Algorithms whose correctness does not rely on any assumptions other than positive processing speed.

Hardware solutions
- Rely on some special machine instructions.

Operating system solutions
- Extend the hardware solutions to provide functions and data structures that support the programmer.
Software Solutions

Initial attempts to solve the problem.
Only 2 processes, P0 and P1.
General structure of process Pi (the other process is Pj):
repeat
    entry section
    critical section
    exit section
    remainder section
until false;
Processes may share some common variables to synchronize their actions.
Algorithm 1
int turn = 0;              /* shared control variable */

Pi:                        /* i is 0 or 1 */
while (turn != i) ;        /* busy wait */
CSi;
turn = 1 - i;

Guarantees mutual exclusion.
Does not guarantee progress: it enforces strict alternation of the processes entering their critical sections.
Bounded waiting is violated: suppose one process terminates.
Algorithm 2
Remove the strict alternation requirement.

int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi is in its */
                                  /* critical section */
while (flag[1 - i]) ;
flag[i] = TRUE;
CSi;
flag[i] = FALSE;

Mutual exclusion is violated.
Progress: OK.
Bounded waiting: OK.
Algorithm 3
Restore mutual exclusion.

int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi wants to */
                                  /* enter its critical section */
flag[i] = TRUE;
while (flag[1 - i]) ;
CSi;
flag[i] = FALSE;

Guarantees mutual exclusion.
Violates progress: both processes could set their flags and then deadlock on the while loop.
Bounded waiting is violated.
Algorithm 4
Attempt to remove the deadlock.

int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi wants to */
                                  /* enter its critical section */
flag[i] = TRUE;
while (flag[1 - i]) {
    flag[i] = FALSE;
    delay;                        /* sleep for some time */
    flag[i] = TRUE;
}
CSi;
flag[i] = FALSE;

Mutual exclusion is guaranteed.
Progress is violated.
Bounded waiting is violated.
Peterson's Algorithm
int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi wants to */
                                  /* enter its critical section */
int turn = 0;                     /* turn indicates which process has */
                                  /* priority in entering its critical section */
flag[i] = TRUE;
turn = 1 - i;
while (flag[1 - i] && turn == 1 - i) ;
CSi;
flag[i] = FALSE;

Satisfies all solution requirements.
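As an added illustration (not part of the original slides), the sketch below runs Peterson's algorithm with two POSIX threads. It uses C11 seq_cst atomics because plain int flags are not sufficient on modern out-of-order hardware, but the structure mirrors the pseudocode above:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool flag[2];                  /* flag[i]: Pi wants to enter its CS */
atomic_int turn;                      /* which process has priority */
int counter = 0;                      /* protected by Peterson's algorithm */

void *worker(void *arg) {
    int i = *(int *)arg;              /* this thread's id: 0 or 1 */
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);     /* entry section */
        atomic_store(&turn, 1 - i);
        while (atomic_load(&flag[1 - i]) && atomic_load(&turn) == 1 - i)
            ;                             /* busy wait */
        counter++;                        /* critical section */
        atomic_store(&flag[i], false);    /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}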
Hardware Solutions

Many systems provide hardware support for critical-section code.
Uniprocessors could disable interrupts:
- The currently running code executes without preemption.
- Generally too inefficient on multiprocessor systems; operating systems using this approach are not broadly scalable.
Modern machines provide special atomic hardware instructions:
- Atomic = non-interruptible.
- Either test a memory word and set its value,
- or swap the contents of two memory words.
Solution using Fetch and Increment
int nextTicket = 0, serving = 0;

mutexbegin()
{
    int myTicket;
    myTicket = FAI(nextTicket);    /* atomic fetch-and-increment */
    while (myTicket != serving)
        ;
}

mutexend()
{
    ++serving;
}

Correctness proof:
1. Mutual exclusion: by contradiction. Assumption: no process "plays" with the control variables.
2. Progress: by inspection, there is no involvement from processes operating outside their critical sections.
3. Bounded waiting: myTicket is at most n more than serving.
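FAI() above is pseudocode; a hedged, runnable sketch of the same ticket-lock idea can use C11's atomic_fetch_add, which performs the fetch-and-increment atomically:

#include <stdatomic.h>

atomic_int nextTicket = 0;
atomic_int serving   = 0;

void mutexbegin(void) {
    /* atomically take the next ticket (fetch-and-increment) */
    int myTicket = atomic_fetch_add(&nextTicket, 1);
    while (atomic_load(&serving) != myTicket)
        ;                              /* spin until our ticket is served */
}

void mutexend(void) {
    atomic_fetch_add(&serving, 1);     /* serve the next waiter */
}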
TestAndSet
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Solution using TestAndSet

Shared Boolean variable lock, initialized to FALSE.
Solution:
while (true) {
    while (TestAndSet(&lock))
        ;             /* do nothing */
    // critical section
    lock = FALSE;
    // remainder section
}
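For reference (an added sketch, not from the slides), C11 exposes the same test-and-set idea through atomic_flag; the acquire/release names below are illustrative:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void acquire(void) {
    /* atomic_flag_test_and_set returns the old value, like TestAndSet */
    while (atomic_flag_test_and_set(&lock))
        ;                              /* spin while another thread holds it */
}

void release(void) {
    atomic_flag_clear(&lock);          /* lock = FALSE */
}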
Semaphore

Synchronization tool that does not require busy waiting (in its blocking implementation).
A semaphore S is an integer variable that indicates whether it is safe to proceed.
Two standard operations modify S: wait() and signal()
- Originally called P() and V().
Less complicated than the previous solutions.
Can only be accessed via two indivisible (atomic) operations:

wait (S) {
    while (S <= 0)
        ;   // no-op
    S--;
}

signal (S) {
    S++;
}
Semaphore Implementation

Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
Thus, the implementation itself becomes a critical-section problem, where the wait and signal code are placed in the critical section.
- The implementation could now have busy waiting in the critical section,
- but the implementation code is short,
- so there is little busy waiting if the critical section is rarely occupied.
Note that applications may spend lots of time in critical sections, and therefore this is not a good solution.
Semaphore Implementation with no Busy Waiting

With each semaphore there is an associated waiting queue.
Each entry in a waiting queue has two data items:
- value (of type integer)
- pointer to the next record in the list
Two operations:
- block: place the process invoking the operation on the appropriate waiting queue.
- wakeup: remove one of the processes in the waiting queue and place it in the ready queue.
Semaphore Implementation with no Busy Waiting (Cont.)

Implementation of wait:
wait (S) {
    value--;
    if (value < 0) {
        add this process to the waiting queue;
        block();
    }
}

Implementation of signal:
signal (S) {
    value++;
    if (value <= 0) {
        remove a process P from the waiting queue;
        wakeup(P);
    }
}
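As an extra sketch (assuming a POSIX environment; the semaphore struct and the sem_wait_/sem_signal_ names are illustrative, not the slides' notation), the blocking behaviour above can be built from a mutex and a condition variable, where pthread_cond_wait plays the role of block() and pthread_cond_signal the role of wakeup(). This variant keeps value non-negative instead of counting waiters with a negative value, but provides the same blocking semantics:

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} semaphore;

void sem_init_(semaphore *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void sem_wait_(semaphore *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                    /* "block()": sleep, no busy wait */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void sem_signal_(semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);        /* "wakeup(P)" */
    pthread_mutex_unlock(&s->lock);
}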
Semaphore

In the producer-consumer problem, semaphores are used for two purposes:
- mutual exclusion, and
- synchronization.

In the following example there are three semaphores: full, used for counting the number of slots that are full; empty, used for counting the number of slots that are empty; and mutex, used to enforce mutual exclusion.

BufferSize = 3;
semaphore mutex = 1;           // controls access to the critical section
semaphore empty = BufferSize;  // counts the number of empty buffer slots
semaphore full = 0;            // counts the number of full buffer slots
Solution to the Producer-Consumer Problem using Semaphores

Producer()
{
    while (TRUE) {
        make_new(item);
        wait(&empty);
        wait(&mutex);     // enter critical section
        put_item(item);   // buffer access
        signal(&mutex);   // leave critical section
        signal(&full);    // increment the full semaphore
    }
}
Solution to the Producer-Consumer Problem using Semaphores (Cont.)

Consumer()
{
    while (TRUE) {
        wait(&full);         // decrement the full semaphore
        wait(&mutex);        // enter critical section
        remove_item(item);   // take an item from the buffer
        signal(&mutex);      // leave critical section
        signal(&empty);      // increment the empty semaphore
        consume_item(item);  // consume the item
    }
}
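Putting the pieces together, a runnable version of this solution using POSIX semaphores and pthreads might look as follows (an illustration under the assumption of a POSIX environment; the helper names differ from the slides' pseudocode):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 3
#define N_ITEMS     10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;                 /* producer / consumer indices */
sem_t empty, full, mutex;

void *producer(void *arg) {
    for (int item = 0; item < N_ITEMS; item++) {
        sem_wait(&empty);            /* wait for a free slot */
        sem_wait(&mutex);            /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);            /* leave critical section */
        sem_post(&full);             /* one more full slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int k = 0; k < N_ITEMS; k++) {
        sem_wait(&full);             /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);            /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}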
Difficulties with Semaphore
wait(S) and signal(S) are scattered among several processes, therefore it is difficult to understand their effect.
Usage must be correct in all the processes.
One bad process or one program error can kill the whole system.
Deadlock and starvation problems

Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
Let S and Q be two semaphores initialized to 1:

    P0                    P1
    wait(S);              wait(Q);
    wait(Q);              wait(S);
    ...                   ...
    signal(S);            signal(Q);
    signal(Q);            signal(S);

Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
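A common remedy, shown here as an aside rather than as part of the slides, is to make every process acquire the semaphores in the same global order so the circular wait cannot arise; the p0_work/p1_work names below are illustrative:

#include <semaphore.h>

sem_t S, Q;     /* assumed initialized with sem_init(&S, 0, 1) and sem_init(&Q, 0, 1) */

void p0_work(void) {
    sem_wait(&S);                 /* always S first ... */
    sem_wait(&Q);                 /* ... then Q */
    /* use both resources */
    sem_post(&Q);
    sem_post(&S);
}

void p1_work(void) {
    sem_wait(&S);                 /* same order as P0, so no circular wait */
    sem_wait(&Q);
    /* use both resources */
    sem_post(&Q);
    sem_post(&S);
}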
Readers-Writers Problem

A data set is shared among a number of concurrent processes:
- Readers: only read the data set; they do not perform any updates.
- Writers: can both read and write.

Problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at any time.

Shared data:
- the data set
- semaphore mutex, initialized to 1
- semaphore wrt, initialized to 1
- integer readcount, initialized to 0
Readers-Writers Problem (Cont.)

The structure of a writer process:
while (true) {
    wait(wrt);
    // writing is performed
    signal(wrt);
}
Readers-Writers Problem (Cont.)

The structure of a reader process:
while (true) {
    wait(mutex);
    readcount++;
    if (readcount == 1) wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0) signal(wrt);
    signal(mutex);
}
Dining-Philosophers Problem
Dining-Philosophers Problem (Cont.)

The structure of philosopher i:
while (true) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
}
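A minimal runnable sketch of this structure using POSIX semaphores (an added illustration; note that this naive version can deadlock if every philosopher picks up its left chopstick at the same time):

#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t chopstick[N];        /* one semaphore per chopstick, each initialized to 1 with sem_init */

void *philosopher(void *arg) {
    int i = *(int *)arg;                     /* philosopher id: 0..N-1 */
    while (1) {
        sem_wait(&chopstick[i]);             /* pick up left chopstick */
        sem_wait(&chopstick[(i + 1) % N]);   /* pick up right chopstick */
        /* eat */
        sem_post(&chopstick[i]);
        sem_post(&chopstick[(i + 1) % N]);
        /* think */
    }
    return NULL;
}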
Real Time Systems

In real-time systems the (FIFO) queue policy is not practical, so a highest-priority-first policy is followed instead.
In shared systems we care about fairness, but in real time we focus on stability, which means the system has to meet the critical deadlines even if not all deadlines can be met.
Priority Inversion: if a higher-priority thread wants to enter the critical section while a lower-priority thread is in the critical section, it must wait for the lower-priority thread to complete.
Priority Inversion
Consider the execution of four periodic threads A, B, C, and D, and two resources Q and V:

    Thread   Priority   Execution Sequence   Arrival Time
    A        1          EQQQQE               0
    B        2          EE                   2
    C        3          EVVE                 2
    D        4          EEQVE                4

where E is executing for one time unit, Q is accessing resource Q for one time unit, and V is accessing resource V for one time unit.
Example
Priority Inheritance

From the previous figure we can see that thread D has the highest priority yet finishes last; this is the problem of priority inversion (threads with medium priority delay the higher-priority thread).

Priority Inheritance: let the lower-priority task run with the highest priority of the higher-priority tasks it blocks. In this way, the medium-priority tasks can no longer preempt the low-priority task that has blocked the higher-priority task.
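On POSIX systems, priority inheritance is available as a mutex protocol; a hedged sketch, assuming the realtime mutex options are supported and with an illustrative init_pi_mutex helper:

#include <pthread.h>

pthread_mutex_t m;

void init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* a low-priority holder of m temporarily inherits the priority
       of the highest-priority thread blocked on m */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&m, &attr);
    pthread_mutexattr_destroy(&attr);
}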
Priority Inheritance
Priority Ceiling

Priority Ceiling: a priority ceiling is assigned to each mutex, equal to the priority of the highest-priority task that may use this mutex.
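POSIX likewise exposes the ceiling policy as a mutex protocol; a short sketch under the same assumptions, with an illustrative init_ceiling_mutex helper:

#include <pthread.h>

pthread_mutex_t m;

void init_ceiling_mutex(int ceiling) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    /* any thread holding m runs at least at this ceiling priority */
    pthread_mutexattr_setprioceiling(&attr, ceiling);
    pthread_mutex_init(&m, &attr);
    pthread_mutexattr_destroy(&attr);
}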
Priority Ceiling
Synchronization with I/O devices

An I/O device has three states:
- Idle: inactive, no I/O occurring
- Busy: accepting output, or generating input
- Done: ready for a transaction
Moving from one state to another changes a status flag.
Synchronization

Busy-waiting loop: software checks the status flag in a loop that does not exit until the flag is set to one, i.e. until new data is available.

(Figure: the device is Busy while waiting for new data and becomes Done when new input is ready.)
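A minimal sketch of such a busy-wait on a memory-mapped status register (the register addresses and flag value below are hypothetical placeholders):

#include <stdint.h>

/* Hypothetical memory-mapped device registers; addresses are placeholders. */
#define STATUS_REG (*(volatile uint32_t *)0x40000000u)
#define DATA_REG   (*(volatile uint32_t *)0x40000004u)
#define DONE_FLAG  0x1u

uint32_t read_input(void) {
    while ((STATUS_REG & DONE_FLAG) == 0)
        ;                        /* busy wait until the done flag is set */
    return DATA_REG;             /* read the new data */
}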
Synchronization with no Buffering
Synchronization with Buffering
Synchronization with blind cycles
FIFO Queue
Synchronization-DMA
DMA