Chapter 5 Processor Scheduling

Goals of Processor Scheduling
• The main goal of processor (CPU) scheduling is
sharing the processor(s) among the processes in
the ready queue
• The critical activities of the OS are:
1. Carrying out the ordering of the allocation and
de-allocation of the CPU to the various processes
and threads, one at a time
2. Deciding when to de-allocate the CPU from one
process and allocate it to another
Priorities and Scheduling
• Priorities can be used with either preemptive or
non-preemptive scheduling.
• Depending on the goals of an operating system,
one or more of various scheduling policies can be
used; each will result in different system
performance.
• The criteria are based on relevant performance
measures, and the various scheduling policies are
evaluated against those criteria.
CPU Scheduling Policies
• First-come-first-served (FCFS)
• Shortest job first (shortest process next)
• Longest job first
• Priority scheduling
• Round robin (RR)
• Shortest remaining time (SRT), also known as
shortest remaining time first (SRTF)
Types of Schedulers
• Long-term scheduler (memory allocation)
– The OS decides when to create a new process from the
jobs waiting in the input queue and load it into memory
– Controls the degree of multiprogramming
• Medium-term scheduler
– The OS decides when and which process to swap out of
or into memory.
– This also controls the degree of multiprogramming.
• Short-term scheduler (processor scheduling)
– The OS decides when and to which process the CPU will
be allocated next for execution.
Basic Concepts
• The success of CPU scheduling depends on an
observed property of processes:
• CPU burst and I/O burst cycle – process execution
consists of a cycle of CPU execution and I/O wait
• A CPU burst is followed by an I/O burst
• The CPU burst distribution is of main concern
• Maximum CPU utilization is obtained with
multiprogramming
Durations of CPU-burst Times
[Figure: histogram of CPU-burst durations, showing a large number of
short bursts and a small number of long bursts]
Burst durations depend on the process and the computer, but they tend
to have a frequency curve similar to the figure: the curve is
characterized as exponential, with a large number of short CPU bursts
and a small number of long CPU bursts.
Scheduling with Multiple Queues
In a system with a multiprogramming OS, there are usually several
processes in the ready queue waiting to receive service from the CPU.
The degree of multiprogramming represents the number of processes in
memory.
A system with different groups of processes, each given a different
workload, is called a multiclass system. In multiclass systems, there
is potential for starvation (indefinite waiting) of one or more
processes.
CPU Scheduler
• The short-term scheduler selects from among the processes in
the ready queue, and allocates the CPU to one of them
– The queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
– Consider access to shared data
– Consider preemption while in kernel mode
– Consider interrupts occurring during crucial OS activities
Dispatcher
• Dispatcher module gives control of the CPU to
the process selected by the short-term
scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program
to restart that program
• Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running
Non-preemptive means that once a process
enters the running state, it is NOT removed
from the processor until it has completed its
service time; we allow the current process
to finish its CPU burst.
Preemptive means suspending work in
progress: the currently executing process
can be interrupted and made to wait,
based on some condition.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – number of processes that complete their
execution per time unit
• Turnaround time – amount of time to execute a
particular process, from arrival to completion
• Waiting time – amount of time a process has been
waiting in the ready queue
• Response time – amount of time from when a request
was submitted until the first response is produced, not
the final output (for time-sharing environments)
• Average turnaround time – the average time between
arrival time and completion time
CPU-IO Bursts of Processes
An important property of a process is its CPU-I/O burst cycle:
• CPU burst – an interval in which the process executes on the CPU
• I/O burst – an interval in which the process waits for an I/O
operation to complete
• An I/O-bound process has many short CPU bursts
• A CPU-bound process has a few long CPU bursts
• The OS tries to maintain a balance of these two types of processes
CPU Scheduling Policies
Categories of scheduling policies:
• Non-preemptive – no interruptions are
allowed; a process completes execution of its
CPU burst
• Preemptive – a process can be interrupted
before it completes its CPU burst
FCFS Scheduling
The first-come-first-served (FCFS) scheduling
algorithm is a non-preemptive policy
– The order of service is the same as the order of arrival
– Managed with a FIFO queue
– Simple to understand and implement
– Scheduling is FAIR
– The performance of this scheme is relatively poor
First-Come, First-Served (FCFS) Scheduling
FCFS is a basic scheduling policy; we can calculate some of its
performance metrics.
Process   Burst Time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1, P2, P3
• The Gantt chart for the schedule is:
P1 (0-24) | P2 (24-27) | P3 (27-30)
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
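The FCFS arithmetic above can be reproduced with a short program. This is a hypothetical sketch (the class name FcfsDemo and its methods are invented for illustration), assuming all processes arrive at time 0:

```java
// Hypothetical helper: FCFS waiting times for processes that all
// arrive at time 0, given their CPU burst times in arrival order.
public class FcfsDemo {
    // Returns the waiting time of each process under FCFS.
    public static int[] waitingTimes(int[] bursts) {
        int[] waits = new int[bursts.length];
        int clock = 0;
        for (int i = 0; i < bursts.length; i++) {
            waits[i] = clock;   // process i waits for all earlier bursts
            clock += bursts[i];
        }
        return waits;
    }

    public static double averageWait(int[] bursts) {
        int sum = 0;
        for (int w : waitingTimes(bursts)) sum += w;
        return (double) sum / bursts.length;
    }

    public static void main(String[] args) {
        // P1=24, P2=3, P3=3 arriving in that order, as on the slide.
        System.out.println(averageWait(new int[]{24, 3, 3})); // prints 17.0
    }
}
```

Reordering the same bursts as on the next slide (P2, P3, P1) drops the average to 3.0, which shows how sensitive FCFS is to arrival order.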
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is:
P2 (0-3) | P3 (3-6) | P1 (6-30)
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect – short processes stuck behind a long process
– Consider one CPU-bound and many I/O-bound
processes
Deterministic Modeling - FCFS
Consider the following workload: Five processes arrive at time
0, in the order: P1, P2, P3, P4, P5; with CPU burst times: 135,
102, 56, 148, 125 msec, respectively. The chart for FCFS is:
P1 (0-135) | P2 (135-237) | P3 (237-293) | P4 (293-441) | P5 (441-566)
The average waiting time for FCFS is:
(0 + 135 + 237 + 293 + 441) / 5 = 221.2 msec
Deterministic modeling is a mathematical model in which outcomes are
precisely determined through known relationships among states and
events, without any room for random variation.
Algorithm Evaluation
• How do we select a CPU-scheduling algorithm for an OS?
• Determine the criteria, then evaluate algorithms
• Deterministic modeling
– A type of analytic evaluation
– Takes a particular predetermined workload and defines
the performance of each algorithm for that workload
• Consider 5 processes arriving at time 0:
Deterministic Evaluation
Consider FCFS, SJF, and RR (quantum = 10 msec).
For each algorithm, calculate the average waiting time (SJF gives
the minimum):
FCFS is 28 ms:
The average waiting time is (0+10+39+42+49)/5 = 28
Non-preemptive SJF is 13 ms:
The average waiting time is (10+32+0+3+20)/5 = 13
RR is 23 ms:
The average waiting time is (0+32+20+23+40)/5 = 23
Shortest (Job) Process Next (SPN)
Consider the following workload: Five processes arrive at time 0, in
the order: P1, P2, P3, P4, P5; with CPU burst times 135, 102, 56, 148,
125 msec. Each burst is the interval between a process's start and
completion times:
P1: between start time 283 and completion time 418 = 135 (P1 runs 4th under SJF)
P2: between start time 56 and completion time 158 = 102 (P2 runs 2nd under SJF)
P3: between start time 0 and completion time 56 = 56 (P3 runs 1st under SJF)
P4: between start time 418 and completion time 566 = 148 (P4 runs 5th under SJF)
P5: between start time 158 and completion time 283 = 125 (P5 runs 3rd under SJF)
The average waiting time for SPN is:
((P1) 283 + (P2) 56 + (P3) 0 + (P4) 418 + (P5) 158) / 5 = 183 msec
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its
next CPU burst
– Use these lengths to schedule the process with the
shortest time
• SJF is optimal – gives minimum average
waiting time for a given set of processes
– The difficulty is knowing the length of the next CPU
request
– Could ask the user
Example of SJF
Process   Burst Time (msec)
P1        6
P2        8
P3        7
P4        3
• The SJF scheduling chart is:
P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24)
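The chart above can be generated by sorting on burst time. A minimal sketch of non-preemptive SJF for processes that all arrive at time 0 (the class name SjfDemo is invented for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch: non-preemptive SJF for processes that all
// arrive at time 0. Waiting times are returned indexed by process.
public class SjfDemo {
    public static int[] waitingTimes(int[] bursts) {
        Integer[] order = new Integer[bursts.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // Run the shortest burst first.
        Arrays.sort(order, Comparator.comparingInt(i -> bursts[i]));
        int[] waits = new int[bursts.length];
        int clock = 0;
        for (int i : order) {
            waits[i] = clock;   // each process waits for the shorter jobs
            clock += bursts[i];
        }
        return waits;
    }

    public static void main(String[] args) {
        // P1=6, P2=8, P3=7, P4=3 as on the slide; run order is P4,P1,P3,P2.
        int[] w = waitingTimes(new int[]{6, 8, 7, 3});
        System.out.println(Arrays.toString(w)); // prints [3, 16, 9, 0]
    }
}
```

The average waiting time here is (3 + 16 + 9 + 0)/4 = 7 msec.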
Stochastic Models
• As discussed in Ch. 3, simulation models have
variables that change values in a nondeterministic
manner (with uncertainty). A stochastic model
includes random variables to implement uncertain
attributes.
• Refer to the example in Appendix E (page 509) of
the textbook.
Normalized Turnaround Time (Ntat)
• The Ntat for each process is computed by dividing its
turnaround time by its CPU burst (service time)
For the FCFS workload above:
Process   Start   Completion   Wait   Turnaround   Ntat
P1        0       135          0      135          1.0
P2        135     237          135    237          2.323
P3        237     293          237    293          5.232
P4        293     441          293    441          2.979
P5        441     566          441    566          4.528
P1: 135/135 = 1.0
P2: 237/(237-135) = 2.323
P3: 293/(293-237) = 5.232
P4: 441/(441-293) = 2.979
P5: 566/(566-441) = 4.528
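The Ntat column can be computed mechanically. A hypothetical sketch (class name NtatDemo invented for illustration), assuming FCFS order and all arrivals at time 0:

```java
// Hypothetical sketch: normalized turnaround time (Ntat) under FCFS
// for processes that all arrive at time 0.
public class NtatDemo {
    // Ntat = turnaround time / CPU burst (service) time.
    public static double[] ntat(int[] bursts) {
        double[] result = new double[bursts.length];
        int completion = 0;
        for (int i = 0; i < bursts.length; i++) {
            completion += bursts[i];        // FCFS completion time
            int turnaround = completion;    // arrival time is 0
            result[i] = (double) turnaround / bursts[i];
        }
        return result;
    }

    public static void main(String[] args) {
        // Bursts from the table: 135, 102, 56, 148, 125.
        for (double v : ntat(new int[]{135, 102, 56, 148, 125})) {
            System.out.printf("%.3f%n", v); // matches the table, to rounding
        }
    }
}
```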
SJF (SPN) Scheduling
• The scheduler selects next the process with the
shortest CPU burst
• Basically a non-preemptive policy
• SJF is optimal – gives the minimum average
waiting time for a given set of processes.
Example of Shortest-Remaining-Time-First
• Now we add the concepts of varying arrival times and preemption to
the analysis
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
• The preemptive SJF Gantt chart is:
P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26)
• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 =
6.5 msec
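The chart above can be checked with a small unit-time SRTF simulation. This is a hypothetical sketch (the class name SrtfDemo is invented), not code from the textbook:

```java
// Hypothetical sketch: unit-time SRTF (preemptive SJF) simulation.
public class SrtfDemo {
    // Returns per-process waiting times (completion - arrival - burst).
    public static int[] waitingTimes(int[] arrival, int[] burst) {
        int n = arrival.length;
        int[] remaining = burst.clone();
        int[] completion = new int[n];
        int done = 0, clock = 0;
        while (done < n) {
            int pick = -1;
            for (int i = 0; i < n; i++) {
                // Shortest remaining time among processes that have arrived.
                if (arrival[i] <= clock && remaining[i] > 0
                        && (pick < 0 || remaining[i] < remaining[pick])) {
                    pick = i;
                }
            }
            if (pick < 0) { clock++; continue; }  // CPU idle until next arrival
            remaining[pick]--;
            clock++;
            if (remaining[pick] == 0) { completion[pick] = clock; done++; }
        }
        int[] waits = new int[n];
        for (int i = 0; i < n; i++) waits[i] = completion[i] - arrival[i] - burst[i];
        return waits;
    }

    public static void main(String[] args) {
        int[] w = waitingTimes(new int[]{0, 1, 2, 3}, new int[]{8, 4, 9, 5});
        double avg = (w[0] + w[1] + w[2] + w[3]) / 4.0;
        System.out.println(avg); // prints 6.5, matching the slide
    }
}
```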
SRTF Example
With shortest remaining time first (SRTF), a new process
that arrives may cause the scheduler to interrupt the
currently executing process.
Suppose that another process, P6, arrives at time 200
with a CPU burst of 65 msec.
This process (P6) arrives while P5 is executing and
has to wait until process P5 completes.
Shortest Remaining Time First
• Shortest remaining time (SRTF) is a preemptive
version of SPN scheduling.
• With this scheduling policy, a new process that
arrives will cause the scheduler to interrupt the
currently executing process if the CPU burst of the
newly arrived process is less than the remaining
service period of the process currently executing.
• There is then a context switch and the new
process is started immediately.
SRTF Scheduling
• When there are no arrivals, the scheduler selects
from the ready queue the process with the shortest
CPU service period (burst).
• As with SPN, this scheduling policy can be considered
multi-class because the scheduler gives preference to
the group of processes with the shortest remaining
service time and processes with the shortest CPU
burst.
Gantt Chart for SRTF Example
Turnaround time is completion time minus arrival time.
P3 (0-56) | P2 (56-158) | P5 (158-283) | P6 (283-348) | P1 (348-483) | P4 (483-631)
Process   CPU burst   Start   Wait   Turnaround   Ntat
P1        135         348     348    483          3.577
P2        102         56      56     158          1.549
P3        56          0       0      56           1.0
P4        148         483     483    631          4.263
P5        125         158     158    283          2.264
P6        65          283     83     148          2.276
P6 arrives at 200 and starts at 283:
P6 wait interval: 283 - 200 = 83
P6 turnaround time: 348 - 200 = 148
Priority Scheduling
• A priority number (integer) is associated with each
process
• The CPU is allocated to the process with the highest
priority (smallest integer = highest priority)
– Preemptive
– Nonpreemptive
• SJF is priority scheduling where priority is the inverse of
the predicted next CPU burst time
• Problem: starvation – low-priority processes may never
execute
• Solution: aging – as time progresses, increase the
priority of the process
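Aging can be sketched in code. This is a hypothetical illustration (not the textbook's algorithm; the class PriorityAgingDemo is invented): every time a waiting process is passed over, its priority value is decreased by 1, so it eventually becomes the highest-priority process and cannot starve.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: priority selection with aging.
// Smaller value = higher priority.
public class PriorityAgingDemo {
    // Returns the indices of processes in the order they are selected.
    public static List<Integer> runOrder(int[] priority) {
        int n = priority.length;
        int[] prio = priority.clone();
        boolean[] finished = new boolean[n];
        List<Integer> order = new ArrayList<>();
        for (int round = 0; round < n; round++) {
            int pick = -1;
            for (int i = 0; i < n; i++) {
                if (!finished[i] && (pick < 0 || prio[i] < prio[pick])) pick = i;
            }
            order.add(pick);
            finished[pick] = true;
            for (int i = 0; i < n; i++) {
                if (!finished[i]) prio[i]--;  // aging: waiting raises priority
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Priorities 3, 5, 2, 1, 4 (the values used in Example 3 later
        // in the chapter). With one burst each the order matches plain
        // priority; aging matters when high-priority work keeps arriving.
        System.out.println(runOrder(new int[]{3, 5, 2, 1, 4}));
        // prints [3, 2, 0, 4, 1]
    }
}
```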
Preemptive Scheduling
• A running process can be interrupted when its
time slice expires (round robin)
• A running process can be interrupted when its
remaining time is longer than the CPU burst of
an arriving process (shortest remaining time
first, SRTF)
• Priority preemptive scheduling (PPS) – a currently
running process will be preempted if a
higher-priority process arrives
Round Robin (RR)
• Each process gets a small unit of CPU time (time
quantum q), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the
end of the ready queue.
• If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time
in chunks of at most q time units at once. No process
waits more than (n-1)q time units.
• A timer interrupts every quantum to schedule the next process
• Performance:
– q large: RR behaves like FIFO
– q small: q must still be large with respect to the context
switch time, otherwise the overhead is too high
Interrupts in Round Robin
• RR scheduling is used in time-sharing systems.
It is the most common preemptive (with
interruption) scheduling policy
• When the executing interval of a process
reaches the time quantum, the timer will cause
the OS to interrupt the process
• The OS carries out a context switch to the next
process selected from the ready queue.
Example of RR with Time Quantum (time slice) = 4
Process   Burst Time
P1        24
P2        3
P3        3
• The Gantt chart is:
P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30)
– Typically, higher average turnaround than SJF, but
better response
– q should be large compared to the context switch time
– q is usually 10 ms to 100 ms; a context switch takes < 10 usec
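The RR chart above can be reproduced with a queue-based sketch (the class RrDemo is hypothetical), assuming all processes arrive at time 0 and context-switch time is negligible:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: round robin with all processes arriving at time 0.
public class RrDemo {
    // Returns per-process waiting times (completion - burst).
    public static int[] waitingTimes(int[] burst, int quantum) {
        int n = burst.length;
        int[] remaining = burst.clone();
        int[] completion = new int[n];
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < n; i++) ready.add(i);
        int clock = 0;
        while (!ready.isEmpty()) {
            int p = ready.poll();
            int slice = Math.min(quantum, remaining[p]);
            clock += slice;
            remaining[p] -= slice;
            if (remaining[p] > 0) ready.add(p);  // preempted: back of queue
            else completion[p] = clock;
        }
        int[] waits = new int[n];
        for (int i = 0; i < n; i++) waits[i] = completion[i] - burst[i];
        return waits;
    }

    public static void main(String[] args) {
        // Bursts 24, 3, 3 with q = 4, as on the slide.
        int[] w = waitingTimes(new int[]{24, 3, 3}, 4);
        System.out.println(java.util.Arrays.toString(w)); // prints [6, 4, 7]
    }
}
```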
Turnaround Time Varies with the Time Quantum (Time Slice)
• A rule of thumb: 80% of CPU bursts should be shorter than q
Example with Round Robin
The chart for round robin, with quantum = 40 msec, is:
The average waiting time is: (0 + 40 + 80 + 120 + 160)/5 = 80
msec.
Evaluation of CPU Schedulers by Simulation
Scheduling Algorithm Evaluation
• Define criteria for evaluation, and measure the performance
of the computer system
– Maximize CPU utilization under certain constraints
– Minimize response time
– Maximize throughput under certain constraints
• Analytic evaluation – use the algorithm and the system
workload
– Deterministic modeling
– Queuing models
• Simulation – involves programming a model of
the computer system
Example 3 Scheduling
Assume processes P1, P2, P3, P4 and P5 arrive at times 1, 2, 4, 5, 5,
respectively.
The CPU burst and the priority assigned to each process are:
Process   CPU Burst   Priority
P1        45          3
P2        5           5
P3        15          2
P4        20          1
P5        25          4
For FCFS, RR, SJF and PR scheduling, determine
a) the turnaround time for every process,
b) the waiting time for every process and the average waiting time,
c) the throughput of the system.
Use a time quantum of 10 time units, and negligible context-switch time.
Heuristic Algorithm
1. Allocate the CPU to the highest-priority process.
2. When a process is selected for execution, assign it a time slice.
3. If the process requests an I/O operation before the time slice
expires, raise its priority (i.e., assume it will carry out another
I/O request soon).
4. If the time slice expires, lower its priority (i.e., assume it is
now in a CPU burst) and allocate the CPU to the highest-priority
ready process.
Real-Time Scheduling Policies
• These scheduling policies attempt to maintain
the CPU allocated to the high-priority real-time
processes.
• One of the goals for this kind of scheduling is to
guarantee fast response of the real-time
processes.
Real-time Processes
• The real-time processes, each with its own service
demand, priority and deadline, compete for the CPU.
• Real-time processes must complete their service
before their deadlines expire.
• The second general goal of a real-time scheduler is to
guarantee that the processes can be scheduled in
some manner in order to meet their individual
deadlines.
• The performance of the system is based on this
guarantee.
Real-Time Scheduling Policies
• There are two widely known real-time
scheduling policies:
– The rate monotonic scheduling (RMS)
– The earliest deadline first scheduling (EDFS).
Multilevel Queues
• In multi-class systems there are several classes of processes
and each class of process is assigned a different priority.
• Multilevel queues are needed when the system has different
categories of processes.
• Each category needs a different type of scheduling.
• For example, one category of process requires interactive
processing and another category requires batch processing.
• For each category there may be more than one priority used.
Multiple Ready Queues
Multiple Processors
• For multiple-processor systems, there are several
ways a system is configured.
• The scheduling of the processes on any of the
processors is the simplest approach to use. This
assumes that the processors are tightly coupled, that
is, they share memory and other important
hardware/software resources.
• More advanced configurations and techniques, for
example parallel computing, are outside the scope
of this book.
Single Queue-Multiple Processors
Multiple Queues-Multiple Processors
Chapter 6
Synchronization Principles
Synchronization as a Concept
Synchronization is the coordination of the
activities of two or more processes, usually
used to carry out the following in a
computer system:
– Processes compete for resources in a
mutually exclusive manner;
– Processes cooperate in sequencing specific events in
their individual activities.
Types of Synchronization
I. No Synchronization (Race Condition)
• Two or more processes access and manipulate
the same shared data item concurrently and
independently
• The outcome of the execution depends on the
"speed" of the processes and the particular
order in which each process accesses the shared
data item
– Most likely their executions will NOT produce a
correct result, since synchronization is absent
– No data integrity; results are generally incorrect
II. Mutual Exclusion
A synchronization principle needed when a group of
processes share a resource
• Each process should access the resource in a
mutually exclusive manner
• Solves the race condition – only one process is
allowed to access the shared resource at a time
• Two or more processes are prohibited from
concurrently accessing a shared resource
• This is the most basic synchronization principle
III. Critical Section Problem
• Consider a system of n processes {p0, p1, ..., pn-1}
• Each process has a critical section segment of code.
• A critical section is a piece of code in which a process or
thread accesses a common resource.
– The process may be changing common variables,
updating a table, writing a file, etc.
– When one process is in its critical section, no other may
be in its critical section
• The critical-section problem is to design a protocol to solve
this: each process must ask permission to enter its
critical section in an entry section, may follow the critical
section with an exit section, and then executes its remainder
section
Solution to Critical-Section Problem
1. Mutual Exclusion – the requirement of ensuring
that no two concurrent processes are in their
critical sections at the same time; mutual exclusion
(abbreviated mutex) algorithms are used in
concurrent programming to avoid the simultaneous use
of a common resource, such as a global variable, by
pieces of computer code called critical sections.
2. Progress – if no process is executing in its critical section
and there exist some processes that wish to enter their
critical sections, then the selection of the process that
will enter its critical section next cannot be postponed
indefinitely.
Solution to Critical-Section Problem (Cont.)
3. Bounded Waiting – a bound must exist on the number of
times that other processes are allowed to enter their
critical sections after a process has made a request to
enter its critical section and before that request is
granted.
– Assume that each process executes at a nonzero
speed
– No assumption concerning the relative speed of the n
processes
Road Intersection
• Two vehicles, one moving on Road A and the other
moving on Road B are approaching the intersection
• If the two vehicles reach the intersection at the same
time, there will be a collision, which is an undesirable
event.
• The road intersection is a critical section for both roads
because it is part of Road A and also part of Road B, but
only one vehicle should reach the intersection at any
given time.
• Therefore, mutual exclusion should be applied on the
road intersection, the critical section.
Example of a Critical Section
A simple analogy that helps in understanding the concept of a critical
section is the example of a road intersection.
Two vehicles: one moving on
Road A and the other moving on
Road B.
1. If the two vehicles reach the
intersection at the same time,
there will be a collision.
2. The road intersection is
critical for both roads, because
only one vehicle should be in
the intersection at a time.
3. Therefore, mutual
exclusion should be applied at
the road intersection.
Critical Sections in Two Processes
• Mutual exclusion: only one process may be executing the critical
section at any time.
• Absence of starvation: processes wait a finite time interval to enter
their critical sections.
• Absence of deadlock: processes should not block each other
indefinitely.
• Progress: a process will take a finite time interval to execute the
critical section.
Critical-Section Handling in OS
Two approaches, depending on whether the kernel is
preemptive or non-preemptive:
– Preemptive – allows preemption of a process
when running in kernel mode
– Non-preemptive – a process runs until it exits kernel
mode, blocks, or voluntarily yields the CPU
• Essentially free of race conditions in kernel mode
Example of Critical Sections
• Shared resource: printer buffer
• Two processes:
– Producer:
1. produces a character,
2. places the character in the buffer.
– Consumer:
1. removes a character from the buffer,
2. consumes the character.
Definition of a Critical Section Protocol
1. Entry section –
check if any other process is
executing its critical section; if so, the
current process must wait,
otherwise set flags and proceed
2. Critical section
3. Exit section –
clear flags to allow any waiting
process to enter its critical
section.
One solution to the critical section
problem is to use a semaphore to
share resources.
Semaphores
• A semaphore is similar to a traffic light: the idea is
simply to have every process invoke the wait
operation in the entry section, before its critical
section, and invoke the signal operation in its exit
section.
• It is an abstract data type that functions as a
software synchronization tool and can be used
to implement a solution to the critical
section problem:

mutex.wait();
[ critical section ]
mutex.signal();
Semaphore Objects
Semaphores are objects that must be initialized and can be
manipulated only with two atomic operations:
wait and signal.
To implement a solution to the critical section
problem, the critical section in every process
is placed between the wait and the signal operations.
Critical Section Protocol
A semaphore object
referenced by mutex, is
used here with the two
operations: wait and
signal.
Semaphore Class
class Semaphore {
    private int sem;
    private Pqueue sem_q;      // semaphore queue
    public Semaphore(int initval);
    public void wait();
    public void signal();
} // end of class Semaphore
Semaphore
• A synchronization tool that provides more sophisticated ways (than
mutex locks) for processes to synchronize their activities.
• Semaphore S – integer variable
• Can only be accessed via two indivisible (atomic) operations
– wait() and signal()
• Originally called P() and V()
• Definition of the wait() operation:

wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

• Definition of the signal() operation:

signal(S) {
    S++;
}
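The busy wait above can be avoided by blocking the caller. A minimal sketch using Java monitors follows (the class name SimpleSemaphore is invented for illustration; the operations are named acquire/release only because Object already defines wait()):

```java
// Hypothetical sketch: a blocking counting semaphore built on Java
// monitors, avoiding the busy wait shown above.
public class SimpleSemaphore {
    private int value;

    public SimpleSemaphore(int initial) { value = initial; }

    // The wait operation: block (instead of spinning) until value > 0.
    public synchronized void acquire() {
        while (value <= 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        value--;
    }

    // The signal operation: increment and wake one blocked thread, if any.
    public synchronized void release() {
        value++;
        notify();
    }

    // Current value, exposed for demonstration only.
    public synchronized int available() { return value; }

    public static void main(String[] args) {
        SimpleSemaphore mutex = new SimpleSemaphore(1);
        mutex.acquire();
        // ... critical section ...
        mutex.release();
        System.out.println(mutex.available()); // prints 1
    }
}
```

Production Java code would normally use java.util.concurrent.Semaphore instead; the point of the sketch is that wait/notify replace the spin loop.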
Semaphore Implementation
• Must guarantee that no two processes can execute the
wait() and signal() on the same semaphore at the
same time
• Thus, the implementation becomes the critical section
problem where the wait and signal code are placed in
the critical section
– Could now have busy waiting in critical section
implementation
• But implementation code is short
• Little busy waiting if critical section rarely occupied
• Note that applications may spend lots of time in critical
sections and therefore this is not a good solution
Classical Synchronization Problems
• See the simulations on the CD that came
with the textbook.
Bounded-Buffer (Producer-Consumer) Problem
• There are two processes that execute
continuously, and a shared buffer
– Buffer – a container of N slots, filled by the producer
process and emptied by the consumer process
The problem involves two processes:
1) The Producer process generates data items and
inserts them in the buffer, one by one.
2) The Consumer process removes the data
items from the buffer and consumes them, one by
one.
Buffering and Caching
• Two processes in the I/O system should
make the physical I/O requests as big as
possible.
• The application's logical I/O requests
should copy data to/from a large
memory buffer. The physical I/O requests
then transfer the entire buffer.
Producer-Consumer Problem
The following restrictions apply:
• The producer cannot deposit a data item into the
buffer when the buffer is full
• The consumer cannot remove a data item from the
buffer when the buffer is empty
Data Declarations for Solution
// Shared data
int N = 100;             // size of buffer
char buffer[N];          // buffer implementation
char nextp, nextc;
Semaphorec full, empty;  // counting semaphores
Semaphoreb mutex;        // binary semaphore
Initializing Semaphore Objects
full = new Semaphorec(0);   // counting semaphore object
empty = new Semaphorec(N);  // counting semaphore object
mutex = new Semaphoreb(1);  // binary semaphore object
Solution to the Producer-Consumer Problem
• A counting semaphore, full, for counting the
number of full slots
• A counting semaphore, empty, for counting
the number of empty slots
• A binary semaphore, mutex, for mutual
exclusion
Implementation of Producer
Producer process:

while (true) {
    ...
    // produce a data item
    ...
    empty.wait();    // any empty slots? decrement empty slots
    mutex.wait();    // attempt exclusive access to buffer
    ...
    // instructions to insert the data item into the buffer
    ...
    mutex.signal();  // release exclusive access to buffer
    full.signal();   // increment full slots
    ...
}
Implementation of Consumer
Consumer process:

while (true) {
    ...
    full.wait();     // any full slots? decrement full slots
    mutex.wait();    // attempt exclusive access to buffer
    ...
    // remove a data item from the buffer and put it in nextc
    ...
    mutex.signal();  // release exclusive access to buffer
    empty.signal();  // increment empty slots
    // consume the data item in nextc
    ...
}
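The producer and consumer pseudocode above can be turned into a runnable sketch with java.util.concurrent.Semaphore (the class name BoundedBufferDemo, the buffer size, and the demo input are invented for illustration):

```java
import java.util.concurrent.Semaphore;

// Hypothetical runnable version of the producer/consumer pseudocode.
public class BoundedBufferDemo {
    static final int N = 5;
    static final char[] buffer = new char[N];
    static int in = 0, out = 0;

    static final Semaphore empty = new Semaphore(N); // empty slots
    static final Semaphore full  = new Semaphore(0); // full slots
    static final Semaphore mutex = new Semaphore(1); // exclusive buffer access

    // Produces every character of items while consuming concurrently;
    // returns the consumed characters in the order they were removed.
    public static String run(String items) {
        StringBuilder consumed = new StringBuilder();
        Thread producer = new Thread(() -> {
            try {
                for (char c : items.toCharArray()) {
                    empty.acquire();   // any empty slots?
                    mutex.acquire();   // exclusive access to buffer
                    buffer[in] = c;
                    in = (in + 1) % N;
                    mutex.release();
                    full.release();    // one more full slot
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < items.length(); i++) {
                    full.acquire();    // any full slots?
                    mutex.acquire();
                    char c = buffer[out];
                    out = (out + 1) % N;
                    mutex.release();
                    empty.release();   // one more empty slot
                    consumed.append(c); // "consume" the item
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return consumed.toString();
    }

    public static void main(String[] args) {
        // Items come out in FIFO order even though the buffer has only 5 slots.
        System.out.println(run("HELLO-WORLD")); // prints HELLO-WORLD
    }
}
```

The counting semaphores make the producer block when the buffer is full and the consumer block when it is empty, exactly as the restrictions on the earlier slide require.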
Simulation Models for the Bounded
Buffer Problem
• The basic simulation model of the bounded-buffer
problem (producer-consumer problem) includes five
classes: Semaphore, Buffer, Producer, Consumer, and
Consprod.
• The model with graphics and animation includes
additional classes for displaying the GUI and the
animation.
• The models, implemented in Java with the PsimJ
simulation package, are stored in the archive files
consprod.jar and consprodanim.jar.
• The C++ version is in the file consprod.cpp.
GUI for the Simulation Model
Results of a Simulation Run
Project: Producer-Consumer Model
Run at: Thu Sep 15 00:00:11 EDT 2006 by jgarrido on Windows XP, localhost
Input Parameters
Simulation Period: 740
Producer Mean Period: 12.5
Prod Mean Buffer Access Period: 6.75
Coef of Variance: 0.13
Consumer Mean Period: 17.5
Cons Mean Buffer Access Period: 4.5
Buffer Size: 7
----------------------------------------------------Results of simulation: Producer-Consumer Model
Total Items Produced: 23
Mean Prod Buffer Access Time: 0006.735
Total Prod Buffer Access Time: 0154.916
Mean Producer Wait Time: 0000.760
Total Producer Wait Time: 0017.480
Total Items Consumed: 23
Mean Cons Buffer Access Time: 0004.575
Total Cons Buffer Access Time: 0105.218
Mean Consumer Wait Time: 0007.896
Total Consumer Wait Time: 0181.597
Readers and Writers Problem
• Processes attempt to access a shared data object,
then terminate
• There are two types of processes:
– Readers
– Writers
There are several processes of each type, and they
share a data resource:
1. A reader accesses the data resource to read ONLY;
2. A writer needs access to the data resource
to update the data.
Access to Shared Data
• Writers need "individual" exclusive access to
the shared data object
• Readers need "group" exclusive access to the
shared data object
• In this problem, there is a need to define
two levels of mutual exclusion:
– Individual mutually exclusive access to the
shared data resource
– Group exclusive access to the shared data
resource
First and Last Readers
Solution for the Reader-Writer Problem
• The first reader competes with writers to gain
group exclusive access to the shared data object.
(Readers have priority – use two binary semaphores: wrt, used
by writer processes, and mutex, used by reader processes)
• The last reader releases group exclusive access
to the shared data object
– Therefore, the last reader gives a chance to
waiting writers
Identifying First and Last Readers
• A counter variable, readcount, is needed to keep
a count of the number of reader processes that
are accessing the shared data
• When a reader requests access to the shared
data, the counter is incremented
• When a reader completes access to the shared
data, the counter is decremented
Reader Process
Reader() {
    ...
    mutex.wait();
    increment readcount;
    if readcount equals 1 then    // first reader?
        wrt.wait();               // gain group access to shared data
    mutex.signal();
    // critical section: read shared data object
    mutex.wait();
    decrement readcount;
    if readcount equals zero then // last reader?
        wrt.signal();             // release group access to shared data
    mutex.signal();
    ...
} // end Reader()
Asynchronous Communication
Two processes communicate indirectly via a
mailbox that holds the message temporarily.
• Similar to the producer-consumer problem
• Indirect communication between sender and
receiver processes
• No direct interaction between the processes
• A mailbox stores the message
• The sender does not normally need to wait
• The receiver waits until a message becomes
available
The sender process sends a message by placing
it in a slot of the system mailbox; the receiver
process gets the message from the mailbox.
The sender and/or receiver processes may
have to wait if the mailbox is full or empty.
Synchronous Communication
• Direct interaction between processes
• Processes have to wait until the
communication occurs
• The processes have to participate at the same
time during the communication interval
• A channel is established between the
processes