CS323-650 Operating Systems
Chapter 4
Process Scheduling
CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU-scheduling algorithms.
Different scheduling algorithms have different properties and may favor one class of
processes over another. In choosing which algorithm to use in a particular situation, we must
consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU-scheduling algorithms. Which
characteristics are used for comparison can make a substantial difference in the determination of the
best algorithm. Criteria that are used include the following:
- Fairness: make sure each process gets its fair share of the CPU
- CPU utilization (efficiency): keep the CPU as busy as possible
- Throughput: the number of processes that are completed per time unit
- Turnaround time: the interval from the time of submission to the time of completion
- Waiting time: the time spent waiting in the ready queue
- Response time: the time from the submission of a request until the first response is produced
4.1 First-Come, First-Served (FCFS) Scheduling
The implementation of the FCFS is easily managed with a FIFO queue. When a process
enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the ready queue.
Example:

    Process   Burst Time
    P1        24
    P2        3
    P3        3

(The ready queue holds the PCBs in arrival order, with P1 at the head and P3 at the tail.)

Gantt chart:

    | P1                     | P2 | P3 |
    0                        24   27   30

Computer Science, Prince of Songkla University
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0+24+27)/3 = 17 milliseconds. If the
processes arrive in the order P2, P3, P1, however, the results will be as follows:
Gantt chart:

    | P2 | P3 | P1                     |
    0    3    6                        30
The average waiting time is now (6+0+3)/3 = 3 milliseconds. Thus, the average waiting time under
FCFS scheduling is generally not minimal, and may vary substantially if the processes' CPU-burst
times vary greatly. There is a convoy effect, as all the other processes wait for the one big process
to get off the CPU. The FCFS scheduling algorithm is nonpreemptive.
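The waiting-time arithmetic above can be checked with a minimal sketch (the function name here is ours, not from the text):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when served strictly in list order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits while all earlier bursts run
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3):
print(fcfs_waiting_times([24, 3, 3]))   # → [0, 24, 27], average 17.0
# Order P2, P3, P1:
print(fcfs_waiting_times([3, 3, 24]))   # → [0, 3, 6], average 3.0
```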
4.2 Shortest-Job-First (SJF) Scheduling
When the CPU is available, it is assigned to the process that has the smallest next CPU
burst. If two processes have the same length next CPU burst, FCFS scheduling is used to break the
tie.
Example (nonpreemptive):

    Process   Burst Time
    P1        6
    P2        8
    P3        7
    P4        3

Gantt chart:

    | P4 | P1    | P3     | P2      |
    0    3       9        16        24
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for
process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3+16+9+0)/4 = 7
milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average
waiting time for a given set of processes.
The real difficulty with the SJF algorithm is knowing the length of the next CPU request. For
long-term (job) scheduling in a batch system, we can use the process time limit. However, SJF
cannot be implemented at the level of short-term CPU scheduling, because there is no way to know
the length of the next CPU burst. One approach is to approximate SJF scheduling: the next CPU
burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
Let t_n be the length of the nth CPU burst, and let τ_{n+1} be our predicted value for the next CPU
burst. Then, for α, 0 ≤ α ≤ 1, define the exponential average

    τ_{n+1} = α·t_n + (1 − α)·τ_n.

Example, α = 1/2:
[Figure: CPU burst lengths and their exponential-average predictions over time, with initial guess τ_0 = 10]

    CPU burst (t_i):        6   4   6   4   13   13   13  ...
    "guess" (τ_i):     10   8   6   6   5    9   11   12  ...
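The prediction table above can be reproduced with a few lines of Python (an illustrative sketch; the function name is ours):

```python
def predict_bursts(bursts, tau0, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    guesses = [tau0]
    for t in bursts:
        guesses.append(alpha * t + (1 - alpha) * guesses[-1])
    return guesses

print(predict_bursts([6, 4, 6, 4, 13, 13, 13], tau0=10))
# → [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```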
The SJF algorithm may be either preemptive or nonpreemptive. The choice arises when a
new process arrives at the ready queue while a previous process is executing. The new process may
have a shorter next CPU burst than what is left of the currently executing process. A preemptive SJF
algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will
allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes
called shortest-remaining-time-first scheduling.
Example (preemptive):

    Process   Arrival Time   Burst Time
    P1        0              8
    P2        1              4
    P3        2              9
    P4        3              5

Gantt chart:

    | P1 | P2   | P4    | P1      | P3         |
    0    1      5       10        17           26
Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1.
The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4
milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time
for this example is ((10-1)+(1-1)+(17-2)+(5-3))/4 = 26/4 = 6.5 milliseconds. A nonpreemptive SJF
scheduling would result in an average waiting time of 7.75 milliseconds.
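A small brute-force simulation (illustrative only; one-millisecond steps, with ties broken by list position as a stand-in for FCFS) reproduces the preemptive figures above:

```python
def srtf_waiting_times(procs):
    """Preemptive SJF (shortest-remaining-time-first), simulated in
    1-millisecond steps.  procs is a list of (arrival, burst) pairs."""
    remaining = [burst for _, burst in procs]
    finish = [0] * len(procs)
    time = done = 0
    while done < len(procs):
        # processes that have arrived and still have work left
        ready = [i for i, (arrival, _) in enumerate(procs)
                 if arrival <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        cur = min(ready, key=lambda i: remaining[i])  # least remaining time
        remaining[cur] -= 1
        time += 1
        if remaining[cur] == 0:
            finish[cur] = time
            done += 1
    # waiting time = finish - arrival - burst
    return [finish[i] - a - b for i, (a, b) in enumerate(procs)]

waits = srtf_waiting_times([(0, 8), (1, 4), (2, 9), (3, 5)])
print(waits, sum(waits) / len(waits))   # → [9, 0, 15, 2] 6.5
```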
4.3 Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal-priority processes are scheduled in FCFS order. However, there is no general
agreement on whether 0 is the highest or lowest priority. The SJF is a special case of the general
priority scheduling algorithm, where the priority (p) is the inverse of the (predicted) next CPU burst
(τ): p = 1/τ. The larger the CPU burst, the lower the priority, and vice versa.
Example:

    Process   Burst Time   Priority
    P1        10           3
    P2        1            1
    P3        2            3
    P4        1            4
    P5        5            2

Gantt chart:

    | P2 | P5    | P1        | P3 | P4 |
    0    1       6           16   18   19
The average waiting time is 8.2 milliseconds.
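The 8.2-millisecond figure can be verified with a short sketch (assumptions: nonpreemptive, all processes arriving at time 0; the helper name is ours):

```python
def priority_waiting_times(procs):
    """Nonpreemptive priority scheduling, all processes arriving at time 0.
    procs: (name, burst, priority) triples; a smaller number means a higher
    priority, and the stable sort keeps FCFS order for equal priorities."""
    waits, clock = {}, 0
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = clock
        clock += burst
    return waits

waits = priority_waiting_times(
    [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3), ("P4", 1, 4), ("P5", 5, 2)])
print(waits, sum(waits.values()) / len(waits))   # average → 8.2
```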
A major problem with priority scheduling algorithms is indefinite blocking or starvation. It can
leave some low-priority processes waiting indefinitely for the CPU. A solution to this problem is aging.
Aging is a technique of gradually increasing the priority of processes that wait in the system for a
long time. For example, if priorities range from 0 (low) to 127 (high), we could increment a waiting
process’s priority by 1 every 15 minutes.
4.4 Round-Robin (RR) Scheduling
The round-robin scheduling algorithm is designed especially for time-sharing systems. A
small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from
10 to 100 milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes
around the ready queue, allocating the CPU to each process for a time interval of up to 1 time
quantum. The process may have a CPU burst less than 1 time quantum. In this case, the process
itself will release the CPU voluntarily.
Example, time quantum = 4 milliseconds:

    Process   Burst Time
    P1        24
    P2        3
    P3        3
Gantt chart:

    | P1   | P2 | P3 | P1   | P1   | P1   | P1   | P1   |
    0      4    7    10     14     18     22     26     30
The average waiting time is 17/3 ≈ 5.67 milliseconds.
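A minimal RR simulation (all arrivals at time 0; the names here are ours) reproduces these figures:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round-robin with all processes arriving at time 0; returns the
    waiting time of each process (finish time minus its own burst)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    time, finish = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # burst may end before the quantum
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # back to the tail of the queue
        else:
            finish[i] = time
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waiting_times([24, 3, 3], quantum=4)
print(waits, sum(waits) / len(waits))   # → [6, 4, 7], average ≈ 5.67
```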
The RR scheduling algorithm is preemptive. The performance of the RR algorithm depends
heavily on the size of the time quantum. At one extreme, if the time quantum is very large (infinite),
the RR is the same as the FCFS. If the time quantum is very small (say 1 microsecond), the RR is
called processor sharing, and appears (in theory) to the users as though each of n processes has its
own processor running at 1/n the speed of the real processor.
In software, however, we also need to consider the effect of context switching on the
performance of RR scheduling. For example, a smaller time quantum increases the number of
context switches:

    process time = 10

    quantum   context switches
    12        0
    6         1
    1         9
Thus, we want the time quantum to be large with respect to the context-switch time.
Turnaround time also depends on the size of the time quantum, as the following figure suggests:

[Figure: average turnaround time versus time quantum (1 to 7) for four processes with times
P1 = 6, P2 = 3, P3 = 1, P4 = 7]
The average turnaround time of a set of processes does not necessarily improve as the
time-quantum size increases. In general, the average turnaround time can be improved if most
processes finish their next CPU burst in a single time quantum. For example, given three processes
of 10 time units each and a quantum of 1 time unit, the average turnaround time is 29. If the time
quantum is 10, however, the average turnaround time drops to 20.
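The 29 and 20 figures can be checked with a small RR turnaround sketch (assumptions: all processes arrive at time 0 and context-switch cost is ignored; the function name is ours):

```python
from collections import deque

def rr_avg_turnaround(bursts, quantum):
    """Average turnaround time under round-robin, all arrivals at time 0."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    time, finish = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)
        else:
            finish[i] = time               # turnaround = finish - arrival(0)
    return sum(finish) / len(finish)

print(rr_avg_turnaround([10, 10, 10], quantum=1))    # → 29.0
print(rr_avg_turnaround([10, 10, 10], quantum=10))   # → 20.0
```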
4.5 Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into separate queues.
For example, a common division is made between foreground (interactive) processes and
background (batch) processes. Processes are permanently assigned to one queue. Each queue has
its own scheduling algorithm. For example, the foreground queue might be scheduled by an RR
algorithm, while the background queue is scheduled by an FCFS algorithm. In addition, there must
be scheduling between the queues. This is commonly a fixed-priority preemptive scheduling. For
example, the foreground queue may have absolute priority over the background queue. Another
possibility is to time slice between the queues. Each queue gets a certain portion of the CPU time,
which it can then schedule among the various processes in its queue. For instance, in the
foreground-background queue example, the foreground queue can be given 80 percent of the CPU
time for RR scheduling among its processes, while the background queue receives 20 percent of the
CPU time to give to its processes in an FCFS manner.
Example (highest to lowest priority):

    system processes
    interactive processes
    batch processes
    student processes
4.6 Multilevel Feedback Queue Scheduling
Multilevel feedback queue scheduling, allow a process to move between queues. If a
process uses too much CPU time, it will be moved to a lower-priority queue.Similarly, a process that
waits too long in a lower-priority queue may be moved to a higher-priority queue. This is a form of
aging that prevents starvation.
Example (three queues, highest priority first):

    Queue 0: RR, quantum = 8
    Queue 1: RR, quantum = 16
    Queue 2: FCFS
4.7 Multiple Processor Scheduling
One major factor is the types of processors involved. The processors may be identical (a
homogeneous system) or different (a heterogeneous system). If the processors are different, the
options are relatively limited. Each processor has its own queue and its own scheduling algorithm. If
several identical processors are available, then load sharing can occur. It would be possible to
provide a separate queue for each processor. In this case, however, one processor could be idle,
with an empty queue, while another processor was very busy. To prevent this situation, we use a
common ready queue. All processes go into one queue and are scheduled onto any available
processor. Two scheduling approaches may be used:
- Symmetric multiprocessing: each processor examines the common ready queue and
selects a process to execute. We must ensure that two processors do not choose the
same process, and that processes are not lost from the queue.
- Asymmetric multiprocessing: appoint one processor as the scheduler for the other
processors, thus creating a master-slave structure.
4.8 Real-Time Scheduling
Real-time systems are generally categorized as hard real time, meaning there are absolute
deadlines that must be met, or else, and soft real time, meaning that missing an occasional deadline
is tolerable. While in theory it is possible to turn a general-purpose operating system into a real-time
system by using one of the real-time scheduling (such as rate monotonic algorithm, earliest deadline
first, see text book), in practice the context-switching overhead of general-purpose system is so
large that real-time performance can only be archived for applications with easy time constraints. As
a consequence, most real-time work uses special real-time operating systems that have certain
important properties. Typically these include a small size, fast interrupt time, rapid context switch,
short interval during which interrupts are disabled, and the ability to manage multiple timers in the
millisecond or microsecond range.
4.9 Algorithm Evaluation
- Analytic Evaluation
  - Deterministic Modeling
    Advantage: simple and fast
    Disadvantage: too specific, and requires too much exact knowledge
  - Queueing Models
    We can use Little's formula, n = λ × W, where
        n = the average queue length,
        W = the average waiting time in the queue, and
        λ = the average arrival rate of new processes into the queue.
    Advantage: useful for comparing algorithms
    Disadvantage: accuracy depends on the arrival and service distributions, which can be
    difficult to work with
- Simulations
  Advantage: more accurate
  Disadvantage:
    - very expensive
    - may still be inaccurate for the real system
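As a purely illustrative application of Little's formula (the numbers below are assumed, not from the text):

```python
# Little's formula: n = λ × W, in steady state.
arrival_rate = 7          # λ: processes arriving per second (assumed)
avg_waiting_time = 2      # W: seconds each process spends in the queue (assumed)
avg_queue_length = arrival_rate * avg_waiting_time
print(avg_queue_length)   # → 14 processes in the queue, on average
```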