Short-Term Scheduling

Operating Systems, Chapter 9
UNIPROCESSOR SCHEDULING

TYPES OF PROCESSOR SCHEDULING
• The aim of processor scheduling is to assign processes to be executed by the processor or processors over time, in a way that meets system objectives, such as response time, throughput, and processor efficiency. In many systems, this scheduling activity is broken down into three separate functions: long-, medium-, and short-term scheduling. The names suggest the relative time scales with which these functions are performed.
• Long-term scheduling is performed when a new process is created.
• Medium-term scheduling is a part of the swapping function.
• Short-term scheduling is the actual decision of which ready process to execute next.
Long-Term Scheduling

The long-term scheduler determines which programs are admitted to
the system for processing. Thus, it controls the degree of
multiprogramming.

The decision as to when to create a new process is generally driven by the desired degree of multiprogramming. The more processes that are created, the smaller is the percentage of time that each process can be executed (i.e., more processes are competing for the same amount of processor time).
Medium-Term Scheduling

Medium-term scheduling is part of the swapping function. The issues involved are discussed in Chapters 3, 7, and 8. Typically, the swapping-in decision is based on the need to manage the degree of multiprogramming.
Short-Term Scheduling

The short-term scheduler, also known as the dispatcher,
executes most frequently and makes the fine-grained
decision of which process to execute next.

The short-term scheduler is invoked whenever an event occurs that may lead to the blocking of the current process or that may provide an opportunity to preempt a currently running process in favor of another.
SCHEDULING ALGORITHMS
Short-Term Scheduling Criteria

The main objective of short-term scheduling is to allocate processor
time in such a way as to optimize one or more aspects of system
behavior.

The commonly used criteria can be categorized along two dimensions. First, we can make a distinction between user-oriented and system-oriented criteria. User-oriented criteria relate to the behavior of the system as perceived by the individual user or process. An example is response time in an interactive system.

Other criteria are system oriented. That is, the focus is on effective
and efficient utilization of the processor. An example is throughput.

Another dimension along which criteria can be classified is the distinction between those that are directly performance related and those that are not.
•> Table 9.2 summarizes key scheduling criteria. These are interdependent, and it is impossible to optimize all of them simultaneously.
The Use of Priorities

In many systems, each process is assigned a priority and the
scheduler will always choose a process of higher priority over one of
lower priority.
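As a rough illustration of this rule (a minimal sketch; the process names, priority values, and the arrival-order tie-breaking are assumptions, not from the text), a priority scheduler can be built on a heap ordered by priority:

import heapq

# Sketch of priority-based dispatching: lower number = higher priority;
# ties are broken in arrival order (an assumption for this example).
ready_queue = []        # heap of (priority, arrival_order, pid)
arrival_order = 0

def admit(pid, priority):
    global arrival_order
    heapq.heappush(ready_queue, (priority, arrival_order, pid))
    arrival_order += 1

def dispatch():
    # Always choose a ready process of higher priority over a lower one.
    priority, _, pid = heapq.heappop(ready_queue)
    return pid

admit("A", priority=3)
admit("B", priority=1)
admit("C", priority=3)
print(dispatch())       # B: the highest-priority ready process runs first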
Alternative Scheduling Policies

Table 9.3 presents some summary information about the
various scheduling policies that are examined in this
subsection. The selection function determines which
process, among ready processes, is selected next for
execution.
•> Preemptive policies incur greater overhead than nonpreemptive ones but may provide better service to the total population of processes.
First-Come-First-Served

The simplest scheduling policy is first-come-first-served (FCFS), also known as first-in-first-out (FIFO). As each process becomes ready, it joins the ready queue; when the currently running process ceases to execute, the process that has been in the ready queue the longest is selected.
•> Another difficulty with FCFS is that it tends to favor processor-bound processes over I/O-bound processes.
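A minimal sketch of FCFS as a FIFO ready queue (illustrative only; the process names are made up, and this is not code from the text):

from collections import deque

ready_queue = deque()

def admit(pid):
    ready_queue.append(pid)        # newest arrival joins the tail

def dispatch():
    return ready_queue.popleft()   # the longest-waiting process runs next

for pid in ("P1", "P2", "P3"):
    admit(pid)
print(dispatch(), dispatch(), dispatch())   # P1 P2 P3, in arrival order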
Round Robin

A straightforward way to reduce the penalty that short jobs suffer with FCFS is to use preemption based on a clock. The simplest such policy is round robin. A clock interrupt is generated at periodic intervals; when the interrupt occurs, the currently running process is placed back in the ready queue and the next ready process is selected on an FCFS basis.
Round robin is particularly effective in a general-purpose time-sharing system
or transaction processing system.
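A minimal round-robin sketch under these assumptions (process names, burst times, and the quantum value are made up for illustration):

from collections import deque

def round_robin(bursts, quantum):
    # bursts: pid -> remaining service time. Returns the completion order.
    ready = deque(bursts)
    finished = []
    while ready:
        pid = ready.popleft()
        run = min(quantum, bursts[pid])
        bursts[pid] -= run              # process runs for at most one quantum
        if bursts[pid] == 0:
            finished.append(pid)        # process completed
        else:
            ready.append(pid)           # preempted: back to the tail
    return finished

print(round_robin({"A": 3, "B": 6, "C": 4}, quantum=2))   # ['A', 'C', 'B']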
Generally, an I/O-bound process has a shorter processor burst (amount of time spent executing between I/O operations) than a processor-bound process. Under round robin, an I/O-bound process therefore tends to use less than a full time quantum before blocking, so it receives a smaller share of processor time than a processor-bound process.
[HALD91] suggests a refinement to round robin, referred to as virtual round robin (VRR), that avoids this unfairness. Figure 9.7 illustrates the scheme. New processes arrive and join the ready queue, which is managed on an FCFS basis. When a running process times out, it is returned to the ready queue. A process that blocks for I/O is moved, once the I/O completes, to an FCFS auxiliary queue, and processes in this auxiliary queue get preference over those in the main ready queue when a dispatching decision is made.
When a process is dispatched from the auxiliary queue, it runs no longer than
a time equal to the basic time quantum minus the total time spent running
since it was last selected from the main ready queue.
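A minimal sketch of this VRR dispatching rule (the queue names and the quantum value are assumptions for illustration, not from the text):

from collections import deque

BASIC_QUANTUM = 4
main_queue = deque()        # pids, managed FCFS
aux_queue = deque()         # (pid, time already run since last main dispatch)

def dispatch():
    # Return (pid, allowed quantum) for the next process to run.
    if aux_queue:                                # auxiliary queue has preference
        pid, already_run = aux_queue.popleft()
        return pid, BASIC_QUANTUM - already_run  # only the unused portion
    pid = main_queue.popleft()
    return pid, BASIC_QUANTUM

main_queue.append("cpu_bound")
aux_queue.append(("io_bound", 1))                # ran 1 unit, then blocked on I/O
print(dispatch())   # ('io_bound', 3): preferred, but with a reduced quantum
print(dispatch())   # ('cpu_bound', 4)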
Shortest Process Next
•> Another approach to reducing the bias in favor of long processes inherent in FCFS is the Shortest Process Next (SPN) policy. This is a nonpreemptive policy in which the process with the shortest expected processing time is selected next. Thus a short process will jump to the head of the queue past longer jobs.
•> One difficulty with the SPN policy is the need to know, or at least estimate, the required processing time of each process; one common estimation technique is sketched below.
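A common estimation technique, not given in this excerpt, is an exponential average of the processor bursts observed so far; a minimal sketch:

# Exponential averaging (a standard estimation technique, assumed here):
# S_next = alpha * T_observed + (1 - alpha) * S_previous
def next_estimate(prev_estimate, observed_burst, alpha=0.5):
    return alpha * observed_burst + (1 - alpha) * prev_estimate

estimate = 10.0                     # initial guess for a new process
for burst in (6, 4, 6, 4):          # observed processor bursts
    estimate = next_estimate(estimate, burst)
print(estimate)                     # 5.0: the estimate tracks recent bursts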
A risk with SPN is the possibility of starvation for longer processes, as long as there is a steady supply of shorter processes.
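The selection function itself is simple; a minimal sketch (process names and expected times are illustrative):

def select_spn(ready):
    # ready: pid -> expected processing time; pick the shortest.
    return min(ready, key=ready.get)

ready = {"long_job": 12, "short_job": 2, "medium_job": 5}
print(select_spn(ready))   # short_job jumps ahead of the longer jobs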
Shortest Remaining Time

The shortest remaining time (SRT) policy is a preemptive version of SPN. In
this case, the scheduler always chooses the process that has the shortest
expected remaining processing time.
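A minimal SRT sketch (illustrative): the same selection rule as SPN, but applied to expected remaining time whenever a scheduling decision is made (for example, on a new arrival), so the running process can be preempted.

def select_srt(remaining):
    # remaining: pid -> expected remaining service time.
    return min(remaining, key=remaining.get)

# The running process still needs 7 time units; a new arrival needs only 3,
# so the scheduler preempts the running process in favor of the newcomer.
print(select_srt({"running": 7, "new_arrival": 3}))   # new_arrival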
Highest Response Ratio Next

In Table 9.5, we have used the normalized turnaround time, which is the ratio of turnaround time to actual service time, as a figure of merit. For each individual process, we would like to minimize this ratio, and we would like to minimize the average value over all processes. In general, we cannot know ahead of time what the service time is going to be, but we can approximate it, either based on past history or some input from the user or a configuration manager. Consider the following ratio:

R = (w + s) / s

where w is the time the process has spent waiting for the processor so far and s is its expected service time.
Thus, our scheduling rule becomes the following: When the current process completes or is blocked, choose the ready process with the greatest value of R.
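A minimal sketch of this rule, using the response ratio R = (w + s) / s as defined above (the waiting and service times below are made up for illustration):

def response_ratio(waiting_time, expected_service_time):
    # R = (w + s) / s: grows as a process waits, so long jobs eventually win.
    return (waiting_time + expected_service_time) / expected_service_time

ready = {                      # pid -> (time spent waiting, expected service time)
    "short_new": (0, 2),       # R = 1.0
    "long_waited": (18, 6),    # R = 4.0
}
print(max(ready, key=lambda pid: response_ratio(*ready[pid])))   # long_waited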
Feedback

If we have no indication of the relative length of various processes, then none of SPN, SRT, and HRRN can be used. Another way of establishing a preference for shorter jobs is to penalize jobs that have been running longer. In other words, if we cannot focus on the time remaining to execute, let us focus on the time spent in execution so far.
•> Figure 9.10 illustrates the feedback scheduling mechanism by showing the path that a process will follow through the various queues. This approach is known as multilevel feedback, meaning that the operating system allocates the processor to a process and, when the process blocks or is preempted, feeds it back into one of several priority queues.