2.5 Scheduling


Given a multiprogramming system, there
are many times when more than 1 process
is waiting for the CPU (in the ready
queue).
The scheduler (using a scheduling
algorithm) decides which process will run
next.
 User satisfaction is important.

Context switch
When the CPU changes from one process to
another.
 Expensive operation.

1. User mode to kernel mode
2. Save state of current process
   • Registers; invalidate cache, MMU (Note: pipeline becomes useless.)
3. Run scheduler
   • Pick next process to run
4. Load state of next process; run next process
Process behavior

Types:
1. I/O bound – spend most time performing I/O
2. Compute bound – spend most time performing computation
3. Mixture
When do we need to schedule?
• creation (before/after parent?)
• exit
• block (I/O, semaphore, sleep, wait, etc.)
• I/O interrupt/completion
• clock interrupt

Scheduling can be:
• non-preemptive (run until you block or “cooperate”)
• preemptive
Scheduling environments
1. Batch
2. Interactive
3. Real time
Scheduling algorithm goals
• for all systems
• for batch systems
• for interactive systems
• for real-time systems
Concepts & definitions
• Throughput = # of jobs completed per hour
• Turnaround time = avg of “start (submit) to completion” times; avg wait time
• CPU utilization = avg of CPU busyness
• Response time = time between issuing a command and getting the result
• Proportionality = user perception that “complex” things take a long time and that is fine, but “simple” things must be quick
• Predictability = regularity, especially important for audio and video streaming
BATCH SCHEDULING
Batch scheduling
1. First-come first-served
2. Shortest job first
3. Shortest remaining time next
4. Three-level scheduling
Batch scheduling
1. First-come first-served
• Simple
• Non-preemptive
• Process runs until it either blocks on I/O or finishes
Batch scheduling
2. Shortest job first
• Optimal turnaround time (when all start together)
• Requires that we know run times a priori
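The turnaround-time claim is easy to check with a small sketch. The run times below (8, 4, 4, 4 msec) are illustrative assumptions, not from the notes; all jobs are assumed to arrive together and run back-to-back.

```python
# Sketch: average turnaround time for jobs that all arrive together
# and run back-to-back in a given order. Run times are made-up.

def avg_turnaround(run_times):
    elapsed, total = 0, 0
    for t in run_times:
        elapsed += t       # this job completes at time `elapsed`
        total += elapsed   # its turnaround time equals its completion time
    return total / len(run_times)

jobs = [8, 4, 4, 4]        # hypothetical run times, in arrival order

print(avg_turnaround(jobs))          # FCFS order -> 14.0
print(avg_turnaround(sorted(jobs)))  # shortest job first -> 11.0
```

Running the short jobs first lowers every later job's completion time, which is why SJF minimizes average turnaround when all jobs are available at the start.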
Batch scheduling
3. Shortest remaining time next
• Preemptive version of shortest job first
• Requires a priori information
• When a new job arrives, its total time is compared to the current process’ remaining time. If the new job needs less time to finish, the new job is started.
• New, short jobs get good service
Batch scheduling
4. Three-level scheduling
1. Admission scheduler – chooses next job to begin
2. Memory scheduler – decides which jobs are kept in memory and which jobs are swapped to disk
   • How long swapped in or out?
   • How much CPU time recently?
   • How big is the process?
   • How important is the process?
3. CPU scheduler – picks which process runs next
   • Degree of multiprogramming – number of processes in memory
INTERACTIVE SCHEDULING
Interactive scheduling
1. Round-robin scheduling
2. Priority scheduling
3. Multiple queues
4. Shortest process next
5. Guaranteed scheduling
6. Lottery scheduling
7. Fair-share scheduling
Interactive scheduling
1. Round-robin scheduling
• Simple, fair, widely used, preemptive
• Quantum = time interval
• Process/context switch is expensive
   • Too small and we waste time
   • Too large and interactive system will appear sluggish
   • ~20-50 msec is good
• Every process has equal priority
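The quantum tradeoff can be sketched with a toy simulation. The run times, quantum values, and per-switch cost below are assumptions for illustration; a real scheduler is far more involved.

```python
# Sketch: total time to finish all jobs under round-robin, charging a
# fixed context-switch cost after every quantum. All numbers are made-up.

from collections import deque

def rr_total_time(run_times, quantum, switch_cost=0):
    remaining = list(run_times)
    ready = deque(range(len(run_times)))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run + switch_cost   # run one slice, then pay for the switch
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # not finished: back of the ready queue
    return clock

jobs = [30, 50, 20]                  # msec, hypothetical
print(rr_total_time(jobs, quantum=20, switch_cost=1))  # 6 slices -> 106
print(rr_total_time(jobs, quantum=5, switch_cost=1))   # 20 slices -> 120
```

The same 100 msec of work costs more wall-clock time with the tiny quantum because overhead is paid once per slice.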
Interactive scheduling
2. Priority scheduling
• Each process is assigned a priority; process with highest priority is next to run.
• Types:
  1. static
  2. dynamic (ex. I/O bound jobs get a boost)
• Unix/Linux nice command
Interactive scheduling

Ex. 4 priorities & RR w/in a priority
Interactive scheduling
3. Multiple queues
• Different Q’s for different types of jobs
• Ex. 4 queues for:
   • Terminal, I/O, short quantum, and long quantum
Interactive scheduling
4. Shortest process next
• Shortest job first always produces min avg response time (for batch systems)
• How do we estimate this (for interactive jobs)?
• From recent behavior (of interactive commands, Ti)
• Example of aging (or IIR filter):

   T̂(i+1) = αTi + (1 − α)T̂i

• alpha near 1 implies little memory
• alpha near 0 implies much memory
• How can we “increase” the “memory?”
Interactive scheduling
5. Guaranteed scheduling
• Given n processes, each process should get 1/n of the CPU time
• Say we keep track of the actual CPU used vs. what we should receive (entitled to/deserved).
• K = actual / entitled
   • K = 1 → we got what we deserved
   • K < 1 → we got less than deserved
   • K > 1 → we got more than deserved
• Pick process w/ min K to run next
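The pick-min-K rule is a few lines of code. The CPU-usage numbers below are made-up for illustration.

```python
# Sketch of guaranteed scheduling: compute K = actual/entitled for each
# process and run the one with minimum K. Usage numbers are hypothetical.

def pick_min_k(actual_cpu, elapsed):
    n = len(actual_cpu)
    entitled = elapsed / n                  # each process deserves 1/n of the CPU
    k = [used / entitled for used in actual_cpu]
    return min(range(n), key=k.__getitem__)

# 30 s elapsed, 3 processes: each is entitled to 10 s of CPU.
print(pick_min_k([12, 6, 12], elapsed=30))  # -> 1 (K = 0.6, least served)
```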
Interactive scheduling
6. Lottery scheduling
• Each process gets tickets; # of tickets can vary from process to process.
• If your ticket is chosen, you run next.
• Highly responsive (new process might run right away)
• Processes can cooperate (give each other their tickets)
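A lottery draw is just a weighted random choice. The process names and ticket counts below are illustrative assumptions.

```python
# Sketch of lottery scheduling: draw a winner with probability
# proportional to ticket count. Ticket numbers are made-up.

import random

def hold_lottery(tickets, rng=random):
    """tickets: dict of process name -> ticket count."""
    names = list(tickets)
    return rng.choices(names, weights=[tickets[n] for n in names], k=1)[0]

tickets = {"A": 75, "B": 20, "C": 5}
print(hold_lottery(tickets))   # "A" wins roughly 75% of draws
```

Giving a new process tickets makes it eligible immediately, which is where the responsiveness claim comes from.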
Interactive scheduling
7. Fair-share scheduling
• Consider who (user) owns the process
• Ex.
   • User A has 1 process
   • User B has 9 processes
• Should user A get 10% and user B get 90%?
• Or should A get 50% and B get 50% (5.6% for each of the 9 processes)?
• Latter is fair-share.
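The A/B split above can be sketched directly: divide the CPU equally per user first, then equally among each user's processes.

```python
# Sketch of the fair-share split: equal slice per user, then per process
# within a user, matching the User A / User B example.

def fair_share(procs_per_user):
    per_user = 1.0 / len(procs_per_user)    # each user gets an equal slice
    return {u: per_user / n for u, n in procs_per_user.items()}

shares = fair_share({"A": 1, "B": 9})
print(shares["A"])  # 0.5: A's lone process gets 50% of the CPU
print(shares["B"])  # ~0.0556: each of B's 9 processes gets about 5.6%
```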
REAL TIME SCHEDULING
Real time scheduling

Time plays an essential role.

System must react w/in a fixed amount of time.
Real time scheduling

Categories:
1. Hard – absolute deadlines must be met
2. Soft – missing an occasional deadline is tolerable
Real time scheduling

Event types:
1. Periodic = occurring at regular intervals
2. Aperiodic = occurring unpredictably
Real time scheduling

Algorithm types:
1. Static (before the system starts running)
2. Dynamic (scheduling decisions at run time)
Real time scheduling

Given m periodic events.

Event i occurs w/ period Pi, and requires Ci
seconds of CPU time.

Schedulable iff:

   Σ (i = 1 to m) of Ci/Pi ≤ 1

We’re basically normalizing C (event length) by P (event frequency).
Schedulable example
• Given periods Pi = 100, 200, and 500 msec and CPU time requirements of Ci = 50, 30, 100 msec:

   50/100 + 30/200 + 100/500 = 0.5 + 0.15 + 0.2 = 0.85 ≤ 1

• Can we handle another event w/ P4 = 1 sec?

   50/100 + 30/200 + 100/500 + C4/1000 ≤ 1 ?

• Yes, as long as C4 ≤ 150 msec:

   50/100 + 30/200 + 100/500 + 150/1000 = 1
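The schedulability test is a one-line utilization sum; this sketch reuses the numbers from the worked example above.

```python
# Sketch of the periodic schedulability test sum(Ci/Pi) <= 1, using the
# Ci and Pi values (msec) from the example.

def utilization(costs, periods):
    return sum(c / p for c, p in zip(costs, periods))

print(utilization([50, 30, 100], [100, 200, 500]))             # 0.85: schedulable
print(utilization([50, 30, 100, 150], [100, 200, 500, 1000]))  # 1.0: just fits
```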

THREADS & THREAD SCHEDULING

Thread scheduling types:
1. User level
2. Kernel level
Thread scheduling
1. User level threads
• No clock interrupts (per thread)
• A compute bound thread will dominate its process but not the CPU
• A thread can yield to other threads within the same process
• Typically round robin or priority

Tradeoffs:
+ Context switch from thread to thread is simpler
+ App specific thread scheduler can be used
- If a thread blocks on I/O, the entire process (all threads) blocks
Thread scheduling
2. Kernel level threads
• Threads are scheduled like processes.

Tradeoffs:
- Context switch from thread to thread is expensive (but the scheduler can make more informed choices).
+ A thread blocking on I/O doesn’t block all other threads in the process.