Chapter 5: Process Scheduling
Lecture 5
10.14.2008
Hao-Hua Chu
1
Announcement
• Nachos assignment #2 out on course webpage
– Due in two weeks
– TA will talk about this assignment today.
2
Persuasive computing
• Using computing to
change human behaviors
– Baby Think It Over
– Textrix VR Bike
– Slot Machine
Example: Playful Tray (NTU)
• How to encourage good eating habits in young children?
• Sense to recognize behavior
– Weight sensor underneath the tray to
sense eating actions
– Eating actions as game input
• Play to engage behavior change
– Interactive games: coloring cartoon
character or penguin fishing
4
Outline
• The scheduling problem
• Defining evaluation criteria
– CPU utilization, throughput, turnaround time, waiting time,
response time
• Scheduling algorithms
– FCFS, SJF, RR, Multilevel queue, Multilevel feedback queue
– Preemptive vs. Non-preemptive
• Evaluating scheduling algorithms
– Deterministic model, queueing model, simulation, implementation
• Others
– Multiple-processor scheduling, processor affinity, load balancing,
thread scheduling
5
The Scheduling Problem for the CPU Scheduler
• How to maximize CPU utilization in multiprogramming?
– Multiprogramming
– CPU utilization – other performance parameters?
• Consider that a process follows the CPU–I/O Burst Cycle
– Process execution consists of a cycle of CPU execution and I/O
wait
– A typical CPU burst distribution (next slides): Web browser, web
server
6
Alternating Sequence of CPU and I/O Bursts
7
Histogram of CPU-burst Times
8
Problem solving skill
• Before designing your solutions ….
• Model the problem
– Put the problem in a form that can be solved (using an algorithm)
• Define the evaluation metrics
– “Measure” how successful a solution is
• Now design your solutions
9
Modeling the Scheduling Problem
• For the CPU scheduler
– Selects from among the processes in memory (ready queue) that
are ready to execute, and allocates the CPU to one of them.
• When does the CPU scheduler make a scheduling
decision?
10
CPU Scheduling Decisions
• CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state: how does this happen?
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
• Preemption: the CPU is taken away from a running process before it
finishes its current CPU burst
11
Dispatcher
• Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that
program
• Dispatch latency – time it takes for the dispatcher to stop
one process and start another running
12
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their
execution per time unit
• Turnaround time – amount of time to execute a particular
process
• Waiting time – amount of time a process has been waiting
in the ready queue
• Response time – amount of time it takes from when a
request was submitted until the first response is
produced, not output (for time-sharing environment)
13
More Scheduling Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
[Figure: Gantt chart of P1 (0–24), P2 (24–27), P3 (27–30), marking P2's
submission time and its response, waiting, and turnaround times]
14
First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1, P2, P3
  The Gantt chart for the schedule is:
  | P1 (0–24) | P2 (24–27) | P3 (27–30) |
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
15
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order
P2 , P3 , P1
• The Gantt chart for the schedule is:
  | P2 (0–3) | P3 (3–6) | P1 (6–30) |
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Why is this much better than the previous case?
  – Convoy effect: short processes stuck waiting behind a long process
16
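The arithmetic above is easy to check mechanically. Below is a minimal C sketch (not from the original slides) that reproduces the FCFS waiting times, assuming all processes arrive at time 0 and are listed in arrival order; the burst values mirror the example.

    #include <stdio.h>

    /* FCFS waiting time: each process waits for the total burst time of
     * everything queued ahead of it (all processes arrive at t = 0). */
    int main(void)
    {
        int burst[] = {24, 3, 3};              /* P1, P2, P3 in arrival order */
        int n = sizeof(burst) / sizeof(burst[0]);
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %d\n", i + 1, wait);
            total_wait += wait;
            wait += burst[i];                  /* the next process waits this much longer */
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }

Reordering burst[] to {3, 3, 24} reproduces the second schedule (average 3) and shows the convoy effect numerically.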
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU
burst. Use these lengths to schedule the process with the
shortest time
• Two schemes:
– Nonpreemptive: once the CPU is given to the process, it cannot be
preempted until it completes its CPU burst
– Preemptive: if a new process arrives with a CPU burst length less
than the remaining time of the currently executing process, preempt. This
scheme is known as Shortest-Remaining-Time-First (SRTF)
• SJF is optimal: it gives the minimum average waiting time for a
given set of processes
17
Example of Non-Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
• SJF (non-preemptive)
  | P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |
• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
18
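As a sketch of how a non-preemptive SJF scheduler makes its decisions, the following C fragment (illustrative, not from the slides) picks, at each completion point, the arrived process with the shortest burst; the arrival and burst values are the ones in the example, and the ready queue is assumed never to be empty.

    #include <stdio.h>

    #define N 4

    /* Non-preemptive SJF: at each scheduling decision, run to completion
     * the already-arrived process with the shortest next CPU burst. */
    int main(void)
    {
        double arrival[N] = {0.0, 2.0, 4.0, 5.0};
        int    burst[N]   = {7, 4, 1, 4};
        int    done[N]    = {0};
        double t = 0.0, total_wait = 0.0;

        for (int finished = 0; finished < N; finished++) {
            int pick = -1;
            for (int i = 0; i < N; i++)        /* shortest burst among arrived jobs */
                if (!done[i] && arrival[i] <= t &&
                    (pick < 0 || burst[i] < burst[pick]))
                    pick = i;
            total_wait += t - arrival[pick];   /* time spent waiting since arrival */
            t += burst[pick];
            done[pick] = 1;
        }
        printf("average waiting time = %.2f\n", total_wait / N);  /* prints 4.00 */
        return 0;
    }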
Example of Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
• SJF (preemptive)
  | P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |
• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
19
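A tick-by-tick simulation makes the preemption visible. The sketch below (an illustration, not part of the slides) re-evaluates at every time unit which arrived process has the least remaining work; with the arrival and burst values above it reproduces the Gantt chart and the average waiting time of 3.

    #include <stdio.h>

    #define N 4

    /* Preemptive SJF (SRTF): at every tick run the arrived process with the
     * least remaining time; waiting time = completion - arrival - burst. */
    int main(void)
    {
        int arrival[N] = {0, 2, 4, 5};
        int burst[N]   = {7, 4, 1, 4};
        int remain[N];
        int left = N;
        double total_wait = 0.0;

        for (int i = 0; i < N; i++) remain[i] = burst[i];

        for (int t = 0; left > 0; t++) {
            int pick = -1;
            for (int i = 0; i < N; i++)
                if (remain[i] > 0 && arrival[i] <= t &&
                    (pick < 0 || remain[i] < remain[pick]))
                    pick = i;
            if (pick < 0) continue;            /* no process ready: CPU idle */
            if (--remain[pick] == 0) {         /* run pick for one time unit */
                int completion = t + 1;
                total_wait += completion - arrival[pick] - burst[pick];
                left--;
            }
        }
        printf("average waiting time = %.2f\n", total_wait / N);  /* prints 3.00 */
        return 0;
    }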
Shortest-Job-First (SJF) is Optimal
• SJF is optimal
– Give minimum average waiting time for a given set of processes
• What is the strong assumption in SJF?
– The CPU burst times must be known a priori
• How can we estimate (guess) the CPU burst time?
• Is SJF still optimal if the estimated CPU burst time is
incorrect?
• What is the other assumption in SJF?
– All processes have equal importance to users.
20
Determining the Length of the Next CPU Burst
• Can only estimate the length
• Can be done by using the length of previous CPU bursts,
using exponential averaging
1. $t_n$ = actual length of the $n$th CPU burst
2. $\tau_{n+1}$ = predicted value for the next CPU burst
3. $\alpha$, $0 \le \alpha \le 1$
4. Define: $\tau_{n+1} = \alpha t_n + (1 - \alpha) \tau_n$
21
Prediction of the Length of the Next CPU Burst
$\tau_{n+1} = \alpha t_n + (1 - \alpha) \tau_n$, with $\alpha = 0.5$
[Figure: exponentially averaged prediction tracking the actual CPU burst lengths]
22
Examples of Exponential Averaging
$\tau_{n+1} = \alpha t_n + (1 - \alpha) \tau_n$
• $\alpha = 0$
  – $\tau_{n+1} = \tau_n$
  – Recent history does not count
• $\alpha = 1$
  – $\tau_{n+1} = t_n$
  – Only the actual last CPU burst counts
• If we expand the formula, we get:
  $\tau_{n+1} = \alpha t_n + (1 - \alpha) \alpha t_{n-1} + \cdots + (1 - \alpha)^j \alpha t_{n-j} + \cdots + (1 - \alpha)^{n+1} \tau_0$
• Since both $\alpha$ and $(1 - \alpha)$ are less than or equal to 1, each
  successive term has less weight than its predecessor
23
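The recurrence is a one-liner in code. The sketch below is only illustrative: the burst sequence and the initial guess tau_0 = 10 are assumed values, chosen to resemble the prediction figure, and alpha is fixed at 0.5.

    #include <stdio.h>

    /* Exponential averaging of CPU-burst length:
     *   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
    int main(void)
    {
        double alpha = 0.5;                      /* weight of the most recent burst */
        double tau = 10.0;                       /* tau_0: initial guess (assumed) */
        double t[] = {6, 4, 6, 4, 13, 13, 13};   /* observed burst lengths (assumed) */
        int n = sizeof(t) / sizeof(t[0]);

        for (int i = 0; i < n; i++) {
            printf("predicted %.1f, actual %.0f\n", tau, t[i]);
            tau = alpha * t[i] + (1.0 - alpha) * tau;   /* fold in the new observation */
        }
        return 0;
    }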
Priority Scheduling
• A priority number (integer) is associated with each
process
• The CPU is allocated to the process with the highest
priority (smallest integer = highest priority)
– Non-preemptive
– Preemptive -> when …
• SJF is actually a priority scheduling algorithm
– What priority does it use?
– The predicted length of the next CPU burst
24
Priority Scheduling
• What are the potential problems with priority scheduling?
– Can a process starve (it is ready but never gets a chance
to execute)?
– Starvation – low-priority processes may never execute
• How can we solve starvation?
– Aging – as time progresses, increase the priority of the process
• Priority scheduling is not fair to all processes
– How can we create a (preemptive) scheduling algorithm that is fair to
all processes?
– That is, one that shares the processor fairly among all processes
25
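One way to picture aging is a periodic pass over the ready queue that bumps up the priority of everything still waiting. The fragment below is only a sketch of that idea (the process set, the aging step of 1, and the tick loop are assumptions), not a real scheduler.

    #include <stdio.h>

    #define N 3

    struct proc { int pid; int priority; };      /* smaller number = higher priority */

    /* Aging: every tick, raise (numerically lower) the priority of each
     * process that is still waiting, so low-priority work cannot starve. */
    static void age_waiting(struct proc rq[], int n, int running_pid)
    {
        for (int i = 0; i < n; i++)
            if (rq[i].pid != running_pid && rq[i].priority > 0)
                rq[i].priority--;
    }

    int main(void)
    {
        struct proc ready[N] = { {1, 0}, {2, 5}, {3, 127} };

        for (int tick = 0; tick < 4; tick++) {   /* pretend P1 keeps running */
            age_waiting(ready, N, 1);
            printf("tick %d: P2 prio %d, P3 prio %d\n",
                   tick, ready[1].priority, ready[2].priority);
        }
        return 0;
    }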
Round Robin (RR)
• Each process gets a small unit of CPU time (time
quantum), usually 10-100 milliseconds.
– After this time has elapsed, the process is preempted and added
to the end of the ready queue.
• If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time
in chunks of at most q time units at once.
– What is the maximum waiting time for a process?
– (n-1)q time units
26
Example of RR with Time Quantum = 20
Process   Burst Time
P1        53
P2        17
P3        68
P4        24
• The Gantt chart is:
  | P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) |
  | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |
• How does preemptive RR compare with preemptive SJF?
  – Average turnaround time? Average waiting time?
27
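Because all four processes arrive together, the RR schedule above can be reproduced with a simple cyclic scan that gives each unfinished process at most one quantum per pass. The sketch below is illustrative (a real RR queue would re-enqueue a preempted process at the tail); it prints the same sequence of slices and finishes at t = 162.

    #include <stdio.h>

    #define N 4
    #define Q 20

    /* Round Robin with quantum Q: cycle through the processes, giving each
     * unfinished one at most Q time units per turn (all arrive at t = 0). */
    int main(void)
    {
        int burst[N] = {53, 17, 68, 24};
        int remain[N];
        int left = N, t = 0;

        for (int i = 0; i < N; i++) remain[i] = burst[i];

        while (left > 0) {
            for (int i = 0; i < N; i++) {
                if (remain[i] == 0) continue;
                int slice = remain[i] < Q ? remain[i] : Q;
                printf("t=%3d: P%d runs %2d\n", t, i + 1, slice);
                t += slice;
                remain[i] -= slice;
                if (remain[i] == 0) left--;
            }
        }
        printf("all processes done at t=%d\n", t);   /* 162, matching the Gantt chart */
        return 0;
    }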
Round Robin (RR)
• How does the size of the time quantum (q) affect performance?
– Effects on context-switching time & turnaround time?
– Large q?
• q large ⇒ behaves like FIFO (FCFS), with less context-switching overhead
– Small q?
• q small ⇒ more context-switching overhead, and turnaround time may
increase.
• For example: 3 processes of 10 time units each. q = 1 (avg turnaround
time = 29) vs. q = 10 (avg turnaround time = 20)
28
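A quick check of the quoted averages, assuming the three 10-unit processes arrive together and context-switch cost is ignored: with q = 10 they complete at times 10, 20, and 30, so the average turnaround time is (10 + 20 + 30)/3 = 20; with q = 1 they complete at 28, 29, and 30, giving (28 + 29 + 30)/3 = 29.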
Effect of Time Quanta Size on
Context Switch Time
• Smaller time quantum size leads to more context switches
29
Effect of Time Quanta Size on
Turnaround Time
• Smaller time
quanta “may”
increase
turnaround time.
30
Round Robin & Shortest-Job-First
• Their strong assumption is that all processes have equal
importance to users.
– background processes (batch – virus checking or video compression) vs.
foreground processes (interactive – e.g. a word processor).
• Foreground processes: waiting/response time is important (extra
context switching is acceptable).
• Background processes: waiting time is not important (excessive
context switching decreases throughput)
• What is the solution?
– Is it possible to have multiple scheduling algorithms (running at
the same time) for different types of processes?
31
Multilevel Queue
• Ready queue is partitioned into separate queues:
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm
– foreground – RR, smaller time quantum
– background – FCFS, larger time quantum
• Scheduling must be done between the queues
– Fixed priority scheduling: serve all from foreground then from
background.
– Time slice: each queue gets a certain amount of CPU time
• 80% to foreground in RR and 20% to background in FCFS
32
Multilevel Queue Scheduling
33
Multilevel Queue
• Multilevel queue assumes processes are permanently
assigned to a queue at the start.
• Two problems: inflexibility & starvation
– It is difficult to predict the nature of a process at the start (its
level of interactivity, whether it is CPU-bound or I/O-bound, etc.)
– Under fixed-priority scheduling, a low-priority process may never be
executed.
• Good thing: low overhead
• How to solve these two problems with a bit more
overhead?
34
Multilevel Feedback Queue
• A process can move between the various queues; aging
can be implemented this way
• Multilevel-feedback-queue scheduler defined by the
following parameters:
– number of queues
– scheduling algorithm for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter
  when that process needs service
35
Example of Multilevel Feedback
Queue
• Feedback: based on a process's observed CPU bursts, it can be
upgraded or downgraded between queues.
– A CPU-bound process is demoted to a lower-priority queue.
– An I/O-bound process is promoted to a higher-priority queue.
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
36
Example of Multilevel Feedback
Queue
• Scheduling
– A new job enters queue Q0, which is served FCFS. When it gains the
CPU, the job receives 8 milliseconds. If it does not finish in 8
milliseconds, the job is moved to queue Q1.
– At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.
• Comparison to Multilevel Queue:
– Benefits: flexibility & no starvation
– Cost: scheduling overhead
37
Multilevel Feedback Queues
38
Multiple-Processor Scheduling
• Scheduling is more complex for multiple-processor systems
– Asymmetric multiprocessing vs. symmetric multiprocessing (SMP)
• Asymmetric: master vs. slave processor, only the master processor
controls CPU scheduler
• Symmetric: each processor has its own CPU scheduler
• Tradeoff: synchronization overhead (shared run queue) vs. concurrency
– Processor Affinity
• Migrating a process to another processor has a higher cost – why?
– The process has built up cache contents on its current processor
– Load Balancing
• Keep the load evenly distributed across processors
• Conflicts with processor affinity
– Hyperthreading – logical CPUs, similar to physical CPUs
39
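On Linux, processor affinity can also be requested explicitly; the sketch below pins the calling process to CPU 0 so its cached working set stays on one processor (sched_setaffinity is Linux-specific and is shown here only to illustrate the affinity idea).

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling process to CPU 0 (hard affinity). */
    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                           /* allow CPU 0 only */

        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }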
Thread Scheduling
• Two scopes of scheduling
– Global Scheduling: the CPU scheduler decides which kernel thread
to run next
– Local Scheduling: the user-level threads library decides which
thread to put onto an available LWP
40
How to evaluate scheduling
algorithms?
• Deterministic modeling – takes a particular
predetermined workload and defines the performance of
each algorithm for that workload
• Queueing models – analytical model based on probability
– Distributions of process arrival times, CPU bursts, I/O times,
completion times, etc.
• Simulation
– Randomly generated CPU loads, or collected real traces
• Implementation
41
Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);   /* thread body, defined below */

int main(int argc, char *argv[])
{
    int i;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);
    /* set the contention scope to PROCESS or SYSTEM */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    /* set the scheduling policy - FIFO, RR, or OTHER */
    pthread_attr_setschedpolicy(&attr, SCHED_OTHER);
    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);
42
Pthread Scheduling API
    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
    printf("I am a thread\n");
    pthread_exit(0);
}
43
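On Linux, this example can be built with cc -pthread; note that the Linux threads library supports only PTHREAD_SCOPE_SYSTEM, so requesting process contention scope there has no effect.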
Operating System Examples
• Solaris scheduling
• Windows XP scheduling
• Linux scheduling
44
Solaris 2 Scheduling
45
Solaris Dispatch Table
• Multilevel feedback queue for the interactive and time-sharing classes
46
Windows XP Priorities
47
Linux Scheduling
• Two algorithms: time-sharing and real-time
• Time-sharing
– Prioritized credit-based – process with most credits is scheduled
next
– Credit subtracted when timer interrupt occurs
– When credit = 0, another process chosen
– When all processes have credit = 0, recrediting occurs
• Based on factors including priority and history
• Real-time
– Soft real-time
– POSIX.1b compliant – two classes
• FCFS and RR
• Highest priority process always runs first
48
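For reference, a process can request one of the two POSIX.1b real-time classes mentioned above through sched_setscheduler; the sketch below switches the calling process to SCHED_RR (the priority value 50 is an arbitrary assumption, and the call normally requires root privileges).

    #include <sched.h>
    #include <stdio.h>

    /* Ask for the POSIX.1b round-robin real-time class. */
    int main(void)
    {
        struct sched_param param = { .sched_priority = 50 };   /* assumed priority */

        if (sched_setscheduler(0, SCHED_RR, &param) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("now running under SCHED_RR\n");
        return 0;
    }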
The Relationship Between Priorities
and Time-slice length
49
List of Tasks Indexed According to Priorities
50
Review
• The scheduling problem
• Defining evaluation criteria
– CPU utilization, throughput, turnaround time, waiting time,
response time
• Scheduling algorithms
– FCFS, SJF, RR, Multilevel queue, Multilevel feedback queue
– Preemptive vs. Non-preemptive
• Evaluating scheduling algorithms
– Deterministic model, queueing model, simulation, implementation
• Others
– Multiple-processor scheduling, processor affinity, load balancing,
thread scheduling
51
End of Chapter 5
52