
Lecture 6


Reminder: Homework 1, Shell Project due on Wednesday
Questions?
Monday, January 23
CS 470 Operating Systems - Lecture 6
Outline

Round-robin (RR) scheduling
Multi-level queue scheduling
Multi-processor scheduling
Methods for evaluating scheduling algorithms

Round-Robin (RR)

Round-robin (RR) scheduling can be thought of as FCFS with preemption at fixed intervals.

As its name implies, each process gets some CPU time in a round-robin manner, making it very good for time-sharing OSs.

Define a time quantum (or time slice) q, usually 10-100 ms. The Ready Queue is FIFO. The currently running process gets up to one quantum of time; at that point it is interrupted and transitions back to the READY state.

Round-Robin (RR)

Reconsider the first example from last time:

    Process    CPU burst time    Arrival time
    p1         24                0
    p2         3                 0
    p3         3                 0

with arrival order p1, p2, p3.

Do wait time analysis with q = 4 ms. Do again with q = 2 ms.

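As a quick check of this exercise, here is a minimal round-robin simulator sketch. It assumes all processes arrive at time 0 and ignores context-switch overhead; the helper name rr_wait_times is made up for illustration, not something from the course materials.

from collections import deque

def rr_wait_times(bursts, q):
    """Return per-process wait times under round-robin with quantum q."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))     # FIFO Ready Queue of process indices
    clock = 0
    finish = [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])        # run one quantum, or less if the burst ends
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)               # preempted: rejoin the tail of the queue
        else:
            finish[i] = clock
    # wait time = completion time - burst time (all arrivals are at time 0)
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(rr_wait_times([24, 3, 3], q=4))     # p1, p2, p3 wait times with q = 4 ms
print(rr_wait_times([24, 3, 3], q=2))     # same workload with q = 2 ms
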
Round-Robin (RR)

Wait time is bounded: with n processes, no process waits more than (n-1)*q ms between successive quanta (e.g., with n = 3 and q = 4 ms, at most 8 ms). Actual performance depends on the size of q.

One extreme: q = ∞ => FCFS.

The other extreme: q = 1 ms approaches the appearance of n processes, each having its own processor that is 1/n as fast as the physical processor.

Round-Robin (RR)

Making this all work requires hardware support. In particular, context switches can be costly, which tends to favor larger q sizes to reduce the number of switches. E.g., for a process with a burst size of 10 ms:

    q = 12 ms requires no context switches
    q = 6 ms requires one switch
    q = 1 ms ??

Luckily, context switches are usually in the 10-microsecond range.

Round-Robin (RR)

Too large a q size approaches FCFS. Recall the distribution of CPU burst sizes from the earlier lecture.

A general rule is that 80% of CPU bursts should finish within one quantum.

Round-Robin (RR)

Another previous example scenario:

    Process    CPU burst time    Arrival time
    p1         8                 0
    p2         4                 0
    p3         9                 0
    p4         5                 0

Do wait time analysis with q = 4 ms. Do again with q = 2 ms.

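The same hypothetical rr_wait_times sketch from the earlier slide can be pointed at this workload:

for q in (4, 2):
    waits = rr_wait_times([8, 4, 9, 5], q)       # reuses the sketch defined earlier
    print(q, waits, sum(waits) / len(waits))     # quantum, per-process waits, average
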
Multi-level Queue Scheduling (MQS)

The processes that do not finish in one quantum may be very long, as the burst-size distribution indicates.

If we want to minimize the maximum wait time, it might be better to give these processes more time the next time they get the CPU.

Also, processes may have different processing requirements, e.g., interactive foreground processes vs. batch background processes.

Multi-level Queue Scheduling (MQS)

For more flexibility, we can partition the Ready Queue into separate queues, each with a different scheduling algorithm. This is called multi-level queue scheduling (MQS).

Example: foreground processes in one queue using RR, background processes in another queue using FCFS.

We also must schedule between the queues. Example: 80% of the time choose from the foreground queue; 20% of the time choose from the background queue.

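One rough sketch of how the between-queue split might be realized: serve the foreground queue on four of every five scheduling decisions, falling back to the other queue when one is empty. The names and the counter-based policy here are assumptions for illustration, not a prescribed design.

from collections import deque

foreground = deque()    # interactive processes; scheduled RR within this queue
background = deque()    # batch processes; scheduled FCFS within this queue
decision = 0

def pick_next():
    """Choose which queue supplies the next process: roughly 80% foreground, 20% background."""
    global decision
    decision += 1
    if foreground and (decision % 5 != 0 or not background):
        return foreground.popleft()       # caller re-appends it if preempted (RR)
    if background:
        return background.popleft()       # FCFS: runs until its burst completes
    return None

foreground.extend(["editor", "shell"]); background.append("backup")
print([pick_next() for _ in range(3)])    # foreground first, background once it empties
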
Multi-level Queue Scheduling (MQS)

Example: different queues for different priority classes of processes. The priorities could be absolute and preemptive.

    (highest)  System
               Interactive
               Interactive Editing
               Batch
    (lowest)   Student

Multi-level Feedback Queue Scheduling (MFQS)

Another organization would be to allow processes to move between queue levels. Example:

    admitted → q = 8 → q = 16 → FCFS

Multi-level Feedback Queue Scheduling (MFQS)

Processes start in the first queue; if they do not terminate there, they are put on the second queue, and so on. Usually preemptive. (A sketch of this scheme follows the list below.)

Parameters for MQS and MFQS include:
    the number of queues
    the scheduling algorithm for each queue
    when to demote a process to a lower queue
    when to promote a process to a higher queue, e.g. for aging
    whether admitted processes always start in the first queue

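A small sketch of the three-level example above, demoting a process that does not finish within its quantum. The function names and the (pid, remaining-burst) representation are assumptions for illustration.

from collections import deque

QUANTA = [8, 16, None]                    # None = FCFS: run to completion
queues = [deque(), deque(), deque()]      # one Ready Queue per level

def admit(pid, burst):
    queues[0].append((pid, burst))        # new processes enter the top queue

def run_one():
    """Run the next process from the highest non-empty level for at most one quantum."""
    for level, q in enumerate(QUANTA):
        if queues[level]:
            pid, remaining = queues[level].popleft()
            slice_ = remaining if q is None else min(q, remaining)
            remaining -= slice_
            if remaining > 0:             # did not finish: demote (or stay at the bottom)
                queues[min(level + 1, len(queues) - 1)].append((pid, remaining))
            return pid, slice_
    return None

admit("A", 30); admit("B", 5)
while (step := run_one()):
    print(step)                           # ('A', 8), ('B', 5), ('A', 16), ('A', 6)
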
Multiprocessor Scheduling

Assume homogeneous hardware: any process can run on any processor. We would like to share the load among the processors. Two approaches:

    Asymmetric multiprocessing: all scheduling is done by one processor, the master server; the others only execute code given to them. System data structures are not shared, which simplifies the system.

    Symmetric multiprocessing (SMP): each processor schedules itself, using either a common Ready Queue or individual Ready Queues. System data is shared, so access to it must be synchronized.

Multiprocessor Scheduling

Some issues in multiprocessor scheduling are in conflict with each other:

    Processor affinity: once a process is running on a processor, its data is already there (e.g., in that processor's cache), so we would like to keep the process running on the same processor if possible.

    Load balancing: try to keep utilization evenly distributed; migrate processes to other processors when needed.

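Affinity decisions are made inside the kernel, but on Linux a process can at least restrict which logical CPUs it may run on. A minimal illustration (Linux-specific; these calls are not available on every platform):

import os

print("allowed CPUs before:", os.sched_getaffinity(0))   # 0 = the calling process
os.sched_setaffinity(0, {0})                              # pin this process to logical CPU 0
print("allowed CPUs after: ", os.sched_getaffinity(0))
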
Multi-Core Processor Scheduling

Multi-core processors usually provide a limited number of hardware threads that run on each core. To the OS, each hardware thread appears as a logical processor. Example: the UltraSPARC T1 CPU has 8 cores and 4 hardware threads per core, giving the appearance of 32 logical processors.

The OS assigns a software thread to a hardware thread, but the hardware schedules the hardware threads among the actual cores.

How to Choose a Scheduling Algorithm?

Select criteria for evaluation and performance measurement. For example:

    Maximize CPU utilization while keeping maximum response time under 1 second.

    Maximize throughput such that turnaround time is linearly proportional to execution time on average.

How to Choose a Scheduling Algorithm?

Our method of evaluation is called deterministic modeling, a form of analysis that uses a synthetic workload to produce a formula or number for comparison.

This method is simple and fast to compute, but not very realistic. Using real system traces can give more realistic numbers, but mostly this method is used to explore trends.

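Deterministic modeling in miniature: reduce a fixed synthetic workload to one comparable number per algorithm. The sketch below (illustrative names, all arrivals at time 0) gives the FCFS figure for the earlier RR example workload; the rr_wait_times sketch gives the matching RR figure.

def fcfs_avg_wait(bursts):
    """Average wait under FCFS when every process arrives at time 0."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed                   # each process waits for all earlier bursts
        elapsed += b
    return wait / len(bursts)

print(fcfs_avg_wait([8, 4, 9, 5]))        # (0 + 8 + 12 + 21) / 4 = 10.25 ms
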
How to Choose a Scheduling Algorithm?

Queuing models are a more mathematically accurate evaluation method. We need to determine the distribution of CPU and I/O bursts as a probability function, along with an arrival (process creation) distribution.

Using queuing-network analysis, we can compute utilization, average queue length, average wait time, etc.

We still need lots of simplifying assumptions to make the math tractable.

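One standard relationship used in this kind of analysis (not stated on the slide) is Little's formula, n = λ × W, relating average queue length, arrival rate, and average wait in a steady-state system. The numbers below are made up for illustration:

arrival_rate = 7.0        # processes arriving per second (assumed)
avg_queue_len = 14.0      # average number of processes waiting (assumed)
avg_wait = avg_queue_len / arrival_rate   # Little's formula rearranged: W = n / lambda
print(avg_wait, "seconds average wait")   # 2.0
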
How to Choose a Scheduling Algorithm?

Simulations can also be used to evaluate algorithms. The most interesting aspect is how to generate the data that drives the simulation:

    Random number generation following some probability distribution (a la queuing models)
    Trace tapes from actual systems

Some drawbacks: simulations are expensive, time-consuming, and a large software project in themselves.

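For the random-number approach, one common modeling choice (assumed here, not prescribed by the slide) is to draw CPU burst lengths from an exponential distribution and feed them to a simulator such as the rr_wait_times sketch:

import random

random.seed(1)                                            # reproducible synthetic workload
bursts = [random.expovariate(1 / 10) for _ in range(5)]   # exponential bursts, mean ~10 ms
print([round(b, 1) for b in bursts])
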
How to Choose a Scheduling Algorithm?

Finally, we can evaluate a scheduling algorithm by implementing it in an actual OS and taking actual measurements.

This is the most realistic approach, but also the most costly.