Outline
• Announcements
• Process Scheduling – continued
• Preemptive scheduling algorithms
  – SJN
  – Priority scheduling
  – Round robin
  – Multiple-level queue

Announcements
• Please pick up your quiz if you have not done so
• Please pick up a copy of Homework #2
• About the first quiz
  – If the first quiz is not the worst among all of your quizzes, it will replace the worst one you have
  – If the first quiz is the worst among all of your quizzes, it will be ignored in grading

Announcements – cont.
• A few things about the first lab
  – How to check whether a directory is valid
    • Using opendir and closedir
    • Using access()
  – Note that your program should accept both bare command names and full-path commands
    • Use strchr to check whether '/' occurs in my_argv[0]
    • If yes, call execv(my_argv[0], my_argv)
    • If no, you need to search through the directories

Scheduling – review
• The scheduling mechanism is the part of the process manager that handles removing the running process from the CPU and selecting another process on the basis of a particular strategy
  – The scheduler chooses one of the ready threads to use the CPU when it is available
  – The scheduling policy determines when it is time for a thread to be removed from the CPU and which ready thread should be allocated the CPU next

Process Scheduler – review
(figure: new processes enter the Ready List; the Scheduler dispatches a job to the CPU ("Running"); a job leaves the CPU when it is done, when it is preempted or voluntarily yields back to the Ready List ("Ready"), or when it requests resources from the Resource Manager ("Blocked") and returns to the Ready List once the resources are allocated)

Voluntary CPU Sharing
• Each process voluntarily shares the CPU
  – By calling the scheduler periodically
  – The simplest approach
  – Requires a yield instruction to allow the running process to release the CPU

Involuntary CPU Sharing
• Periodic involuntary interruption
  – Through an interrupt from an interval timer device, which generates an interrupt whenever the timer expires
  – The scheduler is called in the interrupt handler
  – A scheduler that uses involuntary CPU sharing is called a preemptive scheduler

Programmable Interval Timer
(figure only)

Scheduler as CPU Resource Manager
(figure: the scheduler treats the CPU as a resource, dispatching ready processes to it and reclaiming it on release; the time-multiplexed CPU is divided into units of time)

The Scheduler Organization
(figure: an enqueuer places the descriptor of a ready process on the Ready List; the context switcher saves the state of the running process; the dispatcher selects the next ready process and gives it the CPU)

Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the scheduler; this involves:
  – switching context
  – switching to user mode
  – jumping to the proper location in the user program to restart that program

Diagram of Process State
(figure only)

CPU Scheduler
• Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
• CPU scheduling decisions may take place when a process:
  1. Switches from running to waiting state
  2. Switches from running to ready state
  3. Switches from waiting/new to ready
  4. Terminates
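The involuntary-sharing and interval-timer slides above say that a preemptive scheduler is entered from a timer interrupt handler. A minimal user-space analogue of that idea in C, assuming POSIX setitimer() and SIGALRM as stand-ins for the hardware interval timer and its interrupt handler (a sketch for illustration, not the kernel's actual mechanism):

```c
#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

/* Stand-in for the scheduler entry point that a real kernel
 * would reach from the timer interrupt handler. */
static void timer_tick(int sig)
{
    (void)sig;
    static const char msg[] = "tick: scheduler would run here\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = timer_tick;
    sigaction(SIGALRM, &sa, NULL);               /* "interrupt handler" */

    /* Fire every 50 ms -- playing the role of the time-slice length. */
    struct itimerval quantum = {
        .it_interval = { .tv_sec = 0, .tv_usec = 50000 },
        .it_value    = { .tv_sec = 0, .tv_usec = 50000 },
    };
    setitimer(ITIMER_REAL, &quantum, NULL);

    for (;;)          /* the "running process": does nothing but wait */
        pause();      /* each tick interrupts it and runs the handler */
}
```

Every expiration of the timer forces control away from the running code and into the handler, which is exactly where a preemptive scheduler would decide whether to keep the current process or dispatch another one.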
CPU Scheduler – cont.
• Non-preemptive and preemptive scheduling
  – Scheduling under 1 and 4 is non-preemptive
    • A process runs for as long as it likes
    • In other words, non-preemptive scheduling algorithms allow any process/thread to run to "completion" once it has been allocated the processor
  – All other scheduling is preemptive
    • The CPU may be preempted before a process finishes its current CPU burst

Working Process Model and Metrics
• P is a set of processes, p0, p1, ..., pn-1
  – S(pi) is the state of pi
  – t(pi), the service time
    • The amount of time pi needs to be in the running state before it is completed
  – W(pi), the wait time
    • The time pi spends in the ready state before its first transition to the running state
  – TTRnd(pi), the turnaround time
    • The amount of time between the moment pi first enters the ready state and the moment pi exits the running state for the last time

Alternating Sequence of CPU and I/O Bursts
(figure only)

Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time from when a request was submitted until the first response is produced, not the complete output (for time-sharing environments)

First-Come-First-Served
• Assigns priority to processes in the order in which they request the processor
• Timeline: p0 runs 0–350, p1 350–475, p2 475–950, p3 950–1200, p4 1200–1275

Predicting Wait Time in FCFS
• In FCFS, when a process arrives, everything already in the ready list will be processed before this job
• Let m be the service rate and L be the ready list length
• Wavg(p) = L·(1/m) + 0.5·(1/m) = L/m + 1/(2m)
• Compare the predicted wait with the actual wait in the earlier examples

Shortest-Job-Next Scheduling
• Associate with each process the length of its next CPU burst
  – Use these lengths to schedule the process with the shortest time
• SJN is optimal – it gives the minimum average waiting time for a given set of processes

Nonpreemptive SJN
• Timeline: p4 runs 0–75, p1 75–200, p3 200–450, p0 450–800, p2 800–1275

Determining Length of Next CPU Burst
• Can only estimate the length
• Can be done by using the lengths of previous CPU bursts, using exponential averaging
  1. t_n = actual length of the nth CPU burst
  2. τ_{n+1} = predicted value for the next CPU burst
  3. α, 0 ≤ α ≤ 1
  4. Define: τ_{n+1} = α·t_n + (1 − α)·τ_n

Examples of Exponential Averaging
• α = 0
  – τ_{n+1} = τ_n
  – Recent history does not count
• α = 1
  – τ_{n+1} = t_n
  – Only the actual last CPU burst counts
• If we expand the formula, we get:
  τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor

Priority Scheduling
• In priority scheduling, processes/threads are allocated the CPU on the basis of an externally assigned priority
  – A commonly used convention is that lower numbers have higher priority
  – SJN is priority scheduling where the priority is the predicted next CPU burst time
  – FCFS is priority scheduling where the priority is the arrival time

Nonpreemptive Priority Scheduling
(figure only)

Priority Scheduling – cont.
• Starvation problem – low-priority processes may never execute
• Solution through aging – as time progresses, increase the priority of the process
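A minimal sketch of aging in C (the ready-list structure and the AGING_STEP constant are hypothetical, introduced only for this illustration): on every scheduling decision, each process that was passed over gets its priority number decreased, i.e., its effective priority raised, so no process can starve indefinitely.

```c
#include <stddef.h>

#define AGING_STEP 1   /* boost applied per scheduling pass (assumed value) */

struct proc {
    int pid;
    int priority;      /* lower number = higher priority, as on the slide */
};

/* Pick the highest-priority ready process, then age everyone else so the
 * processes passed over this time are more competitive next time. */
static struct proc *pick_and_age(struct proc ready[], size_t n)
{
    if (n == 0)
        return NULL;

    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (ready[i].priority < ready[best].priority)
            best = i;

    for (size_t i = 0; i < n; i++)
        if (i != best && ready[i].priority > 0)
            ready[i].priority -= AGING_STEP;    /* aging: raise priority */

    return &ready[best];
}
```

Called once per scheduling decision, this lets waiting processes drift toward priority 0, so even a process that started at the lowest priority is eventually selected.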
Deadline Scheduling
• Allocates service by deadline
• May not be feasible

  i    t(pi)   Deadline
  0     350      575
  1     125      550
  2     475     1050
  3     250     (none)
  4      75      200

(figure: several alternative schedules over 0–1275 that meet the deadlines at 200, 550, 575, and 1050)

Real-Time Scheduling
• Hard real-time systems – required to complete a critical task within a guaranteed amount of time
• Soft real-time computing – requires that critical processes receive priority over less fortunate ones

Preemptive Schedulers
(figure: New Process → Ready List → Scheduler → CPU → Done, with preemption or voluntary yield returning the process to the Ready List)
• The highest-priority process is guaranteed to be running at all times
  – Or at least at the beginning of a time slice
• Dominant form of contemporary scheduling
• But complex to build and analyze

Preemptive Shortest Job Next
• Also called shortest-remaining-job-next
• When a new process arrives, its next CPU burst is compared to the remaining time of the running process
  – If the new arrival's burst is shorter, it preempts the CPU from the currently running process

Example of Preemptive SJF

  Process   Arrival Time   Burst Time
  P1            0.0            7
  P2            2.0            4
  P3            4.0            1
  P4            5.0            4

• Schedule: P1 runs 0–2, P2 2–4, P3 4–5, P2 5–7, P4 7–11, P1 11–16
• Average time spent in the ready queue = (9 + 1 + 0 + 2)/4 = 3

Comparison of Non-Preemptive and Preemptive SJF

  Process   Arrival Time   Burst Time
  P1            0.0            7
  P2            2.0            4
  P3            4.0            1
  P4            5.0            4

• SJN (non-preemptive) schedule: P1 runs 0–7, P3 7–8, P2 8–12, P4 12–16
• Average time spent in the ready queue = (0 + 6 + 3 + 7)/4 = 4

Round Robin (RR)
• Each process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units

Round Robin (TQ=50)
• Service times: t(p0) = 350, t(p1) = 125, t(p2) = 475, t(p3) = 250, t(p4) = 75
• The following slides build the schedule one 50-unit quantum at a time; the completed schedule and the resulting W and TTRnd values are summarized below (see also the sketch that follows)
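Before the worked TQ=50 example, here is a minimal round-robin simulation in C (a sketch written for this handout, not course-supplied code) that reproduces the W(pi) and TTRnd(pi) values shown on the following slides, assuming all five processes arrive at time 0 and ignoring context-switch overhead:

```c
#include <stdio.h>

#define NPROC   5
#define QUANTUM 50

int main(void)
{
    int service[NPROC] = { 350, 125, 475, 250, 75 };   /* t(pi) */
    int remaining[NPROC], wait[NPROC], turnaround[NPROC];
    int queue[NPROC], head = 0, tail = 0, queued = 0;

    for (int i = 0; i < NPROC; i++) {
        remaining[i] = service[i];
        wait[i] = -1;                        /* -1 = has not run yet */
        queue[tail] = i; tail = (tail + 1) % NPROC; queued++;
    }

    int now = 0;
    while (queued > 0) {
        int p = queue[head]; head = (head + 1) % NPROC; queued--;

        if (wait[p] < 0)
            wait[p] = now;                   /* W(pi): first time on the CPU */

        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        now += slice;
        remaining[p] -= slice;

        if (remaining[p] > 0) {              /* quantum expired: requeue */
            queue[tail] = p; tail = (tail + 1) % NPROC; queued++;
        } else {
            turnaround[p] = now;             /* TTRnd(pi): completion time */
        }
    }

    for (int i = 0; i < NPROC; i++)
        printf("p%d: W = %d, TTRnd = %d\n", i, wait[i], turnaround[i]);
    return 0;
}
```

Running it prints W = 0, 50, 100, 150, 200 and TTRnd = 1100, 550, 1275, 950, 475 for p0 through p4, matching the chart developed on the next slides.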
Round Robin (TQ=50) – cont.
• The schedule proceeds quantum by quantum in the order p0, p1, p2, p3, p4, ...; each process first reaches the CPU within the first five quanta
  – W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
• Processes complete in the order p4, p1, p3, p0, p2, with the last quantum ending at 1275
  – TTRnd(p4) = 475, TTRnd(p1) = 550, TTRnd(p3) = 950, TTRnd(p0) = 1100, TTRnd(p2) = 1275
• Equitable
• Most widely used
• Fits naturally with an interval timer
• TTRnd_avg = (1100 + 550 + 1275 + 950 + 475)/5 = 4350/5 = 870
• Wavg = (0 + 50 + 100 + 150 + 200)/5 = 500/5 = 100

Round Robin – cont.
• Performance
  – q large ⇒ behaves like FIFO
  – q small ⇒ q must be large with respect to the context-switch time, otherwise the overhead is too high

Turnaround Time Varies With the Time Quantum
(figure only)

How a Smaller Time Quantum Increases Context Switches
(figure only)

Round-Robin Time Quantum
(figure only)

Process/Thread Context
(figure: the CPU datapath – general registers R1 … Rn, the ALU with its left and right operands and result, status registers, PC, IR, and the control unit – i.e., the state that must be saved and restored on a context switch)

Context Switching – review
(figure: the CPU saves the running thread's context into the old thread descriptor and loads the new thread's context from its descriptor)

Cost of Context Switching
• Suppose there are n general registers and m control registers
• Each register needs b store operations, where each operation requires K time units
  – Saving one context therefore costs (n + m) × b × K time units
• Note that at least two pairs of context-switch operations occur each time application programs are multiplexed (running process → scheduler, then scheduler → next process)

Round Robin (TQ=50) with Overhead
• Overhead must be considered: the same workload with a context-switch cost of 10 time units stretches the schedule
  – W(p0) = 0, W(p1) = 60, W(p2) = 120, W(p3) = 180, W(p4) = 240
  – TTRnd(p4) = 565, TTRnd(p1) = 660, TTRnd(p3) = 1140, TTRnd(p0) = 1320, TTRnd(p2) = 1535
  – TTRnd_avg = (1320 + 660 + 1535 + 1140 + 565)/5 = 5220/5 = 1044
  – Wavg = (0 + 60 + 120 + 180 + 240)/5 = 600/5 = 120
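To make the (n + m) × b × K cost concrete, a small sketch (the register counts and timings below are made-up illustration values, not measurements from the slides; K is chosen so that a full save-plus-restore comes out to the 10-unit switch cost used in the example above, and the restore is assumed to cost the same as the save):

```c
#include <stdio.h>

int main(void)
{
    /* Assumed illustration values, not from the slides. */
    int n = 32;            /* general registers              */
    int m = 8;             /* control/status registers       */
    int b = 1;             /* store operations per register  */
    double K = 0.125;      /* time units per store operation */

    double save = (n + m) * b * K;   /* cost of saving one context        */
    double sw   = 2 * save;          /* save old context + load new one   */
    double q    = 50.0;              /* time quantum, as in the example   */

    printf("one context switch  : %.1f time units\n", sw);
    printf("overhead per quantum: %.1f%%\n", 100.0 * sw / (q + sw));
    return 0;
}
```

With these numbers one switch costs 10 time units, so roughly 17% of every 60-unit dispatch cycle is spent on switching rather than useful work, which is why the quantum must stay large relative to the switch cost.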
Multi-Level Queues
(figure: new processes enter one of several ready lists, Ready List0 … Ready List3; the scheduler dispatches from them to the CPU, with preemption or voluntary yield returning a process to its ready list)
• All processes at level i run before any process at level j (j > i)
• At a given level, use another policy, e.g. RR

Multilevel Queues
• The ready queue is partitioned into separate queues
  – foreground (interactive)
  – background (batch)
• Each queue has its own scheduling algorithm
  – foreground – RR
  – background – FCFS

Multilevel Queues – cont.
• Scheduling must also be done between the queues
  – Fixed-priority scheduling; i.e., serve all from foreground, then from background
    • Possibility of starvation
  – Time slice – each queue gets a certain amount of CPU time which it can schedule among its own processes; e.g.,
    • 80% to foreground in RR
    • 20% to background in FCFS

Multilevel Queue Scheduling
(figure only)

Multilevel Feedback Queue
• A process can move between the various queues; aging can be implemented this way
• A multilevel-feedback-queue scheduler is defined by the following parameters:
  – number of queues
  – scheduling algorithm for each queue
  – method used to determine when to upgrade a process
  – method used to determine when to demote a process
  – method used to determine which queue a process will enter when it needs service

Multilevel Feedback Queues – cont.
(figure only)

Example of Multilevel Feedback Queue
• Three queues:
  – Q0 – time quantum 8 milliseconds
  – Q1 – time quantum 16 milliseconds
  – Q2 – FCFS
• Scheduling (a sketch of this demotion scheme follows)
  – A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1
  – At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2
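A minimal sketch in C of the demotion rule from the three-queue example above (written for this handout; it follows a single job through the levels and ignores competition from other jobs):

```c
#include <stdio.h>

/* Demotion rule: Q0 (8 ms quantum), Q1 (16 ms quantum), Q2 (FCFS, runs to
 * completion).  Returns the queue in which the job finishes and reports
 * how long it ran at each level. */
static int run_job(int burst_ms)
{
    const int quantum[2] = { 8, 16 };       /* Q0 and Q1 quanta */
    int level = 0;

    while (level < 2) {
        int slice = burst_ms < quantum[level] ? burst_ms : quantum[level];
        printf("Q%d: ran %d ms\n", level, slice);
        burst_ms -= slice;
        if (burst_ms == 0)
            return level;                   /* finished at this level    */
        level++;                            /* quantum expired: demote   */
    }

    printf("Q2: ran %d ms to completion (FCFS)\n", burst_ms);
    return 2;
}

int main(void)
{
    printf("finished in Q%d\n\n", run_job(5));    /* short job stays in Q0   */
    printf("finished in Q%d\n\n", run_job(20));   /* demoted once, ends in Q1 */
    printf("finished in Q%d\n",   run_job(100));  /* CPU-bound, ends in Q2   */
    return 0;
}
```

Short, interactive-style bursts never leave the high-priority queue, while long CPU-bound jobs sink to Q2, which is the behavior the multilevel feedback design is after.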
Two-Queue Scheduling
(figure only)

Three-Queue Scheduling
(figure only)

Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Load sharing
• Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing

Algorithm Evaluation
• Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload
• Queueing models
• Implementation

Evaluation of CPU Schedulers by Simulation
(figure only)

Contemporary Scheduling
• Involuntary CPU sharing – timer interrupts
  – Time quantum determined by the interval timer – usually fixed for every process using the system
  – Sometimes called the time-slice length
• Priority-based process (job) selection
  – Select the highest-priority process
  – Priority reflects policy
• With preemption
• Usually a variant of Multi-Level Queues

Scheduling in Real OSs
• All use a multiple-queue system
• UNIX SVR4: 160 levels, three classes
  – time-sharing, system, real-time
• Solaris: 170 levels, four classes
  – time-sharing, system, real-time, interrupt
• OS/2 2.0: 128 levels, four classes
  – background, normal, server, real-time
• Windows NT 3.51: 32 levels, two classes
  – regular and real-time
• Mach uses "hand-off scheduling"

BSD 4.4 Scheduling
• Involuntary CPU sharing
• Preemptive algorithms
• 32 multi-level queues
  – Queues 0–7 are reserved for system functions
  – Queues 8–31 are for user-space functions
  – nice influences (but does not dictate) the queue level

Windows NT/2K Scheduling
• Involuntary CPU sharing across threads
• Preemptive algorithms
• 32 multi-level queues
  – The highest 16 levels are "real-time"
  – The next lower 15 are for system/user threads
    • Range determined by the process base priority
  – The lowest level is for the idle thread

Scheduler in Linux
• In file kernel/sched.c
• The policy is a variant of RR scheduling

Summary
• The scheduler is responsible for multiplexing the CPU among a set of ready processes/threads
  – It is invoked periodically by a timer interrupt, by a system call, by other device interrupts, or any time the running process terminates
  – It selects from the ready list according to its scheduling policy, which includes non-preemptive and preemptive algorithms
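As a closing illustration of the deterministic modeling mentioned under Algorithm Evaluation, the sketch below (written for this handout, not course-supplied code) feeds one fixed workload – the five service times used in the earlier examples, all arriving at time 0 – to FCFS and to SJN and reports each algorithm's average waiting time:

```c
#include <stdio.h>
#include <stdlib.h>

#define NPROC 5

/* Average waiting time when the jobs run back-to-back in the given order
 * (all jobs assumed to arrive at time 0). */
static double avg_wait(const int t[], int n)
{
    double total = 0.0;
    int start = 0;
    for (int i = 0; i < n; i++) {
        total += start;        /* job i waits until all earlier jobs finish */
        start += t[i];
    }
    return total / n;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int fcfs[NPROC] = { 350, 125, 475, 250, 75 };   /* arrival order        */
    int sjn[NPROC]  = { 350, 125, 475, 250, 75 };

    qsort(sjn, NPROC, sizeof sjn[0], cmp_int);      /* SJN: shortest first  */

    printf("FCFS average wait: %.0f\n", avg_wait(fcfs, NPROC));
    printf("SJN  average wait: %.0f\n", avg_wait(sjn, NPROC));
    return 0;
}
```

This is exactly the deterministic-modeling approach: the workload is fixed in advance, so each policy's numbers are exact for that workload, but they say nothing about how the policies compare on a different mix of jobs.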