Outline
• Announcements
• Process Management – continued
• Process Scheduling
• Non-preemptive scheduling algorithms
  – FCFS
  – SJN
  – Priority scheduling
  – Deadline scheduling

Announcements
• We will have a recitation session tomorrow
  – We will go over the first quiz
  – We will discuss thread creation and thread synchronization through mutexes
• On Oct. 2, Dr. Andy Wang will give a lecture
  – I will be attending a symposium on that day
• On Oct. 16
  – I need to attend a conference
  – I will use Oct. 15 to make up the lecture and use the Oct. 16 class time for a demonstration of the first lab

Announcements – cont.
• The midterm exam will be on Oct. 23, 2003
  – During the regular class time
  – We will have a review on Tuesday, Oct. 21, 2003
  – I will answer questions on Wed., Oct. 22, 2003 during the recitation sessions

Hardware Process - Review
[Figure: hardware process progress – the machine is powered up, the bootstrap loader loads the kernel, the kernel initializes, and then the interrupt handler and process manager alternate between executing a scheduled thread (from P1, P2, …, Pn) and servicing interrupts]

Implementing the Process Abstraction - review
[Figure: each process Pi, Pj, Pk, … is given its own abstract CPU and executable memory within its own address space; the OS, in its own address space, multiplexes them onto the physical machine (ALU, control unit, executable memory) through the OS interface]

Context Switching - review
[Figure: the CPU's state is saved into the old thread descriptor and reloaded from the new thread descriptor]

Process Descriptors
• OS creates/manages the process abstraction
• The descriptor is the data structure kept for each process
  – Type & location of resources it holds
  – List of resources it needs
  – List of threads
  – List of child processes
  – Security keys

System Overview
[Figure]

The Abstract Machine Interface
[Figure: an application program uses abstract machine instructions – user-mode instructions plus trap instructions such as fork(), open(), create() that enter the OS – while the OS may also execute supervisor-mode instructions]

Modern Processes and Threads – cont.
[Figure]
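The abstract machine interface slide above lists fork(), open(), and create() as trap instructions that carry a user-mode program into the OS. As a concrete, minimal sketch (standard POSIX calls; not taken from the slides), the fragment below shows a parent crossing that interface with fork() and then waiting for its child, which also previews the parent-child hierarchy discussed later in this lecture.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();           /* trap into the OS: create a child process */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {
            /* child: a separate process with its own address space */
            printf("child %d running\n", (int)getpid());
            exit(0);
        }
        /* parent: controls the child, here simply waiting for it to finish */
        int status;
        waitpid(pid, &status, 0);     /* reaps the child so it does not linger as a zombie */
        printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
        return 0;
    }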
The Address Space
[Figure: address binding maps a process's address space onto executable memory, files, and other objects]

Diagram of Process State
[Figure]

A Process Hierarchy
[Figure]

Process Hierarchies
• Parent-child relationship may be significant: parent controls children's execution
[Figure: state diagram with Ready-Active, Running, Blocked-Active, Ready-Suspended, and Blocked-Suspended states; transitions include Start, Schedule, Request, Allocate, Yield, Done, and Suspend/Activate between the active and suspended states]

UNIX State Transition Diagram
[Figure: Start leads to Runnable; Schedule moves a Runnable process to Running; an I/O request moves it to Sleeping or Uninterruptible Sleep, and I/O completion returns it to Runnable; a process may also be Traced or Stopped and later Resume; when Done it becomes a zombie until its parent waits for it]

Scheduling
• The scheduling mechanism is the part of the process manager that handles removing the running process from the CPU and selecting another process on the basis of a particular strategy
  – The scheduler chooses one of the ready threads to use the CPU when it is available
  – The scheduling policy determines when it is time for a thread to be removed from the CPU and which ready thread should be allocated the CPU next

Process Scheduler Organization
[Figure: new processes enter the ready list ("ready"); the scheduler dispatches one to the CPU ("running"); a running job may be preempted or voluntarily yield back to the ready list, request a resource and block ("blocked") until the resource manager allocates it, or finish ("done")]

Scheduler as CPU Resource Manager
[Figure: the scheduler dispatches ready-to-run processes to the CPU and receives it back on release, managing units of time on a time-multiplexed CPU]

The Scheduler
[Figure: the enqueuer places the descriptor of a ready process on the ready list; the context switcher saves the state of the running process; the dispatcher selects the next process from the ready list and gives it the CPU]

Process/Thread Context
[Figure: the CPU context consists of the general registers R1 … Rn, the status registers, the PC and IR, and the ALU/control-unit state]

Context Switching - review
[Figure: the CPU's state is saved into the old thread descriptor and reloaded from the new thread descriptor]

Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the scheduler; this involves:
  – switching context
  – switching to user mode
  – jumping to the proper location in the user program to restart that program

Diagram of Process State
[Figure]

CPU Scheduler
• Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
• CPU scheduling decisions may take place when a process:
  1. Switches from running to waiting state.
  2. Switches from running to ready state.
  3. Switches from waiting/new to ready.
  4. Terminates.

CPU Scheduler – cont.
• Non-preemptive and preemptive scheduling
  – Scheduling under 1 and 4 is non-preemptive
    • A process runs for as long as it likes
    • In other words, non-preemptive scheduling algorithms allow any process/thread to run to "completion" once it has been allocated the processor
  – All other scheduling is preemptive
    • The CPU may be preempted before a process finishes its current CPU burst

Voluntary CPU Sharing
• Each process will voluntarily share the CPU
  – By calling the scheduler periodically
  – The simplest approach
  – Requires a yield instruction to allow the running process to release the CPU (a user-level sketch follows below)

Voluntary CPU Sharing – cont.
[Figure]
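The yield-based approach above can be imitated at user level with the POSIX sched_yield() call, which asks the scheduler to hand the processor to another ready thread. The fragment below is a minimal sketch (the loop and the "work" are hypothetical); it is an analogue of, not the kernel-level yield instruction mentioned on the slide.

    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        for (int i = 0; i < 5; i++) {
            /* ... do a small unit of work ... */
            printf("step %d done, yielding the CPU\n", i);
            /* voluntarily give up the CPU; the scheduler may run another
               ready thread/process, or resume this one if none is waiting */
            sched_yield();
        }
        return 0;
    }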
Involuntary CPU Sharing
• Periodic involuntary interruption
  – Through an interrupt from an interval timer device
    • Which generates an interrupt whenever the timer expires
  – The scheduler will be called in the interrupt handler
  – A scheduler that uses involuntary CPU sharing is called a preemptive scheduler

Programmable Interval Timer
[Figure]

Strategy Selection
• The scheduling criteria will depend in part on the goals of the OS and on priorities of processes, fairness, overall resource utilization, throughput, turnaround time, response time, and deadlines

Working Process Model and Metrics
• P will be a set of processes, p0, p1, ..., pn-1
  – S(pi) is the state of pi
  – t(pi), the service time
    • The amount of time pi needs to be in the running state before it is completed
  – W(pi), the waiting time
    • The time pi spends in the ready state before its first transition to the running state
  – TTRnd(pi), the turnaround time
    • The amount of time between the moment pi first enters the ready state and the moment pi exits the running state for the last time

Partitioning a Process into Small Processes
• A process intersperses computation and I/O requests
  – If a process requests k different I/O operations during its lifetime, the result is k+1 service-time requests interspersed with k I/O requests
  – For CPU scheduling, pi can be decomposed into k+1 smaller processes pij, where each pij can be executed without I/O

Alternating Sequence of CPU and I/O Bursts
[Figure]

Histogram of CPU-burst Times
[Figure]

Review: Compute-bound and I/O-bound Processes
• Compute-bound processes
  – Generate I/O requests infrequently
  – Spend more of their time doing computation
• I/O-bound processes
  – Spend more of their time doing I/O than doing computation

Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time from when a request is submitted until the first response is produced, not the complete output (for time-sharing environments)

Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
• Which one to use depends on the system's design goal

Everyday scheduling methods
• First-come, first served
• Shorter jobs first
• Higher-priority jobs first
• Job with the closest deadline first
• Round-robin

FCFS at the supermarket
[Figure]

SJF at the supermarket
[Figure]

Round-robin scheduling
[Figure]

First-Come-First-Served
• Assigns priority to processes in the order in which they request the processor

First-Come-First-Served – cont. (the original slides build up the schedule one process at a time)

  i      0    1    2    3    4
  t(pi)  350  125  475  250  75

  Schedule: p0 [0, 350], p1 [350, 475], p2 [475, 950], p3 [950, 1200], p4 [1200, 1275]

  TTRnd(p0) = t(p0) = 350                             W(p0) = 0
  TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475     W(p1) = TTRnd(p0) = 350
  TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950     W(p2) = TTRnd(p1) = 475
  TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200    W(p3) = TTRnd(p2) = 950
  TTRnd(p4) = t(p4) + TTRnd(p3) = 75 + 1200 = 1275    W(p4) = TTRnd(p3) = 1200

FCFS Average Wait Time
• Easy to implement
• Ignores service time, etc.
• Not a great performer
  Wavg = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595
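The arithmetic above can be reproduced mechanically. The following is a minimal C sketch (not from the slides) that computes W(pi), TTRnd(pi), and the average wait for the five service times of this example under FCFS order, assuming all five processes are ready at time 0; it prints the same 595 average derived above.

    #include <stdio.h>

    int main(void) {
        int t[] = {350, 125, 475, 250, 75};   /* service times from the example */
        int n = 5;
        int clock = 0;                        /* all processes assumed ready at time 0 */
        double total_wait = 0;

        for (int i = 0; i < n; i++) {         /* FCFS: run in arrival (index) order */
            int wait = clock;                 /* W(pi): time spent in the ready list */
            int turnaround = clock + t[i];    /* TTRnd(pi): completion time */
            printf("p%d: W = %4d  TTRnd = %4d\n", i, wait, turnaround);
            total_wait += wait;
            clock = turnaround;
        }
        printf("Wavg = %.0f\n", total_wait / n);   /* 595 for this workload */
        return 0;
    }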
Predicting Wait Time in FCFS
• In FCFS, when a process arrives, all jobs in the ready list will be serviced before this job
• Let m be the service rate
• Let L be the ready-list length
• Wavg(p) = L*(1/m) + 0.5*(1/m) = L/m + 1/(2m)
• Compare the predicted wait with the actual wait in the earlier examples

Shortest-Job-Next Scheduling
• Associate with each process the length of its next CPU burst.
  – Use these lengths to schedule the process with the shortest time.
• SJN is optimal – it gives the minimum average waiting time for a given set of processes.

Shortest-Job-Next Scheduling – cont.
• Two schemes:
  – Non-preemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
  – Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-Next (SRTN).

Nonpreemptive SJN
[Figure]

Shortest Job Next – cont. (same workload; the original slides build up the schedule one process at a time)

  i      0    1    2    3    4
  t(pi)  350  125  475  250  75

  Schedule: p4 [0, 75], p1 [75, 200], p3 [200, 450], p0 [450, 800], p2 [800, 1275]

  TTRnd(p4) = t(p4) = 75                                                   W(p4) = 0
  TTRnd(p1) = t(p1)+t(p4) = 125+75 = 200                                   W(p1) = 75
  TTRnd(p3) = t(p3)+t(p1)+t(p4) = 250+125+75 = 450                         W(p3) = 200
  TTRnd(p0) = t(p0)+t(p3)+t(p1)+t(p4) = 350+250+125+75 = 800               W(p0) = 450
  TTRnd(p2) = t(p2)+t(p0)+t(p3)+t(p1)+t(p4) = 475+350+250+125+75 = 1275    W(p2) = 800

• Minimizes wait time
• May starve large jobs
• Must know service times
  Wavg = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305
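SJN's scheduling decision is simply "pick the ready process with the smallest (predicted) next CPU burst." Below is a minimal sketch of that selection step (the ready-list representation and field names are hypothetical, not from the slides); running the five processes above in the resulting order p4, p1, p3, p0, p2 gives the 305 average wait just computed.

    #include <stddef.h>

    struct ready_entry {
        int pid;
        int next_burst;    /* predicted length of the next CPU burst */
    };

    /* Non-preemptive SJN: return the index of the ready process with the
       shortest predicted next CPU burst, or -1 if the ready list is empty. */
    int sjn_select(const struct ready_entry *ready, size_t n) {
        if (n == 0)
            return -1;
        size_t best = 0;
        for (size_t i = 1; i < n; i++) {
            if (ready[i].next_burst < ready[best].next_burst)
                best = i;
        }
        return (int)best;
    }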
Priority Scheduling
• In priority scheduling, processes/threads are allocated the CPU on the basis of an externally assigned priority
  – A commonly used convention is that lower numbers have higher priority
  – Static priorities vs. dynamic priorities
    • Static priorities are computed once at the beginning and are not changed
    • Dynamic priorities allow threads to become more or less important depending on how much service they have recently received

Priority Scheduling – cont.
• There are non-preemptive and preemptive priority scheduling algorithms
  – Preemptive
  – Non-preemptive
• SJN is priority scheduling where the priority is the predicted next CPU burst time.
• FCFS is priority scheduling where the priority is the arrival time.

Nonpreemptive Priority Scheduling
[Figure]

Priority Scheduling – cont.
[Figure]

Priority Scheduling – cont.
[Figure]

Priority Scheduling – cont.
• Starvation problem – low-priority processes may never execute.
• Solution through aging – as time progresses, increase the priority of the process (a small sketch appears after the summary).

Deadline Scheduling
• Allocates service by deadline
• May not be feasible

  i         0    1    2     3       4
  t(pi)     350  125  475   250     75
  Deadline  575  550  1050  (none)  200

[Figure: timeline from 0 to 1275 with the deadlines 200, 550, 575, and 1050 marked, showing several orderings that meet every deadline, e.g. p4, p1, p0, p2, p3]

Summary
• Process/thread scheduler organization
• Non-preemptive scheduling algorithms
  – FCFS
  – SJN
  – Priority
  – Deadline
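As a closing sketch of the aging remedy mentioned under priority scheduling, the fragment below shows one possible way (the fields, the threshold, and the boost step are illustrative assumptions, not taken from the slides) to combine an externally assigned priority (lower number = higher priority) with aging, so that a long-waiting process cannot starve indefinitely.

    #include <stddef.h>

    struct prio_entry {
        int pid;
        int priority;   /* externally assigned; lower value = higher priority */
        int waited;     /* time units spent waiting in the ready list */
    };

    /* Age the ready list: once a process has waited 'threshold' time units,
       boost it by one priority level and restart its aging clock. */
    void age(struct prio_entry *ready, size_t n, int elapsed, int threshold) {
        for (size_t i = 0; i < n; i++) {
            ready[i].waited += elapsed;
            if (ready[i].waited >= threshold && ready[i].priority > 0) {
                ready[i].priority--;      /* smaller number = more important */
                ready[i].waited = 0;
            }
        }
    }

    /* Non-preemptive priority selection: index of the highest-priority entry,
       or -1 if the ready list is empty. */
    int prio_select(const struct prio_entry *ready, size_t n) {
        if (n == 0)
            return -1;
        size_t best = 0;
        for (size_t i = 1; i < n; i++)
            if (ready[i].priority < ready[best].priority)
                best = i;
        return (int)best;
    }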