CS342302 Operating Systems Mid-term Examination (I) (11/4/2013) 10:10am-12:00pm

1. (24%) Explain the following terms in as much detail as possible: (a) DMA (Direct Memory Access), (b) interrupt vectors and interrupt service routines, (c) multiprogramming, (d) multitasking, (e) virtual machine (VM), (f) deterministic modeling for evaluating algorithms.

(a) The device controller transfers blocks of data from its buffer storage directly to main memory without CPU intervention; only one interrupt is generated per block, rather than one interrupt per byte.
(b) The interrupt vector contains the addresses of all the interrupt service routines, whose execution is triggered by the reception of interrupts.
(c) The running task keeps running until it performs an operation that requires waiting for an external event or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage.
(d) The CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing.
(e) A virtual machine (VM) is a software implementation of a machine that executes programs like a physical machine.
(f) Deterministic modeling takes a particular predetermined workload and defines the performance of each algorithm for that workload.

2. (6%) Explain the following queueing-diagram representation of process scheduling. Where are the long-term, mid-term, and short-term scheduling algorithms in this diagram? What are their functional goals?

(1) 0.5 point each for locating the mid-term, long-term, and short-term schedulers in the diagram.
(2) 1.5 points each:
Short-term scheduler: selects from among the processes that are ready to execute and allocates the CPU to one of them.
Mid-term scheduler: swaps some processes out of memory to relieve memory-space pressure or reduce the degree of multiprogramming. When there is enough memory space, a process is swapped back in and resumes from where it left off. (Swap out: 0.5 point; swap in: 0.5 point; solving the memory-space problem: 0.5 point.)
Long-term scheduler: selects processes from the job pool and loads them into memory for execution.

3. (5%) Using the following program, explain what the output will be at LINE A.

#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int value = 5;

int main(void)
{
    pid_t pid;

    pid = fork();
    if (pid == 0) {            /* child process */
        value += 20;
        return 0;
    } else if (pid > 0) {      /* parent process */
        wait(NULL);
        printf("PARENT: value = %d", value); /* LINE A */
        return 0;
    }
}

LINE A will print PARENT: value = 5. (3 points)
fork() copies the entire program image to create a new child process. (1 point)
The parent and child processes have their own memory; the child simply continues to execute exactly the same program that the parent was running. (1 point)
(0.5 point deducted for each incomplete explanation.)

4. (10%) What is a "thread" and what is the "context" of a thread? What are the differences among user-level threads, kernel-level threads, and processes?

Thread (2): the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler.
Context of a thread (2): the minimal set of data used by the task that must be saved to allow the task to be interrupted at a given time and continued from the point of interruption at an arbitrary future time.
User-level thread (2): created and scheduled by the user-level thread library; it needs a kernel thread's help if it wants to execute a system call. The kernel does not know of the existence of user threads.
Kernel-level thread (2): created and scheduled by the operating system; it can execute system calls. There must be a mapping between user threads and kernel threads.
Process (2): an instance of a computer program or application that is being executed; it contains the program code and its current activity.
(1 point deducted for each incomplete answer, down to zero for that part.)
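As a supplement to questions 3 and 4, the following is a minimal sketch (not part of the original exam) contrasting the two cases: a forked child works on a private copy of the global variable, while a thread created with pthread_create shares the creating process's address space, so its update is visible there. The runner() helper and the printed labels are illustrative assumptions; compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 5;                      /* shared global, as in question 3 */

static void *runner(void *param)    /* thread body: updates the global */
{
    (void)param;
    value += 20;
    return NULL;
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                 /* child: operates on its own copy of value */
        value += 20;
        printf("CHILD (process): value = %d\n", value);      /* prints 25 */
        return 0;
    } else if (pid > 0) {
        wait(NULL);
        printf("PARENT after fork: value = %d\n", value);    /* still 5 */

        pthread_t tid;
        pthread_create(&tid, NULL, runner, NULL);             /* shares value */
        pthread_join(tid, NULL);
        printf("PARENT after thread: value = %d\n", value);  /* now 25 */
    }
    return 0;
}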
5. (10%) Explain the "one-to-one model" and the "many-to-many model" for establishing the relationship between user threads and kernel threads. How are user-level threads scheduled by the thread library onto lightweight processes (LWPs)? How are kernel threads scheduled by the OS, and how does the OS communicate with a process to inform it of the number of allocated LWPs?

One-to-one model (2): creating a user thread requires creating the corresponding kernel thread.
Many-to-many model (2): multiplexes many user-level threads onto a smaller or equal number of kernel threads.
(2) Each LWP is attached to a kernel thread, and it is kernel threads that the operating system schedules to run on physical processors.
(2) Kernel threads are scheduled by the OS. (Any reasonable description of this part receives credit.)
(2) The kernel makes an upcall to the application, which the application can then use to learn the number of allocated LWPs. (Mentioning the upcall receives credit.)

6. (10%) The following program uses the Pthreads API. What would be the output from the program at LINE C and LINE P?

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;
void *runner(void *param); /* the thread */

int main(int argc, char *argv[])
{
    int pid;
    pthread_t tid;
    pthread_attr_t attr;

    pid = fork();
    if (pid == 0) {            /* child process */
        pthread_attr_init(&attr);
        pthread_create(&tid, &attr, runner, NULL);
        pthread_join(tid, NULL);
        printf("CHILD: value = %d", value);   /* LINE C */
    } else if (pid > 0) {      /* parent process */
        wait(NULL);
        printf("PARENT: value = %d", value);  /* LINE P */
    }
}

void *runner(void *param)
{
    value = 10;
    pthread_exit(0);
}

The parent forks a child process, which has its own memory space, then waits for the forked child to complete. The child process creates a new thread that shares the child's memory space and changes the value of the global variable to 10; the child then outputs the value of that variable, which is 10 (LINE C), and finishes. In the parent process the value of the global variable remains unchanged, so the value output at LINE P is 0.
(2 points for the answers, 3 points for the explanation; 1 to 2 points deducted for an incomplete explanation; no credit for an incorrect explanation.)

7. (10%) The traditional UNIX scheduler enforces an inverse relationship between priority numbers and priorities: the higher the number, the lower the priority. The scheduler recalculates process priorities once per second using the following function:

    priority = (recent CPU usage / 2) + base

where base = 60 and recent CPU usage refers to a value indicating how often a process has used the CPU since priorities were last recalculated. Assume that the recent CPU usage for process P1 is 40, for process P2 is 18, and for process P3 is 10. What will be the new priorities for these three processes when priorities are recalculated? Based on this information, does the traditional UNIX scheduler raise or lower the relative priority of a CPU-bound process?

(1) The new priorities are recalculated as follows:
P1: (40 / 2) + 60 = 80
P2: (18 / 2) + 60 = 69
P3: (10 / 2) + 60 = 65
(6 points in total: 3 points for the formula, 3 points for the answers.)
(2) The traditional UNIX scheduler lowers the relative priority of a CPU-bound process. (4 points)
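The recalculation in question 7 can be checked with a few lines of code. This is a minimal sketch, not part of the exam; the new_priority() helper and the hard-coded usage table are illustrative assumptions that simply mirror the numbers given in the question.

#include <stdio.h>

#define BASE 60    /* base priority given in the question */

/* Traditional UNIX recalculation: priority = recent CPU usage / 2 + base.
 * A larger number means a lower priority. */
static int new_priority(int recent_cpu_usage)
{
    return recent_cpu_usage / 2 + BASE;
}

int main(void)
{
    const char *name[] = { "P1", "P2", "P3" };
    int usage[] = { 40, 18, 10 };              /* recent CPU usage per process */

    for (int i = 0; i < 3; i++)
        printf("%s: (%d / 2) + %d = %d\n",
               name[i], usage[i], BASE, new_priority(usage[i]));

    /* Output: P1 = 80, P2 = 69, P3 = 65; the CPU-bound process ends up with
     * the largest number, i.e. the lowest relative priority. */
    return 0;
}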
8. (15%) Consider the following set of processes, with the length of the CPU burst given in milliseconds.

Process   Burst time   Priority
P1        12           3
P2        1            1
P3        2            3
P4        1            4
P5        6            2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
(a) (4%) Draw four Gantt charts that illustrate the execution of these processes using the following scheduling algorithms: FCFS, SJF, nonpreemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 2).
(b) (4%) What is the turnaround time of each process for each of the scheduling algorithms in part (a)?
(c) (4%) What is the waiting time of each process under each of these scheduling algorithms?
(d) (3%) Which of the algorithms results in the minimum average waiting time (over all processes)?

Gantt charts:
FCFS: P1 (0-12), P2 (12-13), P3 (13-15), P4 (15-16), P5 (16-22)
SJF: P2 (0-1), P4 (1-2), P3 (2-4), P5 (4-10), P1 (10-22)
Nonpreemptive priority: P2 (0-1), P5 (1-7), P1 (7-19), P3 (19-21), P4 (21-22)
RR: P1 (0-2), P2 (2-3), P3 (3-5), P4 (5-6), P5 (6-8), P1 (8-10), P5 (10-12), P1 (12-14), P5 (14-16), P1 (16-18), P1 (18-20), P1 (20-22)

Turnaround time:
FCFS -> P1: 12, P2: 13, P3: 15, P4: 16, P5: 22
SJF -> P1: 22, P2: 1, P3: 4, P4: 2, P5: 10
Nonpreemptive priority -> P1: 19, P2: 1, P3: 21, P4: 22, P5: 7
RR -> P1: 22, P2: 3, P3: 5, P4: 6, P5: 16

Waiting time:
FCFS -> P1: 0, P2: 12, P3: 13, P4: 15, P5: 16
SJF -> P1: 10, P2: 0, P3: 2, P4: 1, P5: 4
Nonpreemptive priority -> P1: 7, P2: 0, P3: 19, P4: 21, P5: 1
RR -> P1: 10, P2: 2, P3: 3, P4: 5, P5: 10

Minimum average waiting time: SJF.
(Gantt charts: 1 point per item, 0.5 point deducted for a small mistake. Turnaround time / waiting time: 0.2 point per item. The question asks for the turnaround time for each process of each algorithm; no credit if the per-process values are not given.)

9. (10%) Suppose that the following processes arrive for execution at the times indicated. Each process will run for the amount of time listed. In answering the questions, use nonpreemptive scheduling, and base all decisions on the information you have at the time the decision must be made.

Process   Arrival time   Burst time
P1        0.0            8
P2        0.4            4
P3        1.0            1

(a) (3%) What is the average turnaround time for these processes with the FCFS scheduling algorithm?
(b) (3%) What is the average turnaround time for these processes with the SJF scheduling algorithm?
(c) (4%) The SJF algorithm is supposed to improve performance, but notice that we chose to run process P1 at time 0 because we did not know that two shorter processes would arrive soon. Compute what the average turnaround time will be if the CPU is left idle for the first 1 unit and then SJF scheduling is used. Remember that processes P1 and P2 are waiting during this idle time, so their average waiting time may increase.

(a) FCFS: P1 (0-8), P2 (8-12), P3 (12-13)
P1: 8 - 0 = 8
P2: 12 - 0.4 = 11.6
P3: 13 - 1 = 12
Average: 10.53

(b) SJF: P1 (0-8), P3 (8-9), P2 (9-13)
P1: 8 - 0 = 8
P2: 13 - 0.4 = 12.6
P3: 9 - 1 = 8
Average: 9.53

(c) Idle (0-1), P3 (1-2), P2 (2-6), P1 (6-14)
P1: 14 - 0 = 14
P2: 6 - 0.4 = 5.6
P3: 2 - 1 = 1
Average: 6.87

((a), (b): full credit for a correct answer with the computation shown; if the average is wrong, partial credit per item, 1 point each. (c): 1 point each for the per-process values and for the average.)
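The numbers in question 9 (and, with different inputs, question 8) can be verified mechanically. The following is a minimal sketch, not part of the exam; the simulate() helper and the hard-coded run orders are assumptions for illustration, taking each algorithm's execution order from the answers above and printing the per-process turnaround and waiting times.

#include <stdio.h>

/* One process: arrival time and CPU burst length, as given in question 9. */
struct proc { const char *name; double arrival; double burst; };

/* Run the processes nonpreemptively in the given order, starting at
 * start_time, and print turnaround and waiting times plus the average. */
static void simulate(const char *label, struct proc p[], const int order[],
                     int n, double start_time)
{
    double clock = start_time, total = 0.0;
    printf("%s\n", label);
    for (int i = 0; i < n; i++) {
        struct proc *cur = &p[order[i]];
        if (clock < cur->arrival)          /* CPU idles until the process arrives */
            clock = cur->arrival;
        clock += cur->burst;               /* completion time of this process */
        double turnaround = clock - cur->arrival;
        total += turnaround;
        printf("  %s: turnaround = %.1f, waiting = %.1f\n",
               cur->name, turnaround, turnaround - cur->burst);
    }
    printf("  average turnaround = %.2f\n", total / n);
}

int main(void)
{
    struct proc p[] = { {"P1", 0.0, 8}, {"P2", 0.4, 4}, {"P3", 1.0, 1} };

    int fcfs[] = {0, 1, 2};   /* arrival order: P1, P2, P3 */
    int sjf[]  = {0, 2, 1};   /* P1 first (only process known at t = 0), then P3, P2 */
    int late[] = {2, 1, 0};   /* leave the CPU idle until t = 1, then pure SJF */

    simulate("(a) FCFS", p, fcfs, 3, 0.0);                       /* average 10.53 */
    simulate("(b) SJF", p, sjf, 3, 0.0);                         /* average 9.53  */
    simulate("(c) idle until t = 1, then SJF", p, late, 3, 1.0); /* average 6.87 */
    return 0;
}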