Exam TDDB63/TDDB68
Operating Systems and Process Programming, 2 credits
Programmes: D, DI, EI, I, Ii, C, Y
August 13, 2001
Time: 14.00 - 18.00
Solution
Grade limits: Fail - up to 30p, 3 - 31-40p, 4 - 41-50p, 5 - 51-60p.
Or: Fail - up to 30p, G - 31-45p, VG - 46-60p.
Write your name, class, and personal identity number on all loose sheets. In addition, the following also applies:
1) For the multiple choice questions, cross the circles on the exam paper sheet directly and return the exam paper sheet!
2) Only one answer per sheet. Use only the front side (sub-questions may be on the same sheet).
3) Sort the submitted answers by task number in ascending order. Points will be deducted if this is not followed!
Please write clearly and legibly. Points will not be given for illegible answers. Answers may be in Swedish and/or English.
Your answers must clearly show the solution method. A correct answer alone will not give any points. If you are unsure about a question, write down your interpretation and solve the task based on that interpretation.
Good luck!
Uwe and Jorgen
Task      1 (Mult)  2 (Struc)  3 (Sched)  4 (I-O)  5 (Phil)  6 (VMem)  7 (File)  Grade
Possible  6         8          15         6        9         6         10        60
Points
Task 1: Multiple Choice (6 Points)
Multiple choice questions. Only crosses in the correct column give points. No subtraction in case of a wrong answer.
Cross the circles on this paper sheet and return the paper sheet!
wrong:   An operating system normally runs faster on a virtual machine.
wrong:   Writing an OS in assembler is the best you can do since it results in fast execution of the OS.
correct: A typical microwave oven doesn't have an OS.
wrong:   A process in a real-time OS always runs until the timer interrupts it.
correct: Multiprogramming serves to do more in less time.
wrong:   Synchronous I/O is the fastest I/O method since it does the job on-line.
wrong:   A CPU-bound process has many CPU bursts.
correct: Shortest-Job-First Scheduling with Preemption is also called Shortest-Remaining-Time-First Scheduling.
correct: With Priority Scheduling, processes are subject to starvation.
wrong:   Directories can never be graphs.
wrong:   Symbolic links vanish when the entity they point to is removed.
wrong:   UNIX provides rings for protection.
Task 2: Operating System Structure (8 Points)
a) (4 pt.)
What is the architectural structuring pattern found in most operating systems?
Solution:
Layering (a layered architecture).
Why is UNIX a rather bad example for this architectural pattern?
Solution:
Because it only has two layers, the kernel and the application.
What is different in Mach-based operating systems compared to standard UNIX approaches?
Solution:
Mach-based Unixes have three layers: microkernel, kernel, application.
Name another OS you know that was or is a very good example of this structuring pattern.
Solution:
Multics (had rings), THE (5 layers), OS/2, or WinNT/2000.
b) (4 pt.)
Which hardware execution modes of a CPU do you know?
Solution:
User mode and monitor/supervisor mode.
In which one runs the OS kernel and why?
Solution:
In supervisor mode, special operations (privileged instructions) are possible which are not allowed
in user mode, e.g., disabling interrupts, handling traps, and others. Hence the kernel runs in monitor
mode, and user programs run in user mode.
How are these modes distinguished?
Solution:
By a special hardware bit.
What happens during a system call?
Solution:
a) switch to supervisor mode, b) enter kernel code and address space
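As an illustration (not part of the original exam), both calls below reach the kernel through the same trap; the C library and Python's os module are just two different user-mode stubs for the same system call. This is a minimal sketch assuming a Unix-like host where the C library is loadable via ctypes:

```python
# A user process cannot enter the kernel directly; it executes a trap
# instruction hidden inside a library stub. Calling getpid() through the
# C library and through Python's wrapper both end up in the same kernel
# handler, running in supervisor mode.
import ctypes
import os

libc = ctypes.CDLL(None)       # the C library of this process (Unix-like host assumed)
pid_via_libc = libc.getpid()   # libc stub: trap -> supervisor mode -> kernel -> return
pid_via_os = os.getpid()       # Python's own wrapper around the same system call
print(pid_via_libc == pid_via_os)   # True: same kernel, same answer
```

Whatever the stub looks like, the mode switch itself is always performed by the hardware trap, never by user code.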
Task 3: Scheduling (15 Points)
a) (2 pt.) Enumerate some of the goals which process switching (dispatching) tries to achieve
for users and the CPU.
Solution:
Max. 2 pt, but a half point for each of: Maximize CPU utilization, Maximize throughput, Minimize
turnaround time, Minimize waiting time, Minimize response time
b) (3 pt.) Consider using an effective scheduling method for your daily tasks as a student.
Assume that the following tasks arrive over time at the job queue of your brain:
Task                         Arrival time in your mind   Burst time of your brain
P1 brew coffee                6.00                       0.15
P2 have breakfast             6.00                       1
P3 read OS book               7.00                       5
P4 visit skattemyndigheten   10.00                       1
P5 have lunch                12.00                       0.15
P6 visit friend              12.00                       4
P7 solve old OS exams        14.00                       4
P8 brew tea                  17.00                       0.15
For non-preemptive SJF scheduling (shortest job first scheduling), i) draw a Gantt chart with arrival
and execution time, ii) calculate the average waiting time for all of your jobs, and iii) calculate the
turnaround time. You may ignore the time for context switching.
Gantt chart:

start        task                        turnaround (compl - arrival)
06.00-06.15  P1 brew coffee              0.15
06.15-07.15  P2 have breakfast           1.15
07.15-12.15  P3 read OS book             5.15
12.15-12.30  P5 have lunch               0.30
12.30-13.30  P4 visit skattemyndigheten  3.30
13.30-17.30  P6 visit friend             5.30
17.30-17.45  P8 brew tea                 0.45
17.45-21.45  P7 solve old OS exams       7.45
Unfortunately, P7 finishes very late, although a lot of tasks are done already early in the afternoon.
The waiting time of each job is its start time minus its arrival time, so
Average waiting time = (0 + 0.15 + 0.15 + 0.15 + 3.45 + 2.30 + 1.30 + 0.30)/8 = 9/8 h
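The schedule above can be checked mechanically. The following sketch (not part of the original solution; task names from the table, times converted to minutes after 06.00) simulates non-preemptive SJF and reproduces both the execution order and the 9/8 h (67.5 min) average waiting time:

```python
# Non-preemptive SJF over the task table, times in minutes after 06.00.
tasks = {  # name: (arrival, burst)
    "P1": (0, 15), "P2": (0, 60), "P3": (60, 300), "P4": (240, 60),
    "P5": (360, 15), "P6": (360, 240), "P7": (480, 240), "P8": (660, 15),
}

time, done, waits, order = 0, set(), {}, []
while len(done) < len(tasks):
    ready = [t for t, (a, _) in tasks.items() if a <= time and t not in done]
    if not ready:                                 # idle until the next arrival
        time = min(a for t, (a, _) in tasks.items() if t not in done)
        continue
    t = min(ready, key=lambda name: tasks[name][1])   # shortest burst first
    waits[t] = time - tasks[t][0]                 # waiting = start - arrival
    order.append(t)
    time += tasks[t][1]                           # run to completion
    done.add(t)

avg_wait = sum(waits.values()) / len(tasks)
print(order)      # ['P1', 'P2', 'P3', 'P5', 'P4', 'P6', 'P8', 'P7']
print(avg_wait)   # 67.5 minutes = 9/8 hours
```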
c) (3 pt.) Do the same for preemptive SJF scheduling (shortest job first scheduling).
Gantt chart:

start        task                        turnaround (compl - arrival)
06.00-06.15  P1 brew coffee              0.15
06.15-07.15  P2 have breakfast           1.15
07.15-10.00  P3 read OS book
10.00-11.00  P4 visit skattemyndigheten  1
11.00-12.00  P3 read OS book
12.00-12.15  P5 have lunch               0.15
12.15-13.30  P3 read OS book             6.30
13.30-17.00  P6 visit friend
17.00-17.15  P8 brew tea                 0.15
17.15-17.45  P6 visit friend             5.45
17.45-21.45  P7 solve old OS exams       7.45

Average waiting time = (0 + 0.15 + (0.15 + 1 + 0.15) + 0 + 0 + 1.45 + 3.45 + 0)/8 = 7.15/8 h
d) (4 pt.) Do the same for Round Robin scheduling with time quantum 1 h. If two processes
are candidates with equal rights, prefer the one with the shorter burst.
Solution:
Gantt chart:

start        task                        waiting  turnaround  rest
06.00-06.15  P1 brew coffee              0        0.15
06.15-07.15  P2 have breakfast           0.15     1.15
07.15-10.15  P3 read OS book             0.15                 P3-2
10.15-11.15  P4 visit skattemyndigheten  0.15     1.15        P3-2
11.15-12.15  P3 read OS book             1                    P3-1
12.15-12.30  P5 have lunch               0.15     0.30        P3-1
12.30-13.30  P6 visit friend             0.30                 P6-3, P3-1
13.30-14.30  P3 read OS book             1.15     7.30        P6-3
14.30-15.30  P7 solve old OS exams       0.30                 P6-3, P7-3
15.30-16.30  P6 visit friend             2                    P6-2, P7-3
16.30-17.30  P7 solve old OS exams       1                    P6-2, P7-2
17.30-17.45  P8 brew tea                 0.30     0.45        P6-2, P7-2
17.45-18.45  P6 visit friend             1.15                 P6-1, P7-2
18.45-19.45  P7 solve old OS exams       1.15                 P6-1, P7-1
19.45-20.45  P6 visit friend             1        8.45        P7-1
20.45-21.45  P7 solve old OS exams       1        7.45

(In 07.15-10.15, P3 receives three consecutive 1 h quanta since no other process is ready; the
"waiting" column shows how long the process waited since it last became ready.)

Average waiting time = (0 + 0.15 + (0.15 + 1 + 1.15) + 0.15 + 0.15 + (0.30 + 2 + 1.15 + 1) +
(0.30 + 1 + 1.15 + 1) + 0.30)/8 = (0 + 0.15 + 2.30 + 0.15 + 0.15 + 4.45 + 3.45 + 0.30)/8 = 12.15/8 h
e) (2 pt.)
If you don't like to wait long whenever a task comes up in your mind, which
scheduling method do you prefer for your day and why?
Solution:
a) One prefers SJF with preemption, b) since preemptive SJF has the minimum average waiting time;
c) with simple (non-preemptive) SJF, important long tasks complete too late, and d) round robin
splits many tasks.
f) (2 pt.) With SJF, what is bad with task P7? How would you improve this?
Solution:
a) Since it is so long, P7 is always scheduled in the evening. b) With a 2-level priority scheme (work
items first), it would be scheduled earlier. Then P6 would be scheduled in the evening. Another
scheduling method such as Round Robin is also valid.
Task 4: Input and Output (6 Points)
a) (2 pt.)
What is DMA and why is it used?
Solution:
a) DMA, direct memory access, transfers bytes in blocks from the disk to the memory. b) It is
faster than interrupting the CPU byte by byte.
b) (1 pt.) When is it used, and when not?
Solution:
a) Used for fast I/O devices, not necessarily for slow ones.
c) (1 pt.) Should DMA be synchronous or asynchronous? Motivate.
d) (1 pt.) Which hardware is required for DMA?
Solution:
A DMA controller.
e) (1 pt.) Which software in the kernel is responsible for DMA?
Solution:
The device driver of the device.
Task 5: The Dining Philosophers (Parallelism and Deadlocks) (9 Points)
a) (1 pt.)
What is a deadlock?
Solution:
Processes often need several resources, which they allocate one after the other. If they
mutually allocate 'away' what the others need, they will block each other forever.
Alternative: Processes allocate several devices. If a cycle occurs in the induced relation (the
resource-allocation graph), the processes wait for each other cyclically and are blocked forever.
b) (2 pt.) Consider the Dining Philosophers problem: 5 philosophers sit around a table with 5
plates, but also only 5 chopsticks. Normally, they think. But once in a while, a philosopher takes
up his left, then his right chopstick and starts to eat from his plate. After being fed up, he lays
down the chopsticks again.
Consider the following code which realizes this algorithm. Every chopstick is realized by a semaphore. A philosopher i has to acquire his left (chopstick[i]) and his right semaphore before he can
eat.
chopstick: Array[0..4] of Semaphore = false;
while true do
wait(chopstick[i]);
...
eat();
...
think();
done
Copy the above code to your solution and fill in the missing operations.
c) (2 pt.) Is this code subject to deadlock? Tell why or why not.
Solution:
i) Yes, it is. i1) Every philosopher has to allocate 2 resources.
i2) If they all pick up their left chopstick at the same time before one of them can grip the right,
there is a deadlock. Alternative: there is a circular dependency among the philosophers via the
chopsticks.
d) (1 pt.) Draw the resource allocation graph for the above problem.
Solution:
[Figure: resource-allocation graph — the processes DP0 to DP4 and the resources chopstick0 to
chopstick4 alternate around a cycle; each philosopher DPi holds one chopstick and requests the
next one.]

e) (3 pt.) Propose a method to avoid the deadlock. Motivate why it works. Show the modified code.
Solution:
chopstick: Array[0..4] of Semaphore = false;
while true do
    if (i mod 2 = 0)
        wait(chopstick[(i+1) mod 5]);
        wait(chopstick[i]);
    else
        wait(chopstick[i]);
        wait(chopstick[(i+1) mod 5]);
    end
    eat();
    signal(chopstick[i]);
    signal(chopstick[(i+1) mod 5]);
    think();
done
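As an illustration (not part of the original solution), the asymmetric scheme can be run directly with threads and semaphores, where wait corresponds to acquire and signal to release. The bounded run below terminates, which it could not do if the philosophers deadlocked:

```python
# Dining philosophers with the asymmetric acquisition order: even-numbered
# philosophers take their right chopstick first, odd-numbered their left,
# so a full cycle of waiting philosophers cannot form.
import threading

N = 5
ROUNDS = 50
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = chopstick[i], chopstick[(i + 1) % N]
    # Break the symmetry: reverse the acquisition order for even i.
    first, second = (right, left) if i % 2 == 0 else (left, right)
    for _ in range(ROUNDS):
        first.acquire()      # wait(...)
        second.acquire()     # wait(...)
        meals[i] += 1        # eat()
        second.release()     # signal(...)
        first.release()      # signal(...)
        # think()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # joins only succeed because no deadlock occurs
print(meals)                 # [50, 50, 50, 50, 50]
```

Other standard fixes (a global ordering on chopsticks, or admitting at most four philosophers to the table) work for the same reason: they make a cyclic wait impossible.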
Task 6: Virtual Memory (6 Points)
a) (4 pt.)
Suppose that we have a system with virtual-memory demand paging where the page size is 200
and each location is large enough to hold an integer. Consider the 100 by 100 two-dimensional
integer array
int a[100][100];
Assume that the array is stored in row-major order at logical address 200 onwards. Thus, the first
row, a[0][0] to a[0][99], is stored from 200 to 299, the second row, a[1][0] to a[1][99], is stored from
300 to 399, and so on. Suppose further that there is a small process in page 0 (logical addresses
0 to 199) which manipulates the array. Thus, all instructions in the following are assumed to be
fetched from page 0.
For 10 page frames, how many page faults are generated by each of the following two array-initialization
loops if LRU page replacement is being used, assuming page frame 9 initially holds
the process, the remaining page frames (0 to 8) are initially empty, and that the variables i and
j have been register-allocated so that they do not have to be fetched from memory? Justify your
answers briefly.
// Loop 1:
for (i = 0; i < 100; i++)
for (j = 0; j < 100; j++)
a[i][j] = 0;
// Loop 2:
for (j = 0; j < 100; j++)
for (i = 0; i < 100; i++)
a[i][j] = 0;
Solution:
The array occupies 50 pages. Loop 1 accesses the array sequentially row by row. There will thus
be page faults when the rows 0, 2, ... 98 are initially accessed; i.e., 50 page faults. Since LRU page
replacement is used and the code is used frequently, page 0 will never be paged out. [It should not
be paged out in any case since restarting an instruction which caused a page fault then immediately
will cause another page fault.] Once a page of the array has been paged in, all elements of the
array in that page are accessed before the next page is brought in, and then not accessed again.
Thus there are no more page faults, and the total number of page faults is 50.
Loop 2, on the other hand, accesses the array column by column. As before, the code page will
not be paged out. That leaves 9 page frames for the array. But 9 page frames is not enough to
hold the entire array. Since the columns are accessed sequentially, from row 0 to row 99, the LRU
scheme from the point of view of the array reduces to a simple FIFO scheme. Thus there will be
50 page faults for each column; i.e., 5000 page faults!
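Both fault counts can be checked with a small simulation. This sketch (not part of the original solution) models only the 9 page frames available to the array, taking the code page as pinned in frame 9, and replays both access sequences under LRU:

```python
# LRU page-fault simulation for the two loops.
# addr(a[i][j]) = 200 + 100*i + j, page size 200, so the array spans pages 1..50.
def page_faults(accesses, frames=9):
    resident = []                      # least recently used first
    faults = 0
    for page in accesses:
        if page in resident:
            resident.remove(page)      # refresh its LRU position
        else:
            faults += 1
            if len(resident) == frames:
                resident.pop(0)        # evict the least recently used page
        resident.append(page)
    return faults

def pages(index_pairs):
    return [(200 + 100 * i + j) // 200 for i, j in index_pairs]

loop1 = pages((i, j) for i in range(100) for j in range(100))  # row by row
loop2 = pages((i, j) for j in range(100) for i in range(100))  # column by column

print(page_faults(loop1))   # 50
print(page_faults(loop2))   # 5000
```

In loop 2, each column touches all 50 array pages in increasing order, so with only 9 frames LRU degenerates to FIFO and every page access after the first of each pair misses, exactly as argued above.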
b) (1 pt.) What is meant by the term thrashing?
Solution:
A process is thrashing if it spends more time on paging than on executing.
c) (1 pt.) Suggest a way to reduce thrashing!
Solution:
[The question is phrased in very general terms, so any one of the following answers is acceptable.] As a system administrator, one could add more primary memory to the system. As an
implementor, one could make use of a local page replacement policy. A better solution would be to
use the working set model to estimate the current memory needs of the processes in order to keep
the degree of multiprogramming at a sensible level. Another possibility is to monitor the page fault
frequency PFF of the processes and increase the number of page frames allocated to a process if
the PFF tends to increase and vice versa.
Task 7: Protection and File systems (10 Points)
a) (5 pt.) Describe the UNIX inode scheme for allocating disk space to a file. Then make
reasonable assumptions regarding the size of the inodes, disk blocks and disk block addresses, and
compute how many disk accesses it takes to read the second block of a 100-block file and how many
disk accesses it takes to write and add a new block to the end of a 100-block file. Do not count
I/O of the inode itself, but do count all other blocks that must be read and written in order to
accomplish the above operations.
Solution:
See S & G pp. 380/381. One reasonable assumption is that there are 10 direct pointers, the block
size is 4 Kbyte, and that each block address requires four bytes. Under these assumptions, there is
room for 1024 pointers in an indirect block. Reading the second block thus requires one disk access
(reading the inode is not counted) since the second block is a direct block whose address thus can
be found directly in the inode. Adding a new block to the end of a 100-block file would require
three accesses. First, the block itself has to be written. Then its address must be written into the
first indirect block. This block must thus be read, modified, and then written back. [Finding a
free disk block may also require disk accesses. This need not be taken into account here. Many
systems maintain tables of at least a few free blocks in primary memory so as to avoid having to
access the disk too often in order to find one. (Cf. FAT systems, where the FAT itself (which, at
least to a large extent, is cached in primary memory) also keeps track of the free disk blocks.)]
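The access arithmetic under these assumptions can be expressed as a small sketch (the function names and the cut-off at single indirection are illustrative, not from the original solution):

```python
# Disk-access counts under the stated assumptions:
# 10 direct pointers, 4 Kbyte blocks, 4-byte block addresses.
DIRECT = 10
BLOCK_SIZE = 4096
ADDR_SIZE = 4
PTRS_PER_INDIRECT = BLOCK_SIZE // ADDR_SIZE   # 1024 pointers per indirect block

def accesses_to_read(block_index):
    """Disk accesses to read file block block_index (inode I/O not counted)."""
    if block_index < DIRECT:
        return 1          # address is in the inode itself: read the data block
    if block_index < DIRECT + PTRS_PER_INDIRECT:
        return 2          # read the single-indirect block, then the data block
    raise NotImplementedError("double/triple indirection not modelled here")

def accesses_to_append(old_size_in_blocks):
    """Disk accesses to append one block (assumes the indirect block exists)."""
    new_index = old_size_in_blocks      # index of the block being added
    if new_index < DIRECT:
        return 1          # just write the new data block
    if new_index < DIRECT + PTRS_PER_INDIRECT:
        return 3          # write data + read-modify-write the indirect block
    raise NotImplementedError("double/triple indirection not modelled here")

print(accesses_to_read(1))      # second block (index 1): 1 access
print(accesses_to_append(100))  # append to a 100-block file: 3 accesses
```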
b) (1 pt.) What is meant by mounting a file system?
Solution:
Mounting can be understood as gluing together two separate file systems into a single, logical file
system. The mount point in the system on which the other is being mounted is identified with the
root of the mounted system. Thus the entire directory structure of the mounted system will appear
under the mount point.
c) (2 pt.) In UNIX, rights to various objects of protection (e.g., files) are granted to individual
users, to groups of users determined in advance by the system administrator, and to all users of
the system. Give a simple example of a situation where this is not flexible enough. Would a system
which supported access lists solve your problem? Explain!
Solution:
Suppose a user wants to give a few of his friends access to one of his files while not granting the
same privileges to anyone else. Unless this group of friends is a group in the UNIX sense, this
cannot be done. However, had access lists been used, the user could have granted access rights
to people on an individual basis. An access list is basically a list associated with an object of
protection which lists users (or whatever domains there are) and corresponding rights to the object
in question.
d) (2 pt.) In UNIX, what happens when a file whose setuid bit is set is executed? Explain the
set-user-id mechanism.
Solution:
All Unix processes have a UID attribute (user identity) indicating the user on behalf of whom the
process is executing. All objects of protection in Unix are associated with a mode granting rights to
individual users or groups of users. Thus the UID of a process determines what the process can do.
When a process calls exec in order to execute a new program (i.e., load new code into the process),
the UID is normally unchanged. However, if the setuid bit is set for the file containing the code,
the UID is changed to the identity of the owner of that file. Thus the process now has all the
rights and restrictions of that user instead. This mechanism is typically used to make it possible
for ordinary processes to perform privileged operations in a controlled manner. Such programs are
often setuid root and can thus perform any operation, but as long as the code is carefully written
(and the OS is bug-free and correctly configured), the program will only perform the intended
operations.
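As a small illustration (not part of the exam; a Unix-like host is assumed), the setuid bit itself is just a mode bit that can be set and inspected with the standard chmod/stat interfaces; the UID switch only happens when the kernel execs the file:

```python
# Set and inspect the setuid bit on a scratch file we own.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o4755)            # rwsr-xr-x: mode 755 plus the setuid bit
mode = os.stat(path).st_mode
print(bool(mode & stat.S_ISUID))  # True: the setuid bit is set

# Merely setting the bit changes nothing yet; only when the file is exec'ed
# does the kernel set the process's effective UID to the file's owner.
os.remove(path)
```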