CPS 210 Final Exam

Spring 2002
This exam contains 32 numbered statements. Indicate whether each statement is true or false. If you wish, you may
elaborate on each answer with one sentence.
Synchronization. Define a race as follows: a race exists if and only if two conflicting accesses to some shared variable
occur concurrently (i.e., neither happened-before the other). The following statements pertain to multithreaded programs
using mutexes to eliminate races; a minimal code sketch of this definition appears after statement 7.
1. The implementation of mutexes guarantees that happened-before defines a total ordering on the synchronization (acquire
and release) operations for each mutex.
2. Synchronization induces a total ordering on the write accesses to each shared variable, if and only if the program
execution has no races.
3. Synchronization induces a total ordering on all write accesses to shared variables, if and only if the program execution
has no races.
4. If a program performs a deterministic computation from its arguments with no I/O, then every execution of the program
results in the same ordering of synchronization events.
5. If two executions of a race-free program on the same input perform the same sequence of acquire and release events in
the same order, then the executions perform the same sequence of accesses to shared variables, and both executions
produce the same final result.
6. If a program uses condition variables correctly, then mutex synchronization induces a total ordering on the wait() and
signal() events for each condition variable.
7. If a program uses sleep() and wakeup() primitives correctly, then mutex synchronization induces a total ordering on the
sleep() and wakeup() events.
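For reference, here is a minimal sketch of the race definition given at the top of this section (the loop count and all
names are illustrative, not part of the exam; compile with gcc -pthread): two threads make conflicting write accesses to
a shared counter, and the mutex acquire/release pairs supply the happened-before edges that statements 1 through 7 refer to.

    /* Minimal race sketch: two threads increment a shared counter.
     * Without the mutex the increments are conflicting accesses with no
     * happened-before ordering between them -- a race by the definition
     * above.  With the mutex, the release of one critical section
     * happens-before the next acquire, totally ordering the accesses. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                        /* shared variable */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);              /* acquire */
            counter++;                              /* conflicting write access */
            pthread_mutex_unlock(&lock);            /* release */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);         /* 2000000 with the lock */
        return 0;
    }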
Kernels. These statements apply to classical multiprogrammed kernels, such as Unix, Multics, or Nachos.
8. A page fault handler executing in kernel mode must not block.
9. Giving higher internal CPU scheduling priority to processes that were recently awakened by I/O completions is likely to
improve overall system throughput and response time.
10. For systems with preemptive scheduling, increasing the scheduling quantum is a good way to improve system
throughput.
11. Interrupts and returns are strictly nested, but system call traps and returns might not be.
12. Virtual memory page table formats impose a limit on the amount of physical memory configured in a system.
13. The space overhead for forward-mapped virtual memory page tables depends on the page size and the virtual address
space size, but it is independent of the configured size of physical memory.
14. The overhead of the Unix fork primitive grows linearly with the size of the active address space, and virtual memory
increases this cost significantly.
15. A uniprocessor OS may need to flush entries from its TLB when evicting a page, but not when mapping a new page to
satisfy a page fault.
16. An OS may need to flush entries from its TLB on a context switch, but not when a process exits.
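A back-of-the-envelope sketch of the page table arithmetic behind statement 13, using assumed parameters (a 32-bit virtual
address space, 4KB pages, 4-byte page table entries, none of which are given in the exam); note where the configured
physical memory size does or does not enter the calculation.

    /* Worst-case space for a forward-mapped page table under the assumed
     * parameters above: one PTE per virtual page of the address space. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long va_bits   = 32;     /* assumed virtual address width */
        const unsigned long long page_size = 4096;   /* assumed page size in bytes */
        const unsigned long long pte_size  = 4;      /* assumed bytes per PTE */

        unsigned long long va_space    = 1ULL << va_bits;      /* 4 GB virtual */
        unsigned long long num_ptes    = va_space / page_size; /* 1M virtual pages */
        unsigned long long table_bytes = num_ptes * pte_size;  /* 4 MB of PTEs */

        printf("virtual pages: %llu\n", num_ptes);
        printf("worst-case page table size: %llu MB\n",
               table_bytes / (1024 * 1024));
        return 0;
    }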
Storage
17. The peak read bandwidth of a RAID-4 array grows linearly with the number of disks, but the single parity disk limits
the peak write bandwidth.
18. Doubling the rotation speed of a 10K RPM disk doubles its peak transfer bandwidth, but improves its throughput for
random 8KB reads only marginally.
19. Doubling the bit density of a 10K RPM disk doubles its peak transfer bandwidth, but improves its throughput for
random 8KB reads only marginally.
20. Given a sufficiently fast CPU and appropriate block allocation policies, write-behind and delayed write mechanisms
can enable sequential disk writes at the storage system's peak transfer bandwidth, independent of the write size, but at the
cost of reducing file system reliability in the presence of failures.
21. A WAFL log-oriented file system can deliver data at close to the peak transfer bandwidth of its disk system,
independent of which files or directories are being accessed.
22. A WAFL log-oriented file system can absorb writes and updates at close to the peak transfer bandwidth of its disk
system, independent of which files or directories are being updated.
23. To preserve data integrity, file system software must control the order in which disk writes complete.
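A rough service-time sketch of the arithmetic behind statements 18 and 19, using assumed figures not given in the exam
(5 ms average seek, 40 MB/s baseline media transfer rate): doubling the rotation speed halves rotational latency as well as
doubling transfer bandwidth, while doubling the bit density raises transfer bandwidth alone, and the seek term is untouched
in both cases.

    /* Per-request service time for a random 8KB read:
     * average seek + half a rotation + transfer of the 8KB. */
    #include <stdio.h>

    static double service_ms(double rpm, double xfer_mb_per_s,
                             double seek_ms, double request_kb)
    {
        double rotation_ms = 60000.0 / rpm;            /* one full revolution */
        double rot_delay   = rotation_ms / 2.0;        /* average rotational latency */
        double transfer_ms = (request_kb / 1024.0) / xfer_mb_per_s * 1000.0;
        return seek_ms + rot_delay + transfer_ms;
    }

    int main(void)
    {
        double base    = service_ms(10000, 40.0, 5.0, 8.0); /* assumed 10K RPM baseline */
        double fastrot = service_ms(20000, 80.0, 5.0, 8.0); /* 2x RPM: latency and bandwidth */
        double dense   = service_ms(10000, 80.0, 5.0, 8.0); /* 2x bit density: bandwidth only */

        printf("random 8KB read, baseline:        %.2f ms\n", base);
        printf("random 8KB read, 2x rotation:     %.2f ms\n", fastrot);
        printf("random 8KB read, 2x bit density:  %.2f ms\n", dense);
        return 0;
    }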
Performance and Benchmarking
24. If a computer system behaves as a standard (M/M/1 FIFO) queuing center, then its throughput and utilization grow
linearly with the request arrival rate until saturation.
25. If a computer system behaves as a standard queuing center, then its response time is dominated by queuing delays when
its utilization is above 50%.
26. If a program's run time is evenly balanced between CPU and disk delays, then a faster CPU can improve performance
by no more than 25%.
27. If a program's run time is evenly balanced between CPU and disk delays, then doubling the number of disks will
improve performance by 25%.
28. Synthetic file service benchmarks are a good basis for evaluating server scalability, but they cannot predict the effect of
file server improvements on real application performance.
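Two worked sketches of the arithmetic behind statements 24 through 27, under textbook assumptions not stated in the exam:
the M/M/1 response-time formula R = S / (1 - U) with an assumed 10 ms service time, and Amdahl's Law for a run time split
evenly between CPU and disk (the doubled-disk case assumes disk time scales perfectly with the number of disks).

    /* M/M/1 response time vs. utilization, then Amdahl speedup bounds. */
    #include <stdio.h>

    int main(void)
    {
        /* M/M/1: R = S / (1 - U); queuing delay is R - S. */
        double S = 10.0;                       /* assumed mean service time, ms */
        for (double U = 0.1; U < 1.0; U += 0.2) {
            double R = S / (1.0 - U);
            printf("U = %.1f  R = %6.1f ms  (queuing delay = %6.1f ms)\n",
                   U, R, R - S);
        }

        /* Amdahl: fraction f of run time sped up by factor k. */
        double f = 0.5;                        /* evenly balanced CPU/disk time */
        double k_cpu  = 1e9;                   /* an arbitrarily fast CPU */
        double k_disk = 2.0;                   /* doubling the disks, perfect scaling */
        double speedup_cpu  = 1.0 / ((1.0 - f) + f / k_cpu);
        double speedup_disk = 1.0 / ((1.0 - f) + f / k_disk);
        printf("speedup with an arbitrarily fast CPU: %.2fx\n", speedup_cpu);
        printf("speedup from doubling disk bandwidth: %.2fx\n", speedup_disk);
        return 0;
    }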
Other Stuff
29. In Rialto, selecting the next activity to run takes time linear with the number of reservations and time constraints in
effect.
30. A process running on a Disco virtual node can use processors and memory anywhere on the physical machine, at the
discretion of the virtual machine layer.
31. SnapMirror shows that asynchronous mirroring is effective at the block device level for RAID file systems using
NVRAM.
32. Muse is similar to Odyssey in that it provides an operating system interface for applications to adapt their behavior to
degrade service quality when energy is constrained.