Lecture 22

Reminder: No class on Friday. Have a fun
spring break!
Reminder: Homework 5 due Wednesday after
spring break. Also, think about the case study.
Questions?
Wednesday, March 2
CS 470 Operating Systems - Lecture 22
1
Outline

- Dynamic, partial, non-contiguous storage organization
- Demand paging
  - Address translation
  - Effective memory access time
- Pre-fetching
Dynamic, Partial, Non-Contiguous Organization

Recall: The issues in storage organization include providing support for:

- single vs. multiple processes
- complete vs. partial allocation
- fixed-size vs. variable-size allocation
- contiguous vs. fragmented allocation
- static vs. dynamic allocation of partitions

Start looking at the effects of the partial, fragmented (non-contiguous), and dynamic choices on storage organization and management. This part of the design space is virtual memory.
Partial Allocation

So far, all schemes considered have used static, complete allocation: when a process is admitted, its entire logical address space is loaded into physical memory.

What happens if we allow partial allocation? Partial allocation necessarily implies dynamic allocation (during run-time), so there must be a mechanism for identifying the logical addresses that are not yet present in physical memory and loading them from the backing store.
Partial Allocation


Why would partial allocation be a good idea?
Given that now some accesses will cause a
disk read, why should we expect this to work at
all?
Demand Paging


Virtual memory (VM) commonly is implemented using a demand paging technique. The idea is simple: bring a logical page of a process into memory only when it is used.

As with complete-allocation organizations, a process's logical address space is loaded onto a backing store (also called swap space) in a contiguous manner to make loading into memory easier. The backing store usually is a disk.
Demand Paging

The page table (PT) is modified to have a valid/invalid bit in each entry to indicate whether the page is in memory.

- If the entry is valid, it contains the physical frame number as usual.
- If the entry is invalid, it contains the disk address on the backing store that holds the page.
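The two entry formats above can be sketched as a small tagged union, where the valid bit selects which field is meaningful (a sketch; the class and field names are illustrative, not from the lecture):

```python
# Sketch of a demand-paging page table entry: the valid/invalid bit
# selects between a frame number (page in memory) and a disk address
# (page on the backing store). Names here are illustrative.

class PageTableEntry:
    def __init__(self):
        self.valid = False        # valid/invalid bit
        self.frame_number = None  # meaningful only when valid
        self.disk_address = None  # meaningful only when invalid

    def load(self, frame_number):
        """Page brought into memory: record its frame, mark valid."""
        self.frame_number = frame_number
        self.valid = True

    def evict(self, disk_address):
        """Page moved to backing store: record where, mark invalid."""
        self.disk_address = disk_address
        self.frame_number = None
        self.valid = False

entry = PageTableEntry()
entry.evict(disk_address=0x2A00)   # initially on the backing store
entry.load(frame_number=7)         # page fault service brings it in
```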
Address Translation

Address translation must now handle the case where the logical page is not in memory:

- Page number p is obtained from the logical address
- If TLB hit, access memory
- If TLB miss, access the PT:
  - If the PT entry is valid, access memory
  - If the PT entry is invalid, trap to the OS and the process goes to the Wait Queue. This is called a page fault.
Address Translation

Page fault processing consists of:

- Issuing a disk transfer request to the backing store
- Loading the requested page into a free frame in physical memory
- Updating the page table entry with the valid bit and frame number

The OS then issues an event-completion interrupt and the process goes to the Ready Queue to wait for the CPU. When it runs, it re-attempts the same access that caused the page fault.
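The translation and fault-handling path above can be sketched end-to-end (a simplified single-process model: the TLB, page table, and free-frame list are plain Python structures, the disk read is elided, and queueing/timing are omitted):

```python
# Simplified sketch of demand-paging address translation.
# The TLB maps page numbers to frame numbers; the page table holds
# valid/invalid entries; free frames come from a list.

PAGE_SIZE = 4096

tlb = {}                 # page number -> frame number
page_table = {}          # page number -> {"valid", "frame", "disk_address"}
free_frames = [0, 1, 2, 3]
faults = 0

def translate(logical_addr):
    """Return the physical address, servicing a page fault if needed."""
    global faults
    p, d = divmod(logical_addr, PAGE_SIZE)
    if p in tlb:                         # TLB hit: access memory
        return tlb[p] * PAGE_SIZE + d
    entry = page_table[p]                # TLB miss: consult the PT
    if not entry["valid"]:               # invalid entry: page fault
        faults += 1
        frame = free_frames.pop()        # (assume a free frame exists)
        # ... disk read from entry["disk_address"] into frame ...
        entry["frame"] = frame           # update PT: frame number
        entry["valid"] = True            # and valid bit
    tlb[p] = entry["frame"]              # refill the TLB
    return entry["frame"] * PAGE_SIZE + d

page_table[0] = {"valid": False, "frame": None, "disk_address": 0x2A00}
addr = translate(100)    # first access faults the page in
addr = translate(100)    # second access hits in the TLB
```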
Address Translation

[Diagram: The CPU issues logical address (p, d). On a TLB hit, the TLB (p#/f# pairs) supplies frame number f directly, and physical address (f, d) goes to main memory. On a TLB miss, the page table is consulted: a valid PT entry supplies f; an invalid PT entry traps to the OS (page fault), the page is transferred from the backing store (disk) to memory, and the page table and TLB are updated when the I/O completion interrupt arrives.]
Effective Memory Access Time

What is the effect of partial allocation on performance? It could be very bad.

As seen before, the emat of complete-allocation paging is 20-220ns (i.e., the effective memory access time without page faults). Use 200ns and call this ma. A first-cut estimate depends on the probability of a page fault p (the page fault rate).

Now the emat is:
emat = (1 - p) x ma + p x page fault time
Effective Memory Access Time

How much time does it take to service a page fault?

- Trap to the OS, context switch, determine that the trap is a page fault: 1-100 µs
- Find the page on disk, issue a disk read (may have to wait), I/O completion interrupt: 8 ms
- Update the PT, wait for the CPU (assume we get it immediately), context switch, resume the interrupted access: 1-100 µs

The disk read dominates, so use 8 ms for the page fault time.
Effective Memory Access Time

This gives us:
emat = (1 - p) x 200ns + p x 8ms
     = (1 - p) x 200ns + p x 8,000,000ns
     = (200 + 7,999,800 x p) ns

emat is directly proportional to the page fault rate. Try p = 0.001 (1 page fault out of 1000 accesses):
emat = (200 + 7,999,800 x 0.001) ns
     = 8199.8 ns
     ≈ 8.2 µs (! - a slowdown factor of about 40)
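The arithmetic above is easy to check directly (a quick sketch; the 200 ns and 8 ms figures are the assumptions from the preceding slides):

```python
# Effective memory access time under demand paging:
# emat = (1 - p) * ma + p * fault_time, everything in nanoseconds.

MA_NS = 200              # memory access time without a page fault
FAULT_NS = 8_000_000     # 8 ms page fault service time

def emat_ns(p):
    """Effective memory access time for page fault rate p."""
    return (1 - p) * MA_NS + p * FAULT_NS

e = emat_ns(0.001)
print(e)                 # 8199.8 ns, about 8.2 microseconds
print(e / MA_NS)         # slowdown of about 41x (the slide rounds to 40)
```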
Effective Memory Access Time

If we want the degradation to be less than 10%, p must be very close to 0. We can compute this:
220 > 200 + 7,999,800 x p
 20 > 7,999,800 x p
  p < 0.0000025

That is fewer than 1 page fault out of every 399,990 accesses.

It turns out this is not unreasonable, due to pre-fetching and locality.
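The 10% threshold above can be solved for p directly (a sketch using the same figures):

```python
# Solve (1 - p)*200 + p*8_000_000 < 220 for p:
# 200 + 7_999_800*p < 220  =>  p < 20 / 7_999_800.

MA_NS = 200
FAULT_NS = 8_000_000
BUDGET_NS = 220          # 10% degradation over 200 ns

p_max = (BUDGET_NS - MA_NS) / (FAULT_NS - MA_NS)
print(p_max)             # about 2.5e-06, i.e. p < 0.0000025
print(1 / p_max)         # about 399,990 accesses per page fault
```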
Pre-Fetching

To start a process, we could just set the program counter to the first instruction of the main program and fault in pages as new logical addresses are encountered. This is called pure demand paging.

Generally, programs exhibit locality. That is, memory accesses tend to be near each other. E.g., code instructions are often sequential, data structures often fit in a page, etc.
Pre-Fetching


This is especially true when a process first
starts running, so often it makes sense to prefetch (i.e., pre-load) the first few pages of the
program code at the beginning to reduce the
number of page faults when a process starts
up.
Pre-fetching also can be useful while a process
is running. As we will see, often when process
moves to a new logical page, it also will access
pages around the new one.
Wednesday, March 2
CS 470 Operating Systems - Lecture 22
16
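One common way to exploit this locality is a sequential prefetch window: on a fault for page p, also bring in the next few pages with the same disk request. This is a sketch (the window size and data structures are illustrative, not from the lecture):

```python
# Sketch of demand paging with a small sequential prefetch window:
# a fault on page p also loads pages p+1 .. p+WINDOW if absent.

WINDOW = 2
resident = set()         # pages currently in memory
faults = 0

def access(page):
    """Access a page, faulting in a whole window of pages if absent."""
    global faults
    if page not in resident:
        faults += 1
        for q in range(page, page + WINDOW + 1):
            resident.add(q)          # one disk request covers the window

for p in [0, 1, 2, 3, 4, 5]:         # sequential access pattern
    access(p)
print(faults)                        # 2 faults instead of 6
```

With pure demand paging every one of the six accesses would fault; the window turns runs of sequential accesses into a single fault each.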
Pre-Fetching

It is also possible for some instructions to access more than one page. E.g., ADD A, B, C could translate to:

- Fetch and decode the ADD instruction
- Fetch A into R1
- Fetch B into R2
- Add R1+R2 into R3
- Store R3 into C

This could require up to 4 page faults without pre-fetching, but the instruction eventually will succeed.
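The worst case above is just the number of distinct non-resident pages the instruction touches: the page holding the instruction plus the pages holding A, B, and C. A sketch (addresses and page size are made up for illustration):

```python
# Count the pages one ADD A, B, C touches when none are resident.
# Addresses below are illustrative, chosen to land on four pages.

PAGE_SIZE = 4096

instr_addr = 0x1000      # the ADD instruction itself
a_addr = 0x5000          # operand A
b_addr = 0x9010          # operand B
c_addr = 0xD020          # result C

touched = {addr // PAGE_SIZE for addr in (instr_addr, a_addr, b_addr, c_addr)}
print(len(touched))      # 4 distinct pages -> up to 4 page faults
```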
Pre-Fetching

Even worse is an instruction that can modify more than one page, e.g., a block move that straddles two pages.

Such an instruction's source and destination could even overlap, e.g., shifting data down by half a page. In this case, the instruction may need to access both pages before making any modifications, to make sure they are in memory before changing things. Alternatively, the OS may need to save the old contents so they can be restored if the move gets interrupted.