CS414 Review Session

Address Translation Example
• Logical Address: 32 bits
• Number of segments per process: 8
• Page size: 2 KB
• Page table entry size: 2 B
• Physical Memory: 32 MB
• Paged Segmentation
• 2-level paging
Logical Address Space
• Total number of bits: 32
• Page offset: 11 bits (2 KB = 2^11 B)
• Segment number: 3 bits (8 = 2^3)
• Number of pages per segment: 2^18 (32 − 3 − 11 = 18)
• Number of page table entries in one page of the page table: 1K (2 KB / 2 B)
• Page number in inner page table: 10 bits (1K = 2^10)
• Page number in outer page table: 8 bits (18 − 10)
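The bit breakdown above can be sketched in Python. Field widths are the ones derived in this example (3 + 8 + 10 + 11 = 32); the function name is illustrative.

```python
# Split a 32-bit logical address into the fields from this example:
# | segment (3) | outer page (8) | inner page (10) | offset (11) |
SEG_BITS, OUTER_BITS, INNER_BITS, OFFSET_BITS = 3, 8, 10, 11

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    inner = (addr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    outer = (addr >> (OFFSET_BITS + INNER_BITS)) & ((1 << OUTER_BITS) - 1)
    seg = addr >> (OFFSET_BITS + INNER_BITS + OUTER_BITS)
    return seg, outer, inner, offset
```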
Segment Table
• Number of entries = 8
• Width of each entry (sum of):
– Base address of outer page table: 14 bits
(number of page frames = 16K = 32 MB / 2 KB = 2^14)
– Length of segment: 29 bits (32 − 3)
– Miscellaneous items
Page Table
• Outer page table:
– Number of entries = 2^8
– Width of entry (sum of):
• Page frame number of inner page table: 14 bits
• Miscellaneous bits (total of 2 B specified)
• Inner page table:
– Number of entries = 2^10
– Width: same as outer page table
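A minimal sketch of the full walk through the segment table and both page-table levels, assuming the field widths above. Tables are modeled as plain mappings; a real MMU would also check segment length and validity/protection bits.

```python
PAGE_SIZE = 2048  # 2 KB pages, as in this example

def translate(segment_table, seg, outer, inner, offset):
    outer_table = segment_table[seg]   # segment entry -> outer page table
    inner_table = outer_table[outer]   # outer entry -> page of inner table
    frame = inner_table[inner]         # inner entry -> 14-bit frame number
    return frame * PAGE_SIZE + offset  # physical address within 32 MB
```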
Translation Look-aside Buffer
• Just an associative cache
• Number of entries: fixed in advance by the hardware
• Width of each entry (sum of):
– Key: segment# + page# = 3 + 18 = 21 bits
• Some TLBs also include a process ID in the key
– Value: page frame# = 14 bits
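Such a TLB can be modeled as a small associative cache keyed on (segment#, page#). This sketch uses FIFO eviction for simplicity; real TLBs typically use hardware LRU or random replacement, and the class name is illustrative.

```python
class TLB:
    """Associative cache: (segment#, page#) -> page frame#."""

    def __init__(self, n_entries=64):        # size fixed by the hardware
        self.n_entries = n_entries
        self.entries = {}                     # insertion-ordered dict

    def lookup(self, seg, page):
        return self.entries.get((seg, page))  # frame# on hit, None on miss

    def insert(self, seg, page, frame):
        if len(self.entries) >= self.n_entries:
            self.entries.pop(next(iter(self.entries)))  # evict oldest entry
        self.entries[(seg, page)] = frame
```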
The Page Size Issue
• With a very small page size, each page matches the code that is actually used → page faults are low.
• An increased page size causes each page to contain code that is not used → fewer pages fit in memory → page faults rise (thrashing).
• Small pages → large page tables → costly translation.
• Typical compromise: 2 KB to 8 KB.
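The tradeoff can be made concrete with rough arithmetic (a sketch assuming a single-level table; function names are illustrative):

```python
def page_table_entries(addr_bits, page_size):
    # smaller pages -> more pages to map -> larger page table
    return 2 ** addr_bits // page_size

def expected_internal_frag(page_size):
    # on average, the last page of a process is half empty
    return page_size // 2
```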
Load Control
• Determines the number of processes resident in main memory (i.e. the multiprogramming level).
– Too few processes: often all processes will be blocked and the processor will be idle.
– Too many processes: the resident set of each process will be too small and flurries of page faults will result: thrashing.
Handling Interrupts and Traps
• Terminate the current instruction(s)
– Pipeline flush.
• Save state
– Registers, PC; may need to re-execute instructions.
• Invoke the interrupt handling routine
– Interrupt vector table
– User-space to kernel-space context switch
• Execute the interrupt handling routine
• Invoke the scheduler to schedule a ready process.
– Kernel-space to user-space context switch
Disk Optimizations
• Seek Time (biggest overhead)
• Disk Scheduling Algorithms
– SSTF, SCAN, C-SCAN, LOOK, C-LOOK
• Contiguous file allocation
– Place contiguous blocks on the same cylinder.
– Same track if possible; otherwise the same-numbered track on another surface.
• Organ Pipe Distribution
– Place most used blocks (I-nodes, directory structure) closer to the
middle of the disk.
– Place the head in the middle of the disk
• Use multiple heads.
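The gain from scheduling can be seen by comparing total head movement under FCFS and SSTF on a sample queue (cylinder numbers are illustrative):

```python
def fcfs(start, requests):
    # serve requests in arrival order
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf(start, requests):
    # always serve the pending request closest to the current head position
    total, pos, pending = 0, start, list(requests)
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total
```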
Disk Optimizations
• Rotational Latency (next biggest)
• Interleaving
– Adjacent sectors are actually not adjacent on the disk.
• Disk Cache
– Read and cache all sectors on the track (at most 2 rotations).
[Figure: seven sectors of one track laid out in interleaved order.]
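Computing an interleaved layout can be sketched as follows: logical sectors are placed `factor` physical slots apart so the controller has time to process one sector before the next arrives under the head (parameters are illustrative).

```python
def interleave(n_sectors, factor):
    # layout[physical slot] = logical sector number stored there
    layout = [None] * n_sectors
    pos = 0
    for logical in range(n_sectors):
        while layout[pos] is not None:   # skip already-filled slots
            pos = (pos + 1) % n_sectors
        layout[pos] = logical
        pos = (pos + factor) % n_sectors
    return layout
```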
Redundant Array of Inexpensive Disks
• Mirroring or Shadowing
– Expensive, small gain in read time, reliable
• Striping
– Inexpensive, faster access time, not reliable
• Striping + Parity
– Inexpensive, small performance gain, reliable
• Interleaving + Parity + Striping
– Inexpensive, faster access time, reliable
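The parity idea can be sketched with XOR: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors (a RAID-4/5 style sketch, not a real driver).

```python
from functools import reduce

def parity(blocks):
    # byte-wise XOR of all data blocks in a stripe
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    # XOR is its own inverse: XOR the survivors with parity
    return parity(surviving_blocks + [parity_block])
```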
Storage Hierarchy
• Register: B, ~nsec
• Level 1 Cache: KB+, ~nsec
• Level 2 Cache: 500 KB+, ~100 nsec
• Main Memory: 100 MB+, ~usec
• Hard Disk: GB+, ~msec
• Network: ?? capacity, 10–1000 usec (fast LAN) up to sec
(Capacity grows and access time lengthens at each level down the hierarchy.)
Paging vs Segmentation
Paging:
• Fixed-size partitions.
• Internal fragmentation (average = page size / 2).
• No external fragmentation.
• Small chunk of memory (~4 KB).
• Linear address space, invisible to the programmer.
Segmentation:
• Variable-size partitions.
• No internal fragmentation.
• External fragmentation (compaction, paged segments).
• Large chunk of memory (~1 MB).
• Logical address space, visible to the programmer.
Demand-paging vs Pre-paging
Demand-paging:
• Pages swapped in on demand.
• More page faults (especially initially).
• No wastage of page frames.
• No pre-paging overhead.
Pre-paging:
• Pages swapped in before use, in anticipation.
• Reduces future page faults.
• Pages may never be used (wastage of memory space).
• Needs good strategies to pre-page (working set, contiguous pages, etc.).
Local vs Global Page Replacement
Local:
• Only swap out the current process's pages.
• Page-frame allocation strategies required (e.g. page-fault frequency).
• Thrashing affects only the current process.
• Admission control required.
• Can use a different page replacement algorithm for each process.
Global:
• Swap out any page in memory.
• No explicit allocation of page frames.
• Can affect the performance of other processes.
• Admission control required.
• Single page replacement algorithm.
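Either way, each frame pool needs a replacement algorithm. A minimal LRU sketch (illustrative; real kernels approximate LRU with reference bits):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    # count page faults for a reference string under LRU replacement
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) >= n_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults
```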
Interrupt-driven IO vs Polling
Interrupt-driven IO:
• Each interrupt has a fixed processing-time overhead (context switches).
• Other processes can execute while waiting for the response.
• Good for long and indefinite response times.
• Ex: Printer
Polling:
• Response time on polling is variable (device- and request-specific).
• No other process can execute while waiting for the response.
• Good for short and predictable response times (< fixed interrupt overhead).
• Ex: Fast networks
Contiguous vs Indexed Allocation
Contiguous:
• All blocks of the file in contiguous disk locations.
• No additional index overhead (disk addresses can be computed).
• Disk fragmentation is a major problem (compaction overhead).
• Smart allocation strategies required.
• Low average latency for sequential access (only one long seek, smart block layouts).
Indexed:
• Blocks of the file randomly distributed throughout the disk.
• Each access involves a search in the index (involves fetching additional blocks from the disk).
• No fragmentation on the disk.
• No allocation strategies required.
• High average latency (disk scheduling algorithms).
Contiguous vs Linked Allocation
Contiguous:
• All blocks at contiguous disk addresses.
• Disk addresses can be computed for each access.
• Suffers from disk fragmentation.
• Bad sectors break the contiguity of blocks.
Linked:
• Blocks arranged in a linked-list fashion.
• Each access involves walking the list from the start.
• No disk fragmentation.
• All bad blocks can be hidden away as a file.
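The lookup-cost difference between contiguous, indexed, and linked allocation can be sketched as follows (the structures are illustrative stand-ins for on-disk metadata):

```python
def contiguous_lookup(start_block, b):
    return start_block + b     # pure arithmetic, no extra disk reads

def indexed_lookup(index_block, b):
    return index_block[b]      # one extra fetch: the index block itself

def linked_lookup(first_block, next_ptr, b):
    blk = first_block          # must follow b pointers from the head
    for _ in range(b):
        blk = next_ptr[blk]
    return blk
```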
Hard Disks vs Tapes
Hard disks:
• Small capacity (a few GB).
• Subject to various failures (disk crashes, bad sectors, etc.).
• Random access latency is very small (msec).
Tapes:
• Huge capacity per unit volume (TB).
• Permanent storage (no corruption for a long time).
• Very high random access latency (sec): may need to read the tape from the beginning.
Unix FS vs Log FS
Unix FS:
• Index used to map i-nodes to physical blocks.
• Same read latency as indexed allocation.
• Writes take place on the same block the data was read from.
• Write latency is dominated by seek time.
• No garbage collection required.
• Crash recovery is extremely difficult.
Log FS:
• Index used to map i-nodes to physical blocks.
• Same read latency as Unix FS.
• Writes are batched together and done on sequential blocks.
• Write latency is small because seek time is amortized.
• Garbage collection required to free old blocks.
• Checkpoints enable efficient recovery from crashes.
Routing Strategies
Fixed:
• Permanent path between A and B.
• Congestion independent of paths.
• No set-up cost.
• Sequential delivery.
Virtual Circuit:
• Per-session path between A and B.
• Some attempt to even out congestion.
• Per-session set-up cost.
• Sequential delivery.
Dynamic:
• Different path per message between A and B.
• Uniform congestion across paths.
• Per-message set-up cost.
• Out-of-order delivery.
Connection Strategies
Circuit Switching:
• Permanent link between A and B (hardware).
• Congestion independent of paths.
• No set-up cost.
• Sequential delivery.
Message Switching:
• Per-message link between A and B.
• Some attempt to even out congestion.
• Initial set-up cost.
• Sequential delivery.
Packet Switching:
• Different link per packet between A and B.
• Uniform congestion across links (best link).
• No set-up cost.
• Out-of-order delivery.