15-213
“The course that gives CMU its Zip!”
The Memory Hierarchy
Oct 4, 2001
Topics
• Storage technologies and trends
• Locality of reference
• Caching in the memory hierarchy
Random-Access Memory (RAM)
Key features
• RAM is packaged as a chip.
• Basic storage unit is a cell (one bit per cell).
• Multiple RAM chips form a memory.
Static RAM (SRAM)
• Each cell stores bit with a six-transistor circuit.
• Retains value indefinitely, as long as it is kept powered.
• Relatively insensitive to disturbances such as electrical noise.
• Faster and more expensive than DRAM.
Dynamic RAM (DRAM)
• Each cell stores bit with a capacitor and transistor.
• Value must be refreshed every 10-100 ms.
• Sensitive to disturbances.
• Slower and cheaper than SRAM.
SRAM vs DRAM summary
        Tran.    Access
        per bit  time    Persist?  Sensitive?  Cost   Applications
SRAM    6        1X      Yes       No          100X   Cache memories
DRAM    1        10X     No        Yes         1X     Main memories, frame buffers
Conventional DRAM organization
d x w DRAM:
• dw total bits organized as d supercells of size w bits
[Figure: a 16 x 8 DRAM chip. The 16 supercells are arranged in 4 rows and 4 cols; supercell (2,1) is highlighted. The memory controller drives a 2-bit addr bus and an 8-bit data bus (to the CPU), and a selected row is copied into the chip's internal row buffer.]
Reading DRAM supercell (2,1)
Step 1(a): Row access strobe (RAS) selects row 2.
Step 1(b): Row 2 copied from DRAM array to row buffer.
[Figure: the memory controller sends RAS = 2 on the addr lines; row 2 of the 16 x 8 DRAM chip is copied into the internal row buffer.]
Reading DRAM supercell (2,1)
Step 2(a): Column access strobe (CAS) selects column 1.
Step 2(b): Supercell (2,1) copied from buffer to data lines,
and eventually back to the CPU.
[Figure: the memory controller sends CAS = 1 on the addr lines; supercell (2,1) is copied from the internal row buffer onto the 8-bit data lines.]
Memory modules
[Figure: a 64 MB memory module built from eight 8M x 8 DRAMs (DRAM 0 through DRAM 7). The memory controller broadcasts addr (row = i, col = j) to all eight chips; each chip contributes one byte of the 64-bit doubleword at main memory address A (DRAM 0 supplies bits 0-7 and DRAM 7 supplies bits 56-63), and the assembled doubleword is returned to the CPU chip.]
Enhanced DRAMs
All enhanced DRAMs are built around the conventional
DRAM core.
• Fast page mode DRAM (FPM DRAM)
– Access contents of row with [RAS, CAS, CAS, CAS, CAS] instead of
[(RAS,CAS), (RAS,CAS), (RAS,CAS), (RAS,CAS)].
• Extended data out DRAM (EDO DRAM)
– Enhanced FPM DRAM with more closely spaced CAS signals.
• Synchronous DRAM (SDRAM)
– Driven with rising clock edge instead of asynchronous control signals.
• Double data-rate synchronous DRAM (DDR SDRAM)
– Enhancement of SDRAM that uses both clock edges as control signals.
• Video RAM (VRAM)
– Like FPM DRAM, but output is produced by shifting row buffer
– Dual ported (allows concurrent reads and writes)
Nonvolatile memories
DRAM and SRAM are volatile memories
• Lose information if powered off.
Nonvolatile memories retain value even if powered off.
• Generic name is read-only memory (ROM).
• Misleading because some ROMs can be read and modified.
Types of ROMs
• Programmable ROM (PROM)
• Erasable programmable ROM (EPROM)
• Electrically erasable PROM (EEPROM)
• Flash memory
Firmware
• Program stored in a ROM
– Boot-time code, BIOS (basic input/output system)
– Programs for graphics cards, disk controllers, etc.
Bus structure connecting
CPU and memory
A bus is a collection of parallel wires that carry
address, data, and control signals.
Buses are typically shared by multiple devices.
[Figure: the CPU chip (register file, ALU) connects through its bus interface to the system bus; an I/O bridge connects the system bus to the memory bus and main memory.]
Memory read transaction (1)
CPU places address A on the memory bus.
Load operation: movl A, %eax
[Figure: the CPU's bus interface places address A on the bus; main memory holds word x at address A; register %eax is the destination of the load.]
Memory read transaction (2)
Main memory reads A from the memory bus, retrieves word x, and places it on the bus.
Load operation: movl A, %eax
[Figure: main memory places x on the memory bus; it travels through the I/O bridge toward the CPU's bus interface.]
Memory read transaction (3)
CPU reads word x from the bus and copies it into register %eax.
Load operation: movl A, %eax
[Figure: x arrives at the bus interface and is copied into register %eax.]
Memory write transaction (1)
CPU places address A on bus. Main memory reads it
and waits for the corresponding data word to arrive.
Store operation: movl %eax, A
[Figure: register %eax holds y; the bus interface places address A on the bus.]
Memory write transaction (2)
CPU places data word y on the bus.
Store operation: movl %eax, A
[Figure: the bus interface places data word y on the bus.]
Memory write transaction (3)
Main memory reads data word y from the bus and stores it at address A.
Store operation: movl %eax, A
[Figure: main memory now holds y at address A.]
Disk geometry
Disks consist of platters, each with two surfaces.
Each surface consists of concentric rings called tracks.
Each track consists of sectors separated by gaps.
[Figure: a single surface with concentric tracks around the spindle; track k is divided into sectors separated by gaps.]
Disk geometry (multiple-platter view)
Aligned tracks form a cylinder.
[Figure: three platters (surfaces 0-5) stacked on a common spindle; track k on each surface lines up with the others to form cylinder k.]
Disk capacity
Capacity: maximum number of bits that can be stored.
• Vendors express capacity in units of gigabytes (GB), where 1 GB = 10^9 bytes.
Capacity is determined by these technology factors:
• Recording density (bits/in): number of bits that can be squeezed into
a 1 inch segment of a track.
• Track density (tracks/in): number of tracks that can be squeezed into
a 1 inch radial segment.
• Areal density (bits/in^2): product of recording and track density.
Modern disks partition tracks into disjoint subsets
called recording zones
• Each track in a zone has the same number of sectors, determined by the circumference of the innermost track in the zone.
• Each zone has a different number of sectors/track.
Computing disk capacity
Capacity = (# bytes/sector) x (avg. # sectors/track) x
(# tracks/surface) x (# surfaces/platter) x
(# platters/disk)
Example:
• 512 bytes/sector
• 300 sectors/track (on average)
• 20,000 tracks/surface
• 2 surfaces/platter
• 5 platters/disk
Capacity = 512 x 300 x 20000 x 2 x 5
= 30,720,000,000
= 30.72 GB
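As a sanity check on the arithmetic, here is a minimal C sketch of the capacity formula, plugging in the example's assumed parameters (illustrative only):

#include <stdio.h>

/* Capacity = bytes/sector x avg sectors/track x tracks/surface
 *            x surfaces/platter x platters/disk                 */
int main(void)
{
    double bytes_per_sector     = 512;
    double sectors_per_track    = 300;      /* average */
    double tracks_per_surface   = 20000;
    double surfaces_per_platter = 2;
    double platters_per_disk    = 5;

    double capacity = bytes_per_sector * sectors_per_track *
                      tracks_per_surface * surfaces_per_platter *
                      platters_per_disk;

    printf("Capacity = %.0f bytes = %.2f GB\n", capacity, capacity / 1e9);
    return 0;
}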
Disk operation (single-platter view)
• The disk surface spins at a fixed rotational rate.
• The read/write head is attached to the end of the arm and flies over the disk surface on a thin cushion of air.
• By moving radially, the arm can position the read/write head over any track.
Disk operation (multi-platter view)
• The read/write heads move in unison from cylinder to cylinder.
[Figure: stacked platters on a common spindle, with one read/write arm per surface.]
Disk access time
Average time to access some target sector is approximated by:
• Taccess = Tavg seek + Tavg rotation + Tavg transfer
Seek time
• Time to position heads over cylinder containing target sector.
• Typical Tavg seek = 9 ms
Rotational latency
• Time waiting for first bit of target sector to pass under r/w head.
• Tavg rotation = 1/2 x (1/RPM) x 60 secs/1 min
Transfer time
• Time to read the bits in the target sector.
• Tavg transfer = 1/RPM x 1/(avg # sectors/track) x 60 secs/1 min.
Disk access time example
Given:
• Rotational rate = 7,200 RPM
• Average seek time = 9 ms.
• Avg # sectors/track = 400.
Derived:
• Tavg rotation = 1/2 x (60 secs/7200 RPM) x 1000 ms/sec = 4 ms.
• Tavg transfer = 60/7200 RPM x 1/400 secs/track x 1000 ms/sec = 0.02 ms
• Taccess = 9 ms + 4 ms + 0.02 ms = 13.02 ms (see the sketch below)
Important points:
• Access time dominated by seek time and rotational latency.
• First bit in a sector is the most expensive, the rest are free.
• SRAM access time is about 4 ns/doubleword, DRAM about 60 ns
– Disk is about 40,000 times slower than SRAM,
– 2,500 times slower than DRAM.
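A minimal C sketch of the same calculation, using the example's assumed parameters (rotational latency and transfer time are computed exactly here, so the total differs slightly from the rounded sum above):

#include <stdio.h>

int main(void)
{
    double rpm               = 7200;   /* rotational rate           */
    double avg_seek_ms       = 9;      /* average seek time (ms)    */
    double sectors_per_track = 400;    /* average sectors per track */

    /* Average rotational latency: half of one revolution */
    double t_rotation_ms = 0.5 * (60.0 / rpm) * 1000.0;

    /* Transfer time: one sector's share of a full revolution */
    double t_transfer_ms = (60.0 / rpm) / sectors_per_track * 1000.0;

    double t_access_ms = avg_seek_ms + t_rotation_ms + t_transfer_ms;
    printf("Taccess = %.2f ms\n", t_access_ms);   /* roughly 13 ms */
    return 0;
}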
Logical disk blocks
Modern disks present a simpler abstract view of the
complex sector geometry:
• The set of available sectors is modeled as a sequence of b-sized
logical blocks (0, 1, 2, ...)
Mapping between logical blocks and actual (physical)
sectors
• Maintained by a hardware/firmware device called the disk controller.
• Converts requests for logical blocks into (surface,track,sector)
triples.
Allows controller to set aside spare cylinders for each
zone.
• Accounts for the difference between “formatted capacity” and “maximum capacity”.
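For intuition only, here is a toy C sketch of mapping a logical block number to a (surface, track, sector) triple on an idealized disk with a fixed number of sectors per track; real controllers must also handle recording zones and spare sectors, so this is an illustrative assumption, not the actual firmware algorithm:

struct chs { int surface; int track; int sector; };

/* Idealized layout: logical blocks are numbered sector-first,
 * then surface, then track.                                    */
struct chs map_logical_block(long lbn, int surfaces, int sectors_per_track)
{
    struct chs loc;
    loc.sector  = lbn % sectors_per_track;
    loc.surface = (lbn / sectors_per_track) % surfaces;
    loc.track   = lbn / ((long)sectors_per_track * surfaces);
    return loc;
}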
Bus structure connecting I/O and CPU
[Figure: the CPU chip (register file, ALU) connects through its bus interface to the system bus; the I/O bridge connects the system bus to the memory bus (main memory) and to the I/O bus. On the I/O bus sit a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters.]
Reading a disk sector (1)
CPU initiates a disk read by writing a command, logical block number, and destination memory address to a port (address) associated with the disk controller.
[Figure: the command travels from the CPU, over the I/O bus, to the disk controller.]
Reading a disk sector (2)
Disk controller reads the sector and
performs a direct memory access (DMA)
transfer into main memory.
[Figure: the sector moves from the disk, through the disk controller, across the I/O bus and I/O bridge, and directly into main memory, without involving the CPU.]
Reading a disk sector (3)
When the DMA transfer completes, the
disk controller notifies the CPU with an
interrupt (i.e., asserts a special “interrupt”
pin on the CPU)
[Figure: the interrupt signal travels from the disk controller to a pin on the CPU chip.]
Storage trends
SRAM
metric              1980      1985     1990    1995    2000    2000:1980
$/MB                19,200    2,900    320     256     100     190
access (ns)         300       150      35      15      2       100

DRAM
metric              1980      1985     1990    1995    2000    2000:1980
$/MB                8,000     880      100     30      1       8,000
access (ns)         375       200      100     70      60      6
typical size (MB)   0.064     0.256    4       16      64      1,000

Disk
metric              1980      1985     1990    1995    2000    2000:1980
$/MB                500       100      8       0.30    0.05    10,000
access (ms)         87        75       28      10      8       11
typical size (MB)   1         10       160     1,000   9,000   9,000
(Culled from back issues of Byte and PC Magazine)
CPU clock rates
                    1980      1985     1990    1995       2000     2000:1980
processor           8080      286      386     Pentium    P-III
clock rate (MHz)    1         6        20      150        750      750
cycle time (ns)     1,000     166      50      6          1.6      750
The CPU-Memory Gap
The increasing gap between DRAM, disk, and CPU
speeds.
[Figure: log-scale plot of time (ns) versus year, 1980-2000, showing disk seek time, DRAM access time, SRAM access time, and CPU cycle time; the gap between the curves widens over time.]
Locality of Reference
Principle of Locality:
• Programs tend to reuse data and instructions near those they have
used recently, or that were recently referenced themselves.
• Temporal locality: Recently referenced items are likely to be
referenced in the near future.
• Spatial locality: Items with nearby addresses tend to be referenced
close together in time.
Locality Example:
sum = 0;
for (i = 0; i < n; i++)
    sum += a[i];
return sum;
• Data
– Reference array elements in succession (spatial)
• Instructions
– Reference instructions in sequence (spatial)
– Cycle through loop repeatedly (temporal)
Locality example
Claim: Being able to look at code and get a qualitative
sense of its locality is a key skill for a professional
programmer.
Does this function have good locality?
int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
Locality example
Does this function have good locality?
int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}
Locality example
Permute the loops so that the function scans the 3-d array a with a stride-1 reference pattern (and thus has good locality). One possible answer is sketched after the code.
int sumarray3d(int a[M][N][N])
{
    int i, j, k, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                sum += a[k][i][j];
    return sum;
}
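One possible permutation, assuming C's usual row-major layout: order the loops to match the subscript order in a[k][i][j], so the last subscript varies fastest and successive accesses touch adjacent memory.

int sumarray3d(int a[M][N][N])
{
    int i, j, k, sum = 0;

    /* k selects the outermost dimension, j the innermost: stride-1 scan */
    for (k = 0; k < N; k++)
        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                sum += a[k][i][j];
    return sum;
}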
Memory hierarchies
Some fundamental and enduring properties of
hardware and software:
• Fast storage technologies cost more per byte and have less
capacity.
• The gap between CPU and main memory speed is widening.
• Well-written programs tend to exhibit good locality.
These fundamental properties complement each other
beautifully.
They suggest an approach for organizing memory and storage systems known as a “memory hierarchy”.
An example memory hierarchy
Storage devices get smaller, faster, and costlier (per byte) toward the top of the hierarchy, and larger, slower, and cheaper (per byte) toward the bottom.

L0: registers. CPU registers hold words retrieved from cache memory.
L1: on-chip L1 cache (SRAM). L1 cache holds cache lines retrieved from the L2 cache.
L2: off-chip L2 cache (SRAM). L2 cache holds cache lines retrieved from memory.
L3: main memory (DRAM). Main memory holds disk blocks retrieved from local disks.
L4: local secondary storage (local disks). Local disks hold files retrieved from disks on remote network servers.
L5: remote secondary storage (distributed file systems, Web servers).
Caches
Cache: A smaller, faster storage device that acts as a
staging area for a subset of the data in a larger,
slower device.
Fundamental idea of a memory hierarchy:
• For each k, the faster, smaller device at level k serves as a cache for
the larger, slower device at level k+1.
Why do memory hierarchies work?
• Programs tend to access the data at level k more often than they
access the data at level k+1.
• Thus, the storage at level k+1 can be slower, and thus larger and
cheaper per bit.
• Net effect: A large pool of memory that costs as much as the cheap
storage near the bottom, but that serves data to programs at the rate
of the fast storage near the top.
Caching in a memory hierarchy
Level k: the smaller, faster, more expensive device at level k caches a subset of the blocks from level k+1 (here, blocks 4, 9, 14, and 3).

Level k+1: the larger, slower, cheaper storage device at level k+1 is partitioned into blocks (here, blocks 0 through 15).

Data is copied between levels in block-sized transfer units.
General caching concepts
Program needs object d, which is stored in some block b.

Cache hit
• Program finds b in the cache at level k. E.g. block 14.

Cache miss
• b is not at level k, so the level k cache must fetch it from level k+1. E.g. block 12.
• If the level k cache is full, then some current block must be replaced (evicted). Which one? Determined by the replacement policy. E.g. evict the least recently used block.

[Figure: level k holds blocks 4, 9, 14, and 3; level k+1 holds blocks 0 through 15.]
General caching concepts
Types of cache misses:
• Cold (compulsory) miss
– Cold misses occur because the cache is empty.
• Conflict miss
– Most caches limit blocks at level k+1 to a small subset (sometimes a
singleton) of the block positions at level k.
– E.g. Block i at level k+1 must be placed in block (i mod 4) at level k.
– Conflict misses occur when the level k cache is large enough, but
multiple data objects all map to the same level k block.
– E.g. Referencing blocks 0, 8, 0, 8, 0, 8, ... would miss every time (simulated in the sketch after this list).
• Capacity miss
– Occurs when the set of active cache blocks (working set) is larger than
the cache.
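To make the conflict-miss example concrete, here is a toy C simulation of a direct-mapped level k cache with 4 block slots and the (i mod 4) placement rule, counting misses for the reference pattern 0, 8, 0, 8, 0, 8 (an illustrative sketch; the cache size and pattern are taken from the bullets above):

#include <stdio.h>

#define CACHE_BLOCKS 4                  /* level k holds 4 block slots */

int main(void)
{
    int cache[CACHE_BLOCKS];            /* cache[s] = block number held in slot s */
    int refs[] = { 0, 8, 0, 8, 0, 8 };
    int nrefs  = sizeof(refs) / sizeof(refs[0]);
    int misses = 0;

    for (int s = 0; s < CACHE_BLOCKS; s++)
        cache[s] = -1;                  /* start with an empty (cold) cache */

    for (int r = 0; r < nrefs; r++) {
        int b    = refs[r];
        int slot = b % CACHE_BLOCKS;    /* block i maps to slot (i mod 4) */
        if (cache[slot] != b) {         /* miss: fetch b, evicting the old block */
            cache[slot] = b;
            misses++;
        }
    }
    printf("%d references, %d misses\n", nrefs, misses);  /* prints 6, 6 */
    return 0;
}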
Examples of caching in the hierarchy
Cache Type             What Cached             Where Cached          Latency (cycles)   Managed By
Registers              4-byte word             CPU registers         0                  Compiler
TLB                    Address translations    On-Chip TLB           0                  Hardware
L1 cache               32-byte block           On-Chip L1            1                  Hardware
L2 cache               32-byte block           Off-Chip L2           10                 Hardware
Virtual memory         4-KB page               Main memory           100                Hardware + OS
Buffer cache           Parts of files          Main memory           100                OS
Network buffer cache   Parts of files          Local disk            10,000,000         AFS/NFS client
Browser cache          Web pages               Local disk            10,000,000         Web browser
Web cache              Web pages               Remote server disks   1,000,000,000      Web proxy server