CS252
Graduate Computer Architecture
Lecture 16

Cache Optimizations (Con't)
Memory Technology

John Kubiatowicz
Electrical Engineering and Computer Sciences
University of California, Berkeley
http://www.eecs.berkeley.edu/~kubitron/cs252
http://www-inst.eecs.berkeley.edu/~cs252
3/19/2007

Review: Cache performance
• Miss-oriented Approach to Memory Access:
  CPUtime = IC × (CPI_Execution + MemAccess/Inst × MissRate × MissPenalty) × CycleTime
• Separating out the Memory component entirely
  – AMAT = Average Memory Access Time
  CPUtime = IC × (CPI_AluOps + MemAccess/Inst × AMAT) × CycleTime
  AMAT = HitTime + MissRate × MissPenalty
       = (HitTime_Inst + MissRate_Inst × MissPenalty_Inst) + (HitTime_Data + MissRate_Data × MissPenalty_Data)
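To make the review formulas concrete, here is a small C sketch that plugs illustrative numbers (all made up, not from the lecture) into the miss-oriented CPU-time formula and into AMAT:

/* Illustrative use of the review formulas; every parameter value is made up. */
#include <stdio.h>

int main(void) {
    double IC = 1e9;              /* instructions */
    double CPI_exec = 1.0;        /* CPI of the execution pipeline alone */
    double mem_per_inst = 1.5;    /* memory accesses per instruction */
    double miss_rate = 0.02;
    double miss_penalty = 100.0;  /* cycles */
    double cycle_time = 1e-9;     /* seconds */

    double cpu_time = IC * (CPI_exec + mem_per_inst * miss_rate * miss_penalty) * cycle_time;
    double amat     = 1.0 + miss_rate * miss_penalty;   /* assuming a 1-cycle hit time */

    printf("CPU time = %.2f s, AMAT = %.1f cycles\n", cpu_time, amat);  /* 4.00 s, 3.0 cycles */
    return 0;
}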
Review: 6 Basic Cache Optimizations
• Reducing hit time
  1. Avoiding Address Translation during Cache Indexing
     – E.g., overlap TLB and cache access, virtually addressed caches
• Reducing Miss Penalty
  2. Giving Reads Priority over Writes
     – E.g., a read completes before earlier writes still sitting in the write buffer
  3. Multilevel Caches
• Reducing Miss Rate
  4. Larger Block size (Compulsory misses)
  5. Larger Cache size (Capacity misses)
  6. Higher Associativity (Conflict misses)

1. Fast hits by Avoiding Address Translation
• Send the virtual address to the cache? Called a Virtually Addressed Cache, or just Virtual Cache, vs. a Physical Cache
  – Every time a process is switched, the cache logically must be flushed; otherwise you get false hits
    » Cost is the time to flush plus the "compulsory" misses from an empty cache
  – Must deal with aliases (sometimes called synonyms): two different virtual addresses that map to the same physical address
  – I/O must interact with the cache, so it needs the virtual address
• Solution to aliases
  – Guarantee that aliases agree in the bits that cover the index field; with a direct-mapped cache the aliased block then must live in a unique location; called page coloring
• Solution to cache flush
  – Add a process-identifier tag that identifies the process as well as the address within the process: can't get a hit if the wrong process is running
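A quick way to see when cache indexing can safely overlap translation (and when page coloring is trivial): the cache index must come entirely from bits that do not change under translation. A minimal sketch of that check, assuming the usual condition cache size / associativity ≤ page size (the sizes below are illustrative, not the lecture's):

/* Sketch: can the cache index be taken entirely from the page offset? */
#include <stdio.h>
#include <stdbool.h>

static bool index_within_page_offset(long cache_bytes, int assoc, long page_bytes) {
    /* bytes per way must fit inside one page for the index bits to be
       translation-invariant */
    return cache_bytes / assoc <= page_bytes;
}

int main(void) {
    /* Illustrative numbers: 64 KB 2-way with 4 KB pages does NOT fit (32 KB/way);
       16 KB 4-way (4 KB/way) does. */
    printf("%d\n", index_within_page_offset(64 * 1024, 2, 4096));  /* 0 */
    printf("%d\n", index_within_page_offset(16 * 1024, 4, 4096));  /* 1 */
    return 0;
}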
Two options for avoiding translation:
[Figure: three organizations. Conventional Organization: CPU → VA → TB → PA → physically addressed ("indexed") cache → MEM. Variation A: overlap cache access with VA translation (still physically indexed); requires the cache index to remain invariant across translation. Variation B: virtually addressed ("indexed") cache; translate only on a miss; synonym problem.]

3. Multi-level cache
• L2 Equations
  AMAT = HitTime_L1 + MissRate_L1 × MissPenalty_L1
  MissPenalty_L1 = HitTime_L2 + MissRate_L2 × MissPenalty_L2
  AMAT = HitTime_L1 + MissRate_L1 × (HitTime_L2 + MissRate_L2 × MissPenalty_L2)
• Definitions:
  – Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (MissRate_L2)
  – Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (MissRate_L1 × MissRate_L2)
  – Global miss rate is what matters
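A short numeric sketch of the L2 equations above (all values illustrative, not from the slides), showing how local and global miss rates differ:

/* Two-level cache AMAT sketch; all values illustrative. */
#include <stdio.h>

int main(void) {
    double hit_l1 = 1.0,  miss_l1 = 0.04;         /* L1: 1 cycle, 4% miss rate */
    double hit_l2 = 10.0, miss_l2_local = 0.25;   /* L2: 10 cycles, 25% local miss rate */
    double penalty_l2 = 100.0;                    /* main memory access */

    double penalty_l1 = hit_l2 + miss_l2_local * penalty_l2;   /* 35 cycles */
    double amat = hit_l1 + miss_l1 * penalty_l1;                /* 2.4 cycles */
    double miss_l2_global = miss_l1 * miss_l2_local;            /* 1% */

    printf("MissPenaltyL1 = %.0f, AMAT = %.1f, global L2 miss rate = %.1f%%\n",
           penalty_l1, amat, miss_l2_global * 100);
    return 0;
}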
Review (Con't): 12 Advanced Cache Optimizations
• Reducing hit time
  1. Small and simple caches
  2. Way prediction
  3. Trace caches
• Increasing cache bandwidth
  4. Pipelined caches
  5. Multibanked caches
  6. Nonblocking caches
• Reducing Miss Penalty
  7. Critical word first
  8. Merging write buffers
• Reducing Miss Rate
  9. Victim Cache
  10. Hardware prefetching
  11. Compiler prefetching
  12. Compiler Optimizations

4: Increasing Cache Bandwidth by Pipelining
• Pipeline cache access to maintain bandwidth, but at higher latency
• Instruction cache access pipeline stages:
  1: Pentium
  2: Pentium Pro through Pentium III
  4: Pentium 4
  – ⇒ greater penalty on mispredicted branches
  – ⇒ more clock cycles between the issue of a load and the use of its data
5. Increasing Cache Bandwidth: Non-Blocking Caches
• A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
  – requires F/E bits on registers or out-of-order execution
  – requires multi-bank memories
• "Hit under miss" reduces the effective miss penalty by working during the miss instead of ignoring CPU requests
• "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
  – Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
  – Requires multiple memory banks (otherwise it cannot be supported)
  – Pentium Pro allows 4 outstanding memory misses

Value of Hit Under Miss for SPEC (old data)
[Figure: memory stall time normalized to the blocking ("Base") cache under "hit under n misses" (0->1, 1->2, 2->64) for the SPEC92 integer benchmarks (compress, eqntott, espresso, xlisp) and floating-point benchmarks (ora, nasa7, spice2g6, alvinn, hydro2d, wave5, mdljdp2, doduc, su2cor, tomcatv, swm256, ear, fpppp, mdljsp2)]
• FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
• Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
• 8 KB Data Cache, Direct Mapped, 32B blocks, 16 cycle miss penalty, SPEC 92
6: Increasing Cache Bandwidth via Multiple Banks
• Rather than treat the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses
  – E.g., the T1 ("Niagara") L2 has 4 banks
• Banking works best when accesses naturally spread themselves across banks ⇒ the mapping of addresses to banks affects the behavior of the memory system
• A simple mapping that works well is "sequential interleaving"
  – Spread block addresses sequentially across banks
  – E.g., if there are 4 banks, Bank 0 has all blocks whose address modulo 4 is 0, Bank 1 has all blocks whose address modulo 4 is 1, …

7. Reduce Miss Penalty: Early Restart and Critical Word First
• Don't wait for the full block before restarting the CPU
• Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  – Spatial locality ⇒ the CPU tends to want the next sequential word, so it is not clear how large the benefit of early restart alone is
• Critical Word First: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block
  – Long blocks are more popular today ⇒ Critical Word First is widely used
8. Merging Write Buffer to Reduce Miss Penalty
• A write buffer allows the processor to continue while waiting to write to memory
• If the buffer contains modified blocks, the addresses can be checked to see if the address of the new data matches the address of a valid write buffer entry
• If so, the new data are combined with that entry
• Increases the effective block size of writes for a write-through cache when writes are to sequential words or bytes, since multiword writes are more efficient to memory
• The Sun T1 (Niagara) processor, among many others, uses write merging

9. Reducing Misses: a "Victim Cache"
• How to combine the fast hit time of a direct-mapped cache yet still avoid conflict misses?
• Add a small buffer that holds data discarded from the cache
• Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct mapped data cache
• Used in Alpha, HP machines
[Figure: victim cache organization — the main cache TAGS/DATA arrays plus a small fully associative buffer of (tag and comparator, one cache line of data) entries, sitting between the cache and the next lower level in the hierarchy]
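A minimal sketch of the victim-cache idea (my own simplified model, not Jouppi's actual design): a direct-mapped cache backed by a tiny fully associative buffer of recently evicted lines; on a victim hit the two lines are swapped.

/* Victim-cache lookup sketch; only tag state is modeled, sizes are illustrative. */
#include <stdbool.h>

#define SETS    64      /* direct-mapped main cache */
#define VICTIMS 4       /* 4-entry victim buffer, as in Jouppi [1990] */

typedef struct { bool valid; unsigned tag;   } Line;    /* main cache entry   */
typedef struct { bool valid; unsigned block; } VLine;   /* victim: full block address */

static Line  cache[SETS];
static VLine victim[VICTIMS];
static int   victim_next;          /* simple FIFO replacement in the victim buffer */

bool access_block(unsigned block_addr) {
    unsigned idx = block_addr % SETS;
    unsigned tag = block_addr / SETS;

    if (cache[idx].valid && cache[idx].tag == tag)
        return true;                                   /* ordinary direct-mapped hit */

    for (int i = 0; i < VICTIMS; i++) {
        if (victim[i].valid && victim[i].block == block_addr) {
            /* hit in the victim buffer: swap it with the conflicting line */
            bool had_line = cache[idx].valid;
            unsigned evicted = had_line ? cache[idx].tag * SETS + idx : 0;
            cache[idx].valid = true;
            cache[idx].tag   = tag;
            victim[i].valid  = had_line;
            victim[i].block  = evicted;
            return true;
        }
    }

    /* miss everywhere: fill from the next level; the displaced line goes to the buffer */
    if (cache[idx].valid) {
        victim[victim_next].valid = true;
        victim[victim_next].block = cache[idx].tag * SETS + idx;
        victim_next = (victim_next + 1) % VICTIMS;
    }
    cache[idx].valid = true;
    cache[idx].tag   = tag;
    return false;
}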
10. Reducing Misses by Hardware Prefetching of Instructions & Data
• Prefetching relies on having extra memory bandwidth that can be used without penalty
• Instruction Prefetching
  – Typically, the CPU fetches 2 blocks on a miss: the requested block and the next consecutive block
  – The requested block is placed in the instruction cache when it returns; the prefetched block is placed into an instruction stream buffer
• Data Prefetching
  – The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages
  – Prefetching is invoked if there are 2 successive L2 cache misses to a page and the distance between those cache blocks is < 256 bytes
[Figure: performance improvement from hardware prefetching on the Pentium 4, ranging from 1.16 to 1.97, for 2 SPECint2000 benchmarks (gap, mcf) and 9 SPECfp2000 benchmarks (wupwise, galgel, facerec, swim, applu, lucas, fam3d, mgrid, equake)]

11. Reducing Misses by Software Prefetching Data
• Data Prefetch
  – Load data into a register (HP PA-RISC loads)
  – Cache Prefetch: load into the cache (MIPS IV, PowerPC, SPARC v. 9)
  – Special prefetching instructions cannot cause faults; a form of speculative execution
• Issuing prefetch instructions takes time
  – Is the cost of issuing prefetches < the savings from reduced misses?
  – Wider superscalar issue reduces the difficulty of finding issue bandwidth
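A minimal sketch of compiler/software prefetching, assuming the GCC/Clang __builtin_prefetch intrinsic (the lecture does not prescribe a particular instruction set); the prefetch distance is a made-up tuning parameter:

/* Software prefetching for a streaming sum.  The prefetch is only a hint and
   cannot fault, matching the "cannot cause faults" property above. */
#include <stddef.h>

#define PF_DIST 16   /* elements ahead of the use; tune per machine (illustrative) */

double sum_with_prefetch(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 3 /* high locality */);
        sum += a[i];
    }
    return sum;
}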
12. Reducing Misses by Compiler Optimizations
• McFarling [1989] reduced cache misses by 75% on an 8 KB direct mapped cache with 4-byte blocks, in software
• Instructions
  – Reorder procedures in memory so as to reduce conflict misses
  – Profiling to look at conflicts (using tools they developed)
• Data
  – Merging Arrays: improve spatial locality with a single array of compound elements vs. 2 arrays
  – Loop Interchange: change the nesting of loops to access data in the order it is stored in memory
  – Loop Fusion: combine 2 independent loops that have the same looping and some overlapping variables
  – Blocking: improve temporal locality by accessing "blocks" of data repeatedly vs. going down whole columns or rows

Merging Arrays Example
/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];

Reduces conflicts between val & key; improves spatial locality
Loop Interchange Example
/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality

Loop Fusion Example
/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
  { a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j]; }

2 misses per access to a & c vs. one miss per access; improves spatial locality
Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    { r = 0;
      for (k = 0; k < N; k = k+1)
        { r = r + y[i][k]*z[k][j]; };
      x[i][j] = r;
    };
• Two Inner Loops:
  – Read all N×N elements of z[]
  – Read N elements of 1 row of y[] repeatedly
  – Write N elements of 1 row of x[]
• Capacity Misses are a function of N & Cache Size:
  – 2N^3 + N^2 ⇒ (assuming no conflicts; otherwise …)
• Idea: compute on a B×B submatrix that fits

Blocking Example
/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B-1,N); j = j+1)
        { r = 0;
          for (k = kk; k < min(kk+B-1,N); k = k+1)
            { r = r + y[i][k]*z[k][j]; };
          x[i][j] = x[i][j] + r;
        };
• B is called the Blocking Factor
• Capacity Misses go from 2N^3 + N^2 to 2N^3/B + N^2
• Conflict Misses Too?
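As a quick instance of the capacity-miss formula above (illustrative numbers, not from the slides): with N = 512 and B = 64, 2N^3 + N^2 ≈ 2.7 × 10^8 drops to 2N^3/B + N^2 ≈ 4.5 × 10^6, roughly a 60× reduction.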
Reducing Conflict Misses by Blocking
[Figure: miss rate (0 to 0.1) vs. blocking factor (0 to 150) for a direct mapped cache vs. a fully associative cache — conflict misses in caches that are not fully associative depend on the blocking size]
• Lam et al. [1991]: a blocking factor of 24 had a fifth the misses vs. 48, despite both fitting in the cache

Summary of Compiler Optimizations to Reduce Cache Misses (by hand)
[Figure: performance improvement (1 to 3) from merged arrays, loop interchange, loop fusion, and blocking on vpenta (nasa7), gmty (nasa7), tomcatv, btrix (nasa7), mxm (nasa7), spice, cholesky (nasa7), and compress]
Compiler Optimization vs. Memory Hierarchy Search
• The compiler tries to figure out memory hierarchy optimizations
• New approach: "Auto-tuners" first run variations of the program on a computer to find the best combinations of optimizations (blocking, padding, …) and algorithms, then produce C code to be compiled for that computer
  – E.g., PHiPAC (BLAS), Atlas (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFTW

Sparse Matrix – Search for Blocking
• "Auto-tuner" targeted to a numerical method for a finite element problem [Im, Yelick, Vuduc, 2005]
[Figure: Mflop/s as a function of row block size (r) and column block size (c), each 1 to 8; best is 4x2, vs. the 1x1 reference]

Best Sparse Blocking for 8 Computers
[Figure: best (row block size r, column block size c) choices, r and c from 1 to 8, for Sun Ultra 2, Sun Ultra 3, AMD Opteron, Intel Pentium M, IBM Power 4, Intel/HP Itanium, Intel/HP Itanium 2, and IBM Power 3]
• All possible column block sizes are selected across the 8 computers; how could a compiler know which to pick?

Technique summary
Technique | Hit Time | Bandwidth | Miss penalty | Miss rate | HW cost/complexity | Comment
Small and simple caches | + | | | – | 0 | Trivial; widely used
Way-predicting caches | + | | | | 1 | Used in Pentium 4
Trace caches | + | | | | 3 | Used in Pentium 4
Pipelined cache access | – | + | | | 1 | Widely used
Nonblocking caches | | + | + | | 3 | Widely used
Banked caches | | + | | | 1 | Used in L2 of Opteron and Niagara
Critical word first and early restart | | | + | | 2 | Widely used
Merging write buffer | | | + | | 1 | Widely used with write through
Victim caches | | | + | + | 1 | Fairly simple and common
Compiler techniques to reduce cache misses | | | | + | 0 | Software is a challenge; some computers have compiler option
Hardware prefetching of instructions and data | | | + | + | 2 instr., 3 data | Many prefetch instructions; AMD Opteron prefetches data
Compiler-controlled prefetching | | | + | + | 3 | Needs nonblocking cache; in many CPUs
AMD Opteron Memory Hierarchy
• 12-stage integer pipeline yields a maximum clock rate of 2.8 GHz; fastest memory is PC3200 DDR SDRAM
• 48-bit virtual and 40-bit physical addresses
• I and D cache: 64 KB, 2-way set associative, 64-B block, LRU
• L2 cache: 1 MB, 16-way, 64-B block, pseudo LRU
• Data and L2 caches use write back, write allocate
• L1 caches are virtually indexed and physically tagged
• L1 I TLB and L1 D TLB: fully associative, 40 entries
  – 32 entries for 4 KB pages and 8 for 2 MB or 4 MB pages
• L2 I TLB and L2 D TLB: 4-way, 512 entries of 4 KB pages
• Memory controller allows up to 10 cache misses
  – 8 from the D cache and 2 from the I cache

Opteron Memory Hierarchy Performance
• For SPEC2000
  – I cache misses per instruction are 0.01% to 0.09%
  – D cache misses per instruction are 1.34% to 1.43%
  – L2 cache misses per instruction are 0.23% to 0.36%
• Commercial benchmark ("TPC-C-like")
  – I cache misses per instruction are 1.83% (100X!)
  – D cache misses per instruction are 1.39% (≈ same)
  – L2 cache misses per instruction are 0.62% (2X to 3X)
• How does this compare to an ideal CPI of 0.33?
CPI breakdown for Integer Programs
[Figure: CPI (0 to 3.00) broken into Base CPI, Max Memory CPI, and Min Pipeline Stall for TPC-C and the SPECint2000 programs (perlbmk, crafty, eon, gzip, gap, vortex, bzip2, gcc, parser, vpr, twolf)]
• CPI above base attributable to memory ≈ 50%
• L2 cache misses ≈ 25% overall (50% of the memory CPI)
  – Assumes misses are not overlapped with the execution pipeline or with each other, so the pipeline stall portion is a lower bound

CPI breakdown for Floating Pt. Programs
[Figure: CPI (0 to 3.00) broken into Base CPI, Max Memory CPI, and Min Pipeline Stall for the SPECfp2000 programs (sixtrack, mesa, wupwise, mgrid, applu, facerec, galgel, apsi, ammp, fma3d, lucas, swim, equake, art)]
• CPI above base attributable to memory ≈ 60%
• L2 cache misses ≈ 40% overall (70% of the memory CPI)
  – Assumes misses are not overlapped with the execution pipeline or with each other, so the pipeline stall portion is a lower bound
Pentium 4 vs. Opteron Memory Hierarchy
CPU | Pentium 4 (3.2 GHz*) | Opteron (2.8 GHz*)
Instruction Cache | Trace Cache (8K micro-ops) | 2-way associative, 64 KB, 64B block
Data Cache | 8-way associative, 16 KB, 64B block, inclusive in L2 | 2-way associative, 64 KB, 64B block, exclusive to L2
L2 cache | 8-way associative, 2 MB, 128B block | 16-way associative, 1 MB, 64B block
Prefetch | 8 streams to L2 | 1 stream to L2
Memory | 200 MHz x 64 bits | 200 MHz x 128 bits
*Clock rate for this comparison in 2005; faster versions existed

Misses Per Instruction: Pentium 4 vs. Opteron
[Figure: ratio of misses per instruction, Pentium 4/Opteron (0 to 7), for the D cache and L2 cache on gzip, vpr, gcc, mcf, crafty (SPECint2000) and wupwise, swim, mgrid, applu, mesa (SPECfp2000); annotations: ↑Opteron better, ↓Pentium better, D cache 2.3X-3.4X, L2 cache 0.5X-1.5X]
• D cache misses: P4 is 2.3X to 3.4X vs. Opteron
• L2 cache misses: P4 is 0.5X to 1.5X vs. Opteron
• Note: same ISA, but not the same instruction count

Fallacies and Pitfalls
• Not delivering high memory bandwidth in a cache-based system
  – 10 fastest computers on the Stream benchmark [McCalpin 2005]
  – Only 4/10 of those computers rely on data caches, and their memory BW per processor is 7X to 25X slower than the NEC SX7
[Figure: system memory BW and per-processor memory BW (1,000 to 1,000,000, log scale) for the top Stream machines — NEC SX systems, SGI Altix systems, and HP AlphaServers]

Main Memory Background
• Performance of Main Memory:
  – Latency: Cache Miss Penalty
    » Access Time: time between the request and the word arriving
    » Cycle Time: time between requests
  – Bandwidth: I/O & large-block miss penalty (L2)
• Main Memory is DRAM: Dynamic Random Access Memory
  – Dynamic since it needs to be refreshed periodically (8 ms, 1% of time)
  – Addresses divided into 2 halves (memory as a 2D matrix):
    » RAS or Row Address Strobe
    » CAS or Column Address Strobe
• Cache uses SRAM: Static Random Access Memory
  – No refresh (6 transistors/bit vs. 1 transistor)
  – Size: DRAM/SRAM ≈ 4-8; Cost/Cycle time: SRAM/DRAM ≈ 8-16
Main Memory Deep Background
• Core Memories (1950s & 60s)
• "Out-of-Core", "In-Core", "Core Dump"? "Core memory"?
• Non-volatile, magnetic
• Lost to 4 Kbit DRAM (today using 512 Mbit DRAM)
• Access time 750 ns, cycle time 1500-3000 ns
• Core Memory stored data as magnetization in iron rings
  – Iron "cores" woven into a 2-dimensional mesh of wires
  – Origin of the term "Dump Core"
  – Rumor that IBM consulted the Life Savers company
• See: http://www.columbia.edu/acis/history/core.html
[Figure: the first magnetic core memory, from the IBM 405 Alphabetical Accounting Machine]
DRAM logical organization (4 Mbit)
[Figure: 11 address lines A0…A10 feed a row decoder into a 2,048 x 2,048 memory array of word-line/bit-line storage cells; sense amps & I/O and a column decoder produce the data pins D and Q]
• Square root of the bits per RAS/CAS

Quest for DRAM Performance
1. Fast Page mode
  – Add timing signals that allow repeated accesses to the row buffer without another row access time
  – Such a buffer comes naturally, as each array will buffer 1024 to 2048 bits for each access
2. Synchronous DRAM (SDRAM)
  – Add a clock signal to the DRAM interface, so that repeated transfers do not bear the overhead of synchronizing with the DRAM controller
3. Double Data Rate (DDR SDRAM)
  – Transfer data on both the rising edge and falling edge of the DRAM clock signal ⇒ doubling the peak data rate
  – DDR2 lowers power by dropping the voltage from 2.5 to 1.8 volts + offers higher clock rates: up to 400 MHz
  – DDR3 drops to 1.5 volts + higher clock rates: up to 800 MHz
• Improved Bandwidth, not Latency
DRAM name based on Peak Chip Transfers / Sec
DIMM name based on Peak DIMM MBytes / Sec
(Fastest for sale 4/06: $125/GB; clock rate ×2 gives M transfers/s, and M transfers/s ×8 bytes gives MBytes/s per DIMM)

Standard | Clock Rate (MHz) | M transfers/second | DRAM Name | MBytes/s/DIMM | DIMM Name
DDR  | 133 | 266  | DDR266    | 2128  | PC2100
DDR  | 150 | 300  | DDR300    | 2400  | PC2400
DDR  | 200 | 400  | DDR400    | 3200  | PC3200
DDR2 | 266 | 533  | DDR2-533  | 4264  | PC4300
DDR2 | 333 | 667  | DDR2-667  | 5336  | PC5300
DDR2 | 400 | 800  | DDR2-800  | 6400  | PC6400
DDR3 | 533 | 1066 | DDR3-1066 | 8528  | PC8500
DDR3 | 666 | 1333 | DDR3-1333 | 10664 | PC10700
DDR3 | 800 | 1600 | DDR3-1600 | 12800 | PC12800

Classical DRAM Organization (square)
[Figure: RAM cell array with a row decoder driven by the row address, word (row) select lines, bit (data) lines, and a column selector & I/O circuits driven by the column address; each intersection represents a 1-T DRAM cell]
• Row and Column Address together:
  – Select 1 bit at a time
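A tiny sketch of how the ×2 and ×8 factors above relate clock rate, transfer rate, and DIMM bandwidth, using the DDR400/PC3200 row (straightforward arithmetic; it assumes a 64-bit, i.e. 8-byte, DIMM data path):

/* DDR naming arithmetic for the DDR400 / PC3200 row. */
#include <stdio.h>

int main(void) {
    int clock_mhz   = 200;                 /* DDR base clock */
    int mtransfers  = clock_mhz * 2;       /* double data rate: both clock edges */
    int mbytes_per_s = mtransfers * 8;     /* 64-bit (8-byte) DIMM data path */
    printf("DDR%d -> %d MB/s per DIMM (sold as PC%d)\n",
           mtransfers, mbytes_per_s, mbytes_per_s);   /* DDR400 -> 3200 MB/s, PC3200 */
    return 0;
}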
Review: 1-T Memory Cell (DRAM)
[Figure: one-transistor cell — a row select gate between the bit line and the storage capacitor]
• Write:
  – 1. Drive bit line
  – 2. Select row
• Read:
  – 1. Precharge bit line to Vdd/2
  – 2. Select row
  – 3. Cell and bit line share charges
    » Very small voltage changes on the bit line
  – 4. Sense (fancy sense amp)
    » Can detect changes of ~1 million electrons
  – 5. Write: restore the value
• Refresh
  – 1. Just do a dummy read to every cell

DRAM Capacitors: more capacitance in a small area
• Trench capacitors:
  – Logic ABOVE the capacitor
  – Gain in surface area of the capacitor
  – Better scaling properties
  – Better planarization
• Stacked capacitors:
  – Logic BELOW the capacitor
  – Gain in surface area of the capacitor
  – 2-dim cross-section quite small
DRAM Read Timing
• Every DRAM access begins at:
  – The assertion of RAS_L
  – 2 ways to read: early or late vs. CAS
[Figure: 256K x 8 DRAM with 9 address lines and 8 data lines; read-cycle waveforms for RAS_L, CAS_L, A (row address / col address), WE_L, OE_L, and D show the DRAM read cycle time, the read access time, and the output-enable delay for an early read cycle (OE_L asserted before CAS_L) and a late read cycle (OE_L asserted after CAS_L)]

4 Key DRAM Timing Parameters
• tRAC: minimum time from the RAS line falling to valid data output
  – Quoted as the speed of a DRAM when you buy it
  – A typical 4 Mbit DRAM has tRAC = 60 ns
• tRC: minimum time from the start of one row access to the start of the next
  – tRC = 110 ns for a 4 Mbit DRAM with a tRAC of 60 ns
• tCAC: minimum time from the CAS line falling to valid data output
  – 15 ns for a 4 Mbit DRAM with a tRAC of 60 ns
• tPC: minimum time from the start of one column access to the start of the next
  – 35 ns for a 4 Mbit DRAM with a tRAC of 60 ns
Main Memory Performance
[Figure: DRAM timing — access time vs. cycle time between "Start Access for D1", "D1 available", and "Start Access for D2"]
• DRAM (Read/Write) Cycle Time >> DRAM (Read/Write) Access Time
  – ≈ 2:1; why?
• DRAM (Read/Write) Cycle Time:
  – How frequently can you initiate an access?
  – Analogy: a little kid can only ask his father for money on Saturday
• DRAM (Read/Write) Access Time:
  – How quickly will you get what you want once you initiate an access?
  – Analogy: as soon as he asks, his father will give him the money
• DRAM Bandwidth Limitation analogy:
  – What happens if he runs out of money on Wednesday?

Increasing Bandwidth - Interleaving
[Figure: access pattern without interleaving — the CPU must wait a full memory cycle time between accesses to a single bank; access pattern with 4-way interleaving — the CPU starts accesses to Bank 0, Bank 1, Bank 2, Bank 3 in successive cycles and can access Bank 0 again once its cycle completes]
Main Memory Performance
• Simple:
  – CPU, Cache, Bus, Memory all the same width (32 bits)
• Wide:
  – CPU/Mux 1 word; Mux/Cache, Bus, Memory N words (Alpha: 64 bits & 256 bits)
• Interleaved:
  – CPU, Cache, Bus 1 word; Memory N modules (4 modules); example is word interleaved
[Figure: four word-interleaved banks — Bank 0 holds addresses 0, 4, 8, 12; Bank 1 holds 1, 5, 9, 13; Bank 2 holds 2, 6, 10, 14; Bank 3 holds 3, 7, 11, 15]

Main Memory Performance
• Timing model
  – 1 cycle to send the address
  – 4 cycles for access time, 10 cycle time, 1 cycle to send data
  – Cache block is 4 words
• Simple M.P.      = 4 × (1 + 10 + 1) = 48
• Wide M.P.        = 1 + 10 + 1 = 12
• Interleaved M.P. = 1 + 10 + 1 + 3 = 15
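A small sketch that recomputes the three miss penalties from the timing model above for a 4-word block (the slide's arithmetic charges the 10-cycle bank cycle time per access, with the 4-cycle access time hidden inside it):

/* Miss-penalty sketch for the 1 / 10 / 1 timing model with a 4-word block. */
#include <stdio.h>

int main(void) {
    int send_addr = 1, cycle_time = 10, send_data = 1;
    int block_words = 4;

    int simple      = block_words * (send_addr + cycle_time + send_data);
    int wide        = send_addr + cycle_time + send_data;
    int interleaved = send_addr + cycle_time + send_data + (block_words - 1);

    printf("simple=%d wide=%d interleaved=%d\n", simple, wide, interleaved);
    /* prints simple=48 wide=12 interleaved=15, matching the slide */
    return 0;
}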
Avoiding Bank Conflicts
• Lots of banks
    int x[256][512];
    for (j = 0; j < 512; j = j+1)
      for (i = 0; i < 256; i = i+1)
        x[i][j] = 2 * x[i][j];
• Even with 128 banks, since 512 is a multiple of 128, the word accesses conflict on the same bank
• SW: loop interchange, or declare the array not a power of 2 wide ("array padding"), as sketched below
• HW: a prime number of banks
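A sketch of the software fix ("array padding") mentioned above: pad each row by one word so the row length (513) is no longer a multiple of the bank count, and successive elements of a column fall in different banks. The pad of 1 is illustrative:

/* Array padding: 513 mod 128 = 1, so a column walk cycles through all banks. */
int x[256][512 + 1];              /* padded row: 513 words instead of 512 */

void scale_columns(void) {
    for (int j = 0; j < 512; j = j + 1)       /* last (pad) column is never used */
        for (int i = 0; i < 256; i = i + 1)
            x[i][j] = 2 * x[i][j];
}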
Finding Bank Number and Address within a bank
Problem: we want to determine the number of banks, Nb, and the number of words to store in each bank, Wb, such that:
• given a word address x, it is easy to find the bank where x will be found, B(x), and the address of x within the bank, A(x)
• for any address x, B(x) and A(x) are unique
• the number of bank conflicts is minimized
The obvious scheme:
  – bank number = address mod number of banks
  – address within bank = ⌊address / number of words in bank⌋
  – but: a modulo and a divide per memory access with a prime number of banks?
Finding Bank Number and Address within a bank
Solution: use the following relation to determine the bank number for x, B(x), and the address of x within the bank, A(x):
    B(x) = x MOD Nb
    A(x) = x MOD Wb
and choose Nb and Wb to be co-prime, i.e., there is no prime number that is a factor of both Nb and Wb (this condition is satisfied if we choose Nb to be a prime number that is equal to an integer power of two minus 1). We can then use the Chinese Remainder Theorem to show that B(x) and A(x) are always unique.

Fast Bank Number
• Chinese Remainder Theorem: as long as two sets of integers a_i and b_i follow these rules
    b_i = x mod a_i,  0 ≤ b_i < a_i,  0 ≤ x < a_0 × a_1 × a_2 × …
  and a_i and a_j are co-prime if i ≠ j, then the integer x has only one solution (unambiguous mapping):
  – bank number = b_0, number of banks = a_0
  – address within bank = b_1, number of words in a bank = a_1
  – N word addresses 0 to N-1, a prime number of banks, words per bank a power of 2
• Example: 3 banks (Nb = 3) and 8 words per bank (Wb = 8), shown in the table below
                         Seq. Interleaved        Modulo Interleaved
Bank Number:              0     1     2           0     1     2
Address within Bank:
  0                       0     1     2           0    16     8
  1                       3     4     5           9     1    17
  2                       6     7     8          18    10     2
  3                       9    10    11           3    19    11
  4                      12    13    14          12     4    20
  5                      15    16    17          21    13     5
  6                      18    19    20           6    22    14
  7                      21    22    23          15     7    23
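A small sketch that applies the modulo-interleaved mapping from the table (B(x) = x mod Nb, A(x) = x mod Wb with Nb = 3, Wb = 8) and brute-force checks that every (bank, address) pair is used exactly once, as the Chinese Remainder Theorem guarantees for co-prime Nb and Wb:

/* Modulo-interleaved mapping check for Nb = 3 banks, Wb = 8 words per bank. */
#include <stdio.h>

int main(void) {
    const int Nb = 3, Wb = 8;            /* co-prime, as required */
    int seen[3][8] = {{0}};

    for (int x = 0; x < Nb * Wb; x++) {
        int bank = x % Nb;               /* B(x) */
        int addr = x % Wb;               /* A(x) */
        seen[bank][addr]++;
    }

    int ok = 1;
    for (int b = 0; b < Nb; b++)
        for (int a = 0; a < Wb; a++)
            if (seen[b][a] != 1) ok = 0;

    printf("mapping is one-to-one: %s\n", ok ? "yes" : "no");   /* yes */
    return 0;
}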
Fast Page Mode Operation
• Regular DRAM Organization:
  – N rows x N columns x M bits
  – Read & write M bits at a time
  – Each M-bit access requires a RAS / CAS cycle
• Fast Page Mode DRAM
  – N x M "SRAM" to save a row
• After a row is read into the register
  – Only CAS is needed to access other M-bit blocks on that row
  – RAS_L remains asserted while CAS_L is toggled
[Figure: N rows x N cols DRAM array with an N x M "SRAM" row register and M-bit output; the timing shows one row address followed by multiple column addresses — the 1st through 4th M-bit accesses share a single RAS_L assertion while CAS_L toggles]

Fast Memory Systems: DRAM specific
• Multiple CAS accesses: several names (page mode)
  – Extended Data Out (EDO): 30% faster in page mode
• New DRAMs to address the gap; what will they cost, will they survive?
  – RAMBUS: startup company; reinvented the DRAM interface
    » Each chip is a module vs. a slice of memory
    » Short bus between CPU and chips
    » Does its own refresh
    » Variable amount of data returned
    » 1 byte / 2 ns (500 MB/s per chip)
  – Synchronous DRAM: 2 banks on chip, a clock signal to the DRAM, transfers synchronous to the system clock (66 - 150 MHz)
  – Intel claims RAMBUS Direct (16 b wide) is the future PC memory
• Niche memory or main memory?
  – e.g., Video RAM for frame buffers: DRAM + a fast serial output
Something new: Structure of Tunneling Magnetic Junction
• Tunneling Magnetic Junction RAM (TMJ-RAM)
  – Speed of SRAM, density of DRAM, non-volatile (no refresh)
  – "Spintronics": a combination of quantum spin and electronics
  – Same technology used in high-density disk drives

MEMS-based Storage
• Magnetic "sled" floats on an array of read/write heads
  – Approx 250 Gbit/in²
  – Data rates: IBM: 250 MB/s with 1000 heads; CMU: 3.1 MB/s with 400 heads
• Electrostatic actuators move the media around to align it with the heads
  – Sweep the sled ±50 μm in < 0.5 μs
• Capacity estimated to be 1-10 GB in 10 cm²
• See Ganger et al.: http://www.lcs.ece.cmu.edu/research/MEMS
Need for Error Correction!
• Motivation:
  – Failures/time proportional to the number of bits!
  – As DRAM cells shrink, they are more vulnerable
• Went through a period in which the failure rate was low enough without error correction that people didn't do correction
  – DRAM banks are too large now
  – Servers always corrected memory systems
• Basic idea: add redundancy through parity bits
  – Common configuration: random error correction
    » SEC-DED (single error correct, double error detect)
    » One example: 64 data bits + 8 parity bits (11% overhead)
  – Really want to handle failures of physical components as well
    » Organization is multiple DRAMs/DIMM, multiple DIMMs
    » Want to recover from a failed DRAM and a failed DIMM!
    » "Chip kill" handles failures the width of a single DRAM chip

Big storage (such as DRAM/DISK): Potential for Errors!
• Motivation:
  – DRAM is dense ⇒ signals are easily disturbed
  – High capacity ⇒ higher probability of failure
• Approach: Redundancy
  – Add extra information so that we can recover from errors
  – Can we do better than just creating complete copies?
• Block Codes: data coded in blocks
  – k data bits coded into n encoded bits
  – Measure of overhead: rate of the code, k/n
  – Often called an (n,k) code
  – Consider data as vectors in GF(2) [i.e., vectors of bits]
• Code space is the set of all 2^n vectors; data space is the set of 2^k vectors
  – Encoding function: C = f(d)
  – Decoding function: d = f(C')
  – Not all possible code vectors, C, are valid!
General Idea: Code Vector Space
[Figure: the code space of all 2^n vectors, with a valid code word C0 = f(d0) for data word d0 and the code distance (Hamming distance) between valid code words]
• Not every vector in the code space is valid
• Hamming Distance (d):
  – Minimum number of bit flips to turn one code word into another
• Number of errors that we can detect: (d-1)
• Number of errors that we can fix: ½(d-1)
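A minimal sketch (not from the slides) of the Hamming-distance quantities above, using a toy (3,1) repetition code whose two valid code words 000 and 111 are at distance d = 3:

/* Hamming distance between two code words, plus the detect/correct bounds. */
#include <stdio.h>
#include <stdint.h>

static int hamming_distance(uint64_t a, uint64_t b) {
    uint64_t diff = a ^ b;       /* bits where the two words disagree */
    int d = 0;
    while (diff) { d += diff & 1; diff >>= 1; }
    return d;
}

int main(void) {
    int d = hamming_distance(0x0, 0x7);   /* 000 vs 111 -> d = 3 */
    printf("d = %d: detect up to %d errors, correct up to %d\n",
           d, d - 1, (d - 1) / 2);        /* detect 2, correct 1 */
    return 0;
}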
Conclusion
• The memory wall inspires optimizations since so much performance is lost there
  – Reducing hit time: Small and simple caches, Way prediction, Trace caches
  – Increasing cache bandwidth: Pipelined caches, Multibanked caches, Nonblocking caches
  – Reducing Miss Penalty: Critical word first, Merging write buffers
  – Reducing Miss Rate: Compiler optimizations
  – Reducing miss penalty or miss rate via parallelism: Hardware prefetching, Compiler prefetching
• "Auto-tuners" search replacing static compilation to explore the optimization space?