18-447
Computer Architecture
Lecture 18: Caches, Caches, Caches
Prof. Onur Mutlu
Carnegie Mellon University
Spring 2015, 2/27/2015

Assignment and Exam Reminders
- Lab 4: Due March 6
  - Control flow and branch prediction
- Lab 5: Due March 22
  - Data cache
- HW 4: March 18
- Exam: March 20
- Advice: Finish the labs early
  - You have almost a month for Lab 5
- Advice: Manage your time well

Agenda for the Rest of 447
- The memory hierarchy
- Caches, caches, more caches (high locality, high bandwidth)
- Virtualizing the memory hierarchy
- Main memory: DRAM
- Main memory control, scheduling
- Memory latency tolerance techniques
- Non-volatile memory
- Multiprocessors
- Coherence and consistency
- Interconnection networks
- Multi-core issues

Readings for Today and Next Lecture
- Memory Hierarchy and Caches
- Required:
  - Cache chapters from P&H: 5.1-5.3
  - Memory/cache chapters from Hamacher+: 8.1-8.7
- Required + Review:
  - Wilkes, “Slave Memories and Dynamic Storage Allocation,” IEEE Transactions on Electronic Computers, 1965.
  - Qureshi et al., “A Case for MLP-Aware Cache Replacement,” ISCA 2006.

Review: Caching Basics
- Caches are structures that exploit locality of reference in memory
  - Temporal locality
  - Spatial locality
- They can be constructed in many ways
  - Can exploit either temporal or spatial locality or both

Review: Caching Basics
- Block (line): Unit of storage in the cache
  - Memory is logically divided into cache blocks that map to locations in the cache
- When data is referenced:
  - HIT: If in cache, use cached data instead of accessing memory
  - MISS: If not in cache, bring block into cache
    - Maybe have to kick something else out to do it
- Some important cache design decisions
  - Placement: where and how to place/find a block in cache?
  - Replacement: what data to remove to make room in cache?
  - Granularity of management: large, small, uniform blocks?
  - Write policy: what do we do about writes?
  - Instructions/data: do we treat them separately?

Cache Abstraction and Metrics
[Diagram: the address goes to the tag store (is the address in the cache? + bookkeeping), which produces Hit/Miss?, and to the data store (stores memory blocks), which produces the data.]
- Cache hit rate = (# hits) / (# hits + # misses) = (# hits) / (# accesses)
- Average memory access time (AMAT) = ( hit-rate * hit-latency ) + ( miss-rate * miss-latency ) (see the sketch below)
- Aside: Can reducing AMAT reduce performance?
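
A minimal numerical sketch of the AMAT formula above; the latency numbers are illustrative only, not from the lecture:

```python
def amat(hit_rate, hit_latency, miss_latency):
    """AMAT = hit_rate * hit_latency + miss_rate * miss_latency (the slide's form,
    where miss_latency is the full latency of a miss, not just the extra penalty)."""
    miss_rate = 1.0 - hit_rate
    return hit_rate * hit_latency + miss_rate * miss_latency

# e.g., 95% hit rate, 2-cycle hits, 102-cycle misses:
print(amat(hit_rate=0.95, hit_latency=2, miss_latency=102))   # 7.0 cycles
```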

A Basic Hardware Cache Design
- We will start with a basic hardware cache design
- Then, we will examine a multitude of ideas to make it better

Blocks and Addressing the Cache
- Memory is logically divided into fixed-size blocks
- Each block maps to a location in the cache, determined by the index bits in the address
  - 8-bit address: tag (2 bits) | index (3 bits) | byte in block (3 bits)
  - The index bits are used to index into the tag and data stores
- Cache access:
  1) index into the tag and data stores with index bits in address
  2) check valid bit in tag store
  3) compare tag bits in address with the stored tag in tag store
- If a block is in the cache (cache hit), the stored tag should be valid and match the tag of the block (see the sketch below)
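
A minimal sketch of the field extraction for the 8-bit address split shown above (2-bit tag, 3-bit index, 3-bit byte-in-block); the helper name is mine:

```python
OFFSET_BITS = 3   # 8-byte blocks      -> 3 byte-in-block bits
INDEX_BITS  = 3   # 8 cache locations  -> 3 index bits
                  # the remaining 2 bits of the 8-bit address are the tag

def split_address(addr):
    """Split an 8-bit address into (tag, index, byte_in_block)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index  = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0b11_010_101))   # (3, 2, 5)
```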

Direct-Mapped Cache: Placement and Access
- Assume byte-addressable memory: 256 bytes, 8-byte blocks -> 32 blocks
- Assume cache: 64 bytes, 8 blocks
  - Direct-mapped: A block can go to only one location
  - Address = tag (2 bits) | index (3 bits) | byte in block (3 bits)
  - [Diagram: the index selects one entry of the tag store (valid bit + tag, compared with =? against the address tag to produce Hit?) and one entry of the data store (a MUX then selects the byte in block to produce the Data).]
  - Addresses with the same index contend for the same location
    - Cause conflict misses (see the sketch below)
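
A minimal behavioral model of this direct-mapped cache (tag store only, no data array), written as a sketch; it also demonstrates the conflict case described on the next slide, using byte addresses 0 and 64 (memory blocks 0 and 8), which share index 0 but differ in tag:

```python
OFFSET_BITS, INDEX_BITS = 3, 3            # 8-byte blocks, 8 cache entries

class DirectMappedCache:
    """Tag-store-only model: one (valid, tag) pair per index."""
    def __init__(self):
        self.valid = [False] * (1 << INDEX_BITS)
        self.tag   = [0] * (1 << INDEX_BITS)

    def access(self, addr):
        index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        tag   = addr >> (OFFSET_BITS + INDEX_BITS)
        if self.valid[index] and self.tag[index] == tag:
            return "hit"
        self.valid[index], self.tag[index] = True, tag    # fill on miss
        return "miss"

c = DirectMappedCache()
print([c.access(a) for a in [0, 64, 0, 64, 0, 64]])
# ['miss', 'miss', 'miss', 'miss', 'miss', 'miss'] -- the two blocks keep evicting each other
```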

Direct-Mapped Caches
- Direct-mapped cache: Two blocks in memory that map to the same index in the cache cannot be present in the cache at the same time
  - One index -> one entry
- Can lead to 0% hit rate if more than one block accessed in an interleaved manner map to the same index
  - Assume addresses A and B have the same index bits but different tag bits
  - A, B, A, B, A, B, A, B, ... -> conflict in the cache index
  - All accesses are conflict misses

Set Associativity
- Block addresses 0 and 8 always conflict in a direct-mapped cache
- Instead of having one column of 8 blocks, have 2 columns of 4 blocks
  - Address = tag (3 bits) | index (2 bits) | byte in block (3 bits)
  - [Diagram: the index selects a SET: two (V, tag) entries in the tag store are compared in parallel (two =? comparators feeding logic -> Hit?), and MUXes select the matching way and the byte in block from the data store.]
- Key idea: Associative memory within the set (see the sketch below)
  + Accommodates conflicts better (fewer conflict misses)
  -- More complex, slower access, larger tag store
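
A sketch of the same 64-byte cache organized as 2-way set associative (4 sets, LRU within the set); the two blocks that thrashed in the direct-mapped sketch now coexist in set 0:

```python
OFFSET_BITS, INDEX_BITS, WAYS = 3, 2, 2    # 8-byte blocks, 4 sets, 2 ways

class SetAssociativeCache:
    """Tag-store-only model; each set is a list of tags kept in MRU-first order."""
    def __init__(self):
        self.sets = [[] for _ in range(1 << INDEX_BITS)]

    def access(self, addr):
        index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        tag   = addr >> (OFFSET_BITS + INDEX_BITS)
        s = self.sets[index]
        if tag in s:
            s.remove(tag); s.insert(0, tag)    # promote to MRU
            return "hit"
        if len(s) == WAYS:
            s.pop()                            # evict the LRU way
        s.insert(0, tag)                       # fill as MRU
        return "miss"

c = SetAssociativeCache()
print([c.access(a) for a in [0, 64, 0, 64, 0, 64]])
# ['miss', 'miss', 'hit', 'hit', 'hit', 'hit']
```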

Higher Associativity
- 4-way
  - [Diagram: the tag store holds four (V, tag) entries per set, compared in parallel (four =? comparators feeding logic -> Hit?); MUXes in the data store select the way and then the byte in block.]
+ Likelihood of conflict misses even lower
-- More tag comparators and wider data mux; larger tags

Full Associativity
- Fully associative cache
  - A block can be placed in any cache location
  - [Diagram: no index; the address tag is compared against every tag-store entry in parallel (eight =? comparators feeding logic -> Hit?), and MUXes select the matching block and the byte in block from the data store.]

Associativity (and Tradeoffs)
- Degree of associativity: How many blocks can map to the same index (or set)?
- Higher associativity
  ++ Higher hit rate
  -- Slower cache access time (hit latency and data access latency)
  -- More expensive hardware (more comparators)
- Diminishing returns from higher associativity
  - [Plot: hit rate vs. associativity, flattening out as associativity grows]

Issues in Set-Associative Caches
- Think of each block in a set having a “priority”
  - Indicating how important it is to keep the block in the cache
- Key issue: How do you determine/adjust block priorities?
- There are three key decisions in a set:
  - Insertion, promotion, eviction (replacement)
- Insertion: What happens to priorities on a cache fill?
  - Where to insert the incoming block, whether or not to insert the block
- Promotion: What happens to priorities on a cache hit?
  - Whether and how to change block priority
- Eviction/replacement: What happens to priorities on a cache miss?
  - Which block to evict and how to adjust priorities

Eviction/Replacement Policy
- Which block in the set to replace on a cache miss?
  - Any invalid block first
  - If all are valid, consult the replacement policy
    - Random
    - FIFO
    - Least recently used (how to implement?)
    - Not most recently used
    - Least frequently used?
    - Least costly to re-fetch?
      - Why would memory accesses have different cost?
    - Hybrid replacement policies
    - Optimal replacement policy?

Implementing LRU
- Idea: Evict the least recently accessed block
  - Problem: Need to keep track of access ordering of blocks
- Question: 2-way set associative cache:
  - What do you need to implement LRU perfectly?
- Question: 4-way set associative cache:
  - What do you need to implement LRU perfectly?
  - How many different orderings possible for the 4 blocks in the set?
  - How many bits needed to encode the LRU order of a block?
  - What is the logic needed to determine the LRU victim? (one possible sketch below)
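
One common way to approach these questions (not necessarily the answer intended in lecture) is to keep a per-way rank: a 4-way set has 4! = 24 possible orderings, so encoding the full order takes at least 5 bits per set, or 2 bits per block if each block stores its own rank, as in this minimal sketch:

```python
class LRUSet:
    """Perfect LRU for one set: a rank per way (0 = MRU, ways-1 = LRU).
    For 4 ways that is 2 bits per block; encoding the whole permutation
    (4! = 24 orderings) needs at least 5 bits per set."""
    def __init__(self, ways=4):
        self.ways = ways
        self.tags = [None] * ways
        self.rank = list(range(ways))            # initially way i has rank i

    def _touch(self, way):                       # make `way` the MRU entry
        old = self.rank[way]
        for w in range(self.ways):               # every younger way ages by 1
            if self.rank[w] < old:
                self.rank[w] += 1
        self.rank[way] = 0

    def access(self, tag):
        if tag in self.tags:                     # hit: promote
            self._touch(self.tags.index(tag))
            return "hit"
        victim = self.rank.index(self.ways - 1)  # the LRU way
        self.tags[victim] = tag
        self._touch(victim)
        return "miss"

s = LRUSet()
print([s.access(t) for t in ["A", "B", "C", "A", "D", "B"]])
# ['miss', 'miss', 'miss', 'hit', 'miss', 'hit']
```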

Approximations of LRU
- Most modern processors do not implement “true LRU” (also called “perfect LRU”) in highly-associative caches
- Why?
  - True LRU is complex
  - LRU is an approximation to predict locality anyway (i.e., not the best possible cache management policy)
- Examples:
  - Not MRU (not most recently used)
  - Hierarchical LRU: divide the 4-way set into 2-way “groups”, track the MRU group and the MRU way in each group
  - Victim-NextVictim Replacement: Only keep track of the victim and the next victim

Hierarchical LRU (not MRU)
- Divide a set into multiple groups
- Keep track of only the MRU group
- Keep track of only the MRU block in each group
- On replacement, select victim as:
  - A not-MRU block in one of the not-MRU groups (randomly pick one of such blocks/groups) (see the sketch below)
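
A sketch of this policy for a 4-way set split into two 2-way groups (1 bit for the MRU group plus 1 bit per group for its MRU way). How the MRU bits are updated on a hit or fill is not spelled out on the slide; promoting the accessed group and way is one reasonable choice:

```python
import random

class HierarchicalLRUSet:
    """4-way set as two 2-way groups. State kept: the MRU group and the
    MRU way within each group. Victim = the non-MRU way of a randomly
    chosen non-MRU group."""
    def __init__(self, groups=2, ways_per_group=2):
        self.tags = [[None] * ways_per_group for _ in range(groups)]
        self.mru_group = 0
        self.mru_way = [0] * groups              # MRU way inside each group

    def _promote(self, g, w):                    # one reasonable update rule
        self.mru_group, self.mru_way[g] = g, w

    def access(self, tag):
        for g, group in enumerate(self.tags):
            if tag in group:
                self._promote(g, group.index(tag))
                return "hit"
        candidates = [g for g in range(len(self.tags)) if g != self.mru_group]
        g = random.choice(candidates)            # a non-MRU group
        w = 1 - self.mru_way[g]                  # its non-MRU way (2-way groups only)
        self.tags[g][w] = tag
        self._promote(g, w)
        return "miss"

s = HierarchicalLRUSet()
print([s.access(t) for t in "ABCDAB"])
```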

Hierarchical LRU (not MRU) Example
[Worked example shown over two slides; figures only]

Hierarchical LRU (not MRU): Questions
- 16-way cache
- 2 8-way groups
- What is an access pattern that performs worse than true LRU?
- What is an access pattern that performs better than true LRU?

Victim/Next-Victim Policy
- Only 2 blocks’ status tracked in each set:
  - victim (V), next victim (NV)
  - all other blocks denoted as (O) – Ordinary block
- On a cache miss
  - Replace V
  - Demote NV to V
  - Randomly pick an O block as NV
- On a cache hit to V
  - Demote NV to V
  - Randomly pick an O block as NV
  - Turn V to O

Victim/Next-Victim Policy (II)
- On a cache hit to NV
  - Randomly pick an O block as NV
  - Turn NV to O
- On a cache hit to O
  - Do nothing
(A sketch of the whole policy follows below.)
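
A sketch of the Victim/Next-Victim state machine from the last two slides, for one 4-way set (only the V and NV ways carry state; everything else is O). One detail the slides leave open is whether the just-filled or just-hit way is immediately eligible to become the new NV; this sketch excludes it:

```python
import random

class VictimNextVictimSet:
    """Victim/Next-Victim replacement for one set: only two ways are tracked
    (V and NV); all other ways are Ordinary (O)."""
    def __init__(self, ways=4):
        self.tags = [None] * ways
        self.v, self.nv = 0, 1                   # arbitrary initial V and NV ways

    def _new_nv(self):
        # pick a new NV among the O ways (the current V and NV ways excluded)
        others = [w for w in range(len(self.tags)) if w not in (self.v, self.nv)]
        return random.choice(others)

    def access(self, tag):
        if tag in self.tags:                     # ---- hit ----
            w = self.tags.index(tag)
            if w == self.v:                      # hit to V: NV->V, new NV from O, old V becomes O
                self.v, self.nv = self.nv, self._new_nv()
            elif w == self.nv:                   # hit to NV: new NV from O, old NV becomes O
                self.nv = self._new_nv()
            return "hit"                         # hit to O: do nothing
        victim = self.v                          # ---- miss: replace V ----
        self.tags[victim] = tag
        self.v, self.nv = self.nv, self._new_nv()   # demote NV to V, pick a new NV
        return "miss"

s = VictimNextVictimSet()
print([s.access(t) for t in "ABCDABCE"])
```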

Victim/Next-Victim Example
[Worked example on the slide; figure only]

Cache Replacement Policy: LRU or Random
- LRU vs. Random: Which one is better?
  - Example: 4-way cache, cyclic references to A, B, C, D, E
    - 0% hit rate with LRU policy (see the sketch below)
- Set thrashing: When the “program working set” in a set is larger than set associativity
  - Random replacement policy is better when thrashing occurs
- In practice:
  - Depends on workload
  - Average hit rate of LRU and Random are similar
- Best of both Worlds: Hybrid of LRU and Random
  - How to choose between the two? Set sampling
    - See Qureshi et al., “A Case for MLP-Aware Cache Replacement,” ISCA 2006.
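
A small simulation of the example above (one 4-way set, cyclic references to five blocks), comparing LRU with random replacement; the exact random-policy number varies from run to run:

```python
import random

def hit_rate(policy, refs, ways=4, trials=1):
    """Hit rate of a single set under 'lru' or 'random' replacement (tags only)."""
    total_hits = 0
    for _ in range(trials):
        blocks = []                              # kept in MRU-first order
        for tag in refs:
            if tag in blocks:
                total_hits += 1
                blocks.remove(tag); blocks.insert(0, tag)
                continue
            if len(blocks) == ways:
                victim = blocks[-1] if policy == "lru" else random.choice(blocks)
                blocks.remove(victim)
            blocks.insert(0, tag)
    return total_hits / (len(refs) * trials)

refs = list("ABCDE") * 200                       # working set of 5 blocks > 4 ways
print("LRU   :", hit_rate("lru", refs))                  # 0.0 -- set thrashing
print("Random:", hit_rate("random", refs, trials=20))    # roughly 0.6 on this pattern
```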

What Is the Optimal Replacement Policy?
- Belady’s OPT
  - Replace the block that is going to be referenced furthest in the future by the program
  - Belady, “A study of replacement algorithms for a virtual-storage computer,” IBM Systems Journal, 1966.
  - How do we implement this? Simulate? (see the sketch below)
- Is this optimal for minimizing miss rate?
- Is this optimal for minimizing execution time?
  - No. Cache miss latency/cost varies from block to block!
  - Two reasons: Remote vs. local caches and miss overlapping
  - Qureshi et al., “A Case for MLP-Aware Cache Replacement,” ISCA 2006.
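
A minimal simulation-only sketch of Belady’s OPT for one set; it needs the future reference stream, which is exactly why it can only be approximated, not implemented, in real hardware:

```python
def belady_hit_rate(refs, ways=4):
    """On a miss with a full set, evict the resident block whose next use
    is furthest in the future (never used again counts as infinitely far)."""
    cache, hits = set(), 0
    for i, tag in enumerate(refs):
        if tag in cache:
            hits += 1
            continue
        if len(cache) == ways:
            def next_use(block):
                for j in range(i + 1, len(refs)):
                    if refs[j] == block:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(tag)
    return hits / len(refs)

refs = list("ABCDE") * 200
print("OPT:", belady_hit_rate(refs))   # ~0.75, vs. 0.0 for LRU on this same pattern
```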

Aside: Cache versus Page Replacement
- Physical memory (DRAM) is a cache for disk
  - Usually managed by system software via the virtual memory subsystem
- Page replacement is similar to cache replacement
- Page table is the “tag store” for physical memory data store
- What is the difference?
  - Required speed of access to cache vs. physical memory
  - Number of blocks in a cache vs. physical memory
  - “Tolerable” amount of time to find a replacement candidate (disk versus memory access latency)
  - Role of hardware versus software

What’s In A Tag Store Entry?
- Valid bit
- Tag
- Replacement policy bits
- Dirty bit?
  - Write back vs. write through caches

Handling Writes (I)
- When do we write the modified data in a cache to the next level?
  - Write through: At the time the write happens
  - Write back: When the block is evicted
- Write-back
  + Can consolidate multiple writes to the same block before eviction
    - Potentially saves bandwidth between cache levels + saves energy
  -- Need a bit in the tag store indicating the block is “dirty/modified”
- Write-through
  + Simpler
  + All levels are up to date. Consistency: Simpler cache coherence because no need to check lower-level caches
  -- More bandwidth intensive; no coalescing of writes
(A small traffic-counting sketch follows below.)
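
A rough sketch that only counts write traffic to the next level, just to illustrate the coalescing effect of write-back; it assumes write-allocate, ignores reads, and flushes dirty blocks at the end:

```python
def next_level_writes(writes, policy, num_blocks=8):
    """Count writes sent to the next level for a stream of writes to block
    addresses, in a tiny direct-mapped cache (write-allocate assumed)."""
    lines = {}                                   # index -> (tag, dirty)
    traffic = 0
    for block in writes:
        index, tag = block % num_blocks, block // num_blocks
        if policy == "through":
            traffic += 1                         # every write goes down immediately
            continue
        old = lines.get(index)                   # write-back
        if old is not None and old[0] != tag and old[1]:
            traffic += 1                         # evicting a dirty block: write it back
        lines[index] = (tag, True)               # write into the cache, mark dirty
    if policy == "back":
        traffic += sum(1 for _, dirty in lines.values() if dirty)   # final flush
    return traffic

writes = [0, 0, 0, 0, 1, 1, 8, 8]                # repeated writes to a few blocks
print("write-through:", next_level_writes(writes, "through"))   # 8
print("write-back   :", next_level_writes(writes, "back"))      # 3
```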

Handling Writes (II)
- Do we allocate a cache block on a write miss?
  - Allocate on write miss: Yes
  - No-allocate on write miss: No
- Allocate on write miss
  + Can consolidate writes instead of writing each of them individually to next level
  + Simpler because write misses can be treated the same way as read misses
  -- Requires (?) transfer of the whole cache block
- No-allocate
  + Conserves cache space if locality of writes is low (potentially better cache hit rate)

Handling Writes (III)
- What if the processor writes to an entire block over a small amount of time?
- Is there any need to bring the block into the cache from memory in the first place?
- Ditto for a portion of the block, i.e., subblock
  - E.g., 4 bytes out of 64 bytes

Sectored Caches
- Idea: Divide a block into subblocks (or sectors)
  - Have separate valid and dirty bits for each sector
  - When is this useful? (Think writes…)
++ No need to transfer the entire cache block into the cache (a write simply validates and updates a subblock)
++ More freedom in transferring subblocks into the cache (a cache block does not need to be in the cache fully)
   (How many subblocks do you transfer on a read?)
-- More complex design
-- May not exploit spatial locality fully when used for reads
[Diagram: one tag shared by several subblocks, each with its own valid (v) and dirty (d) bits]

Instruction vs. Data Caches
- Separate or Unified?
- Unified:
  + Dynamic sharing of cache space: no overprovisioning that might happen with static partitioning (i.e., split I and D caches)
  -- Instructions and data can thrash each other (i.e., no guaranteed space for either)
  -- I and D are accessed in different places in the pipeline. Where do we place the unified cache for fast access?
- First level caches are almost always split
  - Mainly for the last reason above
- Second and higher levels are almost always unified

Multi-level Caching in a Pipelined Design
- First-level caches (instruction and data)
  - Decisions very much affected by cycle time
  - Small, lower associativity
  - Tag store and data store accessed in parallel
- Second-level caches
  - Decisions need to balance hit rate and access latency
  - Usually large and highly associative; latency not as important
  - Tag store and data store accessed serially
- Serial vs. Parallel access of levels
  - Serial: Second level cache accessed only if first-level misses
  - Second level does not see the same accesses as the first
    - First level acts as a filter (filters some temporal and spatial locality)
    - Management policies are therefore different

Cache Performance

Cache Parameters vs. Miss/Hit Rate
- Cache size
- Block size
- Associativity
- Replacement policy
- Insertion/Placement policy

Cache Size
- Cache size: total data (not including tag) capacity
  - bigger can exploit temporal locality better
  - not ALWAYS better
- Too large a cache adversely affects hit and miss latency
  - smaller is faster => bigger is slower
  - access time may degrade critical path
- Too small a cache
  - doesn’t exploit temporal locality well
  - useful data replaced often
- Working set: the whole set of data the executing application references
  - Within a time interval
- [Plot: hit rate vs. cache size, with the knee around the “working set” size]

Block Size
- Block size is the data that is associated with an address tag
  - not necessarily the unit of transfer between hierarchies
- Sub-blocking: A block divided into multiple pieces (each with V bit)
  - Can improve “write” performance
- Too small blocks
  - don’t exploit spatial locality well
  - have larger tag overhead
- Too large blocks
  - too few total # of blocks -> less temporal locality exploitation
  - waste of cache space and bandwidth/energy if spatial locality is not high
- [Plot: hit rate vs. block size, peaking at an intermediate block size]

Large Blocks: Critical-Word and Subblocking
- Large cache blocks can take a long time to fill into the cache
  - fill cache line critical word first
  - restart cache access before complete fill
- Large cache blocks can waste bus bandwidth
  - divide a block into subblocks
  - associate separate valid bits for each subblock
  - When is this useful?
- [Diagram: one tag shared by several subblocks, each with its own valid (v) and dirty (d) bits]

Associativity
- How many blocks can map to the same index (or set)?
- Larger associativity
  - lower miss rate, less variation among programs
  - diminishing returns, higher hit latency
- Smaller associativity
  - lower cost
  - lower hit latency
    - Especially important for L1 caches
- Power of 2 associativity required?
- [Plot: hit rate vs. associativity, with diminishing returns]

Classification of Cache Misses
- Compulsory miss
  - first reference to an address (block) always results in a miss
  - subsequent references should hit unless the cache block is displaced for the reasons below
- Capacity miss
  - cache is too small to hold everything needed
  - defined as the misses that would occur even in a fully-associative cache (with optimal replacement) of the same capacity
- Conflict miss
  - defined as any miss that is neither a compulsory nor a capacity miss
(A small classification sketch follows below.)
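
A small sketch of this classification applied to a reference stream of block addresses. The capacity definition above uses optimal replacement in the fully-associative reference cache; this sketch uses LRU as a simple stand-in, so it is an approximation:

```python
def classify_misses(refs, num_sets, ways):
    """Split the misses of a set-associative LRU cache into compulsory,
    capacity, and conflict. Capacity misses are approximated as those that a
    fully-associative LRU cache of the same total capacity would also take."""
    def miss_indices(num_sets, ways):
        sets = [[] for _ in range(num_sets)]
        missed = set()
        for i, block in enumerate(refs):
            s = sets[block % num_sets]
            if block in s:
                s.remove(block); s.insert(0, block)       # LRU promotion
            else:
                missed.add(i)
                if len(s) == ways:
                    s.pop()                               # evict LRU
                s.insert(0, block)
        return missed

    seen, compulsory = set(), set()
    for i, block in enumerate(refs):
        if block not in seen:
            compulsory.add(i); seen.add(block)
    actual = miss_indices(num_sets, ways)
    fully_assoc = miss_indices(1, num_sets * ways)        # same capacity, 1 set
    capacity = (actual & fully_assoc) - compulsory
    conflict = actual - fully_assoc - compulsory
    return len(compulsory), len(capacity), len(conflict)

refs = [0, 8, 0, 8, 1, 2, 3, 0, 8]        # blocks 0 and 8 collide when num_sets=8
print(classify_misses(refs, num_sets=8, ways=1))   # (5, 0, 4)
```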

How to Reduce Each Miss Type
- Compulsory
  - Caching cannot help
  - Prefetching
- Conflict
  - More associativity
  - Other ways to get more associativity without making the cache associative
    - Victim cache
    - Hashing
    - Software hints?
- Capacity
  - Utilize cache space better: keep blocks that will be referenced
  - Software management: divide working set such that each “phase” fits in cache

Improving Cache “Performance”
- Remember
  - Average memory access time (AMAT) = ( hit-rate * hit-latency ) + ( miss-rate * miss-latency )
- Reducing miss rate
  - Caveat: reducing miss rate can reduce performance if more costly-to-refetch blocks are evicted
- Reducing miss latency/cost
- Reducing hit latency/cost

Improving Basic Cache Performance
- Reducing miss rate
  - More associativity
  - Alternatives/enhancements to associativity
    - Victim caches, hashing, pseudo-associativity, skewed associativity
  - Better replacement/insertion policies
  - Software approaches
- Reducing miss latency/cost
  - Multi-level caches
  - Critical word first
  - Subblocking/sectoring
  - Better replacement/insertion policies
  - Non-blocking caches (multiple cache misses in parallel)
  - Multiple accesses per cycle
  - Software approaches