CS203A Graduate Computer Architecture
Lecture 14: Cache Design
Adapted from Prof. David Culler's CS252 notes (1/31/02)

How to Improve Cache Performance?

AMAT = Hit Time + Miss Rate x Miss Penalty

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

Where Do Misses Come From?

• Classifying misses: the 3 Cs
  – Compulsory: the first access to a block cannot be in the cache, so the block must be brought in. Also called cold-start misses or first-reference misses. (Misses even in an infinite cache.)
  – Capacity: if the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur as blocks are discarded and later retrieved. (Misses in a fully associative cache of size X.)
  – Conflict: if the block-placement strategy is set-associative or direct-mapped, conflict misses (in addition to compulsory and capacity misses) will occur because a block can be discarded and later retrieved when too many blocks map to its set. Also called collision misses or interference misses. (Misses in an N-way set-associative cache of size X.)
• A 4th "C":
  – Coherence: misses caused by cache coherence.

3Cs Absolute Miss Rate (SPEC92)

[Figure: miss rate per type (compulsory, capacity, conflict) vs. cache size from 1 KB to 128 KB, with the conflict component broken down by 1-, 2-, 4-, and 8-way associativity]

Cache Size

[Figure: the same miss-rate breakdown vs. cache size]
• Old rule of thumb: 2x size => 25% cut in miss rate
• What does it reduce?

Cache Organization?

• Assume the total cache size is not changed. What happens if we:
  1) change the block size?
  2) change the associativity?
  3) change the compiler?
• Which of the 3Cs is obviously affected?

Larger Block Size (fixed size & associativity)

[Figure: miss rate vs. block size (16 to 256 bytes) for cache sizes 1K to 256K; larger blocks reduce compulsory misses but increase conflict misses]
• What else drives up block size?

Associativity

[Figure: miss rate per type vs. cache size again, highlighting how the conflict component shrinks as associativity grows from 1-way to 8-way]

3Cs Relative Miss Rate

[Figure: miss rate per type normalized to 100% vs. cache size, for 1- to 8-way associativity]
• Flaws: shown for a fixed block size
• Good: insight => invention

Fast Hit Time + Low Conflict => Victim Cache

• How to combine the fast hit time of direct-mapped and still avoid conflict misses?
• Add a small buffer to hold data discarded from the cache
• Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
• Used in Alpha and HP machines
[Diagram: four victim-cache entries, each a tag-and-comparator plus one cache line of data, sitting between the cache and the next lower level in the hierarchy]
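To make the structure concrete, here is a minimal software model of the victim-cache lookup path. It is a sketch written for these notes, not Jouppi's hardware design: a direct-mapped cache backed by a 4-entry fully associative victim buffer, with FIFO replacement in the buffer as a simplifying assumption (sizes and names are illustrative).

```c
#include <stdbool.h>
#include <stdint.h>

#define SETS    64   /* direct-mapped main cache (one line per set) */
#define VICTIMS 4    /* small fully associative victim buffer       */

typedef struct { bool valid; uint32_t block; } Line;  /* track full block numbers */

static Line cache[SETS];      /* main cache, indexed by block % SETS   */
static Line victim[VICTIMS];  /* victim cache, searched associatively  */
static int  vnext;            /* FIFO replacement cursor for victims   */

/* Move the line displaced from the main cache into the victim buffer. */
static void evict_to_victim(Line displaced) {
    if (displaced.valid) {
        victim[vnext] = displaced;
        vnext = (vnext + 1) % VICTIMS;
    }
}

/* Returns true on a hit in either structure (data payloads omitted). */
bool access_block(uint32_t block) {
    uint32_t set = block % SETS;

    if (cache[set].valid && cache[set].block == block)
        return true;                         /* fast direct-mapped hit */

    for (int i = 0; i < VICTIMS; i++) {      /* slower victim-cache probe */
        if (victim[i].valid && victim[i].block == block) {
            Line tmp = cache[set];           /* swap: victim line moves up,   */
            cache[set] = victim[i];          /* conflicting line moves down   */
            victim[i] = tmp;
            return true;
        }
    }

    evict_to_victim(cache[set]);             /* true miss: fetch from below */
    cache[set].valid = true;
    cache[set].block = block;
    return false;
}
```

The key behavior is the swap on a victim hit: the conflicting line and the victim trade places, so two blocks that ping-pong in the same set both stay close at hand instead of repeatedly missing to the next level.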
Alpha 21264 Cache Organization

[Figure: Alpha 21264 cache organization]

Reducing Misses by Hardware Prefetching of Instructions & Data

• E.g., instruction prefetching:
  – The Alpha 21064 fetches 2 blocks on a miss (sequential prefetch)
  – Cache pollution if the prefetched block goes unused: useful data may be replaced unnecessarily!
  – Solution: put the incoming block in a "stream buffer"; on a miss, check the stream buffer
• Works with data blocks too:
  – Jouppi [1990]: 1 data stream buffer caught 25% of the misses from a 4KB cache; 4 streams caught 43%
  – Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches
  – Data prediction is difficult
• Prefetching relies on having extra memory bandwidth that can be used without penalty
• Question: what to prefetch, and when? Instruction prefetch is fine, but data?

Leave It to the Programmer? Software Prefetching Data

• Data prefetch:
  – Explicit prefetch instructions
  – Register prefetch: load data into a register (HP PA-RISC loads)
  – Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC V9)
  – Special prefetching instructions cannot cause faults; a form of speculative execution
• Prefetching comes in two flavors:
  – Binding prefetch: loads directly into a register. Must be the correct address and register!
  – Non-binding prefetch: loads into the cache. Can be incorrect. Faults?
• Issuing prefetch instructions takes time:
  – Is the cost of issuing prefetches < the savings from reduced misses?
  – Wider superscalar issue reduces the difficulty of finding issue bandwidth
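The ISAs named above are the historical examples; as a present-day illustration of a non-binding cache prefetch, here is a sketch using the __builtin_prefetch intrinsic of GCC/Clang (our choice of toolchain, not something the original slides cover). The prefetch distance is a hypothetical tuning knob.

```c
#include <stddef.h>

/* Sum an array while prefetching ahead. __builtin_prefetch is a
 * non-binding prefetch: it only warms the cache, never faults, and
 * may be dropped, so correctness does not depend on it. */
double sum_with_prefetch(const double *a, size_t n) {
    const size_t dist = 16;   /* prefetch distance in elements (tuning knob) */
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + dist < n)
            __builtin_prefetch(&a[i + dist], /*rw=*/0, /*locality=*/1);
        s += a[i];
    }
    return s;
}
```

Because the prefetch is non-binding, a wrong or out-of-range hint costs only an issue slot and perhaps a wasted memory access, never an incorrect result; that is exactly the cost-versus-savings trade-off posed in the last bullet above.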
Summary: Miss Rate Reduction

CPU time = IC x (CPI_execution + Memory accesses per instruction x Miss rate x Miss penalty) x Clock cycle time

• 3 Cs: Compulsory, Capacity, Conflict
  0. Larger cache
  1. Reduce misses via larger block size
  2. Reduce misses via higher associativity
  3. Reduce misses via a victim cache
  4. Reduce misses via pseudo-associativity
  5. Reduce misses by HW prefetching of instructions and data
  6. Reduce misses by SW prefetching of data
  7. Reduce misses by compiler optimizations
• Prefetching comes in two flavors:
  – Binding prefetch: loads directly into a register. Must be the correct address and register!
  – Non-binding prefetch: loads into the cache. Can be incorrect. Frees HW/SW to guess!

Review: Improving Cache Performance

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

1. Reducing Miss Penalty: Read Priority over Write on Miss

• Write-through with write buffers => RAW conflicts with main-memory reads on cache misses
  – Simply waiting for the write buffer to empty can increase the read miss penalty (by 50% on the old MIPS 1000)
  – Instead, check the write-buffer contents before the read; if there are no conflicts, let the memory access continue
• Write-back: we want the buffer to hold displaced blocks
  – Consider a read miss replacing a dirty block
  – Normal: write the dirty block to memory, then do the read
  – Instead: copy the dirty block to a write buffer, do the read, and then do the write
  – The CPU stalls less, since it restarts as soon as the read is done

2. Reduce Miss Penalty: Early Restart and Critical Word First

• Don't wait for the full block to be loaded before restarting the CPU
  – Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  – Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling in the rest of the block. Also called wrapped fetch or requested word first
• Generally useful only with large blocks
• Spatial locality works against it: programs tend to want the next sequential word anyway, so it is not clear how much early restart actually buys

3. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses

• A non-blocking (lockup-free) cache allows the data cache to continue to supply hits during a miss
  – requires full/empty (F/E) bits on registers, or out-of-order execution
  – requires multi-bank memories
• "Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring CPU requests
• "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
  – Significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses
  – Requires multiple memory banks (otherwise the overlap cannot be supported)
  – The Pentium Pro allows 4 outstanding memory misses

4. Add a Second-Level Cache

• L2 equations:
  AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
  Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
  AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
• Definitions:
  – Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2)
  – Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2)
  – The global miss rate is what matters

Comparing Local and Global Miss Rates

[Figure: local vs. global miss rates for a 32 KByte first-level cache with increasing second-level cache size, plotted on linear and log cache-size scales]
• The global miss rate is close to the single-level-cache rate, provided L2 >> L1
• Don't use the local miss rate
• L2 is not tied to the CPU clock cycle!
• What matters is cost & A.M.A.T.
• Generally fast hit times and fewer misses
• Since hits are few, target miss reduction

Example

• For every 1000 instructions there are 40 misses in L1 and 20 misses in L2; a hit takes 1 cycle in L1 and 10 cycles in L2; the miss penalty from L2 to memory is 100 cycles; and there are 1.5 memory references per instruction. What are the AMAT and the average stall cycles per instruction?
  – AMAT = [1 + 40/1000 x (10 + 20/40 x 100)] cycles = 3.4 cycles
  – AMAT without L2 = 1 + 40/1000 x 100 = 5 cycles => an improvement of 1.6 cycles due to L2
  – Average stall cycles per instruction = 1.5 x 40/1000 x 10 + 1.5 x 20/1000 x 100 = 3.6 cycles
• Note: we have not distinguished reads and writes; L2 is accessed only on an L1 miss, i.e., a write-back cache
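To make the example easy to replay, here is a tiny self-contained C program that reproduces the slide's arithmetic exactly; the variable names are ours, but the formulas are the slide's.

```c
#include <stdio.h>

int main(void) {
    /* Numbers from the example slide (misses are per 1000 instructions). */
    double l1_misses = 40.0, l2_misses = 20.0;
    double l1_hit = 1.0, l2_hit = 10.0;          /* hit times in cycles      */
    double mem_penalty = 100.0;                  /* L2 -> memory, in cycles  */
    double refs_per_instr = 1.5;

    /* L2 local miss rate: L2 misses per L2 access (= per L1 miss). */
    double l2_local = l2_misses / l1_misses;     /* 20/40 = 0.5 */

    /* AMAT with L2: the L1 miss penalty is the L2 hit time plus the
     * L2 local miss rate times the memory penalty. */
    double amat = l1_hit + (l1_misses / 1000.0) * (l2_hit + l2_local * mem_penalty);

    /* AMAT without L2: every L1 miss pays the full memory penalty. */
    double amat_no_l2 = l1_hit + (l1_misses / 1000.0) * mem_penalty;

    /* Average stall cycles per instruction, per the slide's formula. */
    double stalls = refs_per_instr * (l1_misses / 1000.0) * l2_hit
                  + refs_per_instr * (l2_misses / 1000.0) * mem_penalty;

    printf("AMAT with L2    = %.1f cycles\n", amat);        /* 3.4 */
    printf("AMAT without L2 = %.1f cycles\n", amat_no_l2);  /* 5.0 */
    printf("Stalls/instr    = %.1f cycles\n", stalls);      /* 3.6 */
    return 0;
}
```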
Reducing Misses: Which Apply to L2 Cache?

• Reducing miss rate:
  1. Reduce misses via larger block size
  2. Reduce conflict misses via higher associativity
  3. Reduce conflict misses via a victim cache
  4. Reduce conflict misses via pseudo-associativity
  5. Reduce misses by HW prefetching of instructions and data
  6. Reduce misses by SW prefetching of data
  7. Reduce capacity/conflict misses by compiler optimizations

L2 Cache Block Size & A.M.A.T.

[Figure: relative CPU time vs. L2 block size, for a 32KB L1 and an 8-byte path to memory: 16B => 1.95, 32B => 1.54, 64B => 1.36, 128B => 1.28, 256B => 1.27, 512B => 1.34]

Reducing Miss Penalty Summary

CPU time = IC x (CPI_execution + Memory accesses per instruction x Miss rate x Miss penalty) x Clock cycle time

• Four techniques:
  – Read priority over write on miss
  – Early restart and critical word first on miss
  – Non-blocking caches (hit under miss, miss under miss)
  – Second-level cache
• Can be applied recursively to multilevel caches
  – The danger is that the time to DRAM grows with multiple levels in between
  – First attempts at L2 caches can make things worse, since the increased worst case is worse

What Is the Impact of What You've Learned About Caches?

[Figure: processor vs. DRAM performance, 1980-2000, log scale from 1 to 1000; the CPU curve pulls steadily away from the DRAM curve]
• 1960-1985: Speed = f(no. of operations)
• 1990: pipelined execution & fast clock rates, out-of-order execution, superscalar instruction issue
• 1998: Speed = f(non-cached memory accesses)
• Superscalar, out-of-order machines hide an L1 data-cache miss (~5 clocks) but not an L2 cache miss (~50 clocks)?

Cache Optimization Summary

  Technique                           MR   MP   HT   Complexity
  Larger block size                   +    –         0
  Higher associativity                +         –    1
  Victim caches                       +              2
  Pseudo-associative caches           +              2
  HW prefetching of instr/data        +              2
  Compiler-controlled prefetching     +              3
  Compiler reduce misses              +              0
  Priority to read misses                  +         1
  Early restart & critical word 1st        +         2
  Non-blocking caches                      +         3
  Second-level caches                      +         2

  (MR = miss rate, MP = miss penalty, HT = hit time; + helps, – hurts.)

How to Improve Cache Performance?

AMAT = Hit Time + Miss Rate x Miss Penalty

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

1. Small and Simple Caches

• Small on-chip L1 caches => less access time
• A direct-mapped cache is faster because it needs no comparison and selection among multiple blocks in a set, but it has a higher miss ratio
• Compromise: direct-mapped L1 cache and set-associative L2 cache
• How to predict cache access time at the design stage? Use the CACTI program

2. Virtual Caches

• No virtual-to-physical address translation on a hit: no TLB on the hit path
• Problems (read Section 5.7):
  – How to ensure protection? Add a bit to distinguish between processes?
  – Flushing the cache on every process switch is too much overhead and increases the miss rate; instead, add a process-identifier tag (PID) to each block
  – The O/S may map two or more virtual addresses to the same physical address, so two copies of the same data can live in the cache: the aliasing problem!

3. Pipeline the Cache Access

– Does it not increase the hit latency?

4. Trace Caches

– What are they? Read pp. 447. Homework.
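One row of the optimization summary table, "Compiler reduce misses", is easy to make concrete. Below is a classic loop-interchange sketch in C, written for these notes (the array size is illustrative): both functions do the same work, but the second walks memory in row-major order and so uses every word of each fetched block before it is evicted.

```c
#define N 2048

/* C stores arrays row-major: x[i][j] and x[i][j+1] are adjacent in memory. */
static double x[N][N];

/* Column-major traversal: consecutive accesses are N*sizeof(double)
 * bytes apart, so nearly every access touches a different cache block. */
void scale_strided(void) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            x[i][j] *= 2.0;
}

/* After loop interchange: sequential traversal exploits spatial
 * locality, cutting capacity and conflict misses with zero hardware
 * cost (hence complexity 0 in the summary table). */
void scale_sequential(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            x[i][j] *= 2.0;
}
```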