18-447 Computer Architecture
Lecture 8: Pipelining
Prof. Onur Mutlu
Carnegie Mellon University
Spring 2013, 2/4/2013
Reminder: Homework 2

- Homework 2 out
  - Due February 11 (next Monday)
  - LC-3b microcode
  - ISA concepts, ISA vs. microarchitecture, microcoded machines
2
Reminder: Lab Assignment 2

- Lab Assignment 1.5
  - Verilog practice
  - Not to be turned in
- Lab Assignment 2
  - Due Feb 15
  - Single-cycle MIPS implementation in Verilog
- All labs are individual assignments
  - No collaboration; please respect the honor code
3
Lookahead: Extra Credit for Lab Assignment 2

- Complete your normal (single-cycle) implementation first, and get it checked off in lab.
- Then, implement the MIPS core using a microcoded approach similar to what we will discuss in class.
- We are not specifying any particular details of the microcode format or the microarchitecture; you can be creative.
- For the extra credit, the microcoded implementation should execute the same programs that your ordinary implementation does, and you should demo it by the normal lab deadline.
- You will get a maximum of 4% of the course grade
- Document what you have done and demonstrate well
4
Readings for Today

- Pipelining
  - P&H Chapter 4.5-4.8
  - Optional: Hamacher et al. book, Chapter 6, “Pipelining”
- Pipelined LC-3b Microarchitecture
  - http://www.ece.cmu.edu/~ece447/s13/lib/exe/fetch.php?media=18447-lc3b-pipelining.pdf
5
Today’s Agenda

- Finish microprogrammed microarchitectures
- Start pipelining
6
Review: Last Lecture

- An exercise in microprogramming
7
Review: An Exercise in Microprogramming
8
Handouts

- 7 pages of Microprogrammed LC-3b design
- http://www.ece.cmu.edu/~ece447/s13/doku.php?id=manuals
- http://www.ece.cmu.edu/~ece447/s13/lib/exe/fetch.php?media=lc3b-figures.pdf
9
A Simple LC-3b Control and Datapath
10
[Figure: the LC-3b state machine — fetch states 18/19 (MAR <- PC, PC <- PC + 2), 33 (MDR <- M[MAR], wait for R), 35 (IR <- MDR), decode state 32 ([IR[15:12]] dispatch, BEN <- IR[11] & N + IR[10] & Z + IR[9] & P), and per-opcode execution states for ADD, AND, XOR, BR, JMP, TRAP, JSR, SHF, LEA, LDB, LDW, STB, STW, RTI.
NOTES:
B+off6 : Base + SEXT[offset6]
PC+off9 : PC + SEXT[offset9]
*OP2 may be SR2 or SEXT[imm5]
** [15:8] or [7:0] depending on MAR[0]]
State Machine for LDW

State 18 (010010)
State 33 (100001)
State 35 (100011)
State 32 (100000)
State 6 (000110)
State 25 (011001)
State 27 (011011)

[Figure C.5: The microsequencer of the LC-3b base machine — J[5:0], COND1/COND0, and IRD from the microinstruction, together with BEN, R (Ready), IR[11] (Addr. Mode), and 0,0,IR[15:12], produce the 6-bit Address of Next State]
… unused opcodes, the microarchitecture would execute a sequence of microinstructions, starting at state 10 or state 11, depending on which illegal opcode was being decoded. In both cases, the sequence of microinstructions would respond to the fact that an instruction with an illegal opcode had been fetched.

Several signals necessary to control the data path and the microsequencer are not among those listed in Tables C.1 and C.2. They are DR, SR1, BEN, and R. Figure C.6 shows the additional logic needed to generate DR, SR1, and BEN. The remaining signal, R, is a signal generated by the memory in order to allow the …
[Figure C.6: Additional logic required to provide control signals — (a) DRMUX selects DR from IR[11:9] or 111; (b) SR1MUX selects SR1 from IR[11:9] or IR[8:6]; (c) logic computes BEN from IR[11:9] and the condition codes N, Z, P]
[Figure: the control structure — the microsequencer takes R, BEN, and IR[15:11] and produces a 6-bit address into a 2^6 x 35 control store; each 35-bit microinstruction returns 9 bits (J, COND, IRD) to the microsequencer and drives the remaining 26 control signals]
[Figure: the control store — one row per state (000000/State 0 through 111111/State 63), one column per microinstruction bit: J, COND, IRD, LD.MAR, LD.MDR, LD.IR, LD.BEN, LD.REG, LD.CC, LD.PC, GatePC, GateMDR, GateALU, GateMARMUX, GateSHF, PCMUX, DRMUX, SR1MUX, ADDR1MUX, ADDR2MUX, MARMUX, ALUK, MIO.EN, R.W, DATA.SIZE, LSHF1]
Review: End of the Exercise in Microprogramming
20
The Microsequencer: Some Questions

- When is the IRD signal asserted?
- What happens if an illegal instruction is decoded?
- What are condition (COND) bits for?
- How is variable latency memory handled?
- How do you do the state encoding?
  - Minimize number of state variables
  - Start with the 16-way branch
  - Then determine constraint tables and states dependent on COND
21
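The microsequencer behavior described above (Figure C.5) can be sketched as a small Python model. This is a sketch for intuition, not the hardware itself; signal names follow the handout.

```python
def next_state(j, cond, ird, ir, ben, r):
    """Model of the LC-3b microsequencer (Figure C.5).
    j: 6-bit J field, cond: 2-bit COND field, ird: IRD bit,
    ir: 16-bit instruction register, ben: branch enable, r: memory ready."""
    if ird:
        # IRD asserted in state 32: dispatch to 0,0,IR[15:12]
        return (ir >> 12) & 0xF
    # COND bits selectively OR one status bit into J
    if cond == 0b01:            # memory ready
        j |= r << 1             # J[1]
    elif cond == 0b10:          # conditional branch
        j |= ben << 2           # J[2]
    elif cond == 0b11:          # addressing mode
        j |= (ir >> 11) & 1     # J[0]
    return j

# State 33 (MDR <- M[MAR]) loops until memory asserts R, then goes to 35
print(next_state(0b100001, 0b01, 0, 0, 0, 0))  # 33: keep waiting
print(next_state(0b100001, 0b01, 0, 0, 0, 1))  # 35: data ready
```

Note how variable-latency memory falls out of the COND=01 case: the state's J field points back at itself with J[1] clear, and the R bit flips J[1] to exit the wait loop.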
The Control Store: Some Questions

- What control signals can be stored in the control store?
- vs. What control signals have to be generated in hardwired logic?
  - i.e., what signals cannot be available without processing in the datapath?
22
Variable-Latency Memory

- The ready signal (R) enables memory reads/writes to execute correctly
  - Example: the transition from state 33 to state 35 is controlled by the R bit, asserted by memory when memory data is available
- Could we have done this in a single-cycle microarchitecture?
23
The Microsequencer: Advanced Questions

- What happens if the machine is interrupted?
- What if an instruction generates an exception?
- How can you implement a complex instruction using this control structure?
  - Think REP MOVS
24
The Power of Abstraction

- The concept of a control store of microinstructions provides the hardware designer with a new abstraction: microprogramming
- The designer can translate any desired operation into a sequence of microinstructions
- All the designer needs to provide is
  - The sequence of microinstructions needed to implement the desired operation
  - The ability for the control logic to correctly sequence through the microinstructions
  - Any additional datapath control signals needed (no need if the operation can be “translated” into existing control signals)
25
Let’s Do Some More Microprogramming

- Implement REP MOVS in the LC-3b microarchitecture
- What changes, if any, do you make to the
  - state machine?
  - datapath?
  - control store?
  - microsequencer?
- Show all changes and microinstructions
- Coming up in Homework 3
26
Aside: Alignment Correction in Memory

- Remember unaligned accesses
- LC-3b has byte load and byte store instructions that move data not aligned at the word-address boundary
  - Convenience to the programmer/compiler
- How does the hardware ensure this works correctly?
  - Take a look at state 29 for LDB
  - States 24 and 17 for STB
  - Additional logic to handle unaligned accesses
27
Aside: Memory Mapped I/O

- Address control logic determines whether the specified address of LDx and STx goes to memory or to I/O devices
- Correspondingly enables memory or I/O devices and sets up muxes
- Another instance where the final control signals (e.g., MEM.EN or INMUX/2) cannot be stored in the control store
  - Dependent on address
28
Advantages of Microprogrammed Control

- Allows a very simple datapath to do powerful computation by controlling the datapath (using a sequencer)
  - High-level ISA translated into microcode (sequence of microinstructions)
  - Microcode enables a minimal datapath to emulate an ISA
  - Microinstructions can be thought of as a user-invisible ISA
- Enables easy extensibility of the ISA
  - Can support a new instruction by changing the ucode
  - Can support complex instructions as a sequence of simple microinstructions
- If I can sequence an arbitrary instruction then I can sequence an arbitrary “program” as a microprogram sequence
  - will need some new state (e.g. loop counters) in the microcode for sequencing more elaborate programs
29
Update of Machine Behavior

- The ability to update/patch microcode in the field (after a processor is shipped) enables
  - Ability to add new instructions without changing the processor!
  - Ability to “fix” buggy hardware implementations
- Examples
  - IBM 370 Model 145: microcode stored in main memory, can be updated after a reboot
  - IBM System z: Similar to 370/145.
    - Heller and Farrell, “Millicode in an IBM zSeries processor,” IBM JR&D, May/Jul 2004.
  - B1700 microcode can be updated while the processor is running
    - User-microprogrammable machine!
30
An Example Microcoded Multi-Cycle MIPS Design

- P&H, Appendix D
- Any ISA can be implemented this way
  - We will not cover this in class
  - However, you can do the extra credit assignment for Lab 2
    - Partial credit even if your full design does not work
    - Maximum 4% of your grade in the course
31
A Microprogrammed MIPS Design: Lab 2 Extra Credit
32
Microcoded Multi-Cycle MIPS Design
[Based on original figure from P&H CO&D, COPYRIGHT
2004 Elsevier. ALL RIGHTS RESERVED.]
33
Control Logic for MIPS FSM
[Based on original figure from P&H CO&D, COPYRIGHT
2004 Elsevier. ALL RIGHTS RESERVED.]
34
Microprogrammed Control for MIPS FSM
[Based on original figure from P&H CO&D, COPYRIGHT
2004 Elsevier. ALL RIGHTS RESERVED.]
35
End: A Microprogrammed MIPS Design: Lab 2 Extra Credit
36
Horizontal Microcode

[Figure: the microprogram counter (n-bit uPC input) indexes the microcode storage; each microinstruction directly supplies the k-bit “control” output — all the datapath control signals (ALUSrcA, IorD, IRWrite, PCWrite, PCWriteCond, …) — plus sequencing control; an adder and address select logic, with inputs from the instruction register opcode field, produce the next uPC. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]

Control Store: 2^n x k bits (not including sequencing)
37
Vertical Microcode

1-bit signal means do this RT (or combination of RTs)
  “PC <- PC+4”
  “PC <- ALUOut”
  “PC <- PC[31:28],IR[25:0],2’b00”
  “IR <- MEM[PC]”
  “A <- RF[IR[25:21]]”
  “B <- RF[IR[20:16]]”
  …

[Figure: the microprogram counter (n-bit uPC input) indexes the microcode storage, whose m-bit output feeds a second ROM that expands it into the k datapath control signals (ALUSrcA, IorD, IRWrite, PCWrite, PCWriteCond, …); an adder and address select logic, with inputs from the instruction register opcode field, produce the next uPC. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]

If done right (i.e., m<<n and m<<k), the two ROMs together (2^n x m + 2^m x k bits) should be smaller than the horizontal microcode ROM (2^n x k bits)
38
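The size claim is easy to check numerically. The n, m, k values below are illustrative (k = 35 echoes the LC-3b's 35 control signals; the rest are assumptions for the example):

```python
def horizontal_bits(n, k):
    # one wide ROM: 2^n microinstructions x k control bits each
    return (2 ** n) * k

def vertical_bits(n, m, k):
    # narrow ROM (2^n x m) plus an expansion ROM (2^m x k)
    return (2 ** n) * m + (2 ** m) * k

n, m, k = 6, 5, 35  # illustrative: 64 states, 32 distinct RT combinations
print(horizontal_bits(n, k))   # 2240 bits
print(vertical_bits(n, m, k))  # 1440 bits: smaller, since m << n and m << k
```

The saving comes from factoring out repeated control-signal patterns: the wide ROM stores each pattern once per state, the vertical scheme stores each distinct pattern once overall.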
Nanocode and Millicode

- Nanocode: a level below traditional µcode
  - µprogrammed control for sub-systems (e.g., a complicated floating-point module) that acts as a slave in a µcontrolled datapath
- Millicode: a level above traditional µcode
  - ISA-level subroutines that can be called by the µcontroller to handle complicated operations and system functions
  - E.g., Heller and Farrell, “Millicode in an IBM zSeries processor,” IBM JR&D, May/Jul 2004.
- In both cases, we avoid complicating the main µcontroller
- You can think of these as “microcode” at different levels of abstraction
39
Nanocode Concept Illustrated

a “µcoded” processor implementation: ROM -> µPC -> processor datapath
a “µcoded” FPU implementation: ROM -> µPC -> arithmetic datapath

We refer to this as “nanocode” when a µcoded subsystem is embedded in a µcoded system
40
Multi-Cycle vs. Single-Cycle uArch

Advantages

Disadvantages

You should be very familiar with this right now
41
Microprogrammed vs. Hardwired Control

Advantages

Disadvantages

You should be very familiar with this right now
42
Can We Do Better?

- What limitations do you see with the multi-cycle design?
- Limited concurrency
  - Some hardware resources are idle during different phases of the instruction processing cycle
  - “Fetch” logic is idle when an instruction is being “decoded” or “executed”
  - Most of the datapath is idle when a memory access is happening
43
Can We Use the Idle Hardware to Improve Concurrency?

- Goal: Concurrency -> throughput (more “work” completed in one cycle)
- Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction
  - E.g., when an instruction is being decoded, fetch the next instruction
  - E.g., when an instruction is being executed, decode another instruction
  - E.g., when an instruction is accessing data memory (ld/st), execute the next instruction
  - E.g., when an instruction is writing its result into the register file, access data memory for the next instruction
44
Pipelining: Basic Idea

- More systematically:
  - Pipeline the execution of multiple instructions
  - Analogy: “Assembly line processing” of instructions
- Idea:
  - Divide the instruction processing cycle into distinct “stages” of processing
  - Ensure there are enough hardware resources to process one instruction in each stage
  - Process a different instruction in each stage
    - Instructions consecutive in program order are processed in consecutive stages
- Benefit: Increases instruction processing throughput (1/CPI)
- Downside: Start thinking about this…
45
Example: Execution of Four Independent ADDs

- Multi-cycle: 4 cycles per instruction

  F D E W
          F D E W
                  F D E W
                          F D E W
  ------------------------------> Time

- Pipelined: 4 cycles per 4 instructions (steady state)

  F D E W
    F D E W
      F D E W
        F D E W
  ------------> Time
46
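The two timelines above reduce to a simple cycle count. A quick sketch, assuming the 4-stage F/D/E/W pipeline of the example and no stalls:

```python
def multicycle_cycles(n_insts, stages=4):
    # each instruction occupies the machine for all stages before the next starts
    return n_insts * stages

def pipelined_cycles(n_insts, stages=4):
    # fill the pipeline once, then one instruction completes per cycle
    return stages + (n_insts - 1)

print(multicycle_cycles(4))  # 16 cycles for the 4 ADDs
print(pipelined_cycles(4))   # 7 cycles; 1 per cycle in steady state
```

For large instruction counts the pipelined cycle count approaches n_insts, i.e., throughput approaches one instruction per cycle.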
The Laundry Analogy

[Figure: sequential laundry — tasks A, B, C, D each go through washer, dryer, fold, and put-away, one task at a time, from 6 PM to 2 AM]

- “place one dirty load of clothes in the washer”
- “when the washer is finished, place the wet load in the dryer”
- “when the dryer is finished, take out the dry load and fold”
- “when folding is finished, ask your roommate (??) to put the clothes away”

- steps to do a load are sequentially dependent
- no dependence between different loads
- different steps do not share resources

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
47
Pipelining Multiple Loads of Laundry

[Figure: pipelined laundry — loads A, B, C, D overlap in time, each starting as soon as the washer frees up]

- 4 loads of laundry in parallel
- no additional resources
- throughput increased by 4
- latency per load is the same

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
48
Pipelining Multiple Loads of Laundry: In Practice

[Figure: pipelined laundry where the dryer takes longer than the washer — loads queue up waiting for the dryer]

the slowest step decides throughput

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
49
Pipelining Multiple Loads of Laundry: In Practice

[Figure: pipelined laundry with a second dryer added — consecutive loads dry in parallel]

Throughput restored (2 loads per hour) using 2 dryers

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
50
An Ideal Pipeline

- Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)
- Repetition of identical operations
  - The same operation is repeated on a large number of different inputs
- Repetition of independent operations
  - No dependencies between repeated operations
- Uniformly partitionable suboperations
  - Processing can be evenly divided into uniform-latency suboperations (that do not share resources)
- Fitting examples: automobile assembly line, doing laundry
- What about the instruction processing “cycle”?
51
Ideal Pipelining

combinational logic (F,D,E,M,W): T psec                  BW = ~(1/T)
2 stages: T/2 ps (F,D,E) | T/2 ps (M,W)                  BW = ~(2/T)
3 stages: T/3 ps (F,D) | T/3 ps (E,M) | T/3 ps (M,W)     BW = ~(3/T)
52
More Realistic Pipeline: Throughput

- Nonpipelined version with delay T
  BW = 1/(T+S) where S = latch delay

- k-stage pipelined version
  BW_k-stage = 1 / (T/k + S)
  BW_max = 1 / (1 gate delay + S)
53
More Realistic Pipeline: Cost

- Nonpipelined version with combinational cost G
  Cost = G + L where L = latch cost

- k-stage pipelined version
  Cost_k-stage = G + L*k
54
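The cost formula can likewise be evaluated directly; the G and L values below are illustrative gate/latch counts, not measurements:

```python
def pipeline_cost(G, L, k):
    # total cost: combinational logic G plus one set of latches per stage
    return G + L * k

G, L = 10000, 200  # illustrative: logic cost and per-stage latch cost
print(pipeline_cost(G, L, 1))  # 10200
print(pipeline_cost(G, L, 8))  # 11600: cost grows linearly in k
```

Together with the throughput formula on the previous slide, this is the classic cost/performance trade-off in choosing the number of stages.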
Pipelining Instruction Processing
55
Remember: The Instruction Processing Cycle

1. Instruction fetch (IF)
2. Instruction decode and register operand fetch (ID/RF)
3. Execute/Evaluate memory address (EX/AG)
4. Memory operand fetch (MEM)
5. Store/writeback result (WB)

(the generic cycle: Fetch, Decode, Evaluate Address, Fetch Operands, Execute, Store Result)
56
Remember the Single-Cycle Uarch

[Figure: the single-cycle MIPS datapath — PC, instruction memory, register file, ALU, data memory, sign extend, shift-left-2 units, and control (RegDst, Jump, Branch, MemRead, MemtoReg, ALUOp, MemWrite, ALUSrc, RegWrite); PCSrc1 = Jump, PCSrc2 = Br Taken; combinational delay T. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]

BW = ~(1/T)
57
Dividing Into Stages

[Figure: the single-cycle datapath divided into five stages — IF: Instruction fetch (200ps), ID: Instruction decode/register file read (100ps), EX: Execute/address calculation (200ps), MEM: Memory access (200ps), WB: Write back (100ps); the RF write and PC mux are ignored for now. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]

Is this the correct partitioning?
Why not 4 or 6 stages? Why not different boundaries?
58
Instruction Pipeline Throughput

[Figure: three loads (lw $1, 100($0); lw $2, 200($0); lw $3, 300($0)) executed nonpipelined — one instruction completes every 800ps — vs. pipelined — a new instruction starts every 200ps in steady state. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]

5-stage speedup is 4, not 5 as predicted by the ideal model. Why?
59
Enabling Pipelined Processing: Pipeline Registers

No resource is used by more than 1 stage!

[Figure: the five-stage datapath (IF, ID, EX, MEM, WB) with pipeline registers between stages — IF/ID (PCD+4, IRD), ID/EX (PCE+4, AE, BE, ImmE), EX/MEM (nPCM, AoutM, BM), MEM/WB (AoutW, MDRW); each stage takes T/k ps. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]
60
Pipelined Operation Example

[Figure: an lw instruction flowing through the pipeline over five clock cycles — Instruction fetch, Instruction decode, Execution, Memory, Write back — with the active stage and its pipeline registers (IF/ID, ID/EX, EX/MEM, MEM/WB) highlighted in each cycle. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]

All instruction classes must follow the same path and timing through the pipeline stages.
Any performance impact?
61
Pipelined Operation Example

[Figure: two instructions, lw $10, 20($1) followed by sub $11, $2, $3, flowing through the pipeline over six clock cycles; in each cycle the two instructions occupy adjacent stages (fetch, decode, execution, memory, write back). Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]
62
Illustrating Pipeline Operation: Operation View

        t0   t1   t2   t3   t4   t5
Inst0   IF   ID   EX   MEM  WB
Inst1        IF   ID   EX   MEM  WB
Inst2             IF   ID   EX   MEM  ...
Inst3                  IF   ID   EX   ...
Inst4                       IF   ID   ...
63
Illustrating Pipeline Operation: Resource View

      t0   t1   t2   t3   t4   t5   t6   t7   t8   t9   t10
IF    I0   I1   I2   I3   I4   I5   I6   I7   I8   I9   I10
ID         I0   I1   I2   I3   I4   I5   I6   I7   I8   I9
EX              I0   I1   I2   I3   I4   I5   I6   I7   I8
MEM                  I0   I1   I2   I3   I4   I5   I6   I7
WB                        I0   I1   I2   I3   I4   I5   I6
64
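Both views follow one rule: in a stall-free pipeline, instruction i (0-based issue order) occupies stage number (t - i) at cycle t. A sketch that reproduces a column of the resource view:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def stage_at(inst, t):
    # which stage instruction `inst` occupies at cycle t, or None if not in flight
    s = t - inst
    return STAGES[s] if 0 <= s < len(STAGES) else None

# Cycle t4: I0 writes back while I4 is fetched
print([stage_at(i, 4) for i in range(5)])  # ['WB', 'MEM', 'EX', 'ID', 'IF']
```

Reading the formula per-instruction gives the operation view (rows = instructions); reading it per-stage gives the resource view (rows = stages).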
Control Points in a Pipeline

[Figure: the pipelined datapath (IF/ID, ID/EX, EX/MEM, MEM/WB) annotated with its control points — PCSrc, RegWrite, Branch, MemWrite, ALUSrc, MemtoReg, MemRead, ALUOp, RegDst, ALU control. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]

Identical set of control points as the single-cycle datapath!!
65
Control Signals in a Pipeline

- For a given instruction
  - same control signals as single-cycle, but
  - control signals required at different cycles, depending on stage
- decode once using the same logic as single-cycle and buffer control signals until consumed
  [Figure: the Control block produces WB, M, and EX signal groups, which travel down the ID/EX, EX/MEM, and MEM/WB pipeline registers]
- or carry relevant “instruction word/field” down the pipeline and decode locally within each stage (still same logic)
- Which one is better?
66
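The first alternative (decode once, buffer the signals) can be sketched in Python. The opcodes and signal names here are an illustrative subset, not the full MIPS control table:

```python
def decode(opcode):
    # same decode logic as the single-cycle design (illustrative subset)
    table = {
        "lw":  {"EX": {"ALUSrc": 1}, "M": {"MemRead": 1},  "WB": {"RegWrite": 1}},
        "sw":  {"EX": {"ALUSrc": 1}, "M": {"MemWrite": 1}, "WB": {}},
        "add": {"EX": {"ALUSrc": 0}, "M": {},              "WB": {"RegWrite": 1}},
    }
    return table[opcode]

# Pipeline registers carry only the not-yet-consumed signal groups:
id_ex  = decode("lw")                        # EX, M, WB groups leave decode
ex_mem = {g: id_ex[g] for g in ("M", "WB")}  # EX signals consumed in EX
mem_wb = {g: ex_mem[g] for g in ("WB",)}     # M signals consumed in MEM
print(mem_wb)  # {'WB': {'RegWrite': 1}}
```

Each stage strips off the group it consumes and forwards the rest, which is exactly what the shrinking WB/M/EX boxes in the figure depict.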
Pipelined Control Signals

[Figure: the pipelined datapath with control signals buffered in the pipeline registers — the Control block decodes in ID, and the WB, M, and EX signal groups travel through ID/EX, EX/MEM, and MEM/WB alongside the instruction. Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]]
67
An Ideal Pipeline

- Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)
- Repetition of identical operations
  - The same operation is repeated on a large number of different inputs
- Repetition of independent operations
  - No dependencies between repeated operations
- Uniformly partitionable suboperations
  - Processing can be evenly divided into uniform-latency suboperations (that do not share resources)
- Fitting examples: automobile assembly line, doing laundry
- What about the instruction processing “cycle”?
68
Instruction Pipeline: Not An Ideal Pipeline

- Identical operations ... NOT!
  => different instructions do not need all stages
  - Forcing different instructions to go through the same multi-function pipe
    => external fragmentation (some pipe stages idle for some instructions)
- Uniform suboperations ... NOT!
  => difficult to balance the different pipeline stages
  - Not all pipeline stages do the same amount of work
    => internal fragmentation (some pipe stages are too fast but take the same clock cycle time)
- Independent operations ... NOT!
  => instructions are not independent of each other
  - Need to detect and resolve inter-instruction dependencies to ensure the pipeline operates correctly
    => Pipeline is not always moving (it stalls)
69
Issues in Pipeline Design

- Balancing work in pipeline stages
  - How many stages and what is done in each stage
- Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow
  - Handling dependences
    - Data
    - Control
  - Handling resource contention
  - Handling long-latency (multi-cycle) operations
- Handling exceptions, interrupts
- Advanced: Improving pipeline throughput
  - Minimizing stalls
70
Causes of Pipeline Stalls

- Resource contention
- Dependences (between instructions)
  - Data
  - Control
- Long-latency (multi-cycle) operations
71
Dependences and Their Types

- Also called “dependency” or less desirably “hazard”
- Dependencies dictate ordering requirements between instructions
- Two types
  - Data dependence
  - Control dependence
- Resource contention is sometimes called resource dependence
  - However, this is not fundamental to (dictated by) program semantics, so we will treat it separately
72
Handling Resource Contention

- Happens when instructions in two pipeline stages need the same resource
- Solution 1: Eliminate the cause of contention
  - Duplicate the resource or increase its throughput
    - E.g., use separate instruction and data memories (caches)
    - E.g., use multiple ports for memory structures
- Solution 2: Detect the resource contention and stall one of the contending stages
  - Which stage do you stall?
  - Example: What if you had a single read and write port for the register file?
73
Data Dependences

- Types of data dependences
  - Flow dependence (true data dependence – read after write)
  - Output dependence (write after write)
  - Anti dependence (write after read)
- Which ones cause stalls in a pipelined machine?
  - For all of them, we need to ensure semantics of the program are correct
  - Flow dependences always need to be obeyed because they constitute true dependence on a value
  - Anti and output dependences exist due to limited number of architectural registers
    - They are dependence on a name, not a value
    - We will later see what we can do about them
74
Data Dependence Types

Flow dependence: Read-after-Write (RAW)
  r3 <- r1 op r2
  r5 <- r3 op r4

Anti dependence: Write-after-Read (WAR)
  r3 <- r1 op r2
  r1 <- r4 op r5

Output dependence: Write-after-Write (WAW)
  r3 <- r1 op r2
  r5 <- r3 op r4
  r3 <- r6 op r7
75
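The three cases above can be checked mechanically by comparing destination and source registers. A sketch, with register names mirroring the slide:

```python
def classify(first, second):
    """Data dependences from `first` to `second` (program order).
    Each instruction is (dest_reg, {source_regs})."""
    d1, s1 = first
    d2, s2 = second
    deps = set()
    if d1 in s2:
        deps.add("RAW")  # flow: second reads what first wrote
    if d2 in s1:
        deps.add("WAR")  # anti: second overwrites a source of first
    if d1 == d2:
        deps.add("WAW")  # output: both write the same register
    return deps

# The slide's three examples:
print(classify(("r3", {"r1", "r2"}), ("r5", {"r3", "r4"})))  # {'RAW'}
print(classify(("r3", {"r1", "r2"}), ("r1", {"r4", "r5"})))  # {'WAR'}
print(classify(("r3", {"r1", "r2"}), ("r3", {"r6", "r7"})))  # {'WAW'}
```

Note that WAR and WAW compare register names only, which is why they disappear once names are decoupled from values (e.g., with more registers or renaming).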
How to Handle Data Dependences

- Anti and output dependences are easier to handle
  - write to the destination in one stage and in program order
- Flow dependences are more interesting
- Five fundamental ways of handling flow dependences
76