18-447
Computer Architecture
Lecture 7: Pipelining
Prof. Onur Mutlu
Carnegie Mellon University
Spring 2014, 1/29/2014
Can We Do Better?

What limitations do you see with the multi-cycle design?

- Limited concurrency
  - Some hardware resources are idle during different phases of the instruction processing cycle
  - “Fetch” logic is idle when an instruction is being “decoded” or “executed”
  - Most of the datapath is idle when a memory access is happening
2
Can We Use the Idle Hardware to Improve Concurrency?

- Goal: Concurrency → throughput (more “work” completed in one cycle)
- Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction
  - E.g., when an instruction is being decoded, fetch the next instruction
  - E.g., when an instruction is being executed, decode another instruction
  - E.g., when an instruction is accessing data memory (ld/st), execute the next instruction
  - E.g., when an instruction is writing its result into the register file, access data memory for the next instruction
3
Pipelining: Basic Idea

- More systematically:
  - Pipeline the execution of multiple instructions
  - Analogy: “Assembly line processing” of instructions

- Idea:
  - Divide the instruction processing cycle into distinct “stages” of processing
  - Ensure there are enough hardware resources to process one instruction in each stage
  - Process a different instruction in each stage
    - Instructions consecutive in program order are processed in consecutive stages

- Benefit: Increases instruction processing throughput (1/CPI)
- Downside: Start thinking about this…
4
Example: Execution of Four Independent ADDs

- Multi-cycle: 4 cycles per instruction

  F D E W
          F D E W
                  F D E W
                          F D E W
  ----------------------------------→ Time

- Pipelined: 4 cycles per 4 instructions (steady state)

  F D E W
    F D E W
      F D E W
        F D E W
  ----------------------------------→ Time
5
The Laundry Analogy

[Figure: four loads of laundry A–D done sequentially, one after another, from 6 PM to 2 AM]

- “place one dirty load of clothes in the washer”
- “when the washer is finished, place the wet load in the dryer”
- “when the dryer is finished, take the dry load out and fold”
- “when folding is finished, ask your roommate (??) to put the clothes away”

- steps to do a load are sequentially dependent
- no dependence between different loads
- different steps do not share resources

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
6
Pipelining Multiple Loads of Laundry

[Figure: the same four loads A–D with the washer, dryer, folding, and put-away steps overlapped in time]

- 4 loads of laundry in parallel
- no additional resources
- throughput increased by 4
- latency per load is the same

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
7
Pipelining Multiple Loads of Laundry: In Practice

[Figure: the overlapped laundry timeline where one step (drying) takes longer than the others, stretching every load]

the slowest step decides throughput

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
8
Pipelining Multiple Loads of Laundry: In Practice

[Figure: the same overlapped laundry timeline, now with two dryers so the slow drying step no longer limits the rate]

Throughput restored (2 loads per hour) using 2 dryers

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
9
An Ideal Pipeline

- Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)

- Repetition of identical operations
  - The same operation is repeated on a large number of different inputs
- Repetition of independent operations
  - No dependencies between repeated operations
- Uniformly partitionable suboperations
  - Processing can be evenly divided into uniform-latency suboperations (that do not share resources)

- Fitting examples: automobile assembly line, doing laundry
  - What about the instruction processing “cycle”?
10
Ideal Pipelining

combinational logic (F,D,E,M,W):  T psec                         BW = ~(1/T)
split in two:   T/2 ps (F,D,E) + T/2 ps (M,W)                    BW = ~(2/T)
split in three: T/3 ps (F,D) + T/3 ps (E,M) + T/3 ps (M,W)       BW = ~(3/T)
11
More Realistic Pipeline: Throughput

- Nonpipelined version with delay T
    BW = 1/(T+S)  where S = latch delay

- k-stage pipelined version
    BW(k-stage) = 1 / (T/k + S)
    BW(max)     = 1 / (1 gate delay + S)
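A small numeric sketch of these formulas; the T and S values below are assumed for illustration, not from the slide:

#include <stdio.h>

/* Throughput of a k-stage pipeline with total combinational delay T and
   latch (pipeline register) overhead S per stage: BW = 1 / (T/k + S). */
double bw_pipelined(double T_ps, double S_ps, int k) {
    return 1.0 / (T_ps / k + S_ps);
}

int main(void) {
    double T = 800.0, S = 50.0;              /* assumed example values, in ps */
    printf("nonpipelined BW = %.5f instr/ps\n", 1.0 / (T + S));
    for (int k = 2; k <= 8; k *= 2)
        printf("k=%d: BW = %.5f instr/ps\n", k, bw_pipelined(T, S, k));
    return 0;
}

Note how the latch delay S keeps the bandwidth from scaling linearly with k.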
12
More Realistic Pipeline: Cost

- Nonpipelined version with combinational cost G
    Cost = G + L  where L = latch cost

- k-stage pipelined version
    Cost(k-stage) = G + L*k
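A matching sketch for the cost model; the G and L values are assumed for illustration:

#include <stdio.h>

/* Cost of a k-stage pipeline: the combinational logic G is unchanged,
   but one set of latches (cost L) is added per stage: Cost = G + L*k. */
int main(void) {
    double G = 100000.0, L = 2000.0;         /* assumed gate-equivalent counts */
    for (int k = 1; k <= 8; k *= 2)
        printf("k=%d: cost = %.0f gate equivalents\n", k, G + L * k);
    return 0;
}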
13
Pipelining Instruction Processing
14
Remember: The Instruction Processing Cycle
 Fetch fetch (IF)
1. Instruction
 Decodedecode and
2. Instruction
register
operand
fetch (ID/RF)
 Evaluate
Address
3. Execute/Evaluate
memory address (EX/AG)
 Fetch Operands
4. Memory operand fetch (MEM)
 Execute
5. Store/writeback
result (WB)
 Store Result
15
Remember the Single-Cycle Uarch

[Figure: single-cycle MIPS datapath and control (PC, instruction memory, register file, ALU, data memory, sign extend, and the PCSrc jump/branch next-PC muxes); the entire instruction completes in one long cycle of delay T, so BW = ~(1/T)]

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
16
Dividing Into Stages

[Figure: the single-cycle datapath divided into five stages; the branch/jump path is marked “ignore for now” and the register-file write port belongs to the last stage (“RF write”)]

  IF:  Instruction fetch                       200 ps
  ID:  Instruction decode / register file read 100 ps
  EX:  Execute / address calculation           200 ps
  MEM: Memory access                           200 ps
  WB:  Write back                              100 ps

Is this the correct partitioning?
Why not 4 or 6 stages? Why not different boundaries?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
17
Instruction Pipeline Throughput

[Figure: three loads (lw $1, 100($0); lw $2, 200($0); lw $3, 300($0)) executed back to back.
 Nonpipelined: each instruction takes 800 ps (Instruction fetch, Reg, ALU, Data access, Reg) and a new instruction starts only every 800 ps.
 Pipelined: the same stages overlap and a new instruction starts every 200 ps.]

5-stage speedup is 4, not 5 as predicted by the ideal model. Why?
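One way to see why, as a back-of-the-envelope check that reuses the 200/100/200/200/100 ps stage latencies from the “Dividing Into Stages” slide:

#include <stdio.h>

int main(void) {
    int stage_ps[5] = {200, 100, 200, 200, 100};   /* IF, ID, EX, MEM, WB */
    int total = 0, slowest = 0;
    for (int i = 0; i < 5; i++) {
        total += stage_ps[i];                      /* 800 ps nonpipelined latency */
        if (stage_ps[i] > slowest) slowest = stage_ps[i];
    }
    /* The pipelined clock must fit the slowest stage (200 ps), not 800/5 = 160 ps,
       so the steady-state speedup is 800/200 = 4: internal fragmentation. */
    printf("steady-state speedup = %d / %d = %d\n", total, slowest, total / slowest);
    return 0;
}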
18
Enabling Pipelined Processing: Pipeline Registers

[Figure: the five-stage datapath (IF, ID, EX, MEM, WB) with pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB between the stages; the latched values include PCF, IRD, PCD+4, PCE+4, AE, BE, ImmE, nPCM, AoutM, BM, MDRW, AoutW; each stage now takes roughly T/k ps]

No resource is used by more than 1 stage!

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
19
Pipelined Operation Example

[Figure: an lw instruction shown cycle by cycle as it moves through Instruction fetch, Instruction decode, Execution, Memory, and Write back, with the active stage and pipeline registers highlighted in each clock]

All instruction classes must follow the same path and timing through the pipeline stages.
Any performance impact?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
20
Pipelined Operation Example

[Figure: lw $10, 20($1) followed by sub $11, $2, $3 flowing through the pipeline over clock cycles 1–6; in each cycle the two instructions occupy adjacent stages (instruction fetch, instruction decode, execution, memory, write back)]

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
21
Illustrating Pipeline Operation: Operation View

        t0   t1   t2   t3   t4   t5   ...
Inst0   IF   ID   EX   MEM  WB
Inst1        IF   ID   EX   MEM  WB
Inst2             IF   ID   EX   MEM  ...
Inst3                  IF   ID   EX   ...
Inst4                       IF   ID   ...
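A small sketch that prints this operation view for independent instructions (no stalls), just to make the staggering explicit:

#include <stdio.h>

int main(void) {
    const char *stage[5] = {"IF ", "ID ", "EX ", "MEM", "WB "};
    int n_inst = 5, n_cycles = n_inst + 4;         /* 5-stage pipeline, no stalls */

    printf("      ");
    for (int t = 0; t < n_cycles; t++) printf("t%-3d", t);
    printf("\n");
    for (int i = 0; i < n_inst; i++) {
        printf("Inst%d ", i);
        for (int t = 0; t < n_cycles; t++) {
            int s = t - i;                         /* stage occupied by Inst i at time t */
            printf("%s ", (s >= 0 && s < 5) ? stage[s] : "   ");
        }
        printf("\n");
    }
    return 0;
}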
22
Illustrating Pipeline Operation: Resource View

       t0   t1   t2   t3   t4   t5   t6   t7   t8   t9   t10
IF     I0   I1   I2   I3   I4   I5   I6   I7   I8   I9   I10
ID          I0   I1   I2   I3   I4   I5   I6   I7   I8   I9
EX               I0   I1   I2   I3   I4   I5   I6   I7   I8
MEM                   I0   I1   I2   I3   I4   I5   I6   I7
WB                         I0   I1   I2   I3   I4   I5   I6
23
Control Points in a Pipeline

[Figure: the pipelined datapath (IF/ID, ID/EX, EX/MEM, MEM/WB) annotated with its control points: PCSrc, RegWrite, ALUSrc, ALUOp, RegDst, Branch, MemRead, MemWrite, MemtoReg]

Identical set of control points as the single-cycle datapath!!

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
24
Control Signals in a Pipeline

- For a given instruction
  - same control signals as single-cycle, but
  - control signals required at different cycles, depending on stage
  - decode once using the same logic as single-cycle and buffer control signals until consumed (sketched below)
    [Figure: the control unit output grouped into EX, M, and WB bundles that are carried through the ID/EX, EX/MEM, and MEM/WB pipeline registers]
  - or carry relevant “instruction word/field” down the pipeline and decode locally within each or in a previous stage
- Which one is better?
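A sketch of the “decode once and buffer” option as C structs; the EX/M/WB grouping follows the figure, but the field and struct names are illustrative:

#include <stdbool.h>

/* Control signals produced once in ID, then carried down the pipeline
   registers until the stage that consumes them. */
struct ex_ctrl  { bool ALUSrc; unsigned ALUOp; bool RegDst; };
struct mem_ctrl { bool Branch, MemRead, MemWrite; };
struct wb_ctrl  { bool RegWrite, MemtoReg; };

struct id_ex_reg  { struct ex_ctrl ex; struct mem_ctrl m; struct wb_ctrl wb; /* ...operands...   */ };
struct ex_mem_reg { struct mem_ctrl m; struct wb_ctrl wb;                    /* ...ALU result... */ };
struct mem_wb_reg { struct wb_ctrl wb;                                       /* ...load data...  */ };

/* Each cycle, the not-yet-consumed bundles simply shift one register further. */
static void shift_control(struct id_ex_reg *idex, struct ex_mem_reg *exmem, struct mem_wb_reg *memwb) {
    memwb->wb = exmem->wb;
    exmem->m  = idex->m;
    exmem->wb = idex->wb;
}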
25
Pipelined Control Signals

[Figure: the pipelined datapath with the control unit in the decode stage; the WB, M, and EX control-signal bundles are written into ID/EX and shifted through EX/MEM and MEM/WB, so each signal reaches its stage (ALUSrc/ALUOp/RegDst in EX, Branch/MemRead/MemWrite in MEM, RegWrite/MemtoReg in WB)]

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
26
An Ideal Pipeline

- Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)

- Repetition of identical operations
  - The same operation is repeated on a large number of different inputs
- Repetition of independent operations
  - No dependencies between repeated operations
- Uniformly partitionable suboperations
  - Processing can be evenly divided into uniform-latency suboperations (that do not share resources)

- Fitting examples: automobile assembly line, doing laundry
  - What about the instruction processing “cycle”?
27
Instruction Pipeline: Not An Ideal Pipeline

- Identical operations ... NOT!
  → different instructions do not need all stages
  - Forcing different instructions to go through the same multi-function pipe
    → external fragmentation (some pipe stages idle for some instructions)

- Uniform suboperations ... NOT!
  → difficult to balance the different pipeline stages
  - Not all pipeline stages do the same amount of work
    → internal fragmentation (some pipe stages are too fast but take the same clock cycle time)

- Independent operations ... NOT!
  → instructions are not independent of each other
  - Need to detect and resolve inter-instruction dependencies to ensure the pipeline operates correctly
    → Pipeline is not always moving (it stalls)
28
Issues in Pipeline Design

- Balancing work in pipeline stages
  - How many stages and what is done in each stage

- Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow
  - Handling dependences
    - Data
    - Control
  - Handling resource contention
  - Handling long-latency (multi-cycle) operations
  - Handling exceptions, interrupts

- Advanced: Improving pipeline throughput
  - Minimizing stalls
29
Causes of Pipeline Stalls

- Resource contention
- Dependences (between instructions)
  - Data
  - Control
- Long-latency (multi-cycle) operations
30
Dependences and Their Types

- Also called “dependency” or less desirably “hazard”
- Dependences dictate ordering requirements between instructions

- Two types
  - Data dependence
  - Control dependence

- Resource contention is sometimes called resource dependence
  - However, this is not fundamental to (dictated by) program semantics, so we will treat it separately
31
Handling Resource Contention

- Happens when instructions in two pipeline stages need the same resource

- Solution 1: Eliminate the cause of contention
  - Duplicate the resource or increase its throughput
    - E.g., use separate instruction and data memories (caches)
    - E.g., use multiple ports for memory structures

- Solution 2: Detect the resource contention and stall one of the contending stages
  - Which stage do you stall?
  - Example: What if you had a single read and write port for the register file?
32
Data Dependences

- Types of data dependences
  - Flow dependence (true data dependence – read after write)
  - Output dependence (write after write)
  - Anti dependence (write after read)

- Which ones cause stalls in a pipelined machine?
  - For all of them, we need to ensure semantics of the program are correct
  - Flow dependences always need to be obeyed because they constitute true dependence on a value
  - Anti and output dependences exist due to limited number of architectural registers
    - They are dependence on a name, not a value
    - We will later see what we can do about them
33
Data Dependence Types

Flow dependence
  r3 ← r1 op r2        Read-after-Write
  r5 ← r3 op r4        (RAW)

Anti dependence
  r3 ← r1 op r2        Write-after-Read
  r1 ← r4 op r5        (WAR)

Output dependence
  r3 ← r1 op r2        Write-after-Write
  r5 ← r3 op r4        (WAW)
  r3 ← r6 op r7
34
How to Handle Data Dependences

- Anti and output dependences are easier to handle
  - write to the destination in one stage and in program order

- Flow dependences are more interesting

- Five fundamental ways of handling flow dependences
35
Readings for Next Few Lectures

- P&H Chapter 4.9-4.11
- Smith and Sohi, “The Microarchitecture of Superscalar Processors,” Proceedings of the IEEE, 1995
  - More advanced pipelining
  - Interrupt and exception handling
  - Out-of-order and superscalar execution concepts
36
Review: Pipelining: Basic Idea

- Idea:
  - Divide the instruction processing cycle into distinct “stages” of processing
  - Ensure there are enough hardware resources to process one instruction in each stage
  - Process a different instruction in each stage
    - Instructions consecutive in program order are processed in consecutive stages

- Benefit: Increases instruction processing throughput (1/CPI)
- Downside: ???
37
Review: Execution of Four Independent ADDs

- Multi-cycle: 4 cycles per instruction

  F D E W
          F D E W
                  F D E W
                          F D E W
  ----------------------------------→ Time

- Pipelined: 4 cycles per 4 instructions (steady state)

  F D E W
    F D E W
      F D E W
        F D E W
  ----------------------------------→ Time

Is life always this beautiful?
38
Review: Pipelined Operation Example

[Figure: lw $10, 20($1) followed by sub $11, $2, $3 flowing through the pipeline over clock cycles 1–6]

Is life always this beautiful?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
39
Review: Instruction Pipeline: Not An Ideal Pipeline

- Identical operations ... NOT!
  → different instructions do not need all stages
  - Forcing different instructions to go through the same multi-function pipe
    → external fragmentation (some pipe stages idle for some instructions)

- Uniform suboperations ... NOT!
  → difficult to balance the different pipeline stages
  - Not all pipeline stages do the same amount of work
    → internal fragmentation (some pipe stages are too fast but take the same clock cycle time)

- Independent operations ... NOT!
  → instructions are not independent of each other
  - Need to detect and resolve inter-instruction dependencies to ensure the pipeline operates correctly
    → Pipeline is not always moving (it stalls)
40
Review: Fundamental Issues in Pipeline Design

- Balancing work in pipeline stages
  - How many stages and what is done in each stage

- Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow
  - Handling dependences
    - Data
    - Control
  - Handling resource contention
  - Handling long-latency (multi-cycle) operations
  - Handling exceptions, interrupts

- Advanced: Improving pipeline throughput
  - Minimizing stalls
41
Review: Data Dependences

- Types of data dependences
  - Flow dependence (true data dependence – read after write)
  - Output dependence (write after write)
  - Anti dependence (write after read)

- Which ones cause stalls in a pipelined machine?
  - For all of them, we need to ensure semantics of the program are correct
  - Flow dependences always need to be obeyed because they constitute true dependence on a value
  - Anti and output dependences exist due to limited number of architectural registers
    - They are dependence on a name, not a value
    - We will later see what we can do about them
42
Data Dependence Types

Flow dependence
  r3 ← r1 op r2        Read-after-Write
  r5 ← r3 op r4        (RAW)

Anti dependence
  r3 ← r1 op r2        Write-after-Read
  r1 ← r4 op r5        (WAR)

Output dependence
  r3 ← r1 op r2        Write-after-Write
  r5 ← r3 op r4        (WAW)
  r3 ← r6 op r7
43
Pipelined Operation Example

[Figure: lw $10, 20($1) followed by sub $11, $2, $3 flowing through the pipeline over clock cycles 1–6]

What if the SUB were dependent on LW?

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
44
How to Handle Data Dependences

- Anti and output dependences are easier to handle
  - write to the destination in one stage and in program order

- Flow dependences are more interesting

- Five fundamental ways of handling flow dependences
  - Detect and wait until value is available in register file
  - Detect and forward/bypass data to dependent instruction
  - Detect and eliminate the dependence at the software level
    - No need for the hardware to detect dependence
  - Predict the needed value(s), execute “speculatively”, and verify
  - Do something else (fine-grained multithreading)
    - No need to detect
45
Interlocking

- Detection of dependence between instructions in a pipelined processor to guarantee correct execution

- Software based interlocking vs. Hardware based interlocking

- MIPS acronym?
46
Approaches to Dependence Detection (I)

- Scoreboarding
  - Each register in register file has a Valid bit associated with it
  - An instruction that is writing to the register resets the Valid bit
  - An instruction in Decode stage checks if all its source and destination registers are Valid
    - Yes: No need to stall… No dependence
    - No: Stall the instruction

- Advantage:
  - Simple. 1 bit per register

- Disadvantage:
  - Need to stall for all types of dependences, not only flow dep.
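A minimal sketch of such a valid-bit scoreboard; the register count and function names are illustrative:

#include <stdbool.h>

#define NUM_REGS 32
static bool reg_valid[NUM_REGS];   /* 1 bit per architectural register */

void scoreboard_init(void) {
    for (int i = 0; i < NUM_REGS; i++) reg_valid[i] = true;
}

/* Decode stage: stall unless every source and destination register is Valid. */
bool must_stall(int rs, int rt, int rd) {
    return !(reg_valid[rs] && reg_valid[rt] && reg_valid[rd]);
}

/* An instruction that will write rd resets its Valid bit when it leaves decode... */
void issue(int rd)     { if (rd != 0) reg_valid[rd] = false; }
/* ...and sets it again when the value is written back to the register file. */
void writeback(int rd) { if (rd != 0) reg_valid[rd] = true; }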
47
Not Stalling on Anti and Output Dependences

- What changes would you make to the scoreboard to enable this?
48
Approaches to Dependence Detection (II)

- Combinational dependence check logic
  - Special logic that checks if any instruction in later stages is supposed to write to any source register of the instruction that is being decoded
    - Yes: stall the instruction/pipeline
    - No: no need to stall… no flow dependence

- Advantage:
  - No need to stall on anti and output dependences

- Disadvantage:
  - Logic is more complex than a scoreboard
  - Logic becomes more complex as we make the pipeline deeper and wider (flash-forward: think superscalar execution)
49
Once You Detect the Dependence in Hardware

- What do you do afterwards?

- Observation: Dependence between two instructions is detected before the communicated data value becomes available

- Option 1: Stall the dependent instruction right away
- Option 2: Stall the dependent instruction only when necessary → data forwarding/bypassing
- Option 3: …
50
Data Forwarding/Bypassing

- Problem: A consumer (dependent) instruction has to wait in decode stage until the producer instruction writes its value in the register file

- Goal: We do not want to stall the pipeline unnecessarily

- Observation: The data value needed by the consumer instruction can be supplied directly from a later stage in the pipeline (instead of only from the register file)

- Idea: Add additional dependence check logic and data forwarding paths (buses) to supply the producer’s value to the consumer right after the value is available

- Benefit: Consumer can move in the pipeline until the point the value can be supplied → less stalling
51
A Special Case of Data Dependence

- Control dependence
  - Data dependence on the Instruction Pointer / Program Counter
52
Control Dependence

- Question: What should the fetch PC be in the next cycle?
- Answer: The address of the next instruction
  - All instructions are control dependent on previous ones. Why?

- If the fetched instruction is a non-control-flow instruction:
  - Next Fetch PC is the address of the next-sequential instruction
  - Easy to determine if we know the size of the fetched instruction

- If the instruction that is fetched is a control-flow instruction:
  - How do we determine the next Fetch PC?

- In fact, how do we know whether or not the fetched instruction is a control-flow instruction?
53
Data Dependence Handling:
More Depth & Implementation
54
Remember: Data Dependence Types

Flow dependence
  r3 ← r1 op r2        Read-after-Write
  r5 ← r3 op r4        (RAW)

Anti dependence
  r3 ← r1 op r2        Write-after-Read
  r1 ← r4 op r5        (WAR)

Output dependence
  r3 ← r1 op r2        Write-after-Write
  r5 ← r3 op r4        (WAW)
  r3 ← r6 op r7
55
How to Handle Data Dependences

- Anti and output dependences are easier to handle
  - write to the destination in one stage and in program order

- Flow dependences are more interesting

- Five fundamental ways of handling flow dependences
  - Detect and wait until value is available in register file
  - Detect and forward/bypass data to dependent instruction
  - Detect and eliminate the dependence at the software level
    - No need for the hardware to detect dependence
  - Predict the needed value(s), execute “speculatively”, and verify
  - Do something else (fine-grained multithreading)
    - No need to detect
56
RAW Dependence Handling

- Following flow dependences lead to conflicts in the 5-stage pipeline

  addi ra, r-, -    IF  ID  EX  MEM WB
  addi r-, ra, -        IF  ID  EX  MEM WB
  addi r-, ra, -            IF  ID  EX  MEM ...
  addi r-, ra, -                IF  ID  EX  ...
  addi r-, ra, -                    IF  ID? ...
  addi r-, ra, -                        IF  ...
57
Register Data Dependence Analysis

            IF    ID        EX    MEM    WB
  R/I-Type        read RF                write RF
  LW              read RF                write RF
  SW              read RF
  Br              read RF
  J
  Jr              read RF

- For a given pipeline, when is there a potential conflict between 2 data dependent instructions?
  - dependence type: RAW, WAR, WAW?
  - instruction types involved?
  - distance between the two instructions?
58
Safe and Unsafe Movement of Pipeline

[Figure: an older instruction i accesses register rk in stage Y while a younger instruction j accesses rk in stage X, for the three dependence types:
  RAW: i: rk ← _ (Reg Write in stage Y),  j: _ ← rk (Reg Read in stage X)
  WAR: i: _ ← rk (Reg Read in stage Y),   j: rk ← _ (Reg Write in stage X)
  WAW: i: rk ← _ (Reg Write in stage Y),  j: rk ← _ (Reg Write in stage X)]

  dist(i,j) ≤ dist(X,Y)  →  Unsafe to keep j moving  ??
  dist(i,j) > dist(X,Y)  →  Safe  ??
59
RAW Dependence Analysis Example

            IF    ID        EX    MEM    WB
  R/I-Type        read RF                write RF
  LW              read RF                write RF
  SW              read RF
  Br              read RF
  J
  Jr              read RF

- Instructions IA and IB (where IA comes before IB) have RAW dependence iff
  - IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
  - dist(IA, IB) ≤ dist(ID, WB) = 3

- What about WAW and WAR dependence?
- What about memory data dependence?
60
Pipeline Stall: Resolving Data Dependence

[Figure: pipeline diagrams for a producer i: rx ← _ followed by a consumer j: _ ← rx at dist(i,j) = 1, 2, 3, and 4; the closer the consumer, the more bubbles are inserted while j waits in ID until i has written rx]

Stall == make the dependent instruction wait until its source data value is available
  1. stop all up-stream stages
  2. drain all down-stream stages
61
How to Implement Stalling

[Figure: the pipelined datapath with control signals (same datapath as the Pipelined Control Signals slide)]

- Stall
  - disable PC and IR latching; ensure stalled instruction stays in its stage
  - Insert “invalid” instructions/nops into the stage following the stalled one

Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
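A cycle-level sketch of this mechanism; the structure names are illustrative, and the stall input is assumed to come from the conditions on the next slides:

#include <stdbool.h>
#include <stdint.h>

struct if_id_reg { uint32_t pc_plus_4, ir; };
#define NOP 0x00000000u    /* assumed bubble encoding (sll $0,$0,0 in MIPS) */

/* One clock edge of the front end. When stall is asserted, PC and IF/ID keep
   their old values (the stalled instruction stays in ID) and a bubble is
   inserted into ID/EX instead of the decoded instruction. */
void front_end_clock(bool stall, uint32_t *pc, struct if_id_reg *if_id,
                     uint32_t fetched_ir, uint32_t decoded_ir, uint32_t *id_ex_ir) {
    if (stall) {
        *id_ex_ir = NOP;            /* drain: an invalid instruction follows the stalled one */
        /* *pc and *if_id are not written: up-stream stages stop */
    } else {
        *id_ex_ir = decoded_ir;
        if_id->ir = fetched_ir;
        if_id->pc_plus_4 = *pc + 4;
        *pc += 4;
    }
}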
62
Stall Conditions

- Instructions IA and IB (where IA comes before IB) have RAW dependence iff
  - IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
  - dist(IA, IB) ≤ dist(ID, WB) = 3

- In other words, must stall when IB in ID stage wants to read a register to be written by IA in EX, MEM or WB stage
63
Stall Conditions

- Helper functions
  - rs(I) returns the rs field of I
  - use_rs(I) returns true if I requires RF[rs] and rs != r0

- Stall when
    (rs(IRID)==destEX)  && use_rs(IRID) && RegWriteEX   or
    (rs(IRID)==destMEM) && use_rs(IRID) && RegWriteMEM  or
    (rs(IRID)==destWB)  && use_rs(IRID) && RegWriteWB   or
    (rt(IRID)==destEX)  && use_rt(IRID) && RegWriteEX   or
    (rt(IRID)==destMEM) && use_rt(IRID) && RegWriteMEM  or
    (rt(IRID)==destWB)  && use_rt(IRID) && RegWriteWB

- It is crucial that the EX, MEM and WB stages continue to advance normally during stall cycles
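The same stall condition written as a C sketch; the field accessors and the per-stage state structure are assumptions for illustration (use_rs/use_rt are expected to already encode "the operand is needed and is not r0"):

#include <stdbool.h>
#include <stdint.h>

/* Assumed MIPS-style field extraction for the instruction latched in ID. */
static inline int rs(uint32_t ir) { return (ir >> 21) & 0x1f; }
static inline int rt(uint32_t ir) { return (ir >> 16) & 0x1f; }

struct stage_state { int dest; bool RegWrite; };   /* destination register + write enable */

bool stall_ID(uint32_t ir_ID, bool use_rs_ID, bool use_rt_ID,
              struct stage_state EX, struct stage_state MEM, struct stage_state WB) {
    bool rs_hazard = use_rs_ID &&
        ((rs(ir_ID) == EX.dest  && EX.RegWrite)  ||
         (rs(ir_ID) == MEM.dest && MEM.RegWrite) ||
         (rs(ir_ID) == WB.dest  && WB.RegWrite));
    bool rt_hazard = use_rt_ID &&
        ((rt(ir_ID) == EX.dest  && EX.RegWrite)  ||
         (rt(ir_ID) == MEM.dest && MEM.RegWrite) ||
         (rt(ir_ID) == WB.dest  && WB.RegWrite));
    return rs_hazard || rt_hazard;
}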
64
Impact of Stall on Performance

- Each stall cycle corresponds to 1 lost ALU cycle

- For a program with N instructions and S stall cycles,
    Average CPI = (N+S)/N

- S depends on
  - frequency of RAW dependences
  - exact distance between the dependent instructions
  - distance between dependences
    - suppose i1, i2 and i3 all depend on i0; once i1’s dependence is resolved, i2 and i3 must be okay too
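A hypothetical example of the formula (all numbers are made up for illustration):

#include <stdio.h>

int main(void) {
    double N = 1e6;                 /* instructions */
    double S = 0.20 * N * 3;        /* e.g., 20% of instructions each stall 3 cycles */
    printf("Average CPI = (N + S) / N = %.2f\n", (N + S) / N);   /* 1.60 */
    return 0;
}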
65
Sample Assembly (P&H)

- for (j=i-1; j>=0 && v[j] > v[j+1]; j-=1) { ...... }

           addi  $s1, $s0, -1
  for2tst: slti  $t0, $s1, 0          3 stalls
           bne   $t0, $zero, exit2    3 stalls
           sll   $t1, $s1, 2
           add   $t2, $a0, $t1        3 stalls
           lw    $t3, 0($t2)          3 stalls
           lw    $t4, 4($t2)
           slt   $t0, $t4, $t3        3 stalls
           beq   $t0, $zero, exit2    3 stalls
           .........
           addi  $s1, $s1, -1
           j     for2tst
  exit2:
66
Data Forwarding (or Data Bypassing)

- It is intuitive to think of RF as state
  - “add rx ry rz” literally means get values from RF[ry] and RF[rz] respectively and put result in RF[rx]

- But, RF is just a part of a computing abstraction
  - “add rx ry rz” means 1. get the results of the last instructions to define the values of RF[ry] and RF[rz], respectively, and 2. until another instruction redefines RF[rx], younger instructions that refer to RF[rx] should use this instruction’s result

- What matters is to maintain the correct “dataflow” between operations, thus

  add  ra, r-, r-   IF  ID  EX  MEM WB
  addi r-, ra, r-       IF  ID  ID  ID  ID  EX  MEM WB
67
Resolving RAW Dependence with Forwarding

- Instructions IA and IB (where IA comes before IB) have RAW dependence iff
  - IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
  - dist(IA, IB) ≤ dist(ID, WB) = 3

- In other words, if IB in ID stage reads a register written by IA in EX, MEM or WB stage, then the operand required by IB is not yet in RF
  → retrieve operand from datapath instead of the RF
  → retrieve operand from the youngest definition if multiple definitions are outstanding
68
Data Forwarding Paths (v1)

[Figure: the EX-stage ALU inputs are selected by ForwardA/ForwardB muxes among the register-file value, EX/MEM.RegisterRd (dist(i,j)=1), and MEM/WB.RegisterRd (dist(i,j)=2); a forwarding unit compares Rs/Rt of the instruction in EX against the destination registers in later stages; the dist(i,j)=3 case raises the question “internal forward?” inside the register file]

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
69
Data Forwarding Paths (v2)

[Figure: the same ForwardA/ForwardB paths from EX/MEM.RegisterRd (dist(i,j)=1) and MEM/WB.RegisterRd (dist(i,j)=2), while the dist(i,j)=3 case is handled by the register file itself]

Assumes RF forwards internally

[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
70
Data Forwarding Logic (for v2)

  if (rsEX!=0) && (rsEX==destMEM) && RegWriteMEM then
      forward operand from MEM stage        // dist=1
  else if (rsEX!=0) && (rsEX==destWB) && RegWriteWB then
      forward operand from WB stage         // dist=2
  else
      use AEX (operand from register file)  // dist >= 3

Ordering matters!! Must check youngest match first
Why doesn’t use_rs( ) appear in the forwarding logic?
What does the above not take into account?
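The same priority written as a C sketch covering one operand; the two-bit encoding follows the P&H ForwardA/ForwardB convention, and the names are illustrative:

#include <stdbool.h>

enum fwd { FROM_RF = 0, FROM_MEMWB = 1, FROM_EXMEM = 2 };  /* 00 / 01 / 10 */

/* Select the source for one ALU operand of the instruction now in EX.
   Youngest producer wins: check EX/MEM before MEM/WB. */
enum fwd forward_select(int src_reg_EX,
                        int dest_MEM, bool RegWrite_MEM,
                        int dest_WB,  bool RegWrite_WB) {
    if (src_reg_EX != 0 && RegWrite_MEM && src_reg_EX == dest_MEM)
        return FROM_EXMEM;                  /* dist = 1 */
    if (src_reg_EX != 0 && RegWrite_WB && src_reg_EX == dest_WB)
        return FROM_MEMWB;                  /* dist = 2 */
    return FROM_RF;                         /* dist >= 3 (RF forwards internally, v2) */
}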
71
Data Forwarding (Dependence Analysis)

[Table: for each instruction class (R/I-Type, LW, SW, Br, J, Jr), the stage in which register operands are used and the stage in which the result is produced; an R/I-type result is produced at the end of EX, while an LW result is produced only at the end of MEM]

- Even with data-forwarding, RAW dependence on an immediately preceding LW instruction requires a stall (detection sketched below)
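A sketch of detecting this remaining load-use case (one bubble), in the style of the earlier stall logic; the names are illustrative:

#include <stdbool.h>

/* Even with forwarding, a load's data is only available at the end of MEM.
   If the instruction in EX is an LW and the instruction in ID needs its
   destination register, the ID instruction must be held for one cycle. */
bool load_use_stall(bool MemRead_EX, int dest_EX,
                    int rs_ID, bool use_rs_ID,
                    int rt_ID, bool use_rt_ID) {
    if (!MemRead_EX || dest_EX == 0)
        return false;
    return (use_rs_ID && rs_ID == dest_EX) ||
           (use_rt_ID && rt_ID == dest_EX);
}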
72
Sample Assembly, Revisited (P&H)

- for (j=i-1; j>=0 && v[j] > v[j+1]; j-=1) { ...... }

           addi  $s1, $s0, -1
  for2tst: slti  $t0, $s1, 0
           bne   $t0, $zero, exit2
           sll   $t1, $s1, 2
           add   $t2, $a0, $t1
           lw    $t3, 0($t2)
           lw    $t4, 4($t2)
           nop
           slt   $t0, $t4, $t3
           beq   $t0, $zero, exit2
           .........
           addi  $s1, $s1, -1
           j     for2tst
  exit2:
73
Pipelining the LC-3b
74
Pipelining the LC-3b

- Let’s remember the single-bus datapath

- We’ll divide it into 5 stages
  - Fetch
  - Decode/RF Access
  - Address Generation/Execute
  - Memory
  - Store Result

- Conservative handling of data and control dependences
  - Stall on branch
  - Stall on flow dependence
75
An Example LC-3b Pipeline

[Figure-only slides 77–82: the example LC-3b pipeline datapath, stage by stage]
Control of the LC-3b Pipeline

- Three types of control signals

- Datapath Control Signals
  - Control signals that control the operation of the datapath

- Control Store Signals
  - Control signals (microinstructions) stored in control store to be used in pipelined datapath (can be propagated to stages later than decode)

- Stall Signals
  - Ensure the pipeline operates correctly in the presence of dependencies
83
84
Control Store in a Pipelined Machine
85
Stall Signals

- Pipeline stall: Pipeline does not move because an operation in a stage cannot complete

- Stall Signals: Ensure the pipeline operates correctly in the presence of such an operation

- Why could an operation in a stage not complete?
86