18-447 Computer Architecture
Lecture 6: Multi-Cycle and Microprogrammed Microarchitectures
Prof. Onur Mutlu
Carnegie Mellon University
Spring 2014, 1/27/2014
Assignments

- Lab 2 due next Friday (start early)
- HW1 due next week
- HW0
  - Make sure you submitted this!
2
Extra Credit for Lab Assignment 2

- Complete your normal (single-cycle) implementation first, and get it checked off in lab.
- Then, implement the MIPS core using a microcoded approach similar to what we will discuss in class.
- We are not specifying any particular details of the microcode format or the microarchitecture; you can be creative.
- For the extra credit, the microcoded implementation should execute the same programs that your ordinary implementation does, and you should demo it by the normal lab deadline.
- You will get a maximum of 4% of the course grade.
- Document what you have done and demonstrate it well.
3
Readings for Today

- P&P, Revised Appendix C
  - Microarchitecture of the LC-3b
  - Appendix A (LC-3b ISA) will be useful in following this
- P&H, Appendix D
  - Mapping Control to Hardware
- Optional
  - Maurice Wilkes, "The Best Way to Design an Automatic Calculating Machine," Manchester Univ. Computer Inaugural Conf., 1951.
4
Readings for Next Lecture

- Pipelining
  - P&H Chapter 4.5-4.8
- Pipelined LC-3b Microarchitecture
  - http://www.ece.cmu.edu/~ece447/s13/lib/exe/fetch.php?media=18447-lc3b-pipelining.pdf
5
Quick Recap of Past Five Lectures

- Basics
  - Why Computer Architecture
  - Levels of Transformation
  - Memory Topics: DRAM Refresh and Memory Performance Attacks
- ISA Tradeoffs
- Single-Cycle Microarchitectures
- Multi-Cycle Microarchitectures
- Performance Analysis
  - Amdahl's Law
- Microarchitecture Design Principles
6
Microarchitecture Design Principles

- Critical path design
  - Find the maximum combinational logic delay and decrease it
- Bread and butter (common case) design
  - Spend time and resources on where it matters
    - i.e., improve what the machine is really designed to do
  - Common case vs. uncommon case
- Balanced design
  - Balance instruction/data flow through hardware components
  - Balance the hardware needed to accomplish the work

How does a single-cycle microarchitecture fare in light of these principles?
7
Multi-Cycle Microarchitectures

- Goal: Let each instruction take (close to) only as much time as it really needs
- Idea
  - Determine clock cycle time independently of instruction processing time
  - Each instruction takes as many clock cycles as it needs to take
    - Multiple state transitions per instruction
    - The states followed by each instruction are different
8
A Multi-Cycle Microarchitecture: A Closer Look
9
How Do We Implement This?

- Maurice Wilkes, "The Best Way to Design an Automatic Calculating Machine," Manchester Univ. Computer Inaugural Conf., 1951.
  - The concept of microcoded/microprogrammed machines
- Realization
  - One can implement the "process instruction" step as a finite state machine that sequences between states and eventually returns back to the "fetch instruction" state
  - A state is defined by the control signals asserted in it
  - Control signals for the next state are determined in the current state
10
The Instruction Processing Cycle

- Fetch
- Decode
- Evaluate Address
- Fetch Operands
- Execute
- Store Result
11
A Basic Multi-Cycle Microarchitecture

- Instruction processing cycle divided into "states"
  - A stage in the instruction processing cycle can take multiple states
- A multi-cycle microarchitecture sequences from state to state to process an instruction
  - The behavior of the machine in a state is completely determined by control signals in that state
- The behavior of the entire processor is specified fully by a finite state machine
- In a state (clock cycle), control signals control
  - How the datapath should process the data
  - How to generate the control signals for the next clock cycle
12
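The state sequencing described above can be modeled as a small finite state machine. The sketch below is illustrative only (state names and transitions are a simplified rendering of the instruction processing cycle, not the actual LC-3b control): each state's logic determines the next state, and the number of states visited is the instruction's cycle count.

```python
# Minimal sketch of multi-cycle control as an FSM (illustrative, not the LC-3b).
# The current state plus datapath inputs (here, just the opcode) pick the next
# state; every instruction eventually returns to FETCH.

def next_state(state, opcode):
    """Current state + opcode -> next state (a real decoder uses opcode bits)."""
    if state == "FETCH":
        return "DECODE"
    if state == "DECODE":
        # Hypothetical dispatch: loads need an address phase, ALU ops do not.
        return {"ADD": "EXECUTE", "LDW": "EVAL_ADDR"}.get(opcode, "FETCH")
    if state == "EVAL_ADDR":
        return "MEM_ACCESS"
    if state in ("EXECUTE", "MEM_ACCESS"):
        return "STORE_RESULT"
    return "FETCH"  # STORE_RESULT returns to fetch

def run_instruction(opcode):
    """Trace the states one instruction visits; cycle count = len(trace)."""
    state, trace = "FETCH", []
    while True:
        trace.append(state)
        state = next_state(state, opcode)
        if state == "FETCH":
            return trace

print(run_instruction("ADD"))  # ['FETCH', 'DECODE', 'EXECUTE', 'STORE_RESULT']
print(run_instruction("LDW"))  # 5 states: EVAL_ADDR and MEM_ACCESS added
```

Note how the two instructions take different cycle counts through the same FSM, which is exactly the point of the multi-cycle approach.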
Microprogrammed Control Terminology

- Control signals associated with the current state
  - Microinstruction
- Act of transitioning from one state to another
  - Determining the next state and the microinstruction for the next state
  - Microsequencing
- Control store stores control signals for every possible state
  - Store for microinstructions for the entire FSM
- Microsequencer determines which set of control signals will be used in the next clock cycle (i.e., next state)
13
What Happens In A Clock Cycle?

- The control signals (microinstruction) for the current state control
  - Processing in the data path
  - Generation of control signals (microinstruction) for the next cycle
  - See Supplemental Figure 1 (next slide)
- Datapath and microsequencer operate concurrently
- Question: why not generate control signals for the current cycle in the current cycle?
  - This will lengthen the clock cycle
  - Why would it lengthen the clock cycle?
  - See Supplemental Figure 2
14
A Clock Cycle
15
A Bad Clock Cycle!
16
A Simple LC-3b Control and Datapath
17
What Determines Next-State Control Signals?

- What is happening in the current clock cycle
  - See the 9 control signals coming from the "Control" block
    - What are these for?
- The instruction that is being executed
  - IR[15:11] coming from the datapath
- Whether the condition of a branch is met, if the instruction being processed is a branch
  - BEN bit coming from the datapath
- Whether the memory operation is completing in the current cycle, if one is in progress
  - R bit coming from memory
18
A Simple LC-3b Control and Datapath
19
The State Machine for Multi-Cycle Processing

- The behavior of the LC-3b uarch is completely determined by
  - the 35 control signals and
  - additional 7 bits that go into the control logic from the datapath
- 35 control signals completely describe the state of the control structure
- We can completely describe the behavior of the LC-3b as a state machine, i.e., a directed graph of
  - Nodes (one corresponding to each state)
  - Arcs (showing flow from each state to the next state(s))
20
An LC-3b State Machine

- Patt and Patel, App C, Figure C.2
- Each state must be uniquely specified
  - Done by means of state variables
- 31 distinct states in this LC-3b state machine
  - Encoded with 6 state variables
- Examples
  - States 18, 19 correspond to the beginning of the instruction processing cycle
  - Fetch phase: state 18, 19 → state 33 → state 35
  - Decode phase: state 32
21
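The decode phase (state 32) also computes the branch-enable bit, BEN ← IR[11] & N + IR[10] & Z + IR[9] & P, per Figure C.2. A few lines of Python make the bit-level logic concrete (the helper name and test values are mine, for illustration):

```python
# BEN = IR[11]&N | IR[10]&Z | IR[9]&P  (from the LC-3b state machine, Fig. C.2)
def ben(ir, n, z, p):
    """ir: 16-bit instruction word; n, z, p: current condition codes (0/1)."""
    return ((ir >> 11) & 1) & n | ((ir >> 10) & 1) & z | ((ir >> 9) & 1) & p

# BRnzp (all three mask bits IR[11:9] set) is taken whatever the codes are.
assert ben(0x0E00, n=0, z=1, p=0) == 1
# BRz (only IR[10] set) is not taken when Z is clear.
assert ben(0x0400, n=1, z=0, p=0) == 0
```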
[Figure C.2: the LC-3b state machine. Fetch: states 18, 19 (MAR ← PC, PC ← PC + 2) → state 33 (MDR ← M[MAR], repeated until R) → state 35 (IR ← MDR). Decode: state 32 (BEN ← IR[11] & N + IR[10] & Z + IR[9] & P, then a 16-way dispatch on [IR[15:12]] to the per-opcode states for ADD, AND, XOR, BR, JMP, JSR, SHF, LEA, LDB, LDW, STB, STW, TRAP, and RTI). Every opcode's sequence ends with "To 18" (back to fetch).

NOTES
B+off6 : Base + SEXT[offset6]
PC+off9 : PC + SEXT[offset9]
*OP2 may be SR2 or SEXT[imm5]
** [15:8] or [7:0] depending on MAR[0]]
22
LC-3b State Machine: Some Questions

- How many cycles does the fastest instruction take?
- How many cycles does the slowest instruction take?
- Why does the BR take as long as it takes in the FSM?
- What determines the clock cycle?
23
LC-3b Datapath

- Patt and Patel, App C, Figure C.3
- Single-bus datapath design
  - At any point only one value can be "gated" on the bus (i.e., can be driving the bus)
  - Advantage: low hardware cost (one bus)
  - Disadvantage: reduced concurrency; if an instruction needs the bus twice for two different things, these need to happen in different states
- Control signals (26 of them) determine what happens in the datapath in one clock cycle
  - Patt and Patel, App C, Table C.1
24
[Figure C.6: Additional logic required to provide control signals. (a) DRMUX selects DR from IR[11:9] or 111; (b) SR1MUX selects SR1 from IR[11:9] or IR[8:6]; (c) logic combines IR[11:9] with N, Z, P to produce BEN.]
LC-3b Datapath: Some Questions

- How does instruction fetch happen in this datapath according to the state machine?
- What is the difference between gating and loading?
- Is this the smallest hardware you can design?
28
LC-3b Microprogrammed Control Structure

- Patt and Patel, App C, Figure C.4
- Three components:
  - Microinstruction, control store, microsequencer
- Microinstruction: control signals that control the datapath (26 of them) and help determine the next state (9 of them)
- Each microinstruction is stored in a unique location in the control store (a special memory structure)
- Unique location: address of the state corresponding to the microinstruction
  - Remember each state corresponds to one microinstruction
- Microsequencer determines the address of the next microinstruction (i.e., next state)
29
[Figure C.4: the control structure. The microsequencer takes R, IR[15:11], and BEN from the datapath plus 9 bits of the current microinstruction (J, COND, IRD), and produces a 6-bit address into the 2^6 x 35 control store; each 35-bit microinstruction supplies 26 datapath control signals and the 9 microsequencer bits.]
[Figure C.5: The microsequencer of the LC-3b base machine. Inputs: COND1/COND0, BEN, R (Ready), IR[11] (Addr. Mode), J[5:0], IRD, and 0,0,IR[15:12]; output: the 6-bit address of the next state. The COND bits gate BEN (Branch), R (Ready), and IR[11] onto J[2], J[1], and J[0]; IRD selects 0,0,IR[15:12] instead of J.]
[Figure: the control store contents. 64 rows (states 000000 through 111111), each a 35-bit microinstruction: J, COND, IRD; the register-load enables LD.MAR, LD.MDR, LD.IR, LD.BEN, LD.REG, LD.CC, LD.PC; the bus drivers GatePC, GateMDR, GateALU, GateMARMUX, GateSHF; and the select/function signals PCMUX, DRMUX, SR1MUX, ADDR1MUX, ADDR2MUX, MARMUX, ALUK, MIO.EN, R.W, DATA.SIZE, LSHF1.]
LC-3b Microsequencer

- Patt and Patel, App C, Figure C.5
- The purpose of the microsequencer is to determine the address of the next microinstruction (i.e., next state)
- Next address depends on 9 control signals
33
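Following one common reading of Figure C.5 (COND = 00: unconditional, 01: memory ready, 10: branch, 11: addressing mode), the next-address computation can be sketched in a few lines. The encodings below are assumptions drawn from the figure, a sketch rather than a verified implementation:

```python
# Sketch of the LC-3b microsequencer (P&P Fig. C.5). The next-state address is
# either 0,0,IR[15:12] (when IRD=1, i.e., state 32's 16-way opcode dispatch) or
# the J field, with one J bit possibly forced by a COND-selected datapath event.
def next_address(j, cond, ird, ir, ben, r):
    if ird:                      # decode: dispatch on the opcode bits
        return (ir >> 12) & 0xF  # 6-bit address 0,0,IR[15:12]
    j |= (ben & (cond == 0b10)) << 2       # COND=10: branch taken?
    j |= (r & (cond == 0b01)) << 1         # COND=01: memory ready?
    j |= ((ir >> 11) & 1) & (cond == 0b11) # COND=11: addressing mode, IR[11]
    return j

# State 33 (0b100001) waits for memory with COND=01: while R=0 the next
# address stays 33; when R=1, OR-ing into J[1] yields 0b100011 = state 35.
assert next_address(j=0b100001, cond=0b01, ird=0, ir=0, ben=0, r=0) == 33
assert next_address(j=0b100001, cond=0b01, ird=0, ir=0, ben=0, r=1) == 35
# State 32 (decode) asserts IRD: an ADD (opcode 0001) dispatches to state 1.
assert next_address(j=0, cond=0b00, ird=1, ir=0x1000, ben=0, r=0) == 1
```

Note the trick in the state numbering: 33 and 35 differ only in bit J[1], so "wait for R" needs no extra hardware beyond the OR gate.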
[Figure C.5: The microsequencer of the LC-3b base machine (repeated).]
The Microsequencer: Some Questions

- When is the IRD signal asserted?
- What happens if an illegal instruction is decoded?
- What are condition (COND) bits for?
- How is variable latency memory handled?
- How do you do the state encoding?
  - Minimize number of state variables
  - Start with the 16-way branch
  - Then determine constraint tables and states dependent on COND
35
An Exercise in
Microprogramming
36
Handouts

- 7 pages of Microprogrammed LC-3b design
- http://www.ece.cmu.edu/~ece447/s14/doku.php?id=techdocs
- http://www.ece.cmu.edu/~ece447/s14/lib/exe/fetch.php?media=lc3b-figures.pdf
37
A Simple LC-3b Control and Datapath
38
[Figure C.2: the LC-3b state machine (repeated for the exercise).]
State Machine for LDW

[Figure C.5: the microsequencer (repeated). LDW's path through the state machine: state 18 (010010) → 33 (100001) → 35 (100011) → 32 (100000) → 6 (000110) → 25 (011001) → 27 (011011).]

For the unused opcodes, the microarchitecture would execute a sequence of microinstructions, starting at state 10 or state 11, depending on which illegal opcode was being decoded. In both cases, the sequence of microinstructions would respond to the fact that an instruction with an illegal opcode had been fetched.

Several signals necessary to control the data path and the microsequencer are not among those listed in Tables C.1 and C.2. They are DR, SR1, BEN, and R. Figure C.6 shows the additional logic needed to generate DR, SR1, and BEN. The remaining signal, R, is a signal generated by the memory in order to allow the processor to operate correctly with a memory that takes multiple cycles to complete an access.
End of the Exercise in
Microprogramming
48
Homework 2

- You will write the microcode for the entire LC-3b as specified in Appendix C
49
Lab 2 Extra Credit

- Microprogrammed ARM implementation
  - Exercise your creativity!
50
The Microsequencer: Some Questions

- When is the IRD signal asserted?
- What happens if an illegal instruction is decoded?
- What are condition (COND) bits for?
- How is variable latency memory handled?
- How do you do the state encoding?
  - Minimize number of state variables
  - Start with the 16-way branch
  - Then determine constraint tables and states dependent on COND
51
The Control Store: Some Questions

- What control signals can be stored in the control store?
  vs.
- What control signals have to be generated in hardwired logic?
  - i.e., what signals cannot be available without processing in the datapath?
52
Variable-Latency Memory

- The ready signal (R) enables memory reads/writes to execute correctly
  - Example: the transition from state 33 to state 35 is controlled by the R bit, asserted by memory when memory data is available
- Could we have done this in a single-cycle microarchitecture?
53
The Microsequencer: Advanced Questions

- What happens if the machine is interrupted?
- What if an instruction generates an exception?
- How can you implement a complex instruction using this control structure?
  - Think REP MOVS
54
The Power of Abstraction

- The concept of a control store of microinstructions provides the hardware designer with a new abstraction: microprogramming
- The designer can translate any desired operation into a sequence of microinstructions
- All the designer needs to provide is
  - The sequence of microinstructions needed to implement the desired operation
  - The ability for the control logic to correctly sequence through the microinstructions
  - Any additional datapath control signals needed (no need if the operation can be "translated" into existing control signals)
55
Let's Do Some More Microprogramming

- Implement REP MOVS in the LC-3b microarchitecture
- What changes, if any, do you make to the
  - state machine?
  - datapath?
  - control store?
  - microsequencer?
- Show all changes and microinstructions
- Coming up in Homework 3?
56
Aside: Alignment Correction in Memory

- Remember unaligned accesses
- LC-3b has byte load and byte store instructions that move data not aligned at the word-address boundary
  - Convenience to the programmer/compiler
- How does the hardware ensure this works correctly?
  - Take a look at state 29 for LDB
  - States 24 and 17 for STB
  - Additional logic to handle unaligned accesses
57
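The state machine's note for LDB says the loaded byte is [15:8] or [7:0] of the fetched word depending on MAR[0], then sign-extended (DR ← SEXT[BYTE.DATA] in state 31). The sketch below (function name and test values are mine) shows that selection and sign extension at the bit level:

```python
# Sketch of LDB's byte selection and sign extension (states 29/31 of Fig. C.2):
# the word at MAR[15:1]'0 is read; MAR[0] picks bits [7:0] or [15:8]; the byte
# is then sign-extended into the 16-bit destination register.
def ldb_result(word, mar):
    byte = (word >> 8) & 0xFF if (mar & 1) else word & 0xFF
    return byte - 0x100 if byte & 0x80 else byte  # SEXT[BYTE.DATA]

assert ldb_result(0x12F4, mar=0x4000) == -12   # even address: low byte 0xF4
assert ldb_result(0x12F4, mar=0x4001) == 0x12  # odd address: high byte
```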
Aside: Memory Mapped I/O

- Address control logic determines whether the specified address of LDx and STx goes to memory or to I/O devices
- Correspondingly enables memory or I/O devices and sets up muxes
- Another instance where the final control signals (e.g., MEM.EN or INMUX/2) cannot be stored in the control store
  - Dependent on address
58
Advantages of Microprogrammed Control

- Allows a very simple design to do powerful computation by controlling the datapath (using a sequencer)
  - High-level ISA translated into microcode (sequence of microinstructions)
  - Microcode (ucode) enables a minimal datapath to emulate an ISA
  - Microinstructions can be thought of as a user-invisible ISA
- Enables easy extensibility of the ISA
  - Can support a new instruction by changing the ucode
  - Can support complex instructions as a sequence of simple microinstructions
- If I can sequence an arbitrary instruction then I can sequence an arbitrary "program" as a microprogram sequence
  - will need some new state (e.g., loop counters) in the microcode for sequencing more elaborate programs
59
Update of Machine Behavior

- The ability to update/patch microcode in the field (after a processor is shipped) enables
  - Ability to add new instructions without changing the processor!
  - Ability to "fix" buggy hardware implementations
- Examples
  - IBM 370 Model 145: microcode stored in main memory, can be updated after a reboot
  - IBM System z: Similar to 370/145.
    - Heller and Farrell, "Millicode in an IBM zSeries processor," IBM JR&D, May/Jul 2004.
  - B1700: microcode can be updated while the processor is running
    - User-microprogrammable machine!
60
Horizontal Microcode

[Figure: a microprogram counter (incremented via an adder, or redirected by address-select logic driven by the instruction register opcode field and sequencing control) indexes the microcode storage; each row directly supplies the k datapath control outputs (ALUSrcA, IorD, IRWrite, PCWrite, PCWriteCond, ...). Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

- Control store: 2^n x k bits (not including sequencing), for an n-bit uPC input and a k-bit "control" output
61
Vertical Microcode

[Figure: as in horizontal microcode, a microprogram counter indexes the microcode storage, but each row holds a short m-bit code; a second ROM (2^m x k) expands that code into the k datapath control outputs (ALUSrcA, IorD, IRWrite, PCWrite, PCWriteCond, ...). A 1-bit signal means "do this RT" (or combination of RTs): "PC ← PC+4", "PC ← ALUOut", "PC ← PC[31:28],IR[25:0],2'b00", "IR ← MEM[PC]", "A ← RF[IR[25:21]]", "B ← RF[IR[20:16]]", ... Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

- If done right (i.e., m << n and m << k), the two ROMs together (2^n x m + 2^m x k bits) should be smaller than the horizontal microcode ROM (2^n x k bits)
62
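The size comparison on the slide is easy to check numerically. The n and k below are LC-3b-like (64 states, 35 signals); m = 5 is a hypothetical code width chosen only to illustrate the m << n, m << k condition:

```python
# Control store sizes in bits, from the slide's formulas:
#   horizontal: 2^n * k            (n-bit uPC, k control signals per row)
#   vertical:   2^n * m + 2^m * k  (m-bit codes expanded by a 2^m x k ROM)
def horizontal_bits(n, k):
    return (2 ** n) * k

def vertical_bits(n, m, k):
    return (2 ** n) * m + (2 ** m) * k

n, m, k = 6, 5, 35                # 64 states, 35 signals; m = 5 is hypothetical
print(horizontal_bits(n, k))      # 2240
print(vertical_bits(n, m, k))     # 320 + 1120 = 1440
```

With only 64 states the savings are modest; the gap widens quickly as n grows while m stays small.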
Nanocode and Millicode

- Nanocode: a level below traditional µcode
  - µprogrammed control for sub-systems (e.g., a complicated floating-point module) that acts as a slave in a µcontrolled datapath
- Millicode: a level above traditional µcode
  - ISA-level subroutines that can be called by the µcontroller to handle complicated operations and system functions
  - E.g., Heller and Farrell, "Millicode in an IBM zSeries processor," IBM JR&D, May/Jul 2004.
- In both cases, we avoid complicating the main µcontroller
- You can think of these as "microcode" at different levels of abstraction
63
Nanocode Concept Illustrated

[Figure: a "µcoded" processor implementation (ROM + µPC driving the processor datapath) containing a "µcoded" FPU implementation (its own ROM + µPC driving an arithmetic datapath). We refer to this as "nanocode" when a µcoded subsystem is embedded in a µcoded system.]
64
Multi-Cycle vs. Single-Cycle uArch

- Advantages
- Disadvantages
- You should be very familiar with this right now
65

Microprogrammed vs. Hardwired Control

- Advantages
- Disadvantages
- You should be very familiar with this right now
66
Can We Do Better?

- What limitations do you see with the multi-cycle design?
- Limited concurrency
  - Some hardware resources are idle during different phases of the instruction processing cycle
  - "Fetch" logic is idle when an instruction is being "decoded" or "executed"
  - Most of the datapath is idle when a memory access is happening
67
Can We Use the Idle Hardware to Improve Concurrency?

- Goal: Concurrency → throughput (more "work" completed in one cycle)
- Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction
  - E.g., when an instruction is being decoded, fetch the next instruction
  - E.g., when an instruction is being executed, decode another instruction
  - E.g., when an instruction is accessing data memory (ld/st), execute the next instruction
  - E.g., when an instruction is writing its result into the register file, access data memory for the next instruction
68
Pipelining: Basic Idea

- More systematically:
  - Pipeline the execution of multiple instructions
  - Analogy: "Assembly line processing" of instructions
- Idea:
  - Divide the instruction processing cycle into distinct "stages" of processing
  - Ensure there are enough hardware resources to process one instruction in each stage
  - Process a different instruction in each stage
    - Instructions consecutive in program order are processed in consecutive stages
- Benefit: Increases instruction processing throughput (1/CPI)
- Downside: Start thinking about this...
69
Example: Execution of Four Independent ADDs

- Multi-cycle: 4 cycles per instruction

  F D E W
          F D E W
                  F D E W
                          F D E W
  --------------------------------> Time

- Pipelined: 4 cycles per 4 instructions (steady state)

  F D E W
    F D E W
      F D E W
        F D E W
  --------------> Time

70
The Laundry Analogy

[Figure: four laundry loads A-D done one after another, 6 PM to 2 AM. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

- "place one dirty load of clothes in the washer"
- "when the washer is finished, place the wet load in the dryer"
- "when the dryer is finished, take out the dry load and fold"
- "when folding is finished, ask your roommate (??) to put the clothes away"

- steps to do a load are sequentially dependent
- no dependence between different loads
- different steps do not share resources
71
Pipelining Multiple Loads of Laundry

[Figure: the four loads A-D overlapped in time, each load starting as soon as the washer frees up. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

- 4 loads of laundry in parallel
- no additional resources
- throughput increased by 4
- latency per load is the same
72
Pipelining Multiple Loads of Laundry: In Practice

[Figure: the overlapped loads when one step takes longer than the others, so later loads queue up behind it. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

the slowest step decides throughput
73
Pipelining Multiple Loads of Laundry: In Practice

[Figure: the same loads with a second dryer, which removes the bottleneck. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Throughput restored (2 loads per hour) using 2 dryers
74
An Ideal Pipeline

- Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)
- Repetition of identical operations
  - The same operation is repeated on a large number of different inputs
- Repetition of independent operations
  - No dependencies between repeated operations
- Uniformly partitionable suboperations
  - Processing can be evenly divided into uniform-latency suboperations (that do not share resources)
- Fitting examples: automobile assembly line, doing laundry
- What about the instruction processing "cycle"?
75
Ideal Pipelining

- combinational logic (F,D,E,M,W): T ps → BW ≈ (1/T)
- two stages: T/2 ps (F,D,E) + T/2 ps (M,W) → BW ≈ (2/T)
- three stages: T/3 ps (F,D) + T/3 ps (E,M) + T/3 ps (M,W) → BW ≈ (3/T)
76
More Realistic Pipeline: Throughput

- Nonpipelined version with delay T
  - BW = 1/(T+S) where S = latch delay
- k-stage pipelined version
  - BW_k-stage = 1 / (T/k + S)
  - BW_max = 1 / (1 gate delay + S)
77
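Plugging hypothetical numbers into the slide's formulas shows how latch delay S eats into the ideal k-fold speedup (T = 800 ps and S = 50 ps are illustrative values, not from the slide):

```python
# Slide formulas: BW = 1/(T+S) nonpipelined, BW_k = 1/(T/k + S) pipelined,
# where S is the latch (pipeline register) delay.
def bw_nonpipelined(T, S):
    return 1.0 / (T + S)

def bw_pipelined(T, S, k):
    return 1.0 / (T / k + S)

# Hypothetical: T = 800 ps of logic, S = 50 ps of latch delay, k = 5 stages.
speedup = bw_pipelined(800.0, 50.0, 5) / bw_nonpipelined(800.0, 50.0)
print(round(speedup, 2))  # 4.05: latch overhead keeps it below the ideal 5
```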
More Realistic Pipeline: Cost

- Nonpipelined version with combinational cost G
  - Cost = G + L where L = latch cost
- k-stage pipelined version
  - Cost_k-stage = G + L*k
78
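The cost side of the trade-off is the mirror image: pipelining leaves the logic cost G alone but multiplies the latch cost L by k. The gate counts below are hypothetical, chosen only to make the formulas concrete:

```python
# Slide formulas: Cost = G + L nonpipelined, Cost_k = G + L*k for k stages.
def pipeline_cost(G, L, k=1):
    return G + L * k

# Hypothetical: G = 10000 gates of logic, L = 200 gates per latch stage.
print(pipeline_cost(10000, 200, 1))  # 10200
print(pipeline_cost(10000, 200, 5))  # 11000: +800 gates buys ~4x throughput
```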
Pipelining Instruction Processing
79
Remember: The Instruction Processing Cycle

1. Instruction fetch (IF)
2. Instruction decode and register operand fetch (ID/RF)
3. Execute/Evaluate memory address (EX/AG)
4. Memory operand fetch (MEM)
5. Store/writeback result (WB)

(These five stages fold the earlier six steps: Fetch, Decode, Evaluate Address, Fetch Operands, Execute, Store Result.)
80
Remember the Single-Cycle Uarch

[Figure: the single-cycle MIPS datapath and control: PC, instruction memory, register file, ALU, data memory, sign extend and shift-left-2 units, with the control signals RegDst, Jump, Branch, MemRead, MemtoReg, ALUOp, MemWrite, ALUSrc, RegWrite, PCSrc (1 = Jump, 2 = Br Taken). Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED. One combinational delay T per instruction, so BW ≈ (1/T).]
81
Dividing Into Stages

[Figure: the single-cycle datapath partitioned into five stages: IF: instruction fetch (200 ps), ID: instruction decode/register file read (100 ps), EX: execute/address calculation (200 ps), MEM: memory access (200 ps), WB: write back (100 ps). RF write: ignore for now. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

Is this the correct partitioning?
Why not 4 or 6 stages? Why not different boundaries?
82
Instruction Pipeline Throughput

[Figure: timing of three loads (lw $1, 100($0); lw $2, 200($0); lw $3, 300($0)). Nonpipelined, each takes 800 ps (Instruction fetch, Reg, ALU, Data access, Reg) and a new one starts every 800 ps. Pipelined, a new instruction starts every 200 ps.]

5-stage speedup is 4, not 5 as predicted by the ideal model. Why?
83
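Using the slide's numbers (800 ps per instruction nonpipelined; pipelined, one instruction completes every 200 ps after the pipeline fills), a quick calculation shows where the factor of 4 comes from: the 100 ps stages are stretched to the 200 ps clock, so the cycle is 800/4 = 200 ps rather than 800/5 = 160 ps.

```python
# Time for n instructions, with the slide's numbers.
def nonpipelined_ps(n, t=800):
    return n * t

def pipelined_ps(n, t=800, cycle=200):
    return t + (n - 1) * cycle  # fill the pipeline once, then one per cycle

print(nonpipelined_ps(3), pipelined_ps(3))  # 2400 1200
# Steady state: speedup approaches 800/200 = 4, not 5, because the stages
# are unbalanced (every stage runs at the slowest stage's 200 ps).
print(round(nonpipelined_ps(10**6) / pipelined_ps(10**6), 3))  # 4.0
```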
Enabling Pipelined Processing: Pipeline Registers

[Figure: the five-stage datapath (IF: instruction fetch, ID: instruction decode/register file read, EX: execute/address calculation, MEM: memory access, WB: write back) with pipeline registers IF/ID, ID/EX, EX/MEM, and MEM/WB between the stages, latching values such as PCF, IRD, PCD+4, AE, BE, ImmE, PCE+4, nPCM, AoutM, BM, MDRW, and AoutW. Each stage takes T/k ps. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

No resource is used by more than 1 stage!
84
Pipelined Operation Example

[Figure: a single lw instruction flowing through the pipeline one stage per clock cycle: Instruction fetch, Instruction decode, Execution, Memory, Write back, carried forward by the IF/ID, ID/EX, EX/MEM, and MEM/WB registers. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

All instruction classes must follow the same path and timing through the pipeline stages. Any performance impact?
85
Pipelined Operation Example

[Figure: lw $10, 20($1) followed by sub $11, $2, $3 advancing through the pipeline stages over clock cycles 1–6]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
86
Illustrating Pipeline Operation: Operation View

        t0   t1   t2   t3   t4   t5   ...
Inst0   IF   ID   EX   MEM  WB
Inst1        IF   ID   EX   MEM  WB
Inst2             IF   ID   EX   MEM  WB
Inst3                  IF   ID   EX   MEM  ...
Inst4                       IF   ID   EX   ...
87
Illustrating Pipeline Operation: Resource View

      t0   t1   t2   t3   t4   t5   t6   t7   t8   t9   t10
IF    I0   I1   I2   I3   I4   I5   I6   I7   I8   I9   I10
ID         I0   I1   I2   I3   I4   I5   I6   I7   I8   I9
EX              I0   I1   I2   I3   I4   I5   I6   I7   I8
MEM                  I0   I1   I2   I3   I4   I5   I6   I7
WB                        I0   I1   I2   I3   I4   I5   I6
88
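The staircase in the two views above follows one rule: in an ideal pipeline, at cycle t, stage s (0 = IF … 4 = WB) holds instruction t − s. A small Python sketch (illustrative, not part of the lecture) that regenerates the resource view from that rule:

```python
# Regenerate the "resource view" of an ideal 5-stage pipeline.
# At cycle t, stage s holds instruction t - s, if that instruction exists.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def resource_view(num_insts, num_cycles):
    """Return {stage name: list of instruction index (or None) per cycle}."""
    view = {}
    for s, stage in enumerate(STAGES):
        row = []
        for t in range(num_cycles):
            i = t - s  # instruction occupying this stage at cycle t
            row.append(i if 0 <= i < num_insts else None)
        view[stage] = row
    return view

view = resource_view(num_insts=11, num_cycles=11)
# IF holds I0..I10 in cycles t0..t10; WB first holds I0 at t4.
```

The operation view is the transpose of the same table: read one instruction's row instead of one stage's row.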
Control Points in a Pipeline

[Figure: pipelined datapath annotated with its control points — PCSrc, RegWrite, MemRead, MemWrite, MemtoReg, ALUSrc, ALUOp, RegDst, Branch]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]

 Identical set of control points as the single-cycle datapath!!
89
Control Signals in a Pipeline

 For a given instruction
 same control signals as single-cycle, but
 control signals required at different cycles, depending on stage

 decode once using the same logic as single-cycle and buffer control
signals until consumed
[Figure: Control unit in ID generates the EX, M, and WB signal groups, which are carried through the ID/EX, EX/MEM, and MEM/WB pipeline registers]
 or carry relevant “instruction word/field” down the pipeline and
decode locally within each stage (still same logic)

 Which one is better?
90
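A hedged sketch of the first option (decode once in ID, buffer until consumed), using Python dictionaries to stand in for the pipeline registers. The signal names follow the P&H figure; the lw decode values are illustrative, not a complete decoder:

```python
# Sketch: decode all control signals once, then carry them down the
# pipeline registers; each stage consumes its own group (EX, M, WB).

def decode(opcode):
    # Single-cycle-style decode (simplified; only the 'lw' case shown).
    if opcode == "lw":
        return {
            "EX": {"ALUSrc": 1, "ALUOp": 0b00, "RegDst": 0},
            "M":  {"MemRead": 1, "MemWrite": 0, "Branch": 0},
            "WB": {"RegWrite": 1, "MemtoReg": 1},
        }
    raise NotImplementedError(opcode)

# Each pipeline register carries only the not-yet-consumed groups forward.
id_ex  = decode("lw")                          # EX + M + WB groups
ex_mem = {k: id_ex[k] for k in ("M", "WB")}    # EX group consumed in EX stage
mem_wb = {k: ex_mem[k] for k in ("WB",)}       # M group consumed in MEM stage
assert mem_wb["WB"]["RegWrite"] == 1           # WB group consumed last
```

The second option would instead latch the instruction fields in each pipeline register and run the same decode logic locally per stage; the tradeoff is replicated decode logic versus wider pipeline registers.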
Pipelined Control Signals

[Figure: pipelined datapath with the control signals buffered in the ID/EX, EX/MEM, and MEM/WB registers (EX: ALUSrc, ALUOp, RegDst; M: Branch, MemRead, MemWrite; WB: RegWrite, MemtoReg)]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
91
An Ideal Pipeline

 Goal: Increase throughput with little increase in cost
(hardware cost, in case of instruction processing)

 Repetition of identical operations
 The same operation is repeated on a large number of different
inputs
 Repetition of independent operations
 No dependencies between repeated operations
 Uniformly partitionable suboperations
 Processing can be evenly divided into uniform-latency
suboperations (that do not share resources)

 Fitting examples: automobile assembly line, doing laundry
 What about the instruction processing “cycle”?
92
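To make the throughput goal concrete, a back-of-envelope model (assumed parameters, not from the slides): if a single instruction's latency T is split into k uniform stages, N instructions finish in (k + N − 1)·(T/k) cycles-worth of time instead of N·T, so the speedup of an ideal pipeline approaches k for large N:

```python
# Ideal-pipeline speedup model: N instructions, k uniform stages,
# single-instruction latency T (register overheads ignored).

def speedup(N, k, T=1.0):
    single_cycle = N * T               # one instruction completes per T
    pipelined = (k + N - 1) * (T / k)  # fill time, then one result per T/k
    return single_cycle / pipelined

print(speedup(N=5, k=5))          # small N: pipeline fill overhead dominates
print(speedup(N=1_000_000, k=5))  # large N: approaches k = 5
```

Real pipelines fall short of this bound because of latch overhead, imbalanced stages, and stalls, which is exactly the topic of the next slides.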
Instruction Pipeline: Not An Ideal Pipeline

 Identical operations ... NOT!
 different instructions do not need all stages
- Forcing different instructions to go through the same multi-function pipe
 external fragmentation (some pipe stages idle for some instructions)

 Uniform suboperations ... NOT!
 difficult to balance the different pipeline stages
- Not all pipeline stages do the same amount of work
 internal fragmentation (some pipe stages are too fast but must take the
same clock cycle time)

 Independent operations ... NOT!
 instructions are not independent of each other
- Need to detect and resolve inter-instruction dependencies to ensure the
pipeline operates correctly
 Pipeline is not always moving (it stalls)
93
Issues in Pipeline Design

 Balancing work in pipeline stages
 How many stages and what is done in each stage

 Keeping the pipeline correct, moving, and full in the
presence of events that disrupt pipeline flow
 Handling dependences
 Data
 Control
 Handling resource contention
 Handling long-latency (multi-cycle) operations

 Handling exceptions, interrupts

 Advanced: Improving pipeline throughput
 Minimizing stalls
94
Causes of Pipeline Stalls

 Resource contention

 Dependences (between instructions)
 Data
 Control

 Long-latency (multi-cycle) operations
95
Dependences and Their Types

 Also called “dependency” or less desirably “hazard”

 Dependencies dictate ordering requirements between
instructions

 Two types
 Data dependence
 Control dependence

 Resource contention is sometimes called resource
dependence
 However, this is not fundamental to (dictated by) program
semantics, so we will treat it separately
96
Handling Resource Contention

 Happens when instructions in two pipeline stages need the
same resource

 Solution 1: Eliminate the cause of contention
 Duplicate the resource or increase its throughput
 E.g., use separate instruction and data memories (caches)
 E.g., use multiple ports for memory structures

 Solution 2: Detect the resource contention and stall one of
the contending stages
 Which stage do you stall?
 Example: What if you had a single read and write port for the
register file?
97
Data Dependences

 Types of data dependences
 Flow dependence (true data dependence – read after write)
 Output dependence (write after write)
 Anti dependence (write after read)

 Which ones cause stalls in a pipelined machine?
 For all of them, we need to ensure semantics of the program
are correct
 Flow dependences always need to be obeyed because they
constitute true dependence on a value
 Anti and output dependences exist due to limited number of
architectural registers
 They are dependence on a name, not a value
 We will later see what we can do about them
98
Data Dependence Types

Flow dependence       r3 ← r1 op r2     Read-after-Write
                      r5 ← r3 op r4     (RAW)

Anti dependence       r3 ← r1 op r2     Write-after-Read
                      r1 ← r4 op r5     (WAR)

Output dependence     r3 ← r1 op r2     Write-after-Write
                      r5 ← r3 op r4     (WAW)
                      r3 ← r6 op r7
99
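The three cases above can be checked mechanically. An illustrative Python sketch (not lecture code) that classifies the dependences from an earlier instruction to a later one, each given as a (destination, source set) pair:

```python
# Classify register dependences between an earlier and a later instruction.
# An instruction is modeled as (dest register, set of source registers).

def classify(earlier, later):
    deps = []
    e_dst, e_srcs = earlier
    l_dst, l_srcs = later
    if e_dst in l_srcs:
        deps.append("RAW")  # flow: later reads what earlier wrote
    if l_dst in e_srcs:
        deps.append("WAR")  # anti: later overwrites what earlier read
    if l_dst == e_dst:
        deps.append("WAW")  # output: both write the same register
    return deps

# The three examples from the slide (WAW is between the 1st and 3rd instruction):
assert classify(("r3", {"r1", "r2"}), ("r5", {"r3", "r4"})) == ["RAW"]
assert classify(("r3", {"r1", "r2"}), ("r1", {"r4", "r5"})) == ["WAR"]
assert classify(("r3", {"r1", "r2"}), ("r3", {"r6", "r7"})) == ["WAW"]
```

Note that one instruction pair can exhibit several dependence types at once; only the RAW (flow) case carries a value and must always be honored.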
How to Handle Data Dependences

 Anti and output dependences are easier to handle
 write to the destination in one stage and in program order

 Flow dependences are more interesting

 Five fundamental ways of handling flow dependences
100