IBM Introduction to Verification

Hardware Functional
Verification Class
Non Confidential Version
Verification
October, 2000
Contents
Introduction
 Verification "Theory"
 Secret of Verification
 Verification Environment
 Verification Methodology
 Tools
 Future Outlook

Introduction
What is functional verification?
Act of ensuring correctness of the logic design
 Also called:

Simulation
logic verification
What is Verification
[Diagram: the design flow runs from Architecture through High Level Design and Implementation in VHDL to Tape-Out (Fabrication); alongside it run Performance Verification (CPI, Cycle Time), Functional Verification, Timing Verification, and Logic Equivalence Verification.]
Verification Challenge
How do we know that a design is correct?
 How do we know that the design behaves as expected?
 How do we know we have checked everything?
 How do we deal with designs whose size grows faster than tool performance?
 How do we get correct hardware on the first RIT?

Answer: Functional Verification

Also called:
Simulation
logic verification
[Diagram: a testpattern drives both the Design under Test and a Reference Model; the two sets of results are compared.]
Verification is based on:
Testpattern Generation
Reference Model Development
Results Checking
Why do functional verification?

Product time-to-market
hardware turn-around time
volume of "bugs"

Development costs
"Early User Hardware" (EUH)
Some lingo
Facilities: a general term for named wires (or signals) and latches. Facilities feed gates (and/or/nand/nor/invert, etc.), which feed other facilities.
 EDA: Electronic Design Automation -- tool vendors. IBM has an internal EDA organization that supplies tools. We also procure tools from external companies.

More lingo
Behavioral: Code written to perform the function of logic on the interface of the design-under-test
 Macro: 1. A behavioral 2. A piece of logic
 Driver: Code written to manipulate the inputs of the design-under-test. The driver understands the interface protocols.
 Checker: Code written to verify the outputs of the design-under-test. A checker may have some knowledge of what the driver has done. A checker must also verify interface protocol compliance.

Still more lingo
Snoop/Monitor: Code that watches interfaces or internal signals to help the checkers perform correctly. Also used to help drivers be more devious.
 Architecture: Design criteria as seen by the customer. The design's architecture is specified in documents (e.g. POP, Book 4, Infiniband, etc.), and the design must be compliant with this specification.
 Microarchitecture: The design's implementation. Microarchitecture refers to the constructs that are used in the design, such as pipelines, caches, etc.

Verification "Theory"
Verification Cycle
Create Testplan
Develop Environment
Debug Hardware (simulation)
Regression
Fabrication
Hardware Debug (test floor)
Escape Analysis
Verification Testplan

Team leaders work with design leaders to create a
verification testplan. The testplan includes:
Schedule
Specific tests and methods by simulation level
Required tools
Input criteria
Completion criteria
What is expected to be found with each test/level
What's not covered by each test/level
Hierarchical Design
System
Chip
...
Unit
Macro
Allows the design team to break the system down into logical and comprehensible components.
Also allows for repeatable components.
Hierarchical design

Only lowest level macros contain latches and
combinatorial logic (gates)
Work gets done at these levels

All upper layers contain wiring connections only
Off chip connections are C4 pins
Current Practices for Verifying a
System

Designer Level sim
Verification of a macro (or a few small macros)

Unit Level sim
Verification of a group of macros

Element Level sim
Verification of an entire logical function such as a processor, storage controller, or I/O control
Currently synonymous with a chip

System Level sim
Multiple chip verification
Often utilizes a mini operating system
The Black Box
[Diagram: Inputs → some piece of logic design written in VHDL → Outputs]
The black box has inputs, outputs, and performs some function.
 The function may be well documented...or not.
 To verify a black box, you need to understand the function and be able to predict the outputs based on the inputs.
 The black box can be a full system, a chip, a unit of a chip, or a single macro.

White box/Grey box

White box verification means that the internal
facilities are visible and utilized by the testcase
driver.
Examples: 0-in (vendor) methods

Grey box verification means that a limited
number of facilities are utilized in a mostly black
box environment.
Example: Most environments! Prediction of correct results on the interface is occasionally impossible without viewing an internal signal.
Perfect Verification
To fully verify a black box, you must show that the
logic works correctly for all combinations of inputs.
This entails:
Driving all permutations on the input lines
Checking for proper results in all cases
Full verification is not practical on large pieces of
designs...but the principles are valid across all
verification.
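To make the principle concrete, here is a tiny, self-contained sketch (an invented example, not course material): exhaustively driving every input permutation of a 4-bit adder model and checking each result against an independent prediction. This is only feasible because the block is trivially small.

```cpp
#include <cassert>
#include <cstdio>

// Stand-in for the design under test: a 4-bit adder producing a 5-bit sum.
// In a real flow this would be the compiled HDL model driven through the simulator API.
unsigned dut_add4(unsigned a, unsigned b) { return (a + b) & 0x1F; }

// Independent prediction of the expected result (the "reference").
unsigned expected_add4(unsigned a, unsigned b) { return (a + b) & 0x1F; }

int main() {
    // Perfect verification of this black box: drive all 16 x 16 input permutations
    // and check the output in every case.
    for (unsigned a = 0; a < 16; ++a)
        for (unsigned b = 0; b < 16; ++b)
            assert(dut_add4(a, b) == expected_add4(a, b));
    std::puts("all 256 input combinations checked");
    return 0;
}
```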
In an Ideal World....

Every macro would have perfect verification performed
All permutations would be verified based on legal inputs
All outputs checked on the small chunks of the design

Unit, chip, and system level would then only need to verify interconnections
Ensure that designers used correct Input/Output
assumptions and protocols
Reality Check

Macro verification across an entire system is not feasible
for the business
There may be over 400 macros on a chip, which would
require about 200 verification engineers!
That number of skilled verification engineers does not exist
The business can't support the development expense

Verification Leaders must make reasonable trade-offs
Concentrate on Unit level
Designer level on riskiest macros
Typical Bug rates per level
Tape-Out Criteria

Checklist of items that must be completed before RIT
Verification items, along with physical/circuit design criteria, etc.
Verification criteria are based on
– Function tested
– Bug rates
– Coverage data
– Clean regression
Escape Analysis
Escape analysis is a critical part of the
verification process
 Important data:

Fully understand the bug! Reproduce it in sim if possible
– Lack of a repro means the fix cannot be verified
– Could misunderstand the bug
Why did the bug escape simulation?
Process update to avoid similar escapes in the future (plug the hole!)
Escape Analysis: Classification

We currently classify all escapes under two views
Verification view
What areas or complexities allowed the escape?
– Cache set-up, cycle dependency, configuration dependency, sequence complexity, and expected results
Design view
– What was wrong with the logic?
– Logic hole, data/logic out of synch, bad control reset, wrong spec, bad logic
Cost of Bugs Over Time

The longer a bug goes undetected, the more expensive the fix
A bug found early (designer sim) has little cost
Finding a bug at chip or system sim has moderate cost
– Requires more debug time and problem isolation
– Could require a new algorithm, which could affect the schedule and cause rework of the physical design
Finding a bug in System Test (test floor) requires a new hardware RIT
Finding a bug in the customer's environment can cost hundreds of millions in hardware and brand image
[Chart: the cost ($) of fixing a bug rises steeply over time]
Secret of Verification
(Verification Mindset)
The Art of Verification

Two simple questions
Am I driving all possible input
scenarios?
How will I know when it fails?
Three Simulation Commandments
Thou shalt stress
thine logic harder
than it will ever be
stressed again
Thou shalt place
checking upon all
things
Thou shalt not move
onto a higher platform
until the bug rate has
dropped off
Need for Independent Verification

The verification engineer should not be an
individual who participated in logic design of the
DUT
Blinders: If a designer didn't think of a failing scenario when
creating the logic, how will he/she create a test for that case?
However, a designer should do some verification on his/her design before exposing it to the verification team

Independent Verification Engineer needs to
understand the intended function and the
interface protocols, but not necessarily the
implementation
Verification Do's and Don'ts

DO:
Talk to designers about the function and
understand the design first, but then
Try to think of situations the designer might have
missed
Focus on exotic scenarios and situations
– e.g. try to fill all queues even though the design was done in a way intended to avoid any buffer-full conditions
Focus on multiple events at the same time
Verification Do's and Don'ts
(continued)
Try everything that is not explicitly forbidden
Spend time thinking about all the pieces that you
need to verify
Talk to "other" designers about the signals that
interface to your design-under-test

Don't:
Rely on the designer's word for input/output
specification
Allow RIT Criteria to bend for sake of schedule
Typical Verification diagram
[Diagram: a typical verification environment around a DUT (bridge chip) on a bus. A generator creates packets (gen packet → drive packet → post packet); a checking framework with a scoreboard holds packet structs (header, payload) and performs translate/predict/check steps against the DUT outputs. Coverage data is collected on stimulus (types, latency, address, sequences), device state (FSMs, conditions, transactions, transitions), and conversations (sequence, packet, protocol).]
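A minimal sketch of the scoreboard idea from the diagram above, in C++. The class and struct names (Packet, Scoreboard) are invented for illustration; a real environment would also translate and predict expected values according to the DUT's function.

```cpp
#include <cstdint>
#include <deque>
#include <stdexcept>
#include <vector>

// Hypothetical packet structure matching the "Struct: Header, Payload" of the diagram.
struct Packet {
    uint32_t header;
    std::vector<uint8_t> payload;
    bool operator==(const Packet& o) const { return header == o.header && payload == o.payload; }
};

// Scoreboard sketch: the driver posts each packet it expects the DUT to emit
// (after any translate/predict step); the checker pops and compares on DUT output.
class Scoreboard {
    std::deque<Packet> expected;
public:
    void post(const Packet& p) { expected.push_back(p); }    // called by the driver side
    void check(const Packet& observed) {                     // called by the checker side
        if (expected.empty()) throw std::runtime_error("unexpected packet from DUT");
        if (!(expected.front() == observed)) throw std::runtime_error("packet miscompare");
        expected.pop_front();
    }
    bool drained() const { return expected.empty(); }        // end-of-test check
};
```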
The Line Delete Escape
Escape: A problem that is found on the test floor and therefore has escaped the verification process
 The Line Delete escape was a problem on the
H2 machine

S/390 Bipolar, 1991
Escape shows example of how a verification
engineer needs to think
The Line Delete Escape
(pg 2)

Line Delete is a method of circumventing bad cells of a large memory array or cache array
An array mapping allows for removal of defective cells from the usable space
The Line Delete Escape
(pg 3)
If a line in an array has multiple bad bits (a single bad bit usually goes unnoticed due to ECC - error correction codes), the line can be taken "out of service".
In the array pictured, row 05 has a bad congruence class entry.
[Diagram: the cache array, with row 05 containing the bad entry]
The Line Delete Escape
(pg 4)
[Diagram: Data in → ECC Logic → array (row 05) → ECC Logic → Data out, with UE counters per row and congruence class]
Data enters ECC creation logic prior to storage into the array. When read out, the ECC logic corrects single-bit errors, tags Uncorrectable Errors (UEs), and increments a counter corresponding to the row and congruence class.
The Line Delete Escape
(pg 5)
When a preset threshold of UEs is detected from an array cell, the service controller is informed that a line delete operation is needed.
[Diagram: the UE counters feed a threshold check, which notifies the Service Controller]
The Line Delete Escape
(pg 6)
[Diagram: the Service Controller writes the Storage Controller's configuration registers, which engage the line delete control on the array]
The Service Controller can update the configuration registers, ordering a line delete to occur. When the configuration registers are written, the line delete controls are engaged and writes to row 5, congruence class 'C' cease.
However, because three other cells remain good in this congruence class, the sole repercussion of the line delete is a slight decline in performance.
The Line Delete Escape
(pg 7)
How would we test this logic?
What must occur in the testcase?
What checking must we implement?
[Diagram: the same ECC, counter, threshold, Service Controller, and line delete control structure as on the previous pages]
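One possible answer, expressed as a small, self-contained sketch. The testcase must create enough UEs in one row/congruence class to cross the threshold, let the service controller order the line delete, and then check both that writes to the deleted line cease and that the surviving cells remain usable. The model below (LineDeleteModel and all its names) is invented purely to make those steps concrete; it is not the real design or environment.

```cpp
#include <array>
#include <cassert>

// Hypothetical, highly simplified model of the line delete mechanism, used only to
// make the testcase steps concrete. Names, threshold, and sizes are invented.
struct CacheRow {
    std::array<bool, 4> cell_deleted{};      // one flag per congruence class entry
    std::array<int, 4>  ue_count{};          // UE counters per entry
};

struct LineDeleteModel {
    static constexpr int kThreshold = 3;
    std::array<CacheRow, 16> rows{};
    bool service_controller_notified = false;

    void report_ue(int row, int cc) {        // ECC logic detected an uncorrectable error
        if (++rows[row].ue_count[cc] >= kThreshold) service_controller_notified = true;
    }
    void service_controller_line_delete(int row, int cc) {  // config register write
        rows[row].cell_deleted[cc] = true;
    }
    bool write_allowed(int row, int cc) const { return !rows[row].cell_deleted[cc]; }
};

int main() {
    LineDeleteModel m;
    const int row = 5, cc = 2;

    // 1. The testcase must create repeated UEs in one row/congruence class.
    for (int i = 0; i < LineDeleteModel::kThreshold; ++i) m.report_ue(row, cc);

    // 2. Check that crossing the threshold notifies the service controller,
    //    and that the resulting config write engages the line delete control.
    assert(m.service_controller_notified);
    m.service_controller_line_delete(row, cc);

    // 3. Check that writes to the deleted cell cease, while the other entries
    //    of the congruence class remain usable.
    assert(!m.write_allowed(row, cc));
    for (int other = 0; other < 4; ++other)
        if (other != cc) assert(m.write_allowed(row, other));
    return 0;
}
```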
Verification Environment
General Simulation Environment
[Diagram: the general simulation environment]
A testcase (C/C++, HDL testbenches, Specman e, Synopsys' VERA) passes through a compiler (not always required) to become the testcase driver.
The design source (VHDL, Verilog) is built into a model by an event simulation compiler, a cycle simulation compiler, ..., or an emulator compiler.
Environment data supplies initialization and run-time requirements.
The simulator (event simulator, cycle simulator, or emulator) runs the model under control of the testcase driver and produces the output: testcase results.
[Diagram: simulation roles and activities (use-case style). Roles: Logic Designer, Environment Developer, Verification Engineer, Model Builder, Project Manager. Activities: run foreground simulation, run background simulation, configure environment, release environment, debug fails, debug environment, view traces, monitor batch simulation, specify batch simulation, transfer testcases, answer defects, redirect defects, release models, regress fails, create defects, verify defect fixes, define project goals, report project status.]
Types of Simulators

Event Simulators
Model Technology's (MTI) VSIM is most common
capable of simulating analog logic and delays

Cycle Simulators
For clocked, digital designs only
The model is compiled and signals are "ordered". Infinite loops are flagged during compile as "signal ordering deadlocks".
Each signal is evaluated once per cycle, and latches are set for the next cycle based on the final signal value.
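A toy illustration of the cycle-simulation idea just described (an invented example, not any IBM tool): combinational signals are evaluated once per cycle in a pre-ordered sequence, then latches capture their next-cycle values.

```cpp
#include <cstdio>

// Toy cycle simulation of a 2-bit counter: combinational signals are evaluated
// once per cycle in dependency ("ordered") sequence, then the latches are updated.
int main() {
    bool q0 = false, q1 = false;             // latches (state)
    for (int cycle = 0; cycle < 8; ++cycle) {
        // Combinational evaluation, in signal order (each signal computed once):
        bool d0 = !q0;                       // bit 0 toggles every cycle
        bool d1 = q1 ^ q0;                   // bit 1 toggles when bit 0 is 1
        std::printf("cycle %d: q1q0 = %d%d\n", cycle, q1, q0);
        // Latch update: state for the next cycle is based on the final signal values.
        q0 = d0;
        q1 = d1;
    }
    return 0;
}
```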
Types of Simulators
(con't)

Simulation Farm
Multiple computers are used in parallel for simulation

Acceleration Engines/Emulators
Quickturn, IKOS, AXIS.....
Custom designed for simulation speed (parallelized)
Accel vs. Emulation
– True emulation connects to some real, in-line hardware
– Real software eliminates the need for special testcases

Influencing factors:
Hardware platform (frequency, memory, ...)
Model content (size, activity, ...)
Interaction with the environment (model load time, testpattern, network utilization)
Relative speed of different simulators:
Event simulator: 1
Cycle simulator: 20
Event-driven cycle simulator: 50
Acceleration: 1,000
Emulation: 100,000
Speed - What is fast?

Cycle sim for one processor chip
– 1 sec realtime = 6 months
Sim farm with a few hundred computers
– 1 sec realtime = ~1 day
Accelerator/Emulator
– 1 sec realtime = ~1 hour
Basic Testcase/Model Interface: Clocking

Clocking cycles
A simulator has the concept of time.
– Event sim uses the smallest increment of time in the target technology
– All other sim environments use a single cycle
A testcase controls the clocking of cycles (movement of time)
– All APIs include a clock statement
– Example: "Clock(n)", where n is the increment to clock (usually '1')
[Timeline: Cycle 0, Cycle 1, Cycle 2, ..., Cycle n]
Basic Testcase/Model Interface: Setfac/Putfac

Setting facilities
A simulator API allows you to alter the value of facilities
Used most often for driving inputs
Can be used to alter internal latches or signals
Can set a single-bit or multi-bit facility
Values can be 0, 1, or possibly X, high impedance, etc.
Example syntax: "Setfac facility_name value"
– Setfac address_bus(0:31) "0F3D7249"x
Basic Testcase/Model Interface: Getfac

Reading facility values
A simulator API allows you to read the value of a facility
Used most often for checking outputs
Can be used to read internal latches or signals
Example syntax: "Getfac facility_name varname"
– Getfac adder_sum checksum
Basic Testcase/Model Interface: Putting it
together

Clocking, setfacs, and getfacs occur at set times during a cycle
Setting of facilities must be done at the beginning of the cycle
Getfacs must occur at the end of a cycle
In between, control goes to the simulation engine, where the logic under test is "run" (evaluated)
Setfac address_bus(0:31) "0F3D7249"x   (start of cycle)
Getfac adder_sum checksum   (end of cycle)
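A short sketch of how these calls fit together in a C/C++ testcase. The API names (setfac, getfac, clock_cycles) and the facility names are hypothetical stand-ins for whatever the actual simulation engine provides; they are stubbed here with a tiny facility map so the sketch is self-contained.

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>

// Hypothetical simulator API -- stand-ins for the real engine's setfac/getfac/clock calls.
// The stubs keep a tiny facility map and model a trivial adder, purely for illustration.
static std::map<std::string, uint32_t> facilities;
void setfac(const std::string& name, uint32_t value) { facilities[name] = value; }
uint32_t getfac(const std::string& name)             { return facilities[name]; }
void clock_cycles(int /*n*/) {
    // The real engine would evaluate the logic here; the stub just computes a sum.
    facilities["adder_sum"] = facilities["address_bus(0:31)"] + facilities["offset(0:31)"];
}

int main() {
    // Beginning of the cycle: drive the inputs (hypothetical facility names).
    setfac("address_bus(0:31)", 0x0F3D7249u);
    setfac("offset(0:31)",      0x00000010u);

    clock_cycles(1);                       // hand control to the simulation engine

    // End of the cycle: read the output and check it against the expected value.
    uint32_t checksum = getfac("adder_sum");
    if (checksum != 0x0F3D7249u + 0x00000010u)
        std::printf("FAIL: adder_sum = %08X\n", checksum);
    else
        std::printf("PASS\n");
    return 0;
}
```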
Running Simulation
Basic steps:
1. Create a testcase
2. Build a model
– Different model build programs exist for different simulation engines
3. Run the simulation engine
4. Check results. If the testcase fails:
– Do preliminary debug (create an AET, view scans)
– Get a fix from the designer and repeat from step 2
Calculator Design

Calculator has 4 functions:
Add
Subtract
Shift left
Shift right

Calculator can handle 4 requests in parallel
All 4 requestors use separate input signals
All requestors have equal priority
Calculator design

Input/Output description
[Diagram: calc_top ports]
Inputs: c_clk, reset<0:7>, req1_cmd_in<0:3>, req1_data_in<0:31>, req2_cmd_in<0:3>, req2_data_in<0:31>, req3_cmd_in<0:3>, req3_data_in<0:31>, req4_cmd_in<0:3>, req4_data_in<0:31>
Outputs: out_resp1<0:1>, out_data1<0:31>, out_resp2<0:1>, out_data2<0:31>, out_resp3<0:1>, out_data3<0:31>, out_resp4<0:1>, out_data4<0:31>
Calculator Design

I/O Description
Input commands:
– 0 - No-op
– 1 - Add operand1 and operand2
– 2 - Subtract operand2 from operand1
– 5 - Shift left operand1 by operand2 places
– 6 - Shift right operand1 by operand2 places
Input data:
– Operand1 data arrives with the command
– Operand2 data arrives on the following cycle
Calculator Design

Outputs
Response line definition:
– 0 - No response
– 1 - Successful operation completion
– 2 - Invalid command or overflow/underflow error
– 3 - Internal error
Data:
– Valid result data on the output lines accompanies the response (same cycle)
Calculator Design

Other information
Clocking:
– When using a cycle simulator, the clock should be held high (c_clk in the calculator model)
– The clock should be toggled when using an event simulator
Calculator priority logic:
– Priority logic works on a first-come, first-served algorithm
– Priority logic allows one add or subtract at a time and one shift operation at a time
Calculator Design

Input/Output timing
[Timing diagram: req1_cmd_in<0:3>, req1_data_in<0:31>, out_resp1<0:1>, out_data1<0:31>]
Calculator Exercise part 1

Build the model
– Make a directory: mkdir calc_test; cd calc_test
– ../calc_build
Run the model
– calc_run
Check the AET
– scope tool
– Use calc4.wave for input/output facility names
Calculator Exercise Part 2

There are 5+ bugs in the design!
How many can you find by altering the simple
testcase?
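As a hint at what a simple directed testcase might look like, here is a hypothetical sketch reusing the setfac/getfac/clock_cycles stand-ins from the earlier sketch. The facility names follow the port list above, but the reset sequence, cycle counts, and API are assumptions for illustration, not the real exercise environment.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical simulator API, as in the earlier sketch (supplied by the environment).
void setfac(const std::string& facility, uint32_t value);
uint32_t getfac(const std::string& facility);
void clock_cycles(int n);

// Directed testcase sketch: issue an ADD on requestor 1 and check the response.
void calc_add_test() {
    setfac("reset(0:7)", 0xFFu);                    // apply reset for a few cycles (assumed)
    clock_cycles(4);
    setfac("reset(0:7)", 0x00u);

    setfac("req1_cmd_in(0:3)", 0x1u);               // cycle 1: command = add, with operand1
    setfac("req1_data_in(0:31)", 0x00000004u);
    clock_cycles(1);
    setfac("req1_cmd_in(0:3)", 0x0u);               // cycle 2: back to no-op, operand2 follows
    setfac("req1_data_in(0:31)", 0x00000005u);
    clock_cycles(1);

    for (int i = 0; i < 10 && getfac("out_resp1(0:1)") == 0; ++i)
        clock_cycles(1);                            // wait (bounded) for a response

    // Response 1 (success) and result data arrive on the same cycle.
    bool pass = getfac("out_resp1(0:1)") == 1 && getfac("out_data1(0:31)") == 9;
    std::printf(pass ? "calc add test PASS\n" : "calc add test FAIL\n");
}
```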
Verification Methodology
Verification Methodology Evolution
[Diagram: evolution over time, with more stress per cycle at each stage]
Hand-generated, hand-checked, hardcoded test patterns
Hand-generated, self-checking, hardcoded testcases (AVPs, IVPs)
Tool-generated, self-checking, hardcoded testcases from testcase generators (AVPGEN, GENIE/GENESYS, SAK)
Testcase drivers: interactive on-the-fly generation, on-the-fly checking (random SMP, C/C++)
Coverage tools
Formal Verification
Reference Model
Abstraction of the design implementation
 Could be a
– complete behavioral description of the design using a standard programming language
– formal specification using mathematical languages
– complete state transition graph
– detailed testplan in English for handwritten testpatterns
– part of a random driver or checker
– ...
Behavioral Design

One of the most difficult concepts for new verification engineers is that your behavioral can "cheat".
The behavioral only needs to make the design-under-test think that the real logic is hanging off its interface
The behavioral can:
– predetermine answers
– return random data
– look ahead in time
Behavioral Design

Cheating examples
Return random data in memory modeling
– A memory controller does not know what data was stored into the memory cards (behavioral). Therefore, upon fetching the data back, the memory behavioral can return random data.
Branch prediction
– A behavioral can look ahead in the instruction stream and know which way a branch will be resolved. This can halve the required work of a behavioral!
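A small sketch of the first cheat: a memory-card behavioral that invents random data for fetches to addresses it has never seen, while recording whatever it returns so the checker still sees consistent values. The class and function names are invented for illustration.

```cpp
#include <cstdint>
#include <random>
#include <unordered_map>

// Hypothetical memory-card behavioral that "cheats": for an address the controller
// never stored to, it simply makes up random data -- but remembers it, so later
// fetches (and the checker) see consistent values.
class MemoryBehavioral {
    std::unordered_map<uint64_t, uint32_t> contents;
    std::mt19937 rng{12345};                       // fixed seed so fails are reproducible
public:
    void store(uint64_t addr, uint32_t data) { contents[addr] = data; }
    uint32_t fetch(uint64_t addr) {
        auto it = contents.find(addr);
        if (it == contents.end())                  // never written: invent the data
            it = contents.emplace(addr, rng()).first;
        return it->second;                         // the checker can query the same value
    }
};
```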
Hardcoded Testcases and IVPs

IVP (Implementation Verification Program)
A testcase that is written to verify a specific scenario
Appropriate usage:
– during initial verification
– as specified by the designer/verification engineer to ensure that important or hard-to-reach scenarios are verified
Other hardcoded testcases are done for simple designs
 Hardcoded indicates a single scenario
Testbenches
Testbench is a generic term that is used differently across locations/teams/industry
 It always refers to a testcase
 Most commonly (and appropriately), a testbench refers to code written in the design language (e.g. VHDL) at the top level of the hierarchy. The testbench is often simple, but may have some elements of randomness.
Testcase Generators
Software that creates multiple testcases
 Parameters control the generator in order to focus the testcases on specific architectural/microarchitectural components.
 Ex: If branch-intensive testcases are desired, the parameters would be set to increase the probability of creating branch instructions (see the sketch below).
 Can create "tons" of testcases which have the desired level of randomness.
– The broad-brush approach complements the IVP plan
– Randomness can be in data or control
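A toy sketch of the parameter idea: an invented instruction generator where a weight table biases how often each instruction type is created (e.g., raising the branch weight for branch-intensive tests). Names, weights, and structure are illustrative only.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Toy testcase generator: a parameter (weight) table biases which instruction
// types are generated. Raising the "branch" weight yields branch-intensive tests.
int main() {
    const std::vector<const char*> kinds   = {"add", "load", "store", "branch"};
    std::vector<double>            weights = {3.0, 3.0, 3.0, 1.0};   // default mix
    weights[3] = 10.0;                                                // parameter: favor branches

    std::mt19937 rng(2000);                                           // seed -> reproducible testcase
    std::discrete_distribution<int> pick(weights.begin(), weights.end());

    for (int i = 0; i < 20; ++i)                                      // generate a tiny instruction stream
        std::printf("%s\n", kinds[pick(rng)]);
    return 0;
}
```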
Random Environments

"Random" is used to describe many
environments
Some teams call testcase generators "random"
(they have randomness in the generation proces
The two major differentiators are:
– Pre-determined vs. on-the-fly generation
– Post processing vs. on-the-fly checking
Random Drivers/checkers

The most robust random environments use on-the-fly drivers and on-the-fly checking
On-the-fly drivers give more flexibility and more control, along with the capability to stress the logic to the microarchitecture's limit
On-the-fly checkers flag interim errors. The testcase is stopped upon hitting an error.

However, the overall quality is determined by how good the verification engineer is! If scenarios aren't driven or checks are missing, the environment is incomplete!
Random Drivers/Checkers

Costs of an optimal random environment
– Code intensive
– Needs an experienced verification engineer to oversee the effort and ensure quality
Benefits of an optimal random environment
– More stress on the logic than any other environment, including the real hardware
– It will find nearly all of the most devious bugs and all of the easy ones.
Random Drivers/Checkers

Sometimes too much randomness will prevent drivers from uncovering design flaws.
"Un-randomizing the random drivers" needs to be built into the environment, depending upon the design
– Hangs due to looping
– Low-activity scenarios
"Micro-modes" can be built into the drivers
– Allow the user to drive very specific scenarios
Random Example: Cache model
Cache coherency is a problem for multiprocessor designs
 The cache must keep track of ownership and data on a predetermined boundary (quad-word, line, double-line, etc.)
Cache Coherency example

A high-stress environment requires limiting the size of the data used in the testcase
A limited number of congruence classes are chosen at the start of the testcase to ensure stress. Only these addresses will be used by the drivers to generate requests.
[Diagram: cache array with the chosen congruence classes highlighted]
Cache Coherency example

Multiprocessor Environment
[Diagram: proc1 ... procN macros and an I/O macro drive the Storage Controller and its Cache; a checking program observes the environment]
Cache coherency example

Driver algorithm
[Flowchart: Start → for each cycle, check whether the protocol allows a command to be sent; if yes, choose a command using the parm table, create random data as required, request an address from the address space, and send the command; if no, wait for the next cycle.]
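A compact sketch of that driver loop in C++. The protocol check, parameter table, and address pool are invented stand-ins to show the shape of the algorithm, not the real environment.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Illustrative on-the-fly driver loop following the flowchart above.
// Command, send_command(), and the weights are invented stand-ins.
struct Command { int type; uint64_t addr; uint32_t data; };

class CacheDriver {
    std::mt19937 rng;
    std::vector<uint64_t> addr_pool;                          // limited congruence classes (see earlier slide)
    std::discrete_distribution<int> cmd_pick{ {4.0, 4.0, 1.0, 1.0} };  // parm table: read, write, flush, inject
public:
    CacheDriver(uint32_t seed, std::vector<uint64_t> pool)
        : rng(seed), addr_pool(std::move(pool)) {}

    // Called once per simulation cycle by the environment.
    void per_cycle(bool protocol_allows_command) {
        if (!protocol_allows_command) return;                 // "N" branch of the flowchart
        Command c;
        c.type = cmd_pick(rng);                               // choose command using the parm table
        c.data = rng();                                       // create random data as required
        c.addr = addr_pool[rng() % addr_pool.size()];         // request address from the address space
        send_command(c);                                      // send the command onto the interface
    }
private:
    void send_command(const Command&) { /* drive the DUT inputs, e.g. via setfac calls */ }
};
```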
Cache Coherency example

This environment drives more stress than the real processors would in a system environment
– Microarchitectural-level stimulus on the interfaces vs. an architectural instruction stream
– Real processors and I/O add delays based on their own microarchitecture
Random Seeds
The testcase seed is randomly chosen at the start of simulation
 The initial seed is used to seed the decision-making driver logic
 Watch out for seed synchronization across drivers
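A small sketch of one way to handle this (an assumption about practice, not a prescribed scheme): derive each driver's seed deterministically from the single testcase seed, so runs are reproducible but the drivers do not make synchronized "random" decisions.

```cpp
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Sketch: one testcase seed, logged for reproducibility, from which each driver
// derives its own distinct stream so the drivers don't march in lockstep.
int main() {
    std::random_device rd;
    uint32_t testcase_seed = rd();                        // chosen at the start of simulation
    std::printf("testcase seed = %u\n", testcase_seed);   // log it so a fail can be re-run

    const int num_drivers = 4;
    std::vector<std::mt19937> driver_rng;
    for (int i = 0; i < num_drivers; ++i)
        driver_rng.emplace_back(testcase_seed ^ (0x9E3779B9u * (i + 1)));  // per-driver seed

    // Each driver now draws from its own stream; identical streams would mean
    // every driver makes the same "random" choice on the same cycle.
    for (int i = 0; i < num_drivers; ++i)
        std::printf("driver %d first value: %u\n", i, driver_rng[i]());
    return 0;
}
```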

Formal Verification
Formal verification employs mathematical algorithms to prove correctness or compliance
 Formal applications fall under the following:
– Model checking (used for logic verification)
– Equivalence checking (ex: VHDL vs. synthesis output)
– Theorem proving
– Symbolic trajectory evaluation (STE)
Simulation vs. Model Checking

If the overall state space of a design is the universe, then
– Model checking is like a light bulb, and
– Simulation is like a laser beam
Formal Verification-Model Checking

IBM's "Rulebase" is used for Model Checking
Checks properties against the logic
– Uses
EDL and Sugar to express environment
and properties
Limit of about 300 latches after reduction
– State space size explosion is biggest challeng
in FV
Formal Verification - Model Checking
[Figure: the Rulebase tool]
Coverage

Coverage techniques give feedback on how much the testcase or driver is exercising the logic
Coverage makes no claim about proper checking

All coverage techniques monitor the design during simulation and collect information about desired facilities or relationships between facilities
Coverage Goals
Measure the "quality" of a set of tests
 Supplement test specifications by pointing to
untested areas
 Help create regression suites
 Provide a stopping criteria for unit testing
 Better understanding of the design

Coverage Techniques

People use coverage for multiple reasons
– A designer wants to know how much of his/her macro is exercised
– A unit/chip leader wants to know if relationships between state machines/microarchitectural components have been exercised
– The sim team wants to know if areas of past escapes are being tested
– The program manager wants feedback on the overall quality of the verification effort
– The sim team can use coverage to tune regression buckets
Coverage Techniques

Coverage methods include:
Line-by-line coverage
– Has each line of VHDL been exercised? (if/then/else, cases, states, etc.)
Microarchitectural cross products
– Allow for multiple-cycle relationships
– Coverage models can be large or small
Functional Coverage
Coverage is based on the functionality of the
design
 Coverage models are specific to a given design
 Models cover

The inputs and the outputs
Internal states
Scenarios
Parallel properties
Bug Models
Interdependency-Architectural Level
The model:
We want to test all dependency types of a resource (register) relating to all instructions
The attributes
– I - Instruction: add, add., sub, sub., ...
– R - Register (resource): G1, G2, ...
– DT - Dependency Type: WW, WR, RW, RR, and None
The coverage task semantics
– A coverage task is a quadruplet <Ij, Ik, Rl, DT>, where Instruction Ik follows Instruction Ij and both share Resource Rl with Dependency Type DT.
Interdependency-Architectural Level (2)

Additional semantics
– The distance between the instructions is no more than 5
– The first instruction is at least the 6th
Restrictions
– Not all combinations are valid
– Fixed-point instructions cannot share FP registers
Interdependency-Architectural Level (3)
Size and grouping:
Original size: ~400 x 400 x 100 x 5 = 8x10^7
Let the instructions be divided into disjoint groups I1 ... In
 Let the resources be divided into disjoint groups R1 ... Rk
After grouping: ~60 x 60 x 10 x 5 = 180,000
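A small sketch of how such a cross-product coverage model might be recorded during simulation: every observed <Ij, Ik, Rl, DT> task is inserted into a set, and the holes are whatever legal tuples never appear. The group names and the trace contents below are invented for illustration.

```cpp
#include <cstdio>
#include <set>
#include <string>
#include <tuple>
#include <vector>

// Sketch of a cross-product coverage model: record which <Ij, Ik, Rl, DT> tasks
// were observed in a trace. Group names and the trace contents are invented.
using Task = std::tuple<std::string, std::string, std::string, std::string>;

int main() {
    // A few trace events: (instruction group, resource group, access type) per instruction.
    struct Event { std::string igroup, rgroup; char access; };  // access: 'R' or 'W'
    std::vector<Event> trace = {
        {"add", "GPR", 'W'}, {"sub", "GPR", 'R'}, {"load", "GPR", 'W'}, {"branch", "CR", 'R'},
    };

    std::set<Task> covered;
    // Pair each instruction with the ones that follow within a distance of 5.
    for (size_t j = 0; j < trace.size(); ++j)
        for (size_t k = j + 1; k < trace.size() && k - j <= 5; ++k)
            if (trace[j].rgroup == trace[k].rgroup) {                 // both share the resource
                std::string dt = {trace[j].access, trace[k].access};  // e.g. "WR"
                covered.insert({trace[j].igroup, trace[k].igroup, trace[j].rgroup, dt});
            }

    std::printf("covered %zu coverage tasks\n", covered.size());
    for (const auto& [i1, i2, r, dt] : covered)
        std::printf("<%s, %s, %s, %s>\n", i1.c_str(), i2.c_str(), r.c_str(), dt.c_str());
    return 0;
}
```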
The Coverage Process

Defining the domains of coverage
Where do we want to measure coverage
What attributes (variables) to put in the trace

Defining models
Defining tuples and the semantics on the tuples
Restrictions on legal tasks

Collecting data
Inserting traces into the database
Processing the traces to measure coverage

Coverage analysis and feedback
Monitoring progress and detecting holes
Refining the coverage models
Generating regression suites
Coverage Model Hints
Look for the most complex, error-prone parts of the application
 Create the coverage models at high-level design time
– Improves the understanding of the design
– Automates some of the test plan
 Create the coverage model hierarchically
– Start with small, simple models
– Combine the models to create larger models
 Before you measure coverage, check that your rules are correct on some sample tests.
 Use the database to "fish" for hard-to-create conditions.
– Try to generalize as much as possible from the data: "X was never 3" is much more useful than "the task (3,5,1,2,2,2,4,5) was never covered."
Future Coverage Usage

One area of research is automated coverage-directed feedback
If testcases/drivers can be automatically tuned to go after more diverse scenarios based on knowledge of what has already been covered, then bugs can be encountered much sooner in the design cycle
The difficulty lies in the expert system knowing how to alter the inputs to raise the level of coverage.
How do I pick a methodology?

Components to help guide you are in the design
The amount of work required to verify is often proportional to the complexity of the design-under-test
– A simple macro may need only IVPs
– Is the design dataflow or control?
 FV works well on control macros
 Random works well on dataflow-intensive macros
How do I pick a methodology?

Experience!
Each design-under-test has a best-fit methodology
It is human nature to use the techniques with which you're familiar
Gaining experience with multiple techniques will increase your ability to properly choose a methodology
How would you test a Branch
History Table?

The BHT looks ahead in the instruction stream in order to prefetch branch target addresses
– Large performance benefit
 The BHT array keeps track of previous branch target addresses
 The BHT uses the current instruction address to look forward for known branch addresses
 The BHT uses "taken" or "not-taken" branch execution results to update the array
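One possible starting point for answering the question: a tiny reference model of a direct-mapped BHT that a checker could use to predict which prefetches the design should issue. The table size, indexing, and update policy are assumptions for illustration; the real design's microarchitecture would differ.

```cpp
#include <array>
#include <cstdint>
#include <optional>

// Hypothetical reference model of a small, direct-mapped Branch History Table.
// Indexing, table size, and update policy are illustrative assumptions only.
class BhtModel {
    struct Entry { bool valid = false; uint32_t branch_addr = 0; uint32_t target = 0; };
    std::array<Entry, 256> table{};
    static size_t index(uint32_t addr) { return (addr >> 2) & 0xFF; }
public:
    // Lookup used by the checker: does the model expect a prefetch for this address?
    std::optional<uint32_t> predict(uint32_t instr_addr) const {
        const Entry& e = table[index(instr_addr)];
        if (e.valid && e.branch_addr == instr_addr) return e.target;
        return std::nullopt;
    }
    // Update with branch execution results: "taken" installs/refreshes the entry,
    // "not taken" invalidates a matching entry.
    void update(uint32_t branch_addr, uint32_t target, bool taken) {
        Entry& e = table[index(branch_addr)];
        if (taken) e = Entry{true, branch_addr, target};
        else if (e.valid && e.branch_addr == branch_addr) e.valid = false;
    }
};
```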

Tools
Tools are targeted for specific levels
Most testcase drivers/checkers are targeted for a specific level
 There may be some usage by related levels
[Diagram: simulation levels Designer, Unit, Element, System, Bringup; a tool targeted at one level has potential usage at the neighboring levels]
Examples of tools targeted for specific
levels

Formal Verification
Designer sim level
Cannot handle large pieces of design

Architectural Testcase Generators
AVPGEN, GENIE/GENESYS-PRO, SAK, TnK
Intended for Microprocessor or System levels
Some usage at neighboring levels

There are no drivers/checkers that are used at all levels
Mainline vs. Pervasive: definitions


Mainline function refers to testing of the logic under normal running conditions. For example, the processor is running instruction streams, the storage controller is accessing memory, and the I/O is processing transactions.
Pervasive function refers to testing of logic that is used for non-mainline functions, such as power-on-reset (POR), hardware debug, error injection/recovery, scanning, BIST, or instrumentation.
Mainline testing examples
Architectural testcase generators (processor)
 Random drivers

Storage control verification
Data moving devices

System level testcase generators
Some Pervasive Testing targets
Trace arrays
 Scan Rings
 Power-on-reset
 Recovery and bad machine paths
 BIST (Built-in Self Test)
 Instrumentation

And at the end ...
In the end, the verification engineer understands the design better than anybody else!
Future Outlook
Reasons for Evolution
Increasing complexity
 Increasing model size
 Exploding state spaces
 Increasing number of functions
... but ...
 Reduced timeframe
 Reduced development budget

Evolution of Problem Debug
Analysis of simulation results (no tool support)
 Interactive observation of model facilities
 Tracing of certain model facilities
 Trace post-processing to reduce the amount of data
 On-the-fly checking by writing programs
 Intelligent agents, knowledge-based systems
Evolution of Functional Verification
[Diagram sequence: the verification flow evolves in stages around the RTL-level model of the chip.]
1. Testpatterns are written by hand from the architecture and microarchitecture and simulated against the RTL-level model of the chip: manual, labor intensive, too expensive for increasing complexity.
2. A testcase generator produces the testpatterns for chip- and unit-level simulation: covers only a small subset of the total state space, and often finds one bug in a problem area but not all related ones.
3. Formal rules drive formal verification at the unit level: manual definition of rules, limited to small design pieces.
4. Coverage models and statistics analysis are added to the simulation flow: high effort for environment setup, while design complexity keeps increasing.
5. A high-level model is added between the microarchitecture and the testcase generator: manual effort is still required to reflect coverage analysis back into testcase generation.
6. Data mining connects the coverage models and statistics analysis back to testcase generation, closing the loop.
New ways / New development

Combination of formal methods and simulation
– First tools available today
 New algorithms in formal methods to solve size problems
 Verification of the specification and formal proof that the implementation is logically correct
– Requires a formal specification language
 Coverage-directed testcase generation
 HW/SW co-verification