Verisity VA Notes

Derived from VA by Steven Levitan
Steve@ee.pitt.edu
Execution Flow Chart
Test Phases and their Methods

Phase               Methods                   Purpose
Initialization      global.init()             general preparations
Pre-Run Generation  global.setup_test()       configuration
(before simulation) global.generate_test()    memory allocation, instantiation,
                                              generation of data elements
Simulation Run      global.start_test()
(time-consuming     global.run_test()         activates TCMs and events
activity)           global.stop_run()
Post-Run Checking   global.extract_test()     preparations for checking
                    global.check_test()       checking
Finalizing          global.finalize_test()    clean-up, coverage, reporting
Execution Flow
Flow Notes
The recursive calls for run(), quit() and check() are done depth-first for every struct in the tree of structs, starting from sys.

The parent's function call is done before its child (pre-order).
The methods of the generate_test() phase are called recursively
for all structs in the tree of structs in the following order
(starting from sys): init(), pre_generate(), post_generate().
The recursive call is done between pre_generate() and
post_generate().
Note that if a struct member is marked as "do not generate", then
none of the above is called for that struct.
Only stop_run() and quit() can be called explicitly.
Extendable Methods
global.init() - actions that relate to interactive work before test execution
global.setup_test() - configuration and preparation for the run
struct.init() - initialization within structs that are instantiated using the new
expression
struct.pre_generate() - preparations for generation
struct.post_generate() - actions that concern generation and require the
struct's generated data
global.start_test() - actions at the beginning of simulation that require all
the generated data
struct.run() - invocation of TCMs and explicit emission of events
global.extract_test() - global preparation of data after simulation and
before checking
struct.extract() - struct-specific preparation of data after simulation and
before checking
struct.check() - post-run checking
global.finalize_test() - post-test actions
Non-Extendable Methods
global.generate_test() - generates the tree of structs
struct.generate() - generates a specific struct
global.run_test() - executes the run() method and starts
event threads for all the structs
global.stop_run() - ends the simulation run and calls the
quit() method for all the structs
struct.quit() - deactivates all TCMs and events of the
struct
global.check_test() - executes the check() method for
all the structs
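As a minimal sketch, the extendable phase methods are typically extended with "is also"; the struct, field, and configuration values below are hypothetical, not taken from the original notes:

```e
<'
extend sys {
    setup_test() is also {
        // configuration for the run (illustrative value)
        set_config(run, tick_max, 10000);
    };
};

extend packet {
    post_generate() is also {
        // runs after this struct's fields have been generated,
        // so it may use the generated data
        out("generated packet of size ", size);
    };
};
'>
```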
2 Getting Started
Verification Process
The recommended process of constructing a verification environment
based on Specman Elite comprises the following steps:
1. Design the verification system surrounding your DUT and determine
how you would model it (2.1 Outlining the Test Environment).
2. Determine your overall verification strategy, i.e. how a typical test
should look (2.2 Defining a Typical Test).
3. Define what should be tested (2.3 Defining a Test Plan).
4. Model your basic environment in e (3.0 Modeling). Start your
implementation with generation, especially non-error generation
(4.2 Recommended Process). Add end-to-end checking for the simple
cases (7.0 Checking). Then, your test engineers can start writing
random tests for these cases.
5. Finally, gradually add the rest of your verification environment:
checks for complex cases, coverage, generation of special cases, etc.
To learn more about Generation, Checking, and Coverage, see the
corresponding patterns.
2.1 Outlining the Test Environment
2.1.1 Start by Drawing
Draw the verification environment in a way
that you can see the big picture as well as all
important details.
Differentiate:
- DUT
- Stubs written in HDL
- Stubs written in e
Distinguish inputs, outputs, inouts, and busses.
Separate functions on pins that are used for
multiple, unrelated functions.
Example: ATM Switch
Input Port
One 8-bit input port (named Port A).
This port is usually connected to a controller that
programs the ATM switch by sending commands.
Through this port the connection table can be filled.
The connection table defines the routing rules of the
switch. This port is also used for device configuration.
In addition to the configuration and control commands
that write to the ATM switch, there are also read
commands that enable reading back the registers of
the ATM switch.
Data Ports
Four additional ports (named C1..C4).
These ports are where the actual data is received or
sent.
Each of these ports is 4 bits wide, and each has its
own clock.
Each port can transfer and receive ATM data cells (53
bytes each) at a rate of 4 bits every clock cycle.
Each port is "full-duplex", i.e. it can receive and
transmit data simultaneously. (This is reflected by the
fact that each port has separate clocks for input and
output, i.e. iclk-n and oclk-n.)
Test Environment
Stubs
Data Ports:
- Each port is "full-duplex".
- Define 2 stubs for each port: one for input and one for output.
- The input stubs generate ATM cells and inject them into the port.
- The output stubs collect cells from the port and log them (for
later use in checking, reporting, and possibly coverage analysis).
Stubs
Control Port (Port A):
- All commands are initiated by the external controller and are
executed immediately via a simple bus protocol.
- Therefore: define only one stub that sends all commands,
collecting the result in case of a read command.
- The most important task of this stub is to inject the
connection table and some configuration options right after reset.
2.1.2 Determining the Abstraction Level
Abstraction of a DUT interface has three components:
- A struct that represents the stub (the DUT port counterpart entity).
- A struct that represents the data elements transferred from/to the DUT.
- A TCM that injects or collects the data elements. This TCM also
converts data between the e abstraction level and the HDL signal level.
Abstraction/Data Types
It is important to use higher levels of abstraction for test definition,
checking, and almost every other verification sub-task.
Use packing/unpacking, and bit extraction and insertion, to convert
from higher-level e structs and fields.
- This conversion is usually done by the TCM.
What is the right level of abstraction that one should choose for the
data items that are injected into or collected from the DUT?
The decision regarding data abstraction level is very important. It
affects effectiveness and efficiency in using the test environment.
The right abstraction level is one that makes it easy for you to:
- Define interesting test cases using sequences of the data item
- Generate meaningful and concise test reports
- Analyze the test results by looking at a list of data items
Abstraction: Bus Example
If we define a data item to be the sampling of all the bus
signals at every clock cycle, then the control and manipulation
of such data items will be long and complex:
- It would take many data items to describe an interesting case
happening on the bus.
- This level of detail is what you usually get when looking at the
DUT interface using a signal viewer.
Rather, use a bus transaction as the basic data item transferred
over the bus:
- A transaction can have 3-5 main attributes, e.g. kind
(read/write/control/config, etc.), burst or single data transfer,
size of the burst, time the transaction was started, etc.
- With these attributes, you can define interesting sequences of
data items or easily read reports of bus transactions.
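A transaction-level data item of this sort might be sketched in e as follows; the attribute names mirror the list above but are otherwise illustrative:

```e
<'
struct transaction {
    kind: [READ, WRITE, CONTROL, CONFIG]; // transaction kind
    burst: bool;                          // burst or single data transfer
    size: uint;                           // size of the burst
    start_time: time;                     // time the transaction was started
};
'>
```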
Abstraction: More Examples
Use an instruction as the basic data item for CPU/DSP types of
devices, as opposed to using all stages of instruction execution,
e.g. fetch, operand read, calculate, write result, etc.
Data communication devices usually define many layers of data
abstraction, e.g. frames that contain cells, etc.
- If there is no need for the lower levels of representation, choose
the one that satisfies the above conditions of ease of
definition/analysis.
- If lower layers are required for verification and debugging,
implement a layered modeling scheme:
  - Define an abstraction level for every data layer specified by the
protocol and connect the data layers using a struct hierarchy (e.g.
frames that contain cells, cells that contain ...).
  - Add to each data item all the checks you want to do at that level,
as well as the TCMs that do the composing/decomposing of data
coming/going from/to the lower layer.
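A layered scheme along these lines can be sketched as a struct hierarchy; the two layer names below are hypothetical:

```e
<'
struct cell {
    payload: list of byte;  // lower layer
};

struct frame {
    cells: list of cell;    // upper layer contains a list of the lower one
                            // frame-level checks and (de)composition TCMs
                            // live here; cell-level ones live in cell
};
'>
```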
Structs
The data item struct is the most important one. It represents
the data element transferred to and from the port.

The fields of this struct correspond to the logical parts of the data
element (e.g. headers, arrays of bits, checksum) as described by
the pattern Modeling I/O Stubs.
In struct definition, ignore timing/synchronization or protocol
implementation.

The main purpose of the data item is to contain the data, whereas
the tasks of synchronizing to the DUT and reading/writing to its
ports are done by TCMs.
Note that in some cases the data item struct contains
information that is used by the stub's TCM, e.g. delays in cycles
(see Modeling Intervals).

This kind of data can be added to the data item struct at a later
time (when you design the TCM).
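For the ATM running example, such a data item struct might look like this sketch; the field names and widths are assumptions for illustration, not taken from a real spec:

```e
<'
struct cell {
    vpi: uint (bits: 8);        // logical header parts
    vci: uint (bits: 16);
    hec: byte;                  // header checksum
    payload[48]: list of byte;  // fixed-size payload
    delay: uint;                // cycles to wait before injection
                                // (used by the stub's TCM, added later)
};
'>
```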
Injection TCMs
Packing the data item's fields into the bit
representation required by the DUT.

This can be done using the built-in pack() construct, or by bit
extraction from existing fields.
Implementing the required protocol for transferring
data through the interface:
- Wait for a "ready" signal before setting a write control signal
and then asserting data.
- A state machine that watches the bus and writes the data at
predefined states.
Possibly adding error assertions when the DUT does
not seem to adhere to the predefined protocol.
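A sketch of such an injection TCM for the ATM example follows; the signal paths, the @ready event, and the 4-bit port width are assumptions, not the actual design:

```e
<'
extend port_in {
    send() @iclk is {
        for each (c) in cells {
            // convert the data item to 4-bit words for the port
            var nibbles: list of uint (bits: 4);
            unpack(packing.low, pack(packing.low, c), nibbles);
            wait until @ready;          // wait for the "ready" signal
            'top.write_en' = 1;         // set the write control signal
            for each (n) in nibbles {   // then assert the data
                'top.data_in' = n;
                wait cycle;
            };
            'top.write_en' = 0;
        };
    };
};
'>
```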
Collection TCMs
Inserting the bit stream coming from the DUT into a
new, empty struct.

This can be done using the built-in unpack() construct, or by bit
insertion into existing fields.
Implementing the required protocol for transferring
data through the interface:
- Wait for a "ready" signal before setting a read control signal
and then sampling data.
- A state machine that watches the bus and reads the data at
predefined states.
Possibly adding error assertions when the DUT does
not seem to adhere to the predefined protocol.
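The mirror-image collection TCM can be sketched as follows; again the signal paths, the @cell_start event, and the cell struct are hypothetical:

```e
<'
extend port_out {
    receive() @oclk is {
        while TRUE {
            wait until @cell_start;     // hypothetical start-of-cell event
            var nibbles: list of uint (bits: 4);
            for i from 1 to 53 * 2 {    // 53 bytes, 4 bits per cycle
                nibbles.add('top.data_out');
                wait cycle;
            };
            // insert the collected bits into a new, empty struct
            var c: cell = new;
            unpack(packing.low, pack(packing.low, nibbles), c);
            cells.add(c);               // log for checking/reporting
        };
    };
};
'>
```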
Stub Container Struct
The stub struct is simply a container for all the entities that
represent the stub.
Typically this top-level struct includes:
- A list containing all data items sent or received by this stub
- The TCM that injects or collects data
- Configuration fields, e.g. speed, maximum number of data items, etc.
- Coverage points
- The definition of events shared by all TCMs of the stub
ATM example
Stubs

Every input port is modeled by a struct named port_in. The corresponding output ports
are modeled by structs named port_out. There is one instance of both port_in and
port_out for each ATM port.
Data items

Cell structs represent data elements. Each input port has a list of cells, signifying the
stream of cells that are injected into the port. Similarly, each output port has a list of
cells, signifying the cells that are collected from the port.
TCM

The input ports contain a TCM named send(), which is responsible for injecting the
data into the port. Similarly the output ports contain a TCM named receive(), which is
responsible for collecting all the cells emerging from the DUT and logging them in the
port's list of cells.
Configuration and Control

The configuration and control stub is also represented by a struct named config, which
contains all the configuration parameters. The config struct includes a TCM named
control(), which sets the configuration of the DUT and controls all communication to
and from the interface channel (in our example Port A).
I/O Stubs
A typical test environment interfaces each I/O port of the DUT
with a dedicated entity in e, referred to as I/O-stub.
An I/O-stub should be represented by a unit in e. Each I/O-stub
communicates with a single port of the DUT.
A typical I/O stub contains the following elements:
- TCM(s) communicating with the DUT according to a predefined
protocol (the sampling events, i.e. clocks, for these TCMs are also
defined in this unit)
- Data structures containing the data items injected into and
collected from the DUT for checking and/or coverage purposes
- Configuration parameters
- Event definitions (e.g. the clocks used by the DUT port)
- Other elements such as coverage groups, reporting utilities, etc.
I/O Stub Struct Example
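As a minimal sketch, an I/O-stub unit gathering the elements listed above might look like this; all names, the signal path, and the configuration field are hypothetical:

```e
<'
unit port_in {
    // event definition: the clock used by this DUT port
    event iclk is rise('top.iclk1') @sim;
    // configuration parameter
    max_cells: uint;
    // data items injected into the DUT, kept for checking/coverage
    cells: list of cell;
    // TCM communicating with the DUT (body omitted in this sketch)
    send() @iclk is empty;
};
'>
```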
2.2 Defining a Typical Test
Implementation
Four basic operations are done in almost every test:
- Initialization
- Input generation & injection
- Output collection
- Checking
Initialization
Are there any memories, caches, registers or
buffers that must be initialized?
What DUT configurations should you test?
What configurations of the test environment
will you use?
Generation and Injection
What is generated and injected into each
input port?
How many data items should be injected into
each port?
What relations among data items in a stream
(sequence) are required?
What correlations among ports are required?
Collection
What should be collected?
What should be recorded?
How should the expected results be prepared
(calculated)?
Checking
What should be checked?

Awareness of Checking Requirements
What kind of data should be compared?
What should be counted?
Are any temporal checks required?
Do you want to measure the performance of
the DUT?
Example Test ATM
Initialization
- Initialize the connection table; generate a different mapping with every test
- Enable DMA cell transfer; bus width is 16 or 32 bits (DUT configuration)
- Allow generation of erroneous input; level of reporting is 1-3 (test
environment configuration)
Generation and Injection
- Average 30 cells per channel (around 4x30=120 cells per test)
- A mixture of cells implements 1-3 virtual channels per port; selection of the
port and virtual channel number should correspond to information written to
the configuration table
- Some tests simultaneously inject two cells (to different input ports) that
should come out through the same output port
Collection
- Each output stub should maintain a list of cells emerging from the DUT
- Each injected cell should be logged at the corresponding output port
according to the connection table (to be used for implementing Scoreboard
Checking)
Checking
- Compare each output cell to its corresponding input cell
- Confirm that injected cells come out at the right output port (Scoreboard
Checking)
- Confirm that cells come out in the same order as injected
- Confirm that input-to-output delay is less than 16 clock cycles
- Verify the HEC (header checksum) of each output cell
Notes
Use DUT initialization to improve coverage: start at a known state.
DUT configuration can be either synthesis configurations or run-time
configurations.
Test environment configurations include the scope of the tests and
the test duration in terms of the data set.
- Define sets of interesting sequences
- Control/correlate inputs on different ports
- Calculate expected results: number, order, values
- Temporal checks, e.g., grant within 2 cycles after req
- Record inputs and outputs
10 minutes to an hour per test is recommended.
2.3 Test Plan
Test Plan Basics
A good test plan should list all of the interesting test cases that would
serve to verify the design. Defining at least a rough test plan is
necessary before starting the design and modeling of your test
environment, because the goal of the design and modeling stage is to
enable every potentially valuable test.
- List of all configuration attributes that you would like to test (see
Configuration Modeling)
- List of all variations of every data item, i.e. the important attributes of
each data item that you would like to control (see Modeling Data Items)
- List of interesting sequences for every DUT input port (see Generating
Sequences)
- List of all corner cases to be tested, e.g. smallest and largest data
elements, extreme values, unique time relations (collisions), etc.
- List of all error conditions to be created and all erroneous inputs to be
injected (for verifying the DUT's reaction to errors)
Examples
Configuration attributes
- Each input and output port has a different clock speed.
- The connection table is initialized with one-to-one connections only.
Varieties of data items
- Cell data (not including the header) is all zeros, all ones, alternating ones
and zeros (h'aa or h'55), or randomly chosen.
Interesting sequences
- A port serves a single channel (all cells injected to this port have the same VCI).
- A port serves 100 channels, i.e. cells that use different channels will be interleaved.
Corner cases
- All ports receive a packet at the same clock cycle.
- Create input buffer overflow by, for example, setting slow clocks for the output
ports and fast clocks for the input ports.
Error conditions and erroneous inputs
- Create a header checksum (HEC) error.
- The packet's VPI/VCI is not specified in the connection table.
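Varieties like the cell-data patterns above can later be modeled directly as weighted constraints; in this sketch, the field and enum names are made up for illustration:

```e
<'
extend cell {
    // hypothetical control field for payload generation
    data_pattern: [ALL_ZEROS, ALL_ONES, ALTERNATING, RANDOM_DATA];
    keep soft data_pattern == select {
        10: ALL_ZEROS;
        10: ALL_ONES;
        10: ALTERNATING;    // h'aa or h'55
        70: RANDOM_DATA;
    };
};
'>
```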
Tips
While listing the above, you should define the range of values for each
generated data item. This will help you to define the test plan, and it
will also be very useful when modeling the input data. (See Note 3)
You should involve the DUT designers in the process of defining the
test plan. Designers know the internals of the DUT implementation and
can help you define interesting test cases. (See Note 1)
Once you have completed writing your preliminary test plan, you
should check that you have enough time and computing resources to
run as many tests as you have specified. (See Note 2)
In your test plan, you should separate directed tests from randomly
generated tests. Directed tests are easier to define and check;
therefore, they are very effective in the beginning when the DUT
and the test environment are relatively unstable and incomplete.
Random (or, more accurately, directed-random) tests, given enough
time, may find many bugs in areas that you did not consider while
writing your directed tests. Therefore, random tests are most effective
after completing the initial, directed tests.
Note 1: Implementation Knowledge
Suppose you know that your 0-to-8k packets are internally kept in a 1k
buffer. Skew the distribution of packet sizes, e.g. using the "select"
construct:
<'
struct packet {
    size: uint;
    keep soft size == select {             // see: "keep soft" syntax
        10: 1k;                            // exactly the buffer size
        10: [1k-2..1k+2];                  // range around the buffer size
        10: [2k, 3k, 4k, 5k, 6k, 7k, 8k]; // multiples of the buffer size up to max packet size
        70: others;                        // all other values
    };
};
'>
Note 2: Testing Resources
While defining your test plan, it is very important to estimate the length of one
test run and how many tests you can run during the whole project verification.
These numbers should be consistent with the time and computing resources that
you have available for this project.
While long tests can reach more interesting situations, they are harder to debug
when they fail. It is usually preferable to have many smaller (more focused and
easier-to-debug) tests than a few long ones.
Typically, a test simulation time of 10 minutes to an hour provides a good balance
of all the above factors, assuming that all interesting test cases can be created
within this time span.
A more detailed formula that could help you establish a balance between your
requirements and resource availability is:
test_time * tests * seeds * reruns * configuration =
machines * net_testing_days * hours_a_day * 60
Note 3
As part of defining a test plan, you may need
to specify the legal values, illegal values, and
corner cases of each input data element you
want to generate.
The following suggests a format that should
make it easy to subsequently model these
inputs, e.g., specify constraints:
Ranges:

Property          Valid Ranges    Invalid Ranges        Corner Cases
Frame length      [64..1518]      [0..63, 1519..2500]   [0, 63, 64, 1517…]
Packet delay      [96..]          [0..95]               [0, 95, 96]
Arith instruct.   [R0.., IMM]     [Jump…]               none
2.4 Principles of Verification Methodology
Verification Methodology Goals
Design a more effective verification environment by exploiting
the full power of Specman Elite.
Understand the rationale behind some of the features provided
by Specman Elite.
Improve the modularity, hence the reusability, of your
verification environment.
Contents

The verification concepts discussed in this book address the
following major verification tasks (see Figure 1):
- Creating tests that stimulate the DUT (the device under test).
- Checking that the DUT output corresponds to the input stimuli,
according to the DUT specification.
- Analyzing the testing coverage to find areas that need more testing.
Verification Process
2.4.1 Separating Verification Environment from Testbase
Separation means that you can enlarge or change your test base without
affecting your verification environment. To implement separation of
verification environment from test base:
In your verification environment:
- Provide the infrastructure for generating inputs, injecting them into the
DUT, collecting the DUT output, and checking the output.
- Define spec constraints only, that is, use constraints for defining the
input stimuli based on the DUT spec. These "spec constraints" should define
the set of legal input values and the set of controllable errors that you
want to test. They should be unrelated to specific tests. In other words,
the spec constraints define the entire set of input combinations that could
possibly be generated.
In your test base:
- Direct the Specman Elite random generator to areas of interest by defining
constraints on the inputs.
- The constraints defined in your tests should be based on the test plan.
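A small sketch of this separation, with hypothetical struct and field names:

```e
<'
// verification environment: spec constraints only
struct cell {
    len: uint;
    keep len in [1..48];   // the set of legal values per the (assumed) spec
};
'>

<'
// test file: direct generation to an area of interest
extend cell {
    keep soft len == 48;   // this particular test focuses on maximal cells
};
'>
```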
2.4.2 Specification Based Verification
The Specman way
Generating input stimuli from the DUT spec
Input stimuli are automatically generated by Specman Elite's generator.
To specify the stimuli requirements, you need to define the data
structure to be generated and declare constraints on the generated
input.
Writing tests from the test plan
Tests add constraints to the inputs in order to direct the generator to
specific areas of interest. You can create tests with constraints derived
directly from the test plan.
Temporal checking or protocol checking from the spec or standard
Write temporal checks in a declarative way to facilitate capturing DUT
and standard bus protocol specifications
Defining test coverage needs based on the Test plan
Coverage goals for testing are defined by specifying the DUT
functionality to be monitored and reported. You can declare coverage
items according to the major sections of the test plan.
2.4.3 Generating Effective Stimuli
Pattern Definition
Testing the entire set of combinations of inputs
scenarios that is theoretically required to test a
typical device is usually impossible. A more practical
way is to try to find the most effective subset of
these combinations and to avoid redundant ones.
This requires a test generator that can be directed to
the areas with the most probability of finding bugs.
This pattern explains Specman Elite's concept of
directed-random generation and how it addresses the
complexity of test generation requirements.
The DUT Input Verification Space
One way to define the set of input combinations for a given
device is as an n-dimensional space of 2^n combinations of
discrete points, where n represents the number of DUT input
pins.
Each dimension here has two values: one and zero. Adding the
time domain (because a test is typically a sequence of
combinations) results in an (n+1)-dimensional space.
This space is the DUT input verification space in which stimuli
can be generated.
In Specman Elite, we define stimuli at a high level of
abstraction. We use the term, data item, to describe a collection
of logically related values and attributes--for example, a packet,
bus transaction, or instruction.
Typically, a stream of data items is injected into each "logical
input" of the DUT.
Directed Verification Space
Directed vs Random Tests
Directed tests target specific aspects of DUT behavior. These
tests are represented as the set of red dots in Figure 1.
Random tests randomly select combinations within the given space.
Directed tests
- Ensure testing of specific areas of interest
- Effective for debugging at early stages of testing
- But: require a lot of resources to develop a testbase that covers
a huge input space
Random tests
- Can cover a huge space with limited effort
- Generate combinations that test writers may not have considered
- But: may require many runs to generate specific points of
interest, and may generate uninteresting test cases
Directed Random Tests
Specman Elite's directed-random testing
provides a way to create directed or random
tests using constraints that you define to
identify a subspace of interest in the DUT
input verification space
The Specman Elite generator randomly
generates input sequences in the verification
space you define.
By running the same directed-random test
many times, you can get different
combinations within the same test space.
Directed Random Space
Constraints
You can change the size of the subspace to be tested
by modifying the constraints. You can:
- Constrain data-item attributes to a range of values.
- Constrain data-item attributes to a specific value, making them
deterministic. Constraining each attribute to a specific value
results in a directed (deterministic) test.
- Leave data-item attributes unconstrained. For unconstrained
attributes, the generator selects a random value from the entire
range defined by the verification environment.
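For example, the three options above can be sketched on a hypothetical instruction item:

```e
<'
extend instruction {
    keep opcode in [ADD, SUB, XOR]; // constrain to a range of values
    keep op1 == 0x55;               // specific value: deterministic
    // op2 is left unconstrained: the generator picks any legal value
};
'>
```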
2.4.4 Feedback Loop
Developing a good testbase is an iterative process. You run
tests, look at the results and the coverage achieved, and decide
what you should test next.
One of the most important issues that you face during this
process is: how do you know when you have done enough
testing to verify the design?
To answer this question, you must measure the testing progress
or, preferably, set up metrics by which you can measure how far
you are from reaching your testing goals.
Define a functional coverage approach to provide a closed
feedback loop for ensuring that your testing is complete.
Functional Coverage
Define coverage in terms of DUT functionality


Define coverage not only for DUT signals or state machine
registers but also for complex sequences and attributes
maintained only in the verification environment
Define cross coverage for already defined coverage items
Set coverage goals by defining the minimum number
of occurrences required for each coverage item
Get reports on what is still not tested enough
Specify what should be covered

Define, based on your knowledge of the DUT, the metric by
which the quality of the testing is going to be measured.
Testing Process
Feedback Based Testing
1. Define coverage groups and items as part of the verification
environment, according to the test plan.
2. Write tests, place them in the testbase, and run them one by one.
Specman Elite collects coverage information while running the tests
and writes coverage files.
3. After all tests are run, review the coverage reports of the
information accumulated in the coverage files.
4. Write new tests to cover untested areas (as reported by Specman
Elite), or continue running existing tests on areas that are not
tested enough.
Coverage Example
<'
extend cpu {
    // coverage dedicated event
    event cov_exec is @cpu_exec @sys.tick_end;  // sampling at tick end
    cover cov_exec using
        text = "Instruction execution final state",
        when = (init_complete == TRUE) is {     // sample only after initialization
        item opcode: opcode = 'top.cpu.opcode';
        item carry: bool = 'top.cpu.carry';
        // test plan: program counter addresses [0..32KB-1] (8 x 4KB pages),
        // ignore [32KB..64KB], special cases: page 0 - [0..4K-1]
        item pc: int = 'top.cpu.pc' using
            text = "Post-execution program counter",
            ranges = {                          // bucket definitions (see Note 8)
                range([0..4k-1], "page 0");     // base page
                range([4k..32k-1], "", 4k);     // other pages (a bucket per each)
            }, ignore = (pc >= 32k);
        // test plan: check for program counter values on 4KB page boundaries.
        item pc_on_page_boundary: int = 'top.cpu.pc' using
            text = "Page boundaries",
            ranges = {
                range([0]); range([4k]); range([8k]); range([16k]);
                range([20k]); range([24k]); range([28k]);
                range([0..32k-1], "non-boundary");  // will catch all non-boundary values
            }, ignore = (pc_on_page_boundary >= 32k);
        // cover the amount of bytes the stack pointer changed
        item stack_change: int = stack_monitor.change using
            text = "Induced stack change",
            ranges = { range([0..32], "", 1, 0); }; // a bucket per each; no samples isn't a hole
        // a cross-coverage of opcodes vs. the induced stack change
        cross opcode, stack_change using
            text = "Opcode vs. stack change";
    }; // cover cov_exec
}; // extend cpu
extend global {
    setup_test() is also {               // see: setup_test() method
        set_config(cover, mode, normal); // enable coverage collection; see: set_config syntax
    };
};
'>
Driving
define <demo_inst'action> "demo_inst <op'name> <pc'num> <sp'num> <c'num>" as {
    'top.cpu.opcode' = opcode'<op'name>;
    'top.cpu.pc'     = <pc'num>;
    'top.cpu.sp'     = <sp'num>;
    'top.cpu.carry'  = <c'num>;
    emit cpu_exec;              // forces an emission; see: emitting events
    wait [1] * cycle;
};
extend cpu {
    demo_driver() @sys.any is {
        wait [4] * cycle;
        init_complete = TRUE;
        //        opcode pc    sp    carry (values are AFTER inst. execution)
        demo_inst ADDI   200   9200  0;
        demo_inst ADDI   208   9200  1;
        demo_inst SUB    216   9200  0;
        demo_inst JMP    4096  9200  0;
        demo_inst CALL   6048  9196  0;
        demo_inst ADD    6600  9196  1;
        demo_inst RET    4054  9200  1;
        stop_run();
    }; // demo_driver()
    run() is also {
        start demo_driver();
    };
}; // extend cpu
Show Coverage

extend global {
    finalize_test() is also {           // see: finalize_test() method
        specman("show cover cpu.*.*");  // see example in Note 6; see: show coverage command
    };
};
'>
Execution Flow Chart
2.5 Awareness of Checking Requirements
Checking Requirements on the Verification Environment
It is very important to consider your checking
requirements in the early stages of design. The
questions that should be asked are:
Is there a way to predict, estimate, or calculate the
expected results?
Is there a set of rules for DUT output that determines
its correctness?
Is there a way to collect all the information required
for checking during a test run?
If the answer to any of the above is no, you must
reconstruct the design of your verification
environment.
Checking Issues
In some applications, checking can be the most difficult task in building a
verification environment.
Manual checking - When DUT results must be checked manually, you can create a
set of inputs with a corresponding set of manually checked outputs and use
these sets for regression testing.
Erroneous inputs - When you deliberately generate an erroneous input, you have to
mark the erroneous data in order to properly check the DUT response.
Unordered output - Sometimes when the DUT output emerges in a different order
than the injection order, you must add some identification information to the
generated data to help you identify it during checking. For example, you could
add a unique identification number to each packet (Scoreboard Checking).
Logging exceptions - All kinds of exceptions or interrupts that happen during
simulation should usually be recorded as they affect the behavior of the DUT.
This kind of information should be recorded in association with the data items
that were in process when they occurred.
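The "unordered output" point above can be sketched in e. This is a hypothetical fragment, not part of the ATM example: the struct, field, and method names (packet, uid, sent_packets, match_output) are invented for illustration. The idea is to add a virtual uid field to each generated item and keep injected items in a keyed list, so out-of-order DUT output can still be matched.

```e
struct packet {
  uid: uint;             // virtual field: identification only, not driven into the DUT
  %data: list of byte;   // physical payload
};

extend sys {
  !sent_packets: list (key: uid) of packet;  // injected, not yet matched

  // called by the collecting stub for each packet emerging from the DUT
  match_output(p: packet) is {
    var orig: packet = sent_packets.key(p.uid);
    check that orig != NULL else
      dut_error("unexpected packet with uid ", p.uid);
    //... compare orig.data with p.data here
    sent_packets.delete(sent_packets.key_index(p.uid));
  };
};
```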
Checking Examples
Self-checking code - In CPU verification, you may decide to
generate self-checking instruction sequences and not use an
instruction simulator to generate the expected result. This
certainly affects the way you do generation.
Modeling of memories - There are many ways to model a memory
as part of the verification environment. Each method suits a
different type of checking requirement. Three possible situations are:
- You need to record all read and write transactions to be able to verify that
  they happened on time.
- You only need to check the memory at the end of the run (ignoring any false
  writes that may have been overwritten).
- You need to generate new data for every read from memory, but you can ignore
  the writes to memory (meaning that memory transactions will not be checked
  at all).
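The second situation, end-of-run checking, suits a keyed list indexed by address: later writes overwrite earlier ones, and only the final image is compared. The following is a hedged sketch with invented names (mem_entry, mem_image, write_mem), not code from the ATM example:

```e
struct mem_entry {
  addr: uint;
  data: byte;
};

extend sys {
  !mem_image: list (key: addr) of mem_entry;  // latest value per address

  write_mem(a: uint, d: byte) is {
    var ent: mem_entry = mem_image.key(a);
    if ent == NULL then {
      ent = new;
      ent.addr = a;
      mem_image.add(ent);
    };
    ent.data = d;   // later writes simply overwrite earlier ones
  };
  // at extract_test(), compare mem_image against the DUT memory contents
};
```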
2.6 File Organization and
Coding Guidelines
Formatting
Use indentation - All struct members should be indented more than the
previous hierarchy.
Block templates - Place the opening curly brace ({) at the end of the line
that introduces a block and the closing brace (}) on a separate line,
with a comment after it.
Line length - All lines should be shorter than 78 characters. No line should
wrap.
Use the “specman-mode” in Emacs by loading specman.el
Comments and Headers
Comments should help the readers, not confuse them. Avoid long or
complex stories.
Use quotations from the spec in the e file. Write the spec's name and
the sentence you implement in your e code. This applies when writing
constraints, temporal or data checks, events or coverage definitions.
For in-line commenting there are two options: -- or //. For readable
code, choose one method and stick with it in all modules.
If in-line comments appear at the end of lines of code, indent all such
comments to the same position.
If in-line comments are on separate lines (method headers or
comments that break code into blocks), indent them to the same
position as the following line.
File header: file_name : dut_subject. If required, add a few lines of
description.
Name Definitions
When defining a field of enumerated type, define the type outside the struct,
rather than using an in-line definition.
If the architecture requires long names, better to have long type names and
short field names.
Use plural names for lists (e.g.: packets, instructions). For other purposes, use
singular names.
Use canonical signal names for code re-use.
When a name is combined from two words, there are two naming styles: upper
case (FlowControl) or underscore (flow_control). Pick one style and keep it.
When defining a variable, it is better to specify its type rather than to use the
"declare + assign" operator (:=).
Agree on abbreviations: avoid mixing "ctl" and "cntl".
When writing a "for each" loop, you can assign a name to the loop variable or
use "it" instead. Assign and use names.
Do not use Specman Elite's key words as names of fields, variables or methods.
Use unique names for fields/variables in same scoping. Do not give the same
name to a method's variable as a field in the struct.
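Several of the naming guidelines above can be shown in a few lines of e. The type and field names here are invented for the example:

```e
// enumerated type defined outside the struct, not in-line
type flow_control_kind: [NONE, XON_XOFF, CREDIT];

struct packet {
  %data: list of byte;
};

struct channel {
  fc: flow_control_kind;    // long type name, short field name
  packets: list of packet;  // plural name for a list
};
```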
Data Structs Organization
Under sys
For good modularity, sys should include only the high-level structs and no
details.
Each struct has its post_generate() and check() methods, so most of the time
there is no need to extend methods of sys.
- When you want to add a new check, add your check to the check() method of
  the appropriate struct.
When you do extend a predefined method of sys, do not write the full code in
it. Write a new method, and call it from the predefined method extension.
- The fastest way of fixing bugs in e code is by loading a file where the
  bugged method is extended. If there is a bug in a method of sys, doing such
  an extension is possible, but the loaded file becomes very long and tangled.
Methods & Modularity
If a method gets too long (more than 40 lines can serve as a rule-of-thumb),
break it up into a few methods.
- Shorter methods are more readable and easier to maintain.
You may need to stop the execution of an active method.
- Some examples might be if the method has no meaning under a specific
  condition or if there is a risk of access violation (trying to access a
  struct that is NULL).
- In such cases, you have to verify that further extensions of the method will
  be aware of the special condition.
- In methods that return a value, assign the result variable. In other
  methods, define a field in the struct for this purpose.
3. Modeling
Methodology
Specman Elite uses two fundamental e constructs to represent both the test
environment surrounding the DUT and the tests themselves:
- structs / units
- time-consuming methods (TCMs)
Structs model everything but protocols and flow control, which are handled by
TCMs and methods.
Note: Normally, everything stated about structs applies to units as well.
Modeling Tasks
- I/O channel stubs: each in charge of one input/output channel of the DUT,
  reading from and/or writing to it.
- Data items: flowing through the channels (packets, cells, etc.). These are
  usually held in lists, created at the beginning of the test (pre-run
  generation) or dynamically during it (on-the-fly generation).
- Protocols: describing the communication with the DUT and/or the DUT's
  behavior.
- Interesting parts of the DUT (to enable simulation of their functionality).
- Test environments and DUT configurations.
Identify structs and methods/TCMs
1. Using the information you have gathered about the DUT, compose a short
   description of your test environment:
   - The entities (modules/blocks) surrounding the DUT (stubs)
   - The activities performed by those entities (protocols)
   - The information passed between those entities and the DUT, and between
     the entities themselves (data items)
2. Highlight all the nouns ("bus", "packet", "instruction", etc.) and all the
   verbs ("sends", "reads", "verifies", etc.) in your description.
3. In most cases, the nouns you have highlighted are a good starting point for
   the list of structs in your test environment. The verbs normally comprise
   the main methods/TCMs.
4. Review the test environment (the full specifications), making sure you have
   not omitted anything important, especially the I/O channels of the DUT.
Other considerations regarding structs you should keep in mind are:
- The abstraction level you need (see Determining the Abstraction Level)
- The difference between static struct instances and dynamic struct instances
Modeling Representations
- I/O stub: unit (static)
- Data item: struct with physical fields
- Data stream: list of structs
- Multiple instances of the same DUT interface element: list of units
- Compound data item: struct containing structs
- Raw data (bit/byte stream): list of bits/bytes
- Time-based operation with regard to the DUT: TCM or temporal expression
- Test configuration: unit
- Timing information: struct or struct member
- Part of the DUT: unit
- Queues, buffers, stacks, etc.: list of elements
- Memory: keyed list
ATM Modeling Example
3.1 Modeling with Structs
Uses for Structs
- Channel stub units communicating with the DUT using its interface ports:
  each stub-struct works in association with one channel of the DUT's
  interface.
  The struct either injects data into the channel, listens to data emerging
  from it, or both. The distinction between the input and output data flow
  directions is not mandatory, but it makes the code simpler and more modular.
- Data items processed by the DUT
  These are usually created during the test (either as pre-run or on-the-fly
  generation).
- Interesting (for test purposes) parts of the DUT
  These enable simulation of the DUT. That information is then used to
  generate interesting test cases and check the actual DUT behavior, comparing
  it with the predicted values.
- Higher-level containers gathering a group of related structs into one entity
  These localize different information items and common tasks.
- Abstract entities needed by the test environment
  The most common instances are the configuration of the DUT and test
  environment.
ATM struct Hierarchy
Switch Structs
- Channel stubs: io_stub (SENDER or LISTENER)
  Each DUT interface has two instances of io_stub communicating with it: a
  sender (sending cells to the DUT) and a listener (listening to cells
  emerging from the DUT).
- Data items: cell (consisting of header and payload)
  Both sender and listener structs have lists of cells they sent or received.
  Cells are further divided into more basic building blocks: the header and
  payload structs.
- Interesting parts of the DUT: port_buffer
  The buffer of each communication port in the switch is modeled using a
  struct named port_buffer. Since each port of the DUT has its own buffer, we
  have one instance of port_buffer per port.
- High-level containers: port, switch
  The port struct represents the entire stub working in association with one
  communication channel in the DUT. It consists of one instance each of the
  sender io_stub, the listener io_stub, and port_buffer.
- Abstract entities: test_mode, switch_config
  Two configuration structs: switch_config holds the configuration of the DUT,
  and test_mode that of the test environment.
Units
Units are the basic structural blocks for creating verification
modules (verification cores) that can easily be integrated
together to test larger and larger portions of an HDL design as it
develops.
Units, like structs, are compound data types that contain data
fields, procedural methods, and other members.
Unlike structs, however, a unit instance is bound to a particular
component in the DUT (an HDL path).
Furthermore, each unit instance has a unique and constant
place (an e path) in the run-time data structure of an e
program. Both the e path and the complete HDL path
associated with a unit instance are determined during pre-run
generation.
Units
The basic run-time data structure of an e program is a tree of
unit instances whose root is sys, the only predefined unit in
Specman Elite.
Additionally there are structs that are dynamically bound to unit
instances.
ATM Example
<'
type crc_type: int (bits: 32);
// crc field of cells
type reporting_level_type: [SILENT, CRITICAL, NORMAL, FULL];
//
// top-level containers and config
//
extend sys {
test_mode: test_mode is instance;
switch: switch is instance;
};
Unit Switch
unit switch {
// top DUT unit
config: switch_config is instance;
ports: list of port is instance;
keep ports.size() == config.num_of_ports; // see: size() syntax
keep for each (p) in ports {
p.num == index;
};
!curr_port: port;
// for cells generation
wait_and_stop() @sys.any is {
// simplified test stop, see Note 15
wait [3000] * cycle;
stop_run();
};
run() is also {
start wait_and_stop();
};
};
Units Test_mode &
Switch_config
unit test_mode {
// test environment config
reporting_level: reporting_level_type;
keep soft reporting_level == NORMAL;
// default setting
allow_erroneous_input: bool;
// generate errors?
keep soft allow_erroneous_input == FALSE; // by default, no.
// see: keep soft syntax
};
unit switch_config {            // DUT configuration, Note 1
  num_of_ports: int;
  keep soft num_of_ports == 4;
};
IO Stub types
//
// IO-stubs related structs
//
type io_stub_kind: [SENDER, LISTENER];
unit io_stub {              // generic, sends or listens
  kind: io_stub_kind;
  parent_port: port;        // Note 14
  !cells: list of cell;     // not generated pre-run
};
IO Stub Sender
extend SENDER io_stub {
// SENDER additions, see Note 9
event send_clk is cycle @sys.any; // "fake clock", see Note 11
send() @send_clk is {
// Note 2
var c: cell;
for i from 0 to parent_port.cells_to_send - 1 {
sys.switch.curr_port = parent_port;
gen c keeping { .parent == me; };
if sys.test_mode.reporting_level in [NORMAL..FULL] then {
out("sending ", c.legal ? "legal" : "bad", " cell ", c); };
//… send cell here using packing
wait [c.header.len] * cycle;
// instead, we just delay
emit c.sent;
// useful for coverage, see Note 8
cells.add(c);
// see: add() syntax
}; // for i
if sys.test_mode.reporting_level in [NORMAL..FULL] then {
out("port ", parent_port.num, " finished."); };
}; // send()
run() is also { start send();};
}; // extend SENDER io_stub
IO Stub Listener
extend LISTENER io_stub {
// LISTENER additions, see Note 9
event recv_clk is cycle @sys.any; // fake clock, see Note 11
listen() @recv_clk is {
// Note 2
while (TRUE) {
wait rise('top.port(parent_port.num).TXEN'); // indicates transmission
var c: cell = new;
c.parent = me;
//... read cell fields from the channel and
//... apply unpack()
emit c.received;
cells.add(c);
}; // while (TRUE)
}; // listen()
run() is also {
start listen();
};
}; // extend LISTENER io_stub
Unit Port
unit port {                               // top IO-stub container
  num: int;                               // port's number
  buffer: port_buffer is instance;        // simulates DUT's buffer
  keep buffer.parent == me;
  sender: SENDER io_stub is instance;     // sends cells to DUT
  keep sender.parent_port == me;
  listener: LISTENER io_stub is instance; // listens to cells from DUT
  keep listener.parent_port == me;
  // configuration fields
  cells_to_send: int;
  keep soft cells_to_send in [1..20];     // by default have small tests
}; // unit port
Unit Port_Buffer
unit port_buffer {
// DUT sub-module simul., see Note 5
parent: port;
get_free_area_size(): int is { // "another usage example"
// must know the actual amount of free space by simulation
result = 100;
};
};
Data Item Structs
struct cell {
// top data item struct
parent: io_stub;
// Note 14
legal: bool; keep soft legal == TRUE;
// by default have legal cells
%header: cell_header;
// physical field, see Note 4
%payload: cell_payload;
!%crc: crc_type;
// not to be generated
keep not sys.test_mode.allow_erroneous_input => legal == TRUE;
keep header.len <= parent.parent_port.buffer.get_free_area_size();
keep payload.data.size() == header.len;
// see: size() syntax
event sent;
// emitted when cell is sent
event received;
// emitted when cell is received
post_generate() is {           // see: post_generate() method
  if legal then {
    crc = calculate_crc();     // have correct crc for legal cells
  } else if crc == calculate_crc() then {
    crc = ~crc;                // change it, to get wrong crc
  };
};
calculate_crc(): crc_type is { // ... calculate it using header, and payload
};
}; // struct cell
Data Cell_header
struct cell_header {
%len: int(bits: 32);
//... other header struct members
};
struct cell_payload {
%data: list of byte;
//... other payload struct members
};
'>
Execution Flow Chart
3.1.1 Modeling I/O Stubs
A typical test environment interfaces each I/O port of the DUT with
a dedicated entity in e, referred to as I/O-stub.
An I/O-stub should be represented by a unit in e. Each I/O-stub
communicates with a single port of the DUT.
A typical I/O stub contains the following elements:
- TCM(s) communicating with the DUT according to a predefined protocol
  (usually the default sampling events, i.e. clocks, for these TCMs are also
  defined in this unit)
- Data structures containing the data items injected into and collected from
  the DUT for checking and/or coverage purposes
- Configuration parameters
- Event definitions (e.g. the clocks used by the DUT port); see more on this
  in Modeling Events
- Other elements such as coverage groups, reporting utilities, etc.
  (see Note 3)
3.1.2 Modeling Data Items
There are four steps to modeling data items:
1. Identification of the data items
   - They are represented as structs in e.
2. Identification of the physical components from the DUT's specs
   - They are the "physical fields" of the data item struct. For example:
     header and payload in packets/cells, opcode and operands in a CPU
     instruction, etc.
3. Definition of data item variations from both the DUT's specs and the test
   plan
   - They add virtual (non-physical) fields to the struct. For example: kind,
     legal indicator, etc.
4. Incorporation of utility fields to assist in generation, simulation, and
   checking
   - They provide more virtual fields with useful information about data items
     and their whereabouts. For example: index, parent, occurrence flags,
     cumulative histories, etc.
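The four steps above can be sketched in a single struct. This is an illustrative fragment, not from the ATM example; the names (instruction, inst_kind, cpu_stub) are invented:

```e
type inst_kind: [ARITHMETIC, BRANCH, ILLEGAL];
unit cpu_stub { };

struct instruction {
  // step 2: physical fields taken from the spec
  %opcode: uint (bits: 8);
  %operand: uint (bits: 16);
  // step 3: virtual fields for test-plan variations
  kind: inst_kind;
  legal: bool;
  // step 4: utility fields for generation and checking
  index: uint;        // position in the injected stream
  !parent: cpu_stub;  // back-pointer to the injecting stub, set procedurally
};
```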
3.1.3 Configuration Modeling
In most verification environments, there are configurable parameters that
differ from test to test. How can you model these parameters?
To achieve re-usability and flexibility, we distinguish configuration
parameters along two orthogonal dimensions:
- Domain: the DUT or the test environment
  - DUT: derived from the DUT specifications and comprising all configurable
    aspects of the device itself
  - Test environment: controls the topology and affects the way tests are
    generated, simulated, and checked
- Scope: global or local to some sub-part
  - Global: parameters applying to the entire DUT/environment
  - Non-global: parameters applying to only part of the DUT/environment
Recommended Scheme
Global parameters
- DUT configuration
  Fields in a global unit (suggested name: "dut_config") that is a member of
  the top-level DUT unit. In our example, we use switch_config, which is a
  member of sys.switch.
- Test environment
  Fields in a global unit (suggested name: "test_mode") that is a member of
  sys (see also Enhancements). In our example, we use sys.test_mode.
Non-global parameters
- Fields in the top unit which represents the relevant part. Configuration
  fields are set by test files, usually by means of constraints.
NOTE:
- All configuration units/fields must be generated pre-run.
- Configuration units/fields must be generated before anything that may rely
  on their values.
Example
- Top-level unit: switch
- Global test environment configuration unit: test_mode (a field of sys)
- Global DUT configuration unit: switch_config (a field of switch)
- Local test environment configuration parameter in unit port:
  bad_cells_percentage
<'
// test environment related types
type debug_level:
[ OFF, LOW, MEDIUM, HIGH ];
type test_size_kind: [ SMALL, LARGE ];
type switch_rev:
[ REV_2A, REV_2B];
// switch revision
// DUT configuration related types
type line_speed_kind: [ A155MBPS, A622MBPS ];
Debug Msg Macro
//
// debug_msg macro: compares <level'exp> to the current debug level, and
// outputs a message accordingly.
// See Note 7 for additional related macros
// See also defining syntactic macros
//
define <debug_msg'action> "debug_msg <level'exp> <msg'exp>" as {
var l<?>: debug_level = <level'exp>;
if l<?>.as_a(int) <= sys.test_mode.debug_level.as_a(int) then {
out(<msg'exp>)
};
};
Test Mode Example
extend sys {
test_mode: test_mode is instance;
// generated first, see Note 6
switch: switch is instance;
};
unit test_mode {
// global env parameters
test_size: test_size_kind;
keep soft test_size == SMALL;
// by default, generate small tests
debug_level: debug_level;
// a global flag. See Enhancements for an alternative
keep soft debug_level == OFF;
// by default, no debug messages
allow_erroneous_input: bool;
// allow errors in generated input
keep soft allow_erroneous_input == FALSE; // by default, no
// see: keep soft syntax
}; // unit test_mode
Unit Switch
unit switch {
config: switch_config is instance;
// generated first, see Note 6
rev: switch_rev;
// "DUT revision", see Note 10
num_of_ports: int;
// test env. topology parameter
keep soft num_of_ports == 4;
// Note 13
ports: list of port is instance;
keep ports.size() == num_of_ports;   // see: size() syntax
keep for each in ports {
  .num == index;
  .parent == me;
};
...
Unit Switch
// oversimplified stopping the test, see Note 15
wait_and_stop() @sys.any is {
var run_time: int;
run_time = sys.test_mode.test_size == SMALL ? 1000 : 3000;
wait [run_time] * cycle;
stop_run();
};
run() is also {
start wait_and_stop();
};
}; // unit switch
Unit Switch_config
unit switch_config {
// global switch configuration
line_speed: line_speed_kind;
// spec configuration parameter
keep soft line_speed == A155MBPS;
}; // unit switch_config
Unit Port
unit port {
parent: switch;
num: int;
// see using back pointers, see Note 14
// local configuration field:
bad_cells_percentage: int[0..100];
// if errors are allowed, by default have them in 10% of cells
keep soft bad_cells_percentage == (sys.test_mode.allow_erroneous_input ? 10 : 0);
num_of_cells: int;
keep soft num_of_cells in [0..2000];
keep (sys.test_mode.test_size == SMALL) => soft num_of_cells in [1..20];
!cells: list of cell;
// generated on-the-fly
event sclk is cycle @sys.any; // port's receiving clock, see Note 11
…
Unit Port
…
send() @sclk is {
  var c: cell;
  for i from 0 to num_of_cells-1 {
    gen c keeping {
      soft .legal == select {
        bad_cells_percentage:       FALSE; // make this % illegal
        100 - bad_cells_percentage: TRUE;  // make the rest legal
      };
    };
    // report an informative message, see Note 9 earlier
    debug_msg MEDIUM append("Sending ", c.legal ? "legal" : "illegal",
        " cell ", c, " from port ", me, " no. ", num);
    wait [1] * cycle;
    //... substitute actual transmission code
    cells.add(c);                          // see: add() syntax
  };
…
Unit Port
…
debug_msg HIGH append("Port ", str_dec(num), " finished."); // see: dec() syntax
}; // send()
run() is also {
start send();
};
}; // unit port
struct cell {
legal: bool;
//... other cell fields
};
'>
Execution Flow Chart
3.2 Modeling Input Relations
Contents
- Multi-Port Synchronization: discusses ways to synchronize the stimuli to
  multiple ports.
- Modeling Intervals: discusses ways to model time gaps between sequential
  data items.
- Multiple Stub Instances: suggests a way to implement multiple instances of
  the same I/O stub.
3.2.1 Multi-Port Synchronization
3.2.3 Multiple Stub Instances
Many times, a DUT will have several ports of the same kind. When modeling such
an environment, you may like to ensure that:
- The code written to handle one stub can be instantiated more than once (each
  instance using its own set of HDL signals).
- The same stub can be easily reused in other projects.
How to Share the Same Stub Code
1. Create one stub unit that interacts with one port, and then create multiple
   instances of the stub.
2. Use computed names so each stub instance can access the right signals,
   using its ID as part of the signal name. For example, the computed name
   '`CLK(me.number)' allows stub number 0 to use '`CLK0', stub number 1 to use
   '`CLK1', etc.
3. Use `defines to create aliases to signal names differing only in an
   embedded identification substring, e.g. `CLK0, `CLK1, `CLK2.
   (See using `define.)
Steps 1 and 2 target the multi-instance sharing issue raised in the pattern
definition. Step 3 facilitates the use of the same stub in other projects.
Note: Computed names support expressions of any scalar type (including
enumerations) and strings.
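Step 3 can be sketched with a handful of `define aliases. The HDL paths below are invented for the example; in a real project each alias would name the corresponding port's clock so the stub code only ever mentions `RCLK0, `RCLK1, and so on:

```e
define RCLK0 'top.port0.rx_clk';
define RCLK1 'top.port1.rx_clk';
define TCLK0 'top.port0.tx_clk';
define TCLK1 'top.port1.tx_clk';
```

Porting the stub to a new design then only requires a new set of defines, not changes to the stub unit itself.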
Example: Ethernet Switch I/O Units
extend sys {
switch: switch is instance;
};
unit switch {
// top-level unit
ports: list of port is instance;
keep soft ports.size() == 2; // see: keep soft syntax, size() syntax
keep for each in ports {
.number == index;
// Each instance has an ID to be
};
// used in computed names.
wait_and_stop() @sys.any is { // simplified stopping of test, see Note 1
wait [1000] * cycle;
stop_run();
};
run() is also {
start wait_and_stop();
};
}; // unit switch
Unit Port (1)
unit port {
// I/O stub unit
number: int;
// ID of this instance
cur_cell: cell;
// current cell
event rclk is rise('`RCLK(number)') @sim; // RX bus clock
event tclk is rise('`TCLK(number)') @sim; // TX bus clock
// send_cells() - repeatedly generates and sends cells
send_cells() @rclk is {
while TRUE {
gen cur_cell;
send(cur_cell);
wait [5] * cycle;
// constant delay between cells
};
};
Unit Port (2)
// send() - sends the given cell to the DUT port, one byte per cycle
send(c: cell) @rclk is {
out("Sending ", c);
'`RXRDY(number)' = 1;
// indicates data is valid
for i from 0 to c.data.size()-1 {
'`RXDAT(number)' = c.data[i]; // sends next byte
wait cycle;
};
'`RXRDY(number)' = 0;
// end of transmission
};
Unit Port (3)
// collect_cells() - repeatedly waits for output cells and collects them
collect_cells() @tclk is {
var collected_cell: cell;
while TRUE {
collected_cell = collect();
};
};
Unit Port (4)
// collect() - collects a cell from the DUT port, one byte per cycle
collect(): cell @tclk is {
wait true('`TXRDY(number)');
// wait for transmission
result = new cell;
while '`TXRDY(number)' {
// While data lines are valid,
result.data.add('`TXDAT(number)'); // read next byte
// see: add() syntax
wait cycle;
};
out("Collected ", result);
};
Unit Port (5)
run() is also {
start send_cells();
start collect_cells();
};
}; // unit port
// Sender and receiver TCMs run
// concurrently
struct cell {
// data item struct
%data: list of byte;
keep soft data.size() == 20;
// see: keep soft syntax, size() syntax
//... add other cell fields
};
'>
4 Generation
5 Simulation
6 Temporals
7 Checking
8 Coverage
Coverage
The coverage engine in Specman Elite can collect functional coverage
information and then produce various reports based on it. Specman Elite
supports functional coverage rather than code coverage. With it you can:
- Cover a combination of DUT data, states, and state-transitions
- Define coverage goals at a high level of abstraction and ensure effective
  testing by comparing the goals with the actual results
- Verify generation distribution by measuring the coverage of generated data
- Get information on uncovered areas and direct tests to those areas
- Get information about delay distribution and DUT performance
Process
Define what should be covered:
- Use the test plan.
- Determine data fields/registers.
- Define separate buckets for legal values, illegal values, and boundary
  values, i.e. corner cases.
- Examine both interfaces and internal states.
- Choose the state registers and state transitions of important state
  machines.
- Define interactions between some of the above states or data, e.g. the state
  of one state machine relative to another or to a value of a signal.
Process
Define the coverage points in e:
- Choose the most suitable struct to contain the cover group definitions.
- Choose the sampling event for each cover group.
- Define the coverage group in a separate file from the basic struct
  definition. Divide definitions into a tree-like file structure according to
  modules, allowing tests to import part of them.
- When testing begins, have all groups collect full data. Thereafter, cover
  groups can be marked as count_only to optimize memory consumption and
  execution time.
Process
Use the coverage feedback to:
- Develop an initial test base.
- Direct further test development, aiming at those "holes" found in the
  coverage report.
- Verify generation distribution. If it is not what you expected, check your
  generation or use constraints to direct the generator.
- Determine when testing is finished.
Interesting Values
Some interesting coverage items might be:
Main state machines
Control registers
Corner cases like buffer overflow/underflow
Interrupts and exceptions
Registers or fields that represent
classification, kind, or category, e.g. packet
kind (ATM/Ethernet, big/small), instruction
kind (arithmetic, branch), etc.
Coverage Example
Suppose that your architecture file includes:
//Basic struct definition
<'
struct frame {
legal: bool;
length: uint;
//... more fields
event sent;
};
'>
Define buckets for cover items
//test spec: frame length: legal[64..1518], illegal [0..63, 1519..2500],
// corners: [0,63,64,1517,1518,1519]
extend frame {
  cover sent is {                          // see: defining cover group
    item legal;
    item length using ranges = {           // cover item
      range([65..1516], "legal");            // legal - without corner cases
      range([1..62, 1520..2500], "illegal"); // illegal - without corner cases
      range([0], "corner-0");                // corner case: 0 - illegal
      range([63], "corner-63");              // corner case: 63 - illegal
      range([64], "corner-64");              // corner case: 64 - legal lower corner
      range([1517], "corner-1517");          // corner case: 1517 - legal upper corner
      range([1518], "corner-1518");          // corner case: 1518 - legal upper corner
      range([1519], "corner-1519");          // corner case: 1519 - illegal upper corner
    };
  };
};
Use internal information when
defining coverage points
Looking at the DUT internals can significantly improve the quality
of your coverage reports. For example, looking at the depth of
internal queues (especially crossing that with other pieces of
information) can reveal interesting holes in your testing.
Letting implementors define internal coverage points is an excellent
idea, because:
By defining the coverage points, implementors can document
their concerns, knowing that those concerns will eventually be
addressed. The coverage points say in effect: "If you did not
implement these values, then the test is incomplete."
There is no danger of internal knowledge contaminating testing
and checking, because adding coverage points does not affect
the functionality of anything else.
Interaction between states and data elements
The following are examples of interaction between coverage items that can
provide important information. These examples are taken from various
applications:
CPU
- Interrupt kind and the interrupted instruction opcode
- The kind of memory read request (instruction fetch, data read, read for
  address calculation) and the cache hit/miss indication
Bus-Oriented Device
- Transaction kind and kind of exception (e.g. abort, retry, etc.)
- Transaction kind and number of data transfer cycles (in burst mode)
Data Communication
- Packet kind and signaling of a CRC error
- Packet kind and packet length
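The CPU example above maps to a cross-coverage item in e. The following is a hedged sketch; the struct and type names (cpu_mon, interrupt_kind_t, opcode_kind_t) are invented, and the fields are assumed to be filled in by monitoring TCMs before the event is emitted:

```e
type interrupt_kind_t: [NMI, IRQ, SW_TRAP];
type opcode_kind_t: [ARITH, BRANCH, LOAD, STORE];

struct cpu_mon {
  event interrupted;             // emitted when an interrupt is taken
  !cur_opcode: opcode_kind_t;    // opcode of the interrupted instruction
  !cur_int: interrupt_kind_t;    // kind of the interrupt
  cover interrupted is {
    item cur_opcode;
    item cur_int;
    cross cur_opcode, cur_int;   // reveals which instruction kinds were never
                                 // interrupted by each interrupt kind
  };
};
```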
Where to define a cover group
The rule-of-thumb is to choose the struct most relevant to the covered data
and from which all the data is accessible. For example:
- Coverage of DUT Signals
- Coverage of Generated Data
How to choose the sampling event
Coverage Group: Data fields that do not change throughout the test, thus
needing to be sampled exactly once
Recommended Sampling Event: The pre-defined quit event

Coverage Group: Generated data items
Recommended Sampling Event: A pre-existing or newly defined event (within the
data item struct) that is emitted by the injecting TCM prior to the injection

Coverage Group: Data items collected from the DUT's output
Recommended Sampling Event: A pre-existing or newly defined event (within the
data item struct) that is emitted by the collecting TCM whenever all of an
item's data is ready for sampling

Coverage Group: DUT signals
Recommended Sampling Event: A dedicated event triggered by a DUT signal,
indicating that the sampled DUT signals have changed and are now stable
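The last row can be sketched as follows. This is an illustrative fragment with invented signal paths ('top.dut.state_valid', 'top.dut.fsm_state'); the point is that the sampling event is tied to a DUT signal that marks the sampled values as stable:

```e
extend sys {
  // fires only when the DUT declares the monitored FSM state stable
  event fsm_stable is rise('top.dut.state_valid') @sim;
  cover fsm_stable is {
    item fsm_state: uint (bits: 3) = 'top.dut.fsm_state';
  };
};
```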
Suggested File Organization
Properly organizing coverage definitions into files allows for improved
maintainability and better control over what will be covered. If every
module's coverage definitions are defined in a separate file, choosing what
files to load determines what will be covered. Separate files for each
module's coverage definitions also improve the modularity and reusability of
the modules themselves.
To organize your coverage definitions for greater modularity and reusability:
- Group coverage definitions into files according to modules.
- Create a file or files that import the modular coverage definitions.
- If you have a hierarchy of modules, you should create a tree of coverage
  files that reflects that hierarchy.
Separated in this fashion, you can import only those files or file sub-trees
that are currently of interest to you, thereby saving memory and disk space.
Usage of count_only
The count_only option is very important for saving memory and disk space.
However, when defining a cover item as count_only, the timing of its samples
is lost. Under such circumstances, you cannot use such an item when
interactively defining (post-run) cross-coverage items.
We recommend that initially you do not use the count_only option but rather
collect all information in full. Later on, when all cross-coverage items are
defined, the items that should be marked as count_only are more easily
identified.
Code Examples - coverage
Complete e code:










8.1 cover_dut_signal.e
8.2 cover_sm.e
8.3 cover_generation.e
8.4 cover_output.e
8.5 cover_async_overview.e
8.5.1 cover_async_persisting.e
8.5.2 cover_async_coupled.e
8.5.3 cover_async_coinciding.e
8.5.4 cover_async_interval.e
8.6 cover_api.e
8.7.1 cover_analysis_collect_data.e
8.7.2 cover_analysis_grade.e
8.7.3 cover_analysis_rank.e
8.7.4 cover_analysis_optimize.e
8.8.1 cover_based_gen_collect.e
8.8.2 cover_based_gen_write.e
8.9.1 cover_tip_multi_inst.e
8.9.2 cover_tip_tick_end.e
8.1 Coverage of DUT Signals
See code: cover_dut_signal.e
Process
Define the signals to be covered
You may want to sample any important signals, or the states of state machines, from which the DUT's status and functionality can be monitored.
Sometimes you can directly sample signals in your HDL model, and sometimes you need to sample the values of e data fields that are a function of the DUT signals.
Define the sampling trigger
The trigger should correspond as closely to the DUT timing as possible. Thus, the trigger for sampling should usually be an explicit DUT signal.
Create the coverage definitions in the proper structs
Specify the coverage options to be utilized
The signals to sample
As a rule, the initial and most obvious selection of covered signals can be derived from the block-level specs. Furthermore, the DUT designer may want to cover additional implementation-specific signals:
Cover the states of the main state machines.
Cover signals indicating exceptions and corner cases (e.g. overflow).
Cover other important/risky parts of specific modules according to the recommendation of each module's designer.
Remember that coverage definitions should be placed in separate files organized in a tree-like structure.
Gradually enhance the coverage in accordance with the discovery of bugs:
Once a bug is found in a module, check whether the signals already covered in that module suffice or whether you need to add coverage of signals surrounding the area where the bug was found.
Once the basic items are defined, add transition items and cross-coverage items:
Typically, you can derive many transition and cross-coverage items from the test plan. For example, a table of legal combinations of signal values indicates the need for a cross-coverage item.
Sampling DUT signals or their e shadows?
Even if a DUT signal has a shadow field in your e environment, sampling the signal itself is preferable:
This way you get more reliable coverage of your DUT, rather than relying on indirect information from your test environment.
There is no substantial performance penalty with this approach.
If the coverage requires a higher-level value that is a function of one or more DUT signals, compute that value in your e code and then sample it:
Sampling the higher-level value is more efficient and easier to use later on than sampling all the underlying signals.
This is true even when some of the underlying signals are needed for other purposes as well and are therefore covered anyway.
Using DUT signals as coverage triggers
The event associated with a coverage group (triggering its sampling when it occurs) must be a Specman Elite event defined in your e code. The event can be:
A dedicated event defined solely for the purpose of coverage
An existing event already used for other purposes
When covering DUT signals, the first approach is usually best, since it isolates the DUT coverage from the rest of the environment:
This lets you keep the event as close to the DUT as possible and protect it from bugs and changes in the test environment.
Common examples
Sampling one signal: The event is generally triggered
by a change in the sampled signal itself:
event cov_pipe_sm is change('top.pipe.state') @sim;
Sampling multiple signals at the same time: The
event is generally triggered by a signal indicating that
the sampled signals are now stable:
event cov_interrupt is rise('top.intr') @sim;
// when emitted, interrupt-properties signals can be sampled
Sampling multiple signals at different times: This
requires defining a temporal expression to express
the time differences. The following example covers
the time delay between requests and grants on a
bus.
Example
unit requests_monitor {
req_time: time;
grant_delay: int;
event cov_req_grant is {
// Emitted after request-grant sequence
rise('top.arbiter.req') exec { req_time = sys.time;}; // see: exec syntax
[..];
rise('top.arbiter.grant') exec { grant_delay = sys.time - req_time; };
};
cover cov_req_grant is {
// see: defining cover group
// test plan: check grant delays in the range [2..7], and other (invalid) ones
item grant_delay using
ranges = { range([2..7], "", 1);
range([0..1, 8..1M], "invalid");
}; // etc.
};
}; // unit requests_monitor
Which structs should host coverage definitions?

Covered signals domain: DUT's I/O block
Suggested host struct: I/O stub units
Example: datacomm application; covered signal: the port's main state machine(s); host unit: port, representing an I/O port of the DUT

Covered signals domain: Emulated modules
Suggested host struct: Emulating unit in e
Example: bus-oriented device; covered signals: the arbiter's state machine(s) and outputs; host unit: arbiter, monitoring the bus arbiter

Covered signals domain: Global DUT condition
Suggested host struct: Top DUT unit
Example: a CPU; covered signals: program counter, stack pointer and status register; host unit: cpu, representing the entire DUT
Utilizing coverage options

Defining buckets (applies to cover items): allows defining and naming sets of values, which are treated as a single bucket by the coverage engine.
Filtering samples (applies to cover groups): allows filtering out irrelevant samples.
Ignoring values (applies to cover items): allows ignoring some values so that they are not shown in the coverage reports.
Defining descriptions (applies to cover groups and cover items): allows attaching descriptive text that is shown in coverage reports.
8.2 Coverage of State Machines
See code: cover_sm.e
State Machines
One of the basic requirements of your verification environment should be that your state machines visit all their states and pass through all possible transitions.
Covering state machine signals is done just like any other DUT signal:
Typically, when dealing with state machines, you are interested not only in the value of the signal that represents the machine, but also in some or all of the transitions the signal undergoes.
To cover transitions, Specman Elite provides the transition cover item.
Process
1. Define an enumerated type <sm_name>_state that describes the states of the machine. Remember to verify that the enum values correspond to the HDL definitions of the state machine signal's values.
2. Define an event to be emitted whenever the state machine's signal changes (or when its clock ticks).
3. Define a cover group based on the event defined in step 2.
4. Add to the cover group a cover item of type <sm_name>_state for sampling the value of the state machine's signal.
5. Add to the cover group a transition cover item of the item defined in step 4.
6. Use the ignore option to mask irrelevant transitions.
7. Use the illegal option to check for illegal transitions.
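The seven steps above can be sketched as follows. This is only an illustration: the ctrl state machine, its signal, and the monitor unit are hypothetical, and the illegal expression is a stand-in for your machine's real transition rules.
<'
type ctrl_state: [IDLE = 0, BUSY = 1, DONE = 2]; // step 1: enum matching the HDL values
unit ctrl_monitor {
event ctrl_sm is change('top.ctrl.state') @sim; // step 2: event on each state change
cover ctrl_sm is { // step 3: cover group on that event
item st: ctrl_state = 'top.ctrl.state'; // step 4: sample the state signal
transition st using // step 5: transition item
ignore = (prev_st == st), // step 6: mask self-transitions
illegal = (prev_st == DONE and st == BUSY); // step 7: flag illegal transitions
};
};
'>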
Note: Implementation issue
This discussion assumes that the state machine is implemented as a single wide signal in the DUT. In such a case, each value of the signal represents a different state of the machine.
Another possible implementation is to use a
separate 1-bit signal for each of the states
such that the machine consists of several
signals.
Multi-Word State Machine
Assume that the state machine is represented by the following 1-bit signals: 'top.cpu1.start', 'top.cpu1.fetch1', 'top.cpu1.fetch2', and 'top.cpu1.exec'.
The cover event needs to catch the change in
any of these signals, and the value sampled is
the merged value of these signals (as if they
were one 4-bit signal).
Multi-word code
type cpu1_state: [START = 1, FETCH1 = 2, FETCH2 = 4, EXEC = 8];
unit cpu1 {
event clk1 is rise('top.clock') @sim;
event cpu_multi_sm is (
change('top.cpu1.start') or change('top.cpu1.fetch1')
or change('top.cpu1.fetch2') or change('top.cpu1.exec')
) @clk1;
cover cpu_multi_sm is {
// see: defining cover group
item multi_sm : cpu1_state = ('top.cpu1.start' + ('top.cpu1.fetch1' << 1)
+ ('top.cpu1.fetch2' << 2) + ('top.cpu1.exec' << 3)).as_a(cpu1_state);
transition multi_sm;
};// cover cpu_multi_sm
};// cpu1
State Machine Example
CPU types
<'
verilog import cover_sm.v;
// see cover_sm.v
// enums correspond to the HDL definitions, see Note 1
type cpu_state: [START = `START,
FETCH1 = `FETCH1,
FETCH2 = `FETCH2,
EXEC = `EXEC];
type opcode_kind: [ADD = `ADD,
ADDI = `ADDI,
SUB = `SUB,
SUBI = `SUBI,
JMP = `JMP,
CALL = `CALL,
RET = `RET];
Unit CPU
extend sys {
cpu: cpu is instance;
};// sys
unit cpu {
init_complete: bool;
keep init_complete == FALSE;
// FALSE on begin of simulation
opcodes: list of opcode_kind;
// generated at pre-run
keep soft opcodes.size() in [5..10]; // see: keep soft syntax, size() syntax
event clk is rise('top.clock') @sim;
……..
Drive Method
drive_cpu() @clk is {
// feed opcodes into the CPU
wait fall('top.reset');
// wait until DUT finishes reset
init_complete = TRUE;
for each (op) in opcodes {
'top.instruction' = op;
out("Executing instruction ",op);
//... to be replaced with actual driving code
wait [5] * cycle; // wait until instruction execution completes
};// for
wait cycle;
stop_run();
};// drive_cpu
run() is also { start drive_cpu(); };
};// cpu
'>
Coverage File
<'
extend cpu {
event cpu_sm is change('top.cpu.main_cur_state') @sim;
// the state machine event emitted
// when machine changes its state,
cover cpu_sm using
// the state machine cover group
text = "The CPU's main state-machine",
when = (init_complete == TRUE)
// sample only if initialization is completed
is {
item sm : cpu_state = 'top.cpu.main_cur_state' using
text = "The CPU state machine signal";
transition sm using
text = "The CPU state machine transitions",
illegal=(sys.cpu.illegal_cpu_transitions(prev_sm,sm));
// sometimes using pre-defined illegal is not enough
// prev_sm is automatically defined for you
};// cover cpu_sm
…
Coverage File
// return TRUE for all illegal transitions by defining the allowed transitions
illegal_cpu_transitions(prev_sm: cpu_state, cur_sm: cpu_state): bool is {
result = not ((prev_sm == START and cur_sm == FETCH1)
or (prev_sm == FETCH1 and cur_sm == FETCH2)
or (prev_sm == FETCH1 and cur_sm == EXEC)
or (prev_sm == FETCH2 and cur_sm == EXEC)
or (prev_sm == EXEC and cur_sm == START));
};// illegal_cpu_transitions
};// cpu
extend global {
setup_test() is also {
// see: setup_test() method
set_config(cover, mode, normal);
// enable coverage collection
// see: set_config syntax
};
finalize_test() is also {
// see: finalize_test() method
specman("show cover *.*.*");
// automatically show coverage at end of run
};
};// global
'>
Sampling when the state
machine's clock ticks
There are two ways of defining the state machine
coverage event:

By the change of the state machine's state (as in the code)
event cpu_sm is change('top.cpu.main_cur_state') @sim;

By the clock that times the transitions of the state
machine.
event clocked_cpu_sm is rise('top.clock')@sim;
The second approach is necessary when you want
to cover self-transitions (transitions from a state to
itself).
clocked_illegal_cpu_transitions(prev_sm: cpu_state, cur_sm: cpu_state): bool is {
result = illegal_cpu_transitions(prev_sm, cur_sm)
and (prev_sm != cur_sm); // self-transitions are legal when sampling by clock
};// clocked_illegal_cpu_transitions
When using illegal is not enough
The illegal option invokes a DUT error whenever its expression is true. But it should not be used as the only way of checking, because:
The DUT error occurs at the end of the test, not when the illegal transition actually occurred.
If you run tests without coverage, there will be no error at all.
We recommend adding an explicit check for the illegal transitions in addition to using the illegal option.
Example:
illegal check (vs cover)
<'
extend cpu {
previous_state: cpu_state;
on cpu_sm {
// check whenever the state machine signal changes
var cur_state: cpu_state = 'top.cpu.main_cur_state';
check that not illegal_cpu_transitions(previous_state, cur_state)
else dut_error("Illegal transition from ",previous_state," to ",cur_state,
" occurred");
previous_state = cur_state;
};// on cpu_sm
drive_cpu() @clk is first {
// initialize previous_state
previous_state = 'top.cpu.main_cur_state';
};
};// cpu
'>
8.3 Coverage of Generated Data
See code: cover_generation.e
Covering distribution of inputs
Data to cover:
DUT configuration options, test configuration options, and test environment topology information
Data items' main attributes
Sampling event:
DUT and test configuration options can be sampled on the quit event. This is true for every struct that does not change throughout the test and still exists at the end of the simulation run. If the configuration or topology changes a few times during the test run, you must explicitly define, emit, and use events that signify these changes.
Data item fields can be sampled on a pre-existing or newly defined event of the data item struct that is emitted by the injecting TCM prior to the injection.
Quit event as sampling event
of cover groups
Every struct has a quit() method and a quit event. The quit()
method is called for every struct under sys at the end of the
simulation run. In addition to other tasks, the quit() method
emits the quit event.
For struct instances that do not change their values throughout
the test's run and exist under sys when the simulation run ends,
the quit event is a perfect choice for sampling the coverage
data. It is also recommended to use the global option for this type of cover group.
For struct instances that do change or are dismissed during the
run, you should use a dedicated event that signifies the change
(i.e. is emitted when the change happens).
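As a sketch, a configuration struct that never changes during the run could be covered on its quit event (the struct and its fields are hypothetical):
<'
struct dut_config {
speed: uint;
keep soft speed in [1..4];
full_duplex: bool;
// the configuration is static throughout the run, so sample it
// once, when the quit event is emitted at the end of the run
cover quit using global is {
item speed;
item full_duplex;
cross speed, full_duplex;
};
};
'>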
The sampling event for the data item struct
The best time to sample a generated data item is when its injection into the DUT begins:
Therefore, sample timing should be controlled by the injecting TCM.
To implement this arrangement in our example, we define an event (started) within the data item struct (cell) to indicate the beginning of the data item's injection (see code):
When the injecting TCM starts injecting a data item, it emits the started event for that data item (see code).
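A sketch of this arrangement (the inject_cells() TCM and the clk event are assumptions made for illustration; see cover_generation.e for the actual code):
<'
struct cell {
len: uint;
event started; // emitted when injection of this cell begins
cover started is {
item len;
};
};
extend port {
inject_cells() @clk is {
for each (c) in cells {
emit c.started; // sample the cell's coverage as its injection begins
// ... drive the cell's data into the DUT ...
wait cycle;
};
};
};
'>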
Using Backpointers and files
The child struct defined in a file of
its own: file: child.e
<'
struct child {
//... child struct members
};
'>
Extend the child, in the file parent.e
<'
extend child {
parent: parent_struct;
// a generated field
};
// use the field and link it in
struct parent_struct {
// The constraint is always in the parent struct
one_child: child;
keep one_child.parent == me;
children: list of child;
keep for each in children {
.parent == me;
};
};
'>
Buckets for int/uint coverage
items
Generally, we recommend defining buckets for items with a large range of possible values (such as ints/uints). Omitting the bucket definitions may result in less informative coverage reports for those items, because:
Each sampled value is displayed in a bucket of its own, possibly making the report long and unreadable.
Values that were not sampled (holes) are not easily identified.
Example
Suppose you are covering the delay (in clock cycles)
between requests from the arbiter and their grant. A
poor cover item definition would be:
item last_grant_delay;
If the test produced the following set of delays: {4, 3, 4, 4, 8, 3}, the coverage report of the above item will look something like this:
last_grant_delay (6 samples, 1 tests ):
2 times (33%, 1 tests) [3]
3 times (50%, 1 tests) [4]
1 times (16%, 1 tests) [8]
From this report, it is hard to identify values that were not sampled but should have been, or illegal values that were sampled.
Use Ranges to Make Buckets
<'
// test spec: Check each of the short-cycle delay values [2..6],
// and also longer valid values (up to 9 cycles). Check also
// the handling of invalid delays.
struct request_grant_monitor {
event grant;
last_grant_delay: int;
cover grant is {
// see: defining cover group
item last_grant_delay using
ranges = { range([2..6], "", 1); // short delays: a bucket for each value
range([7..9], "long"); // long delays: one bucket for all
range([0..1, 10..1M], "invalid"); // one bucket for all invalid values
};
};
};
'>
Results:
With the cover item defined this way, the coverage report will
reveal holes and group intentional, invalid values into a bucket
of their own.
last_grant_delay (6 samples, 1 tests ):
0 times ( 0%, 0 tests) [2]
2 times (33%, 1 tests) [3]
3 times (50%, 1 tests) [4]
0 times ( 0%, 0 tests) [5]
0 times ( 0%, 0 tests) [6]
1 times (16%, 1 tests) long
0 times ( 0%, 0 tests) invalid
From this report, you can easily see that delays of 2, 5, and 6
cycles were not tested nor were any invalid values tested.
8.4 Coverage of DUT Data
Output
See Code: cover_output.e
Coverage of output structs
Coverage of output DUT data items (cells, packets,
memory transactions, etc.) can be an excellent
measure of the effectiveness of your tests.
The coverage information for each data item may be
sampled once all the data is collected into an e
struct.
This section explains how you can get coverage of
interesting attributes of the data items collected from
the DUT output and stored as e structs. It also
suggests a simple way to define the coverage group
and the coverage event (used for sampling the
coverage data).
Implementation
To define coverage of data that is collected
into a struct (typically by a TCM):
1. Define an event in the data item struct.
2. Define the cover group in the data item struct using that event.
3. Add an action to the collecting TCM so that it emits the event when it finishes collecting the data.
Basic structs definition for a
switch
<'
extend sys {
switch: switch is instance;
// DUT top level unit instance
};
unit switch {
// top DUT unit definition
ports: list of port is instance;
// one for each input or output port
keep ports.size() == 4;
// see: size() syntax
keep for each (p) in ports {
p.num == index;
};
wait_and_stop() @sys.any is {
// simplified way to stop the run, see Note 5
wait [30] * cycle;
stop_run();
};
run() is also {
start wait_and_stop();
};
};
I/O stub: port
unit port {
// I/O-stub unit
num: int;
// port's number
!cells: list of cell;
// generated on the fly
event recv_clk is cycle @sys.any;
// fake clock
listen() @recv_clk is {
var c: cell;
// holds the currently collected data struct
while (TRUE) {
c = new;
// allocates a new empty struct
c.parent = me;
fill_cell(c);
// fills the cell with random data
emit c.received;
// signals completion of data item collection
cells.add(c);
// see: add() syntax
wait [1] * cycle;
// needed for demonstration only
}; // while (TRUE)
}; // listen()
run() is also {
start listen();
};
Fill Cell
// This method is for demonstration purposes only. It generates random
// data for the cell, instead of really collecting it from the DUT.
fill_cell(c: cell) is {
var temp_c: cell;
gen temp_c keeping { .parent == me};
c.valid = temp_c.valid;
c.header = temp_c.header;
c.payload = temp_c.payload;
};
}; // unit port
data item: cell
struct cell {
// top data item struct
parent: port;
// back pointer to port
valid: bool;
keep soft valid == TRUE;
// by default, have valid cells
%header: cell_header;
%payload: cell_payload;
keep valid => header.len in [64..1518];
keep payload.data.size() == header.len;
// see: size() syntax
event received; // emitted upon completion of data collection
}; // struct cell
Cell header and payload
struct cell_header {
%len: int(bits: 32);
// Add here other header's fields ...
};
struct cell_payload {
%data: list of byte;
// Add here other payload fields ...
};
'>
Add the coverage group for
the collected cells
extend cell {
cover received using text = "Coverage of the collected cells data" is {
item valid;
item payload_size: uint = payload.data.size() using ranges = {
// Test Spec: Check small and large payload sizes, and for invalid sizes
range([64..200],"Small-Valid");
range([201..1518],"Large-Valid");
range([0..63,1519..2500],"Invalid"); };
// The DUT port that transmitted this cell
item port_num: uint = parent.num using
// Note that num is a field of the parent port
ranges = {range([0..50],"",1)},
// Define buckets when you have a large range
illegal = (port_num >= sys.switch.ports.size()); // Cover used ports only
}; // cover received
}; // extend cell
Extend Global
extend global {
setup_test() is also {
// Activate coverage collection.
// see: setup_test() method
set_config(cover, mode, normal);
// see: set_config syntax
};
finalize_test() is also {
// see: finalize_test() method
specman("show cover *.*.*");
// Automatically show coverage at end of run
}; // See Note 1: coverage report generated by this example
};
'>
Coverage Reports
8.5 Covering Asynchronous
Values
See Code: cover_async_overview.e
Introduction
In many cases (especially for cross-coverage), you may need to cover asynchronous values, i.e. values that should be sampled at different times:
Such asynchronous values usually originate from different signals in the DUT, but they may also originate from the e test environment.
To obtain cross-coverage of more than one value, all the cross-coverage values must be contained within one cover group.
Since all the items in a cover group are sampled at the same time, you must define a single sampling moment, at which all the involved values (or some saved snapshot of them) are sampled at once.
Implementation
1. Choose a host struct
Keep in mind all the relevant events, fields and DUT signals to be covered.
2. Define a sampling event
Use a new, dedicated event, which will be emitted when all the covered values are valid and can be sampled. If there is no point in time when all the covered values are valid, save value snapshots.
3. Define a coverage group
Base the coverage group on the sampling event (defined in step 2), and include cover items that sample all the required values.
Saving snapshots of values inside temporal expressions
The temporal language allows execution of e actions during the evaluation of a temporal expression, using exec blocks:
This lets you save a value for use even after the value has changed.
For example, the temporal expression of the req_ack event in the following example detects a request-acknowledge sequence:
It saves the request kind, which may not remain valid until the time of the acknowledge.
Example
<'
struct requests_monitor {
event req is rise('top.req') @sim;
// emitted when request issued
event ack is rise('top.ack') @sim;
// emitted when acknowledged
req_kind: int;
// saves the request kind
event req_ack is {
@req exec { req_kind = 'top.req_num';};// on a request, save its kind
// see: exec syntax
[1..8];
// 1 to 8 cycles
@ack;
};
on req_ack {
out(req_kind, " acknowledged.");
// using the saved value
};
};
'>
Temporal Expression (aside)
{rise('top.req'); [1]; @ack} @clk;
The "@clk" syntax means "evaluate when clk occurs".
The above expression is a sequence temporal
expression containing three simpler temporal
expressions:
rise('top.req') - a rise in the level of an HDL signal,
followed by
[1] - one cycle of the clk sampling event, followed by
@ack - an occurrence of the ack event
8.5.1 Persisting Values
Asynchronously changing values
that remain valid between changes
Implementation
You need to sample both values whenever
one of them changes. To cross-cover two
asynchronous persisting values:
1. Define an event that is emitted whenever
one of the values changes. For example:
event cov_data is (@data1_change or @data2_change) @sys.tick_end;
2. Define a cover group for the event defined
in step 1 and sample both values.
Example
Code (Fragment)
extend cpu_monitor {
event ds_change is change('top.cpu.ds')@sim; // data segment change
event cs_change is change('top.cpu.cs')@sim; // code segment change
event segments_change is (@ds_change or @cs_change)@sys.tick_end;
cover segments_change is {
// see: defining cover group
item ds: int = 'top.cpu.ds' using
// sample data segment value
ranges = { range([0..128K-1], "", 32K); };
item cs: int = 'top.cpu.cs' using
// sample code segment value
ranges = { range([0..128K-1], "", 32K); };
cross ds, cs using
// cross-coverage of the two
text = "Data segment vs. code segment";
};
}; // extend cpu_monitor
8.5.2 Coupled Asynchronous Values
Covering couplets of values that change asynchronously
Coupled Values
Coverage of two values (either DUT signals or fields in e structs) which change asynchronously with respect to each other but whose changes have a one-to-one coupling. Such coupled pairs of changes are called "change sets".
The values are not persisting and therefore must be sampled when they change (or become valid).
Implementation
1. For each of the two values, define a basic event to
be emitted when the concerned value changes.
2. Define a combined event that is emitted once for
each change set.
3. Incorporate exec blocks in the temporal expression
of the combined event that save the required data
from all the early changing values (data1 in our
example).
4. Define a cover group for the event defined in step 2,
using both the data saved within the temporal
expression (step 3) and other current values from
the DUT/test environment.
Example
The following example is of a bus arbiter that
waits for requests and grants them.
Requests are accompanied by an ID, and
grants are accompanied by a burst-mode
indication.
Suppose that the test plan requires testing the
combinations of request types and the burst mode
they were assigned.
Code Fragment
extend arbiter_monitor {
request_type: uint;
// basic events - one for each value in the change set
event request is rise('top.arbiter.req')@sim; // emitted when a request is issued
event grant is rise('top.arbiter.grant')@sim; // emitted when the request is granted
// combined event - emitted once for each change set
event cov_req_ack is {
@request exec {
// see: exec syntax
request_type = 'top.arbiter.req_type';} ;// saving req_type for later use
[..] * not @request ;
// see Note 2
@grant
} @sys.tick_end;
// Sampling at Tick End
Fragment
cover cov_req_ack using
text = "Request-acknowledge cycles coverage" is {
item request_type using
// sampled & saved at request time
text = "Request type",
ranges = { range([0..3], "", 1);
// buckets for large ranges,
range(others, "invalid"); };
item burst_mode: bool
// sampled at grant time
= 'top.arbiter.burst_mode' using
text = "Burst mode for requests";
cross request_type, burst_mode using
text = "Request type vs. allowed burst mode";
}; // cover cov_req_ack
}; // extend arbiter_monitor
Execution
Coverage Report
……
cross__request_type__burst_mode (2 samples, 1 tests ):
Request type vs. allowed burst mode
0 times ( 0%, 0 tests) [0]
0 times ( 0%, 0 tests) [1]
1 times (50%, 1 tests) [2]
    0 times ( 0%, 0 tests) FALSE
    1 times (100%, 1 tests) TRUE
1 times (50%, 1 tests) [3]
    1 times (100%, 1 tests) FALSE
    0 times ( 0%, 0 tests) TRUE
0 times ( 0%, 0 tests) invalid
8.5.3 Variably Coinciding
Values
Covering couplets of values where changes
to one value are less frequent but always
coincide with changes to the other value.
Introduction
Coverage collection of two values (either DUT signals or fields in e structs) that change synchronously with respect to each other, but one of which changes more frequently than the other.
Sample the values only when both change.
Implementation
Trigger sampling of variably coinciding values by changes to either the slower-changing value or the faster-changing value:
The best approach is sampling by the slower-changing value, since it is more straightforward and more time efficient. However, the other approach may be useful if there is already a coverage group for the faster-changing value (data1 in our example).
To sample variably coinciding values using the timing of the slower-changing value:
1. Define a basic event that detects value changes of the slower-changing value (data2 in Figure 1).
2. On the basis of the event defined in step 1, define an event that is emitted at the end of the tick (see code example).
3. Define a coverage group to sample both values for the event defined in step 2.
Event definitions
Code Fragment
unit cache_monitor {
// top-level cache monitor
event cache_miss is rise('top.cache.miss') @sim;
event cache_read is rise('top.cache.read') @sim;
};
extend cache_monitor {
event miss_cover is @cache_miss @sys.tick_end;
cover miss_cover using
text = "Cache misses (triggered by the miss signal)" is {
item addr: bus_addr_type = 'top.cache.addr' using
ranges = { range([0..64K-1], "", 16K); },
ignore = (addr >= 64K);
}; // cover miss_cover
}; // extend cache_monitor
8.5.4 Values Within an
Interval
Covering values that change within
a designated time interval.
Implementation
To cover value(s) X within a designated interval I:
1. Define the basic events that signal:
- The beginning of the interval
- The end of the interval
- The change of the value(s) X
2. Define an event (using the basic events defined in step 1) for the sequence:
- Interval start
- X changing
- Interval end
3. Define a cover group for the sequence event (step 2 above), sampling all the required values, either directly or from their saved state.
Example
Structs/Units
<'
define COL_WINDOW 64;
// define collision window (see Note 1)
define TX_END_AFTER_COL 32;
// cycles to stop TX after a collision
extend sys {
port: port is instance;
};
unit port {
event clk is cycle@sys.any;
// fake clock, see Note 4
// basic events: interval start and end, and the asynchronous collision
event TX_start is rise('top.RD') @sim;
event TX_end is fall('top.RD') @sim;
event collision is rise('top.collision') @sim;
};
'>
Coverage Event definition
<'
extend port {
!start_time: time;
!collision_time: time;
event cov_collision is {
@TX_start exec { start_time = sys.time; };
// save transmission start time
[..] * (not @TX_start and not @TX_end);
@collision exec { collision_time = sys.time;}; // save collision time
[..] * not @TX_start;
// ignore additional collisions
@TX_end
} @clk;
Coverage command
cover cov_collision using
text = "Collisions-related delays" is {
item collision_delay: int = (collision_time - start_time) using
text = "Delay between TX start and a collision",
ranges = { range([1..COL_WINDOW-1], "collision window");
range([COL_WINDOW], "collision window limit");
range(others, "late collision"); };
item TX_end_delay: int = (sys.time - collision_time) using
text = "Delay between a collision and the TX end",
ranges = { range([1..TX_END_AFTER_COL], "short");
range([TX_END_AFTER_COL+1..100], "long");
range(others, "very long"); };
}; // cover cov_collision
}; // extend port
Timing example
Coverage Report
collision_delay (2 samples, 1 tests ):
Delay between TX start and a collision
2 times (100%, 1 tests) collision window
0 times ( 0%, 0 tests) collision window limit
0 times ( 0%, 0 tests) late collision
TX_end_delay (2 samples, 1 tests ):
Delay between a collision and the TX end
2 times (100%, 1 tests) short
0 times ( 0%, 0 tests) long
0 times ( 0%, 0 tests) very long
8.7 Coverage Analysis
See code(s): cover_analysis…e
Overview
After you collect coverage data from your test base,
you may want to analyze it to get a more
comprehensive view of your coverage. Such analysis
may address the following questions:





What buckets and items are not covered at all?
What tests cover what items, and which items (if any) are
covered by only one test?
What is the coverage grade of your current testing
environment, i.e. how far are you from full coverage?
What is each test's contribution to the current coverage?
What is the optimal sub-list of tests for best coverage (in
terms of coverage grade and in terms of efficiency)?
Tutorial format
To use this application, download the full code files from each of the sub-sections in this section and load them in succession. Then follow the instructions to get the kind of report shown in the next slide.
Activating the coverage
analysis application
To activate the coverage analysis application:
Run your environment.
Load the e code of this book (cover_analysis_collect_data.e).
Issue the command analyze cover from the Specman Elite prompt.
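At the Specman Elite prompt, the sequence might look like this (assuming your environment is already loaded):
load cover_analysis_collect_data.e
analyze cover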
9 Performance
10 Debugging
11 Test Base Implementation
12 Pre-defined Components
Contents
"PCI eVC User Guide Version 1.2". The PCI eVC suits
both devices that contain an arbiter and devices that
have only bus agents.
Ethernet eVC User Guide Version 1.0. The ethernet
eVC suits any ethernet device, with any number of
ethernet ports.
AMBA AHB eVC User Guide Version 1.0. The AMBA
AHB eVC suits any device that contains an AHB bus
interface.
USB eVC User Guide Version 1.0. The USB eVC suits
any device that contains a USB bus interface.