McCabe Complexity Metrics

Management Overview
9861 Broken Land Parkway
Fourth Floor
Columbia, Maryland 21046
800-638-6316
www.mccabe.com
support@mccabe.com
1-800-634-0150
Copyright McCabe & Associates 1999
Agenda
• McCabe IQ Overview
• Software Measurement Issues
• McCabe Concepts
• Software Quality Metrics
• Software Testing
• Questions and Answers
About McCabe & Associates
• Global presence
• 20 years of expertise
• Analyzed over 25 billion lines of code
McCabe IQ process flow
[Process flow diagram: on the analysis platform, McCabe IQ parses the source code and produces instrumented source code; on the target platform, the instrumented code is compiled and run, producing an execution log; the log is imported back into McCabe IQ to support quality management and effective testing.]
McCabe IQ and Configuration Management
[Diagram: McCabe IQ sits between the configuration management system and the test environment; the execution log links test runs back to quality management and effective testing.]
• Integrates with CM tools: Merant PVCS, Rational ClearCase, CA Endevor
• Monitor quality as software changes
• Manage the test environment
McCabe IQ and Test Automation
[Diagram: source code flows into McCabe IQ; the test executable is driven by GUI and non-GUI test automation, and the execution log returns to McCabe IQ for risk-driven test management.]
• Integrates with Mercury Interactive: TestDirector, WinRunner
• Risk-driven test management
• Effective, automated testing
McCabe IQ Components
QUALITY ASSURANCE: McCabe QA, McCabe Data, McCabe Compare, McCabe Change
TESTING: McCabe Test, McCabe TestCompress, McCabe Slice, McCabe ReTest
McCabe IQ Framework (metrics, data, visualization, testing, API)
Source Code Parsing Technology (C, C++, Java, Visual Basic, COBOL, Fortran, Ada)
McCabe QA
McCabe QA measures software quality with industry-standard metrics.
– Manage technical risk factors as software is developed and changed
– Improve software quality using detailed reports and visualization
– Shorten the time between releases
– Develop contingency plans to address unavoidable risks
McCabe Data
McCabe Data pinpoints the impact of data variable modifications.
– Identify usage of key data elements and data types
– Relate data variable changes to impacted logic
– Focus testing resources on the usage of selected data
McCabe Compare
McCabe Compare identifies reusable and redundant code.
– Simplify maintenance and re-engineering of applications through the consolidation of similar code modules
– Search for software defects in similar code modules, to make sure they're fixed consistently throughout the software
McCabe Change
McCabe Change identifies new and changed modules.
– Manage change with more precision than the file-level information from CM tools
– Work with a complete technical risk profile: Complex? Poorly tested? New or changed?
– Focus review and test efforts
McCabe Test
McCabe Test maximizes testing effectiveness.
– Focus testing on high-risk areas
– Objectively measure testing effectiveness
– Increase the failure detection rate during internal testing
– Assess the time and resources needed to ensure a well-tested application
– Know when to stop testing
McCabe Slice
McCabe Slice traces functionality to implementation.
– Identifies code that implements specific functional transactions
– Isolates code that is unique to the implementation of specific functional transactions
– Helps extract business rules for application redesign
McCabe IQ Components Summary
• McCabe QA: Improve quality with metrics
• McCabe Data: Analyze data impact
• McCabe Compare: Eliminate duplicate code
• McCabe Change: Focus on changed software
• McCabe Test: Increase test effectiveness
• McCabe TestCompress: Increase test efficiency
• McCabe Slice: Trace functionality to code
• McCabe ReTest: Automate regression testing
Software Measurement Issues
• Risk management
• Software metrics
• Complexity metrics
• Complexity metric evaluation
• Benefits of complexity measurement
Software Risk Management
• Software risk falls into two major categories
  – Non-technical risk: how important is the system? Usually known early.
  – Technical risk: how likely is the system to fail? Often known too late.
• Complexity analysis quantifies technical risk
  – Helps quantify reliability and maintainability, which helps with prioritization, resource allocation, contingency planning, etc.
  – Guides testing: focuses effort to mitigate the greatest risks, and helps deploy testing resources efficiently
Software Metrics Overview
• Metrics are quantitative measures
  – Operational: cost, failure rate, change effort, …
  – Intrinsic: size, complexity, …
• Most operational metrics are known too late
  – Cost and failure rate are only known after deployment, so they aren't suitable for risk management
• Complexity metrics are available immediately
  – Complexity is calculated from source code
• Complexity predicts operational metrics
  – Complexity correlates with defects, maintenance costs, …
Complexity Metric Evaluation
• Good complexity metrics have three properties
  – Descriptive: objectively measure something
  – Predictive: correlate with something interesting
  – Prescriptive: guide risk reduction
• Consider lines of code
  – Descriptive: yes, measures software size
  – Predictive, prescriptive: no
• Consider cyclomatic complexity
  – Descriptive: yes, measures decision logic
  – Predictive: yes, predicts errors and maintenance
  – Prescriptive: yes, guides testing and improvement
Benefits of Complexity Measurement
• Complexity metrics are available from code
  – They can even be estimated from a design
• They provide continuous feedback
  – They can identify high-risk software as soon as it is written or changed
• They pinpoint areas of potential instability
  – They can focus resources for reviews, testing, and code improvement
• They help predict eventual operational metrics
  – Systems with similar complexity metric profiles tend to have similar test effort, cost, error frequency, …
McCabe Concepts
Definition: In C and C++, a module is a function or subroutine with a single entry point and a single exit point. A module is represented by a rectangular box on the Battlemap.

[Battlemap example: main calls function a, function c, function d, and the library module printf. The boxes are annotated as a well-designed, testable module; a difficult-to-maintain module; and a difficult-to-test module.]
Analyzing a Module
• For each module, an annotated source listing and flowgraph is generated.
• Flowgraph: an architectural diagram of a software module's logic.

Stmt    Code
1       main()
2       {
3           printf("example");
4           if (y > 10)
5               b();
6           else
7               c();
8           printf("end");
9       }

[Battlemap: main calls b, c, and printf. Flowgraph of main: node 1-3 (sequential statements), node 4 (the condition), nodes 5 and 7 (the branches), node 8-9 (end of condition).]
• node: a statement or block of sequential statements
• edge: flow of control between nodes
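For reference, here is a minimal self-contained version of the slide's example that actually compiles; the variable y and the helpers b() and c() are not shown on the slide, so the definitions below are assumed stubs:

#include <stdio.h>

static int y = 11;                   /* assumed: y is defined elsewhere */
static void b(void) { puts(" b"); }  /* assumed stub */
static void c(void) { puts(" c"); }  /* assumed stub */

int main(void)
{
    printf("example");
    if (y > 10)        /* the single predicate: v(G) of main = 2 */
        b();
    else
        c();
    printf("end\n");
    return 0;
}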
Flowgraph Notation (C)
[Flowgraphs are shown for each C construct:]
if (i) ;
if (i) ; else ;
while (i) ;
do ; while (i);
if (i && j) ;
if (i || j) ;
switch(i) { case 0: break; ... }
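As a rough sketch of how these constructs contribute to cyclomatic complexity: each construct above adds a decision predicate, and the flowgraphs for && and || show that short-circuit operators add their own decision nodes. The function below is an assumed example (f() and g() are hypothetical stubs), annotated with one count per predicate:

static void f(void) {}   /* assumed stub */
static void g(void) {}   /* assumed stub */

void sample(int i, int j)
{
    if (i)              /* +1 */
        f();
    while (j > 0) {     /* +1 */
        if (i && j)     /* +2: the if plus the && short-circuit */
            g();
        j--;
    }
}                       /* v(G) = 4 predicates + 1 = 5 */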
Flowgraph and Its Annotated Source Listing
[Annotated source listing for module "marketing", showing origin information, metric information, and the correspondence between flowgraph nodes and source lines.]

Annotated Source Listing                                   09/23/99
Program : corp4
File    : ..\code\corp4.i
Language: instc_npp

Module  Module                              Start  Num of
Letter  Name        v(G)  ev(G)  iv(G)      Line   Lines
------  ----------  ----  -----  -----      -----  ------
B       marketing      2      1      2         16      10

16  marketing()
17  {
18      int purchase;
19
20      purchase = query("Is this a purchase");
21      if ( purchase == 1 )          /* decision construct */
22          development();
23      else
24          support();
25  }

[Flowgraph nodes B0–B9 are annotated beside the corresponding source lines.]
Low Complexity Software
• Reliable
  – Simple logic: low cyclomatic complexity
  – Not error-prone
  – Easy to test
• Maintainable
  – Good structure: low essential complexity
  – Easy to understand
  – Easy to modify
Moderately Complex Software
• Unreliable
  – Complicated logic: high cyclomatic complexity
  – Error-prone
  – Hard to test
• Maintainable
  – Can be understood
  – Can be modified
  – Can be improved
Highly Complex Software
• Unreliable
  – Error-prone
  – Very hard to test
• Unmaintainable
  – Poor structure: high essential complexity
  – Hard to understand
  – Hard to modify
  – Hard to improve
Would you buy a used car from this software?
• Problem: there are size and complexity boundaries beyond which software becomes hopeless
  – Too error-prone to use
  – Too complex to fix
  – Too large to redevelop
• Solution: control complexity during development and maintenance
  – Stay away from the boundary
Important Complexity Measures
• Cyclomatic complexity, v(G): the amount of decision logic
• Essential complexity, ev(G): the amount of poorly-structured logic
• Module design complexity, iv(G): the amount of logic involved with subroutine calls
• Data complexity, sdv: the amount of logic involved with selected data references
Cyclomatic Complexity
• The most famous complexity metric
• Measures the amount of decision logic
• Identifies unreliable, hard-to-test software
• The related test thoroughness metric, actual complexity, measures testing progress
Cyclomatic Complexity
Cyclomatic complexity, v: a measure of the decision logic of a software module.
– Applies to decision logic embedded within written code
– Is derived from the predicates in the decision logic
– Is calculated for each module in the Battlemap
– Grows from 1 to a high, finite number based on the amount of decision logic
– Is correlated with software quality and testing quantity; modules with higher v (v > 10) are less reliable and require more testing
Cyclomatic Complexity
[Flowgraph example: 15 nodes, 24 edges, and 11 regions (R1–R11); decision nodes are annotated with their predicate counts (six of 1, two of 2). Beware of crossing lines when counting regions.]

Three equivalent calculations of cyclomatic complexity:
• Edges and nodes method: v = e - n + 2 = 24 - 15 + 2 = 11
• Predicate method: v = sum of predicates + 1 = 10 + 1 = 11
• Region method: v = number of regions = 11
Vital Signs and High v’s
Risks of increasing v:
• Higher risk of failures
• Difficult to understand
• Unpredictable expected results
• Complicated test environments, including more test drivers
• Knowledge transfer constraints to new staff
[Chart: v rising from 5 toward 20 over time]
Essential Complexity
• Measures the amount of poorly-structured logic
• Remove all well-structured logic, then take the cyclomatic complexity of what's left
• Identifies unmaintainable software
• The pathological complexity metric is similar; it identifies extremely unmaintainable software
Essential Complexity
Essential complexity, ev: a measure of the "structuredness" of the decision logic of a software module.
– Applies to decision logic embedded within written code
– Is calculated for each module in the Battlemap
– Grows from 1 to v based on the amount of "unstructured" decision logic
– Is associated with the ability to modularize complex modules
– If ev increases, the coder is not using structured programming constructs
Essential Complexity - Unstructured Logic
[Flowgraphs of the four unstructured constructs:]
• Branching out of a loop
• Branching into a loop
• Branching out of a decision
• Branching into a decision
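The four violations can be sketched in C; these are assumed minimal examples (goto makes each jump explicit):

#include <stdio.h>

void out_of_loop(int n)            /* branching out of a loop */
{
    for (int i = 0; i < n; i++) {
        if (i == 3) goto done;     /* jump leaves the loop early */
        printf("%d ", i);
    }
done:
    printf("\n");
}

void into_loop(int n)              /* branching into a loop */
{
    int i = 0;
    goto inside;                   /* jump enters the loop body */
    while (i < n) {
inside:
        printf("%d ", i);
        i++;
    }
}

void out_of_decision(int flag)     /* branching out of a decision */
{
    if (flag) {
        if (flag > 1) goto after;  /* jump leaves the decision early */
        printf("small\n");
    }
after:
    printf("done\n");
}

void into_decision(int flag)       /* branching into a decision */
{
    goto then_part;                /* jump enters the if's branch */
    if (flag) {
then_part:
        printf("then\n");
    }
}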
Essential Complexity - Flowgraph Reduction
Essential complexity, ev, is calculated by reducing the module flowgraph. Reduction is completed by removing decisions that conform to single-entry, single-exit constructs.
[Example: a flowgraph with cyclomatic complexity v = 4 reduces completely, so its essential complexity ev = 1.]
Essential Complexity
Flowgraph and reduced flowgraph after structured constructs have been removed, revealing the decisions that are unstructured.
[Example: original flowgraph, v = 5; reduced flowgraph, v = 3; therefore ev of the original flowgraph = 3. The essential flowgraph is shown superimposed on the original.]
Essential Complexity
Essential complexity helps detect unstructured code.
[Example flowgraphs: v = 10 with ev = 1 reflects a good design; v = 11 with ev = 10 shows how quickly it can deteriorate!]
Vital Signs and High ev’s
Risks of increasing ev:
• Intricate logic
• Conflicting decisions
• Unrealizable test paths
• Constraints for architectural improvement
• Difficult knowledge transfer to new staff
[Chart: ev rising from 1 toward 10 over time]
How to Manage and Reduce v and ev
Decreasing and managing v and ev:
• Emphasis on design architecture and methodology
• Development and coding standards
• QA procedures and reviews
• Peer evaluations
• Automated tools
• Application portfolio management
• Modularization
[Chart: v and ev declining from 20 toward 1 over time]
Module Design Complexity
How Much Supervising Is Done?
Module design complexity
• Measures the amount of decision logic involved with subroutine calls
• Identifies "managerial" modules
• Indicates design reliability and integration testability
• The related test thoroughness metric, tested design complexity, measures integration testing progress
Module Design Complexity
Module design complexity, iv: a measure of the decision logic that controls calls to subroutines.
– Applies to decision logic embedded within written code
– Is derived from predicates in decision logic associated with calls
– Is calculated for each module in the Battlemap
– Grows from 1 to v based on the complexity of calling subroutines
– Is related to the degree of "integratedness" between a calling module and its called modules
Module Design Complexity
Module design complexity, iv, is calculated by reducing the module flowgraph. Reduction is completed by removing decisions and nodes that do not impact the calling control over a module's immediate subordinates.
Module Design Complexity
Example:

main()
{
    if (a == b) progd();
    if (m == n) proge();
    switch(expression)
    {
    case value_1:
        statement1;
        break;
    case value_2:
        statement2;
        break;
    case value_3:
        statement3;
    }
}

[Battlemap: main calls progd and proge. The original flowgraph has v = 5; the switch decisions do not impact the calls, so they are removed, leaving a reduced flowgraph with v = 3. Therefore iv of the original flowgraph = 3.]
Data complexity
• Actually a family of metrics
  – Global data complexity (global and parameter), specified data complexity, date complexity
• Measures the amount of decision logic involved with selected data references
• Indicates data impact and data testability
• The related test thoroughness metric, tested data complexity, measures data testing progress
Data complexity calculation
[Two flowgraphs for module M: the original graph (v = 6) with decisions C1–C5, where node 4 references Data A, reduces (=>) to a graph that keeps only the decisions (C1, C2) controlling paths to the Data A references.]

Basis paths of the reduced graph and their conditions:
P1: 1-2-3-4-9-3-4-9-12    C1 = T, C2 = T, C2 = F
P2: 1-2-12                C1 = F
P3: 1-2-3-4-9-12          C1 = T, C2 = F

v of the original flowgraph = 6; data complexity for Data A = 3.
Module Metrics Report
Annotations: v is the number of unit test paths for a module; iv is the number of integration tests for a module. The Total row gives the total number of test paths for all modules; the Average row gives the average number of test paths per module.

Module Metrics Report                        Page 1    10/01/99
Program: less

Module Name     Mod #  v(G)  ev(G)  iv(G)  File Name
--------------  -----  ----  -----  -----  ------------------
CH:fch_get        118    12      5      6  ..\code\CH.I
CH:buffered       117     3      3      1  ..\code\CH.I
ch_seek           105     4      4      2  ..\code\CH.I
ch_tell           108     1      1      1  ..\code\CH.I
ch_forw_get       106     4      1      2  ..\code\CH.I
ch_back_get       110     6      5      5  ..\code\CH.I
forw_line         101    11      7      9  ..\code\INPUT.I
back_line          86    12     11     12  ..\code\INPUT.I
prewind           107     1      1      1  ..\code\LINE.I
pappend           109    36     26      3  ..\code\LINE.I
control_char      119     2      1      1  ..\code\OUTPUT.I
carat_char        120     2      1      1  ..\code\OUTPUT.I
flush             130     1      1      1  ..\code\OUTPUT.I
putc              122     2      1      2  ..\code\OUTPUT.I
puts              100     2      1      2  ..\code\OUTPUT.I
error              83     5      1      2  ..\code\OUTPUT.I
position          114     3      1      1  ..\code\POSITION.I
add_forw_pos       99     2      1      1  ..\code\POSITION.I
pos_clear          98     2      1      1  ..\code\POSITION.I
PRIM:eof_bell     104     2      1      2  ..\code\PRIM.I
PRIM:forw          95    15      8     12  ..\code\PRIM.I
PRIM:prepaint      94     1      1      1  ..\code\PRIM.I
repaint            93     1      1      1  ..\code\PRIM.I
home               97     1      1      1  ..\code\SCREEN.I
lower_left        127     1      1      1  ..\code\SCREEN.I
bell              116     2      1      2  ..\code\SCREEN.I
vbell             121     2      1      2  ..\code\SCREEN.I
clear              96     1      1      1  ..\code\SCREEN.I
clear_eol         128     1      1      1  ..\code\SCREEN.I
so_enter           89     1      1      1  ..\code\SCREEN.I
so_exit            90     1      1      1  ..\code\SCREEN.I
getc               91     2      1      2  ..\code\TTYIN.I
--------------  -----  ----  -----  -----  ------------------
Total:                  142     93     82
Average:               4.44   2.91   2.56
Rows in Report: 32
Common Testing Challenges
• Deriving Tests
  – Creating a "good" set of tests
• Verifying Tests
  – Verifying that enough testing was performed
  – Providing evidence that testing was good enough
• When to Stop Testing
• Prioritizing Tests
  – Ensuring that critical or modified code is tested first
• Reducing Test Duplication
  – Identifying similar tests that add little value, and removing them
An Improved Testing Process
[Diagram: black-box test scenarios are derived from the requirements; white-box static identification of test paths comes from analysis of the implementation, at the sub-system or system level.]
What is McCabe Test?
[Diagram: source code parsing feeds the McCabe tools database, which supports requirements tracing, test coverage, and untested-path reports, plus import/export. Instrumented source code is built into an executable; executing it produces trace information that is imported back into the database.]
Coverage Mode
• A color scheme represents coverage
[Screenshot: Battlemap with no trace file imported]
Coverage Results
• Colors show "testedness"; lines show execution between tested modules
• The color scheme applies to branches, paths, and lines of code
[Screenshot: Battlemap with a trace file imported; modules are marked untested or partially tested, e.g. My_Function at 67%]
Coverage Results at Unit Level
Menu: Module -> Slice
Deriving Functional Tests
• Visualize untested modules
• Module names provide insight into additional tests
• Examine partially tested modules
[Screenshot: module named 'search' highlighted]
Deriving Tests at the Unit Level
• Too many theoretical tests!
• What is the minimum number of tests?
• What is a "good" number of tests?
[Example flowgraph: a scale from 0 tests (too few) to 18 theoretical tests (too many); statistical paths = 10, suggesting minimum yet effective testing.]
Code Coverage
[Two example flowgraphs, 'A' and 'B'.]
Which function is more complex?
Using Code Coverage
[Example 'A' and example 'B' each require 2 tests for full code coverage.]
Code coverage is not proportional to complexity.
McCabe's Cyclomatic Complexity
One additional path is required to determine the independence of the 2 decisions.
McCabe's cyclomatic complexity, v(G) = the number of linearly independent paths.
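An assumed C sketch of the point (s1() and s2() are hypothetical stand-ins for arbitrary statements): both functions below reach full branch coverage with 2 tests, but the second has two independent decisions, so v(G) = 3 and structured testing calls for a third path:

static void s1(void) {}   /* assumed stub */
static void s2(void) {}   /* assumed stub */

void example_a(int i)     /* v(G) = 2: one decision */
{
    if (i > 0)
        s1();
    else
        s2();
}

void example_b(int i, int j)   /* v(G) = 3: two decisions */
{
    if (i > 0)
        s1();
    if (j > 0)
        s2();
}

/* example_b: tests (i=1, j=1) and (i=0, j=0) cover every branch,
   but a third test such as (i=1, j=0) is needed to show that the
   two decisions vary independently. */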
Deriving Tests at the Unit Level
[Example flowgraph with complexity = 10.]
A minimum of 10 tests will:
• Ensure code coverage
• Test the independence of decisions
Unit Level Test Paths - Baseline Method
The baseline method is a technique used to locate distinct paths within a flowgraph. The size of the basis set is equal to v(G).

[Flowgraph, v = 5: nodes A through G, with decisions labeled M=N, O=P, X=Y, and S=T.]

Basis set of paths    Path conditions
P1: ABCBDEF           M=N, O=P, S=T, O≠P
P2: AGDEF             M≠N, X=Y
P3: ABDEF             M=N, O≠P
P4: ABCF              M=N, O=P, S≠T
P5: AGEF              M≠N, X≠Y
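The same procedure in C, on an assumed small function: pick a baseline path, then flip one decision at a time until v(G) paths are collected:

int classify(int x)       /* v(G) = 2 predicates + 1 = 3 */
{
    int r = 0;
    if (x > 10)           /* D1 */
        r = 2;
    if (x % 2 == 0)       /* D2 */
        r++;
    return r;
}

/* Baseline method:
   P1 (baseline): D1 = T, D2 = T   e.g. x = 12
   P2 (flip D1):  D1 = F, D2 = T   e.g. x = 4
   P3 (flip D2):  D1 = T, D2 = F   e.g. x = 11 */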
Structured Testing Coverage
1. Generates independent tests

[Flowgraph with nodes A through P and regions R1–R5.]

Basis set:
P1: ACDEGHIKLMOP
P2: ABD…
P3: ACDEFH…
P4: ACDEGHIJL…
P5: ACDEGHIKLMNP

2. Code coverage - frequency of execution:

Node:  A B C D E F G H I J K L M N O P
Count: 5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5
Other Baselines - Different Coverage
1. Generates independent tests

[Same flowgraph, with a different baseline.]

Basis set:
P1: ABDEFHIJLMNP
P2: ACD…
P3: ABDEGH…
P4: ABDEGHIKL…
P5: ABDEGHIKLMOP

2. Code coverage - frequency of execution:

Node:  A B C D E F G H I J K L M N O P
Count: 5 4 1 5 5 4 1 5 5 4 1 5 5 4 1 5

Previous code coverage - frequency of execution:
Node:  A B C D E F G H I J K L M N O P
Count: 5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5

Same number of tests; which coverage is more effective?
Untested Paths at Unit Level
• Cyclomatic test paths
  – Module -> Test Paths
  – Complete test paths by default
• Configurable reports
  – Preferences -> Testing
  – Modify the list of graph/test-path flowgraphs
[Screenshot: flowgraphs showing the remaining untested test paths, via Module -> Test Paths]
Untested Branches at Unit Level
[Screenshot: untested branches, with the number of executions for each decision.]
Preferences -> Testing (add a 'Tested Branches' flowgraph to the list), then Module -> Test Paths
Untested Paths at Higher Level
• System-level integration paths
  – Based on S1
  – End-to-end execution
  – Includes all iv(G) paths
[Example: S1 = 6]
Untested Paths at Higher Level
• System-level integration paths
  – Displayed graphically
  – Textual report
  – Theoretical execution paths
  – Show only untested paths
[Example: S1 = 6]
Untested Paths at Higher Level
• Textual report of end-to-end decisions
  – Module calling list
  – Decision values with line/node numbers
Verifying Tests
• Use coverage to verify tests
• Store coverage results in the repository
• Use execution flowgraphs to verify tests
Verifying Tests Using Coverage
• Four major coverage techniques:
  – Code coverage
  – Branch coverage
  – Path coverage
  – Boolean coverage (MC/DC)
[Screenshot: per-module coverage percentages, ranging from 0% to 100%]
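How the four techniques differ on one assumed example (g() and h() are hypothetical stubs):

static void g(void) {}   /* assumed stub */
static void h(void) {}   /* assumed stub */

void f(int a, int b)
{
    if (a > 0 && b > 0)  /* one decision, two conditions */
        g();
    h();
}

/* Code coverage:   one test (a=1, b=1) executes every line.
   Branch coverage: also requires a test where the decision is
                    false, e.g. (a=0, b=0).
   Path coverage:   every entry-to-exit path must run.
   Boolean (MC/DC): each condition must independently change the
                    decision: (1,1), (0,1), and (1,0) suffice. */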
When to Stop Testing
• Use coverage to assess testing completeness
  – Branch coverage reports
• Coverage increments
  – How much new coverage for each new set of tests?
When to Stop Testing
• Is all of the system equally important?
• Is all code in an application used equally?
  – 10% of the code is used 90% of the time
  – The remaining 90% is used only 10% of the time
• Where do we need to test most?
When to Stop Testing / Prioritizing Tests
• Locate "critical" code
  – Important functions
  – Modified functions
  – Problem functions
• Mark modules
  – Create a new "critical" group
• Import coverage
• Assess coverage for "critical" code
  – Coverage report for the "critical" group
  – Examine untested branches
[Screenshot: modules Runproc at 67%, Search at 52%, and My_Function]
Criticality Coverage
• Optionally use several "critical" groups with increasing levels
• Determine coverage for each group
• Focus testing effort on critical code
[Chart: coverage per criticality level, e.g. 90%, 70%, 50%, 30%, 25%; low coverage on a critical group signals insufficient testing.]
Useful as a management technique.
When to Stop Testing
• Store coverage in the repository
  – With name and author
• Load coverage
  – Multiple selections
  – Share between users
  – Import between analyses with common code
Menu: Testing -> Load/Save Testing Data
Testing the Changes
[Screenshots: version 1.0 coverage results; version 1.1 with the previous coverage results imported into the new analysis and the changed code highlighted.]
Import previous coverage results into a new analysis:
• The parser detects changed code
• Coverage is removed for modified or new code
Testing the Changes
• Store coverage for versions
  – Use metrics trending to show increments
  – The objective is to increase coverage between releases
[Chart: incremental coverage rising across v1.0, v1.1, v1.2, and v2.0 on a 0–100% scale]
McCabe Change
• Marking changed code
  – Reports showing change status
  – Coverage reports for changed modules
• Configurable change detection
  – Standard metrics
  – "String comparison"
[Screenshot: changed code highlighted]
Manipulating Coverage
• Addition/subtraction of slices. The technique:
  – ~(Test A): the complement of Test A
  – (Test B) ^ ~(Test A): Test B intersected with the complement of Test A, i.e. the coverage Test B provides that Test A does not
[Venn diagrams illustrate Test A, Test B, ~(Test A), and the intersection]
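The set arithmetic can be pictured with per-line execution flags; this is an illustrative sketch only, not McCabe IQ's internal data format:

#include <stdio.h>

#define NLINES 8

int main(void)
{
    /* hypothetical executed-line flags for two test runs */
    int testA[NLINES] = {1, 1, 0, 0, 1, 0, 0, 1};
    int testB[NLINES] = {1, 1, 1, 0, 1, 1, 0, 1};

    /* (Test B) ^ ~(Test A): lines exercised only by Test B */
    for (int i = 0; i < NLINES; i++)
        if (testB[i] && !testA[i])
            printf("line %d covered only by Test B\n", i + 1);
    return 0;
}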
Slice Manipulation
• Slice operations: manipulate slices using set theory
• Export slice to file
  – List of executed lines
• Must be in Slice mode
Review
• McCabe IQ products
• Metrics
  – Cyclomatic complexity, v
  – Essential complexity, ev
  – Module design complexity, iv
• Testing
  – Deriving tests
  – Verifying tests
  – Prioritizing tests
  – When is testing complete?
  – Managing change