A  ¬B

Coverage Metrics for Requirements-Based Testing

Michael W. Whalen (Univ. of Minnesota)
Ajitha Rajan (Univ. of Minnesota)
Mats P.E. Heimdahl (Univ. of Minnesota)
Steven P. Miller (Rockwell Collins Inc.)
Adequacy of Black Box Test Suites
• Is your black-box test suite adequate?
• Current practice
• Examine coverage on an executable artifact
[Figure: black-box tests are derived from the requirements and run against the software model/source code, on which adequacy is measured]
Adequacy of Black Box Test Suites
• Problems with current practice
• Indirect measure
• Defects of omission in implementation not exposed
[Figure: Incomplete Implementation → Weak Black-Box Test Set]
• Executable artifact is necessary
• adequacy can only be determined late in the
development process
Adequacy Measure - Desirable
Properties
• Objective, implementation-independent measure of adequacy of a black-box test suite
• Objective assessment of completeness
of high-level requirements (given an
implementation)
• Potential for auto-generation of black-box test suites.
The Idea
Write requirements in a formal notation…
[Figure: the same requirements captured in several formal notations: temporal logic, e.g. G((¬Onside_FD_On ∧ ¬Is_AP_Engaged) → X(Is_AP_Engaged → Onside_FD_On)); synchronous observers, e.g. G(FD_On -> Cues_On); and state machines, e.g. the mode logic of a microwave-oven controller and of a flight guidance system (Left/Right_Independent_Mode, Left/Right_FGS_Active)]
…then define structural coverage metrics to
directly and objectively describe coverage of
requirements
Formalizing Requirements
“If the onside FD cues are off, the onside FD cues shall
be displayed when the AP is engaged”
G((¬Onside_FD_On ∧ ¬Is_AP_Engaged) → X(Is_AP_Engaged → Onside_FD_On))
• Possible Coverage Metrics
• Requirements coverage: Single test case that
demonstrates that req. is satisfied
• Prone to “dumb” tests, e.g. an execution in which the AP is never engaged (see the sketch below)
• More rigorous metric is necessary
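A minimal sketch (in Python; the trace encoding and variable names are assumed here, not taken from the slides) of why a single requirements-coverage test can be a "dumb" test: a trace in which the AP is never engaged satisfies the property above vacuously, so it demonstrates nothing about the FD cues.

    # Each step of the (hypothetical) trace records (Onside_FD_On, Is_AP_Engaged).
    trace = [(False, False), (False, False), (False, False)]

    def property_holds(trace):
        # G((!Onside_FD_On & !Is_AP_Engaged) -> X(Is_AP_Engaged -> Onside_FD_On)),
        # checked over the finite prefix: inspect every step that has a successor.
        for (fd, ap), (fd_next, ap_next) in zip(trace, trace[1:]):
            if not fd and not ap:              # antecedent holds at this step
                if ap_next and not fd_next:    # X(...) violated at the next step
                    return False
        return True

    print(property_holds(trace))  # True, yet the AP is never engaged in the test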
Modified Condition/Decision
Coverage (MC/DC)
• To satisfy MC/DC
• Every point of entry and exit in the model should be invoked at least once,
• Every basic condition in a decision in the model
should take on all possible outcomes at least once,
• Each basic condition should be shown to
independently affect the decision’s outcome
Example: basic conditions A and B, decision A or B

    A   B   A or B
    F   F     F
    T   F     T
    F   T     T

Rows 1 and 2 show the independent effect of A; rows 1 and 3 show the independent effect of B.
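A small sketch (in Python, not part of the slides) of how the independence requirement can be checked mechanically: two tests demonstrate the independent effect of a basic condition if they differ only in that condition and flip the decision outcome.

    from itertools import combinations

    def mcdc_pairs(tests, decision, num_conditions):
        """For each basic condition, collect the test pairs that show its
        independent effect: the pair differs only in that condition and the
        decision outcome changes."""
        pairs = {}
        for c in range(num_conditions):
            pairs[c] = [
                (t1, t2) for t1, t2 in combinations(tests, 2)
                if all(t1[i] == t2[i] for i in range(num_conditions) if i != c)
                and t1[c] != t2[c]
                and decision(t1) != decision(t2)
            ]
        return pairs

    # The decision "A or B" and the three tests from the table above.
    tests = [(False, False), (True, False), (False, True)]
    print(mcdc_pairs(tests, lambda t: t[0] or t[1], 2))
    # {0: [((False, False), (True, False))], 1: [((False, False), (False, True))]}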
Unique First Cause (UFC)
Coverage
“System shall eventually generate an Ack or a Time Out”
Req. LTL property: F(A ∨ B)

[Figure: a path S0 → S1 → S2 → S3 → ... → Si on which A and B are initially false, A becomes true (with B still false) partway along, and B becomes true only at Si]

This path satisfies the UFC obligation for A but not for B. For independence of B, a path is needed on which A and B remain false until B becomes true with A still false, e.g. S0(¬A,¬B) → S1(¬A,¬B) → ... → Si(¬A,B).

Formal UFC obligation for A: ¬(A ∨ B) U (A ∧ ¬B)
Formal UFC obligation for B: ¬(A ∨ B) U (B ∧ ¬A)
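A sketch (in Python, with an assumed trace encoding) of checking these two obligations against a finite path: an obligation of the form φ U ψ is met if ψ holds at some step and φ holds at every earlier step.

    def satisfies_until(trace, phi, psi):
        """Check phi U psi over a finite trace given as a list of state dicts."""
        for i, state in enumerate(trace):
            if psi(state):
                return all(phi(trace[j]) for j in range(i))
            if not phi(state):
                return False
        return False  # psi never occurred within the trace

    # A path like the one above: A and B stay false, then A becomes true first.
    trace = [{'A': False, 'B': False}] * 3 + [{'A': True, 'B': False}]

    not_a_or_b = lambda s: not (s['A'] or s['B'])
    ufc_A = satisfies_until(trace, not_a_or_b, lambda s: s['A'] and not s['B'])
    ufc_B = satisfies_until(trace, not_a_or_b, lambda s: s['B'] and not s['A'])
    print(ufc_A, ufc_B)  # True False: the path covers the obligation for A only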
UFC Coverage
• G(A)+ = {A U (a ∧ G(A)) | a ∈ A+}
  G(A)- = {A U a | a ∈ A-}
• F(A)+ = {¬A U a | a ∈ A+}
  F(A)- = {¬A U (a ∧ G(¬A)) | a ∈ A-}
• (A U B)+ = {(A ∧ ¬B) U ((a ∧ ¬B) ∧ (A U B)) | a ∈ A+} ∪ {(A ∧ ¬B) U b | b ∈ B+}
  (A U B)- = {(A ∧ ¬B) U (a ∧ ¬B) | a ∈ A-} ∪ {(A ∧ ¬B) U (b ∧ ¬(A U B)) | b ∈ B-}
• X(A)+ = {X(a) | a ∈ A+}
  X(A)- = {X(a) | a ∈ A-}
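A sketch (in Python; only atoms and the G, F, and X rules above are handled, the A U B rules follow the same pattern, and the base case A+ = {A}, A- = {¬A} for an atom is an assumption) of generating obligations like these syntactically from a formula tree:

    def show(f):
        return f[1] if f[0] == 'atom' else f"{f[0]}({show(f[1])})"

    def ufc(f):
        """Return (positive, negative) UFC obligation sets, as LTL strings, for a
        formula given as a nested tuple: ('atom', 'p'), ('G', f), ('F', f), ('X', f)."""
        op = f[0]
        if op == 'atom':
            return {f[1]}, {f"!{f[1]}"}
        A = show(f[1])
        pos, neg = ufc(f[1])
        if op == 'G':
            return ({f"({A}) U (({p}) & G({A}))" for p in pos},
                    {f"({A}) U ({n})" for n in neg})
        if op == 'F':
            return ({f"(!({A})) U ({p})" for p in pos},
                    {f"(!({A})) U (({n}) & G(!({A})))" for n in neg})
        if op == 'X':
            return ({f"X({p})" for p in pos}, {f"X({n})" for n in neg})

    print(ufc(('F', ('atom', 'ack'))))
    # ({'(!(ack)) U (ack)'}, {'(!(ack)) U ((!ack) & G(!(ack)))'})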
Reqs. Coverage as an Adequacy
Measure for Conformance Testing
Hypothesis 1 (H1): Conformance tests providing
requirements UFC coverage are more effective than
conformance tests providing MC/DC over the model
Hypothesis 2 (H2): Conformance tests providing
requirements UFC coverage in addition to MC/DC
over the model are more effective than conformance
tests providing only MC/DC over the model
Experiment Setup
[Figure: experiment pipeline. Formal LTL Requirements → Generate UFC Tests → Reduce → Reduced UFC Test Suites 1, 2, 3. Formal Model → Generate MC/DC Tests → Reduce → Reduced MC/DC Test Suites 1, 2, 3]
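The slides do not spell out the "Reduce" step; a common approach, sketched here in Python under that assumption, is a greedy pass that keeps a test only if it covers at least one not-yet-covered obligation, with different random orderings producing the different reduced suites.

    import random

    def reduce_suite(tests, obligations_of, seed=None):
        """Greedy test-suite reduction: shuffle the suite, then keep each test
        only if it satisfies at least one obligation not yet covered."""
        ordered = list(tests)
        random.Random(seed).shuffle(ordered)
        covered, reduced = set(), []
        for test in ordered:
            new = obligations_of(test) - covered
            if new:
                reduced.append(test)
                covered |= new
        return reduced

    # Toy usage: obligations each test covers, given as sets of obligation ids.
    cov = {'t1': {1, 2}, 't2': {2}, 't3': {3}}
    print(reduce_suite(cov, lambda t: cov[t], seed=1))
    # a reduced suite; its exact contents depend on the shuffle order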
Results – Hypothesis 1
[Chart: MC/DC vs UFC. Y-axis: % fault finding (0 to 100). Bars: Avg. MC/DC and Avg. UFC for the DWM1, DWM2, Latctl, and Vertmax systems]
Hypothesis 1 rejected at 5% statistical significance
on all but the Latctl system
Analysis – Hypothesis 1
Nested formulation:

    LTLSPEC G(var_a > (
      case
        foo : 0 ;
        bar : 1 ;
      esac +
      case
        baz : 2 ;
        bpr : 3 ;
      esac
    ));

Flattened, logically equivalent formulation:

    LTLSPEC G(var_a > (
      case
        foo & baz : 0 + 2 ;
        foo & bpr : 0 + 3 ;
        bar & baz : 1 + 2 ;
        bar & bpr : 1 + 3 ;
      esac
    ));

The two properties are logically equivalent, but the flattened form contains more basic-condition occurrences and therefore generates more coverage obligations than the nested form: the rigor of the metric depends on how the requirement (or model) is structured.
Results – Hypothesis 2
[Chart: MC/DC vs MC/DC + UFC. Y-axis: % fault finding (80 to 96). Bars: Best MC/DC and Avg. Combined for the DWM1, DWM2, Latctl, and Vertmax systems]
Hypothesis 2 accepted at 5% statistical significance
on all but the DWM2 system
Conclusion
• UFC > MC/DC → FALSE (3 of the 4 case examples at 5% statistical significance)
• UFC + MC/DC > MC/DC → TRUE (3 of the 4 case examples at 5% statistical significance)
• Combine rigorous metrics for requirements coverage
and model coverage to measure adequacy of
conformance test suites
• UFC as well as MC/DC metrics are sensitive to structure of requirements/implementation
  → Need coverage metrics that are robust to structural changes
  → Currently working on this