A Comparative Evaluation of Tests
Generated from Different UML Diagrams
Supaporn Kansomkeat, Jeff Offutt,
Aynur Abdurazik and Andrea Baldini
Software Engineering
George Mason University
Fairfax, VA USA
www.cs.gmu.edu/~offutt/
offutt@gmu.edu
Types of Test Activities
• Testing can be broken up into four general types of activities
  1. Test design : design test values to satisfy coverage criteria or other engineering goals; requires technical knowledge of discrete math, programming, and testing
  2. Test automation : embed test values into executable scripts; requires knowledge of scripting
  3. Test execution : run tests on the software and record the results; requires very little knowledge
  4. Test evaluation : evaluate the results of testing and report to developers; requires domain knowledge
• This research focuses on test design
SNPD 2008
© Jeff Offutt
Model-Based Testing
• We derived tests from one of four general types of structures :
  1. Graphs
  2. Logical expressions
  3. Input space descriptions
  4. Syntactic descriptions (e.g., source code, XML, SQL)
• These structures can come from various software artifacts :
  – Specifications
  – Requirements
  – Design documents
  – Source code
  – Other documentation
• When tests are derived from non-source code artifacts, we say we are
using “model-based testing”
– The most common type of artifact used is a UML diagram
How to Apply MBT ?
• The UML provides many kinds of diagrams
• Test designers are left with questions such as …
Which UML diagrams should I use ?
What kind of tests should I create ?
Research Goal
• This research compared the use of
  1. UML statecharts and
  2. Sequence diagrams
• On software developed in-house that implements part of a mobile phone
• With two types of faults seeded by hand
  1. Unit and module faults
  2. Integration faults
• Unit Testing : Testing methods and classes independently
• Integration Testing : Testing interfaces between methods and classes to assure they have consistent assumptions and communicate correctly
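The two testing levels can be illustrated with a toy sketch. The class and method names below are invented for illustration; they are not taken from the mobile-phone software used in the study.

```java
// Illustrative only: Formatter is the "unit"; Phone integrates with it.
public class TestLevelsDemo {
    static class Formatter {
        // Formats a digit string for display.
        String format(String digits) { return "(" + digits + ")"; }
    }

    static class Phone {
        private final Formatter f = new Formatter();
        // Relies on Formatter's interface assumptions.
        String display(String digits) { return f.format(digits); }
    }

    public static void main(String[] args) {
        // Unit test : exercise Formatter in isolation.
        System.out.println(new Formatter().format("555")); // (555)
        // Integration test : exercise the Phone-to-Formatter interface.
        System.out.println(new Phone().display("555"));    // (555)
    }
}
```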
Statecharts
• A statechart describes behavior of software in terms of states and
transitions among states
• Example : [figure : user interface statechart]
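As a rough sketch of what a statechart describes, states and guarded transitions can be rendered as a small state machine in Java. The state and event names below are hypothetical, not taken from the phone software in the study.

```java
// Hypothetical sketch of a user-interface statechart as a state machine.
public class UiStateMachine {
    enum State { IDLE, MENU, CALLING }

    private State state = State.IDLE;

    public State getState() { return state; }

    // Each transition fires only when its guard (the source state) holds.
    public void openMenu() { if (state == State.IDLE) state = State.MENU; }
    public void dial()     { if (state == State.MENU) state = State.CALLING; }
    public void hangUp()   { if (state == State.CALLING) state = State.IDLE; }

    public static void main(String[] args) {
        UiStateMachine m = new UiStateMachine();
        m.openMenu();
        m.dial();
        System.out.println(m.getState()); // prints CALLING
    }
}
```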
Sequence Diagrams
• A sequence diagram captures time-dependent sequences of interactions between objects
• Example : [figure : initialization sequence diagram]
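A test for one path through such a diagram simply replays its message sequence as method calls and checks the observed order. The class and method names below are invented for illustration.

```java
// Hypothetical sketch: replaying one message-sequence path as a test.
public class SequencePathDemo {
    static class Handset {
        private final StringBuilder log = new StringBuilder();
        // Each "message" records itself so the call order is observable.
        void powerOn()  { log.append("powerOn;"); }
        void loadSim()  { log.append("loadSim;"); }
        void showHome() { log.append("showHome;"); }
        String trace()  { return log.toString(); }
    }

    public static void main(String[] args) {
        // Test for an initialization path : replay its message sequence.
        Handset h = new Handset();
        h.powerOn();
        h.loadSim();
        h.showHome();
        System.out.println(h.trace()); // powerOn;loadSim;showHome;
    }
}
```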
Experimental Hypotheses
• Testing is often performed at specific levels (unit, integration, system)
• Many design models describe aspects of the software at those specific
levels
• A common assumption among researchers is that tests for specific
levels should be based on the associated design artifacts
• Null hypotheses :
  H01 : There is no difference between the number of faults revealed by statechart tests and sequence diagram tests at the unit and integration testing levels
  H02 : There is no difference between the number of test cases generated from statecharts and sequence diagrams
Experimental Variables
• Independent variables
  Types of UML diagrams
    1. Statecharts
    2. Sequence diagrams
  Test criteria
    1. Path coverage for sequence diagrams
    2. Full predicate coverage for statecharts
  Testing levels
    1. Unit testing
    2. Integration testing
  Levels of faults
    1. Unit level faults
    2. Integration level faults
• Dependent variables
  Two sets of tests
  Number of faults found
Experimental Subjects
• Design documents
  – Six class diagrams
  – Five statechart diagrams
  – Six sequence diagrams
• Implementation in Java
  – Eight classes
  – 600 LOC
• Faults (seeded by the third author)
  – 31 unit level faults
  – 18 integration level faults
  – Each fault seeded into a separate copy of the software
    • 37 alternatives
• Test cases
  – 81 tests from statecharts
  – 43 tests from sequence diagrams
• Automation (by the fourth author)
  – Unix shell scripts that ran each test case on each faulty version of the implementation and recorded the results
Generating Test Cases
• Sequence diagrams
  – Message sequence path coverage : execute the software that implements each path through each sequence diagram
• Statecharts
  – Full predicate coverage : for each logical predicate on each transition in the statechart, derive tests to test each clause independently
• Tests were automated as sequences of method calls to the associated object
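As a sketch of the clause-independence idea behind full predicate coverage, consider a hypothetical transition guard `a && !b` (this guard is invented, not taken from the study's statecharts). Each test varies one clause while the other is held so that the varied clause alone determines the predicate's value.

```java
// Hypothetical transition guard and the tests that cover it.
public class FullPredicateDemo {
    // Guard on an imaginary statechart transition.
    static boolean guard(boolean a, boolean b) {
        return a && !b;
    }

    public static void main(String[] args) {
        // Clause a determines the outcome while b is held false:
        System.out.println(guard(true, false));  // true
        System.out.println(guard(false, false)); // false
        // Clause b determines the outcome while a is held true:
        System.out.println(guard(true, true));   // false
    }
}
```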
Seeding Faults
• A unit level fault is defined to cause incorrect behavior in a unit (method or class) when executed in isolation (31 faults)
  – Mistaken variable name
  – Wrong operator used
  – …
• An integration level fault causes two or more units to interact incorrectly (18 faults)
  – Cannot be detected when units are executed in isolation
  – Incorrect method call
  – Incorrect parameter passing
  – Incorrect synchronization
  – …
• A fault was revealed if the output of the faulty version of the program differed from the output of the original program
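A unit level fault of the "wrong operator" kind can be sketched as follows; the methods are invented for illustration, and the faulty copy replaces `>` with `>=`.

```java
// Illustrative "wrong operator" fault: >= seeded in place of >.
public class FaultSeedingDemo {
    // Original unit.
    static boolean originalIsOver(int x, int limit) { return x > limit; }
    // Faulty copy of the same unit.
    static boolean faultyIsOver(int x, int limit)  { return x >= limit; }

    public static void main(String[] args) {
        // A test with x == limit reveals the fault: the outputs differ.
        System.out.println(originalIsOver(5, 5)); // false
        System.out.println(faultyIsOver(5, 5));   // true
    }
}
```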
Experimental Process
[Process diagram : tests are generated from the statecharts and from the sequence diagrams; faults FU1 … FU31 (unit) and FI1 … FI18 (integration) are inserted into separate copies of program P; the tests are run on each faulty version, and the number of faults found is counted]
Experimental Results
                                Faults Found
  Levels of     Number of    Statechart     Sequence Diagram
  Faults        Faults       Tests (81)     Tests (43)
  Unit          31           77% (24)       65% (20)
  Integration   18           56% (10)       83% (15)

As expected, the statechart tests found more unit faults and the sequence diagram tests found more integration faults
Analysis and Validity
• Both null hypotheses are rejected for these data
• The statechart tests found all the unit faults that the sequence diagram
tests found – plus more
– Functionality that was not used in the integrated system
• The sequence diagram tests found all the integration faults that the
statechart tests found – plus more
• Threats to validity :
– External – only one application and one set of tests
– Internal – other test criteria could yield different results
Conclusions
• UML statecharts do not always capture enough low level details
– OCL could help
• Concurrency meant some minor “differences” were not actually
failures (a problem of observability)
• We generated more statechart tests than sequence diagram tests
• Although learning the criteria took some effort, once learned,
generating the tests according to the criteria was very easy – even
without automation
• The results support the strategy :
– Use statecharts to derive unit level tests
– Use sequence diagrams to derive integration tests
Contact
Jeff Offutt
offutt@gmu.edu
http://cs.gmu.edu/~offutt/