System level testing
TDDD04 Lecture 9
Ola Leifler
Monday, 11 May 2015

Outline of the Lecture
• Test automation
• System testing
  • Function testing
  • Model-based testing
• Non-functional testing
  • Performance testing
  • Acceptance testing
TEST AUTOMATION
Why automate testing?
• Automated tests are repeatable and cheap to run
  • Every change can be tested cheaply, to ensure it didn't break something
  • When used for fault isolation this can save considerable time
  • No manual intervention is needed, so tests can run e.g. overnight, when resources are free
• Automated tests can create increased confidence
  • More tests are run more often, more consistently, and at lower cost
  • Things that are hard to test by hand can be tested automatically (e.g. GUIs)
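As a concrete illustration of repeatable, unattended checking, here is a minimal automated suite in Python's unittest. The function under test, apply_discount, and its expected behavior are invented for this sketch.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test (invented for this example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    # Repeatable checks: same inputs, same expected outputs, so the
    # suite can run unattended after every change (e.g. overnight).
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run programmatically rather than via unittest.main(), so the suite
# can also be embedded in larger automation (CI, nightly runs).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the expected outputs are encoded in the tests themselves, any change that breaks them is caught on the next run, which is what makes fault isolation cheap.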
Problems with automated testing?
• Automated tests are expensive to develop
  • They cost 2–3 times as much to develop as a manual test, but only once
  • Maintenance of automated tests can be expensive
• People may have unrealistic expectations
  • Automated tests rarely find new bugs; those are found when the test is developed
  • Technical difficulties are not uncommon (e.g. interoperability)
  • Automation does not prevent bad testing practices
• Organizational issues may prevent success
  • Automated testing often requires technical maturity and infrastructure
  • Introducing automated testing often requires a champion
Testing activities
Intellectual activities (performed once) govern the quality of the tests:
1. Identify
2. Design
3. Build
Clerical activities (repeated many times) govern the cost of executing the tests:
4. Execute
5. Compare
Automated test case generation
• Code-based input generation (symbolic execution, JPF)
• Interface-based: find inputs that cover an appropriate part of the interface
  • Generation of test data from XML schemata
  • Generation of test data from other interface specifications
  • Generation of test data for GUIs (specified in XML or by other means)
• Model-based: create a model from the specification and use it to run the tests
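The interface-based approach can be sketched as exhaustive generation over declared parameter domains. The interface description below, its parameter names, and its representative values are all invented for illustration.

```python
import itertools

# Hypothetical interface description (invented for this sketch): each
# parameter lists representative values from its equivalence classes.
interface = {
    "age":    [0, 17, 18, 120],          # boundary values around adulthood
    "member": [True, False],
    "code":   ["", "SAVE10", "XXXXXX"],  # empty, plausible, invalid-looking
}

def generate_inputs(spec):
    """Yield one test input per combination of declared parameter values."""
    names = list(spec)
    for combo in itertools.product(*(spec[n] for n in names)):
        yield dict(zip(names, combo))

inputs = list(generate_inputs(interface))
# 4 * 2 * 3 = 24 inputs, covering every combination of the declared values
```

Real generators read such domains from an XML schema or another interface specification instead of a hand-written dictionary, and usually apply a selection criterion (e.g. pairwise) rather than the full cross product.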
Levels of testing, recap
Unit testing
• Tests threads of execution in a single unit, e.g. paths in the CFG
• Inputs and outputs are usually “internal” to the program
• Requires the use of test scaffolding
Integration testing
• Tests threads of execution across multiple units, e.g. MM-paths
• Inputs and outputs may be “internal” to the program
• Requires the use of test scaffolding
System testing
• Tests are related to the requirements, i.e. to the customer’s values
• Inputs and outputs are (generally) visible and accessible to the user
• Often done at the end of development (not a good idea)

Integrated units -> [Function test] -> Functioning -> [Performance test] -> Verified and validated -> [Acceptance test] -> Accepted -> [Deployment test] -> Deployed system
MODEL-BASED TESTING
Model-based testing
• Generation of complete test cases from models of the SUT
  • Usually considered a kind of black-box testing
  • Appropriate for functional testing (occasionally robustness testing)
• Models must be precise and should be concise
  • Precise enough to describe the aspects to be tested
  • Concise so that they are easy to develop and validate
  • Models may be developed specifically for testing
• Generates abstract test cases, which must be transformed into executable test cases
Model-based testing process
• Model the SUT and/or its environment
  • Use an existing model or create one for testing
• Generate abstract tests from the model
  • Choose some test selection criteria
  • The main output is a set of abstract tests
  • The output may include a traceability matrix (test-to-model links)
• Concretize the abstract tests to make them executable
• Execute the tests on the SUT and assign verdicts
• Analyze the test results
What is a model?
• Mapping: there is an original object that is mapped to a model
• Reduction: not all properties of the original are mapped, but some are
• Pragmatism: the model can replace the original for some purpose
(Diagram: the modeled attributes of the original are mapped to the mapped attributes of the model.)
Adapted from Allgemeine Modelltheorie by Herbert Stachowiak (1973)
See also: http://modelpractice.wordpress.com/2012/07/04/model-stachowiak/
Example model: UML activity diagram
• The original object is a software system (mapping)
• The model does not show the implementation (reduction)
• The model is useful for testing and requirements (pragmatism)
Modeling the system
If the model is created for testing only:
• Focus primarily on the SUT
• Model only the properties of the SUT relevant to testing
• Replace complex data types with enumerations (where possible)
Pick an appropriate notation; some options:
• Pre/post notation
• Transition-based notation
• History-based notation
• Functional notation
• Operational notation
• Statistical notation
Notations
• Pre/post notations: the system is modeled by its internal state
  • UML Object Constraint Language (OCL), B, Spec#, JML, VDM, Z
• Transition-based: the system is modeled as transitions between states
  • UML state machines, STATEMATE, Simulink Stateflow
• History-based: the system is described as allowable traces over time
  • Message sequence charts, UML sequence diagrams
• Functional: the system is described as mathematical functions
• Operational: the system is described as executable processes
  • Petri nets, process algebras
• Statistical: a probabilistic model of inputs and outputs
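As a toy illustration of the pre/post style (independent of any particular notation such as OCL or JML), preconditions and postconditions over internal state can be written as executable assertions. The Account class is invented for this example.

```python
class Account:
    """Toy system (invented) whose operations are specified in pre/post
    style: executable assertions play the role that OCL or JML clauses
    play in a real pre/post model."""

    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        assert amount > 0                          # precondition
        old = self.balance
        self.balance += amount
        assert self.balance == old + amount        # postcondition

    def withdraw(self, amount):
        assert 0 < amount <= self.balance          # precondition
        old = self.balance
        self.balance -= amount
        assert self.balance == old - amount        # postcondition
```

A test generator working from such a model can derive inputs that satisfy each precondition and use the postconditions as oracles for the expected state.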
Transition-based example (UML+OCL)
Self-transitions in state Waiting:
• keyPress(c) [c=unlock and status=locked] / display=SwipeCard
• keyPress(c) [c=lock and status=locked] / display=AlreadyLocked
• keyPress(c) [c=unlock and status=unlocked] / display=AlreadyUnlocked
• keyPress(c) [c=lock and status=unlocked] / status=locked
Waiting -> Swiped:
• cardSwiped / timer^start()
Swiped -> Waiting:
• keyPress(c) [c=unlock] / status=unlocked
• keyPress(c) [c=lock] / status=locked
• timerExpired()
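Assuming the diagram reads as above, the machine can be encoded as a small executable model. This is a sketch: the timer^start() action is abstracted away, and the state is the pair (control state, status).

```python
# Executable sketch of the door-lock state machine above (assumption:
# the diagram reads as transcribed; timer^start() is abstracted away).

def step(state, status, event, key=None):
    """Return (new_state, new_status, display) after one event."""
    display = None
    if state == "Waiting":
        if event == "cardSwiped":
            return "Swiped", status, display
        if event == "keyPress":
            if key == "unlock" and status == "locked":
                display = "SwipeCard"
            elif key == "lock" and status == "locked":
                display = "AlreadyLocked"
            elif key == "unlock" and status == "unlocked":
                display = "AlreadyUnlocked"
            elif key == "lock" and status == "unlocked":
                status = "locked"
            return "Waiting", status, display
    elif state == "Swiped":
        if event == "keyPress" and key == "unlock":
            return "Waiting", "unlocked", display
        if event == "keyPress" and key == "lock":
            return "Waiting", "locked", display
        if event == "timerExpired":
            return "Waiting", status, display
    return state, status, display          # ignore unexpected events
```

Such an executable model doubles as an oracle: for any event sequence it predicts the expected state, status, and display.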
Generate abstract test cases
Transition-based models:
• Search for sequences that result in e.g. transition coverage
Example (strategy: all transition pairs)
Precondition: status=locked, state=Waiting

Event            | Exp. state | Exp. variables
cardSwiped       | Swiped     | status=locked
keyPress(unlock) | Waiting    | status=unlocked
cardSwiped       | Swiped     | status=unlocked
keyPress(lock)   | Waiting    | status=locked
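The "all transition pairs" criterion can be sketched as a search over a transition graph. The graph below is a simplification invented for this sketch: it keeps only the state-changing transitions of the door-lock example and omits the Waiting self-loops.

```python
# Simplified transition graph (assumption: Waiting self-loops omitted):
# state -> list of (event, next_state).
model = {
    "Waiting": [("cardSwiped", "Swiped")],
    "Swiped":  [("keyPress(lock)", "Waiting"),
                ("keyPress(unlock)", "Waiting"),
                ("timerExpired", "Waiting")],
}

def transition_pairs(model):
    """Enumerate all adjacent transition pairs (the 'all transition
    pairs' selection criterion): t1 must end where t2 starts."""
    pairs = []
    for s1, outgoing in model.items():
        for ev1, s2 in outgoing:
            for ev2, s3 in model.get(s2, []):
                pairs.append(((s1, ev1, s2), (s2, ev2, s3)))
    return pairs

pairs = transition_pairs(model)
# Each pair is the skeleton of one abstract test case; a generator
# then prefixes a path from the initial state to the pair's start.
```

Tools such as GraphWalker perform this kind of search over larger graphs with configurable coverage criteria and stop conditions.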
Concretize test cases
Adaptation: write a layer that adapts the SUT to the abstraction level of the test cases.
  Test cases -> Adapter -> SUT
Transformation: transform the abstract test cases into an executable test script that executes the SUT directly.
  Test cases -> Test script -> SUT
(Tool example: GraphWalker)
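The adaptation approach can be sketched as follows. DoorLockSUT and its API are invented stand-ins for the real system, and the guard logic of the model is deliberately not reproduced here.

```python
# Sketch of the adaptation approach: an adapter bridges the gap between
# abstract test events and a concrete SUT API (DoorLockSUT is invented).

class DoorLockSUT:
    """Stand-in for the real system under test."""
    def __init__(self):
        self.locked = True
    def swipe_card(self, card_id):
        pass                                 # the real system would validate the card
    def press(self, button):
        self.locked = (button == "lock")

class Adapter:
    """Maps abstract events to concrete calls, and concrete state back
    to the abstract variables the test cases talk about."""
    def __init__(self, sut):
        self.sut = sut
    def apply(self, event):
        if event == "cardSwiped":
            self.sut.swipe_card("TEST-CARD-1")       # concrete test data
        elif event.startswith("keyPress("):
            self.sut.press(event[len("keyPress("):-1])
    def observe(self):
        return {"status": "locked" if self.sut.locked else "unlocked"}

sut = DoorLockSUT()
adapter = Adapter(sut)
adapter.apply("keyPress(unlock)")
verdict = adapter.observe()["status"] == "unlocked"   # assign a verdict
```

The abstract tests stay unchanged when the SUT's API changes; only the adapter needs maintenance, which is the main argument for this approach over transformation.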
Analyze the results
Same as in any other testing method:
• Must determine whether the fault is in the SUT or in the model (or in the adaptation)
• May need to develop an oracle manually
Why model-based testing?
• Effective fault detection
  • Equal to or better than manually designed test cases
  • Exposes defects in requirements as well as faults in code
• Lower costs than manual testing
  • Less time to develop the model and generate tests than with manual methods
  • Since both test data and oracles are generated, the tests are very cheap
• Better test quality
  • Can measure model/requirements coverage
  • Can generate very large test suites
• Traceability
  • Straightforward to link requirements to test cases
  • Supports requirements evolution
Limitations of model-based testing
• Fundamental limitation of testing: it won’t find all faults
• Requires different skills than manual test case design
• Mostly limited to functional testing, though other kinds may be possible
• Requires a certain level of test maturity to adopt
Possible “pain points”:
• Outdated requirements: the model will be incorrect!
• Attempts to model things that are hard to model (these should be tested manually)
• Analyzing failed tests can be more difficult than with manual tests
• Testing metrics (e.g. number of test cases) may become useless
Interested in more?
Mark Utting and Bruno Legeard: Practical Model-Based Testing: A Tools Approach
Demo
Ola Leifler
ola.leifler@liu.se
NON-FUNCTIONAL TESTING
Performance testing
Testing non-functional requirements:
• Stress tests
• Environment tests
• Volume tests
• Quality tests
• Configuration tests
• Recovery tests
• Compatibility tests
• Maintenance tests
• Security tests
• Documentation tests
• Timing tests
• Usability tests
• Installation tests
• Accessibility tests
• Whatever-ity tests
The more we get into these areas, the more system-specific testing becomes.
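As a minimal illustration of a timing test, here is a naive latency check. The operation under test and the latency budget are invented; a real performance test would control load, warm-up, and environment far more carefully.

```python
import time

def respond(query):
    """Hypothetical operation whose latency we want to bound."""
    time.sleep(0.001)                      # stand-in for real work
    return query.upper()

def timing_test(op, arg, max_seconds, runs=30):
    """Naive timing test: the worst observed latency over a number of
    runs must stay within the stated budget."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        op(arg)
        worst = max(worst, time.perf_counter() - start)
    return worst <= max_seconds

# Invented budget: every call must complete within 0.5 s.
ok = timing_test(respond, "ping", max_seconds=0.5)
```

Taking the worst observed latency rather than the mean is a deliberate (and still simplistic) choice: timing requirements are usually stated as bounds, not averages.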
(User) Acceptance testing
Smoke test: tests performed prior to releasing a build to the user
• Usually a restricted form of acceptance testing
Acceptance testing: tests to determine whether the software meets expectations
• May be performed by the developer or by the user, usually the user
• In agile processes it may be performed repeatedly
Pilot test: test the system in everyday conditions (similar to exploratory testing)
• Alpha test: usually at the developer’s site, in a controlled environment
• Beta test: usually at one or more customer sites
WRAP-UP
• Compared to programming or research: there are clerical activities we can automate, and intellectual ones we cannot.
• You can automate:
  • the execution of tests,
  • the generation of test data, or
  • the generation of concrete test cases given a model.
• System-level testing tests non-functional and functional aspects at the system level.
• Model-based testing uses a model of the software to generate concrete test cases.
• State transition diagrams are ONE possible basis for system models.