Software Testing Methodologies
(Continued)
What is a Testing Technique?
• A procedure for selecting or designing tests based on a structural or functional model of the software.
• It is successful at finding faults.
• It is considered a 'best' practice.
• It is a way of deriving good test cases.
• It is a way of objectively measuring a test effort.
Testing should be rigorous, thorough and systematic.
Advantages of Testing Techniques
• Different people - similar probability of finding faults
  - Gain independence of thought
• Effective testing - find more faults
  - Focus attention on specific types of fault
  - Know that you are testing the right thing
• Efficient testing - find faults with less effort
  - Avoid duplication
  - Use systematic techniques that are measurable
Using techniques makes testing much more effective.
Why Dynamic Test Techniques?
• Exhaustive testing (use of all possible inputs and conditions) is impractical
  - Must use a subset of all possible test cases
  - Must have a high probability of detecting faults
• Need thought processes that help us select test cases more intelligently
  - Test case design techniques are such thought processes
Measurement
• Objective assessment of the thoroughness of testing (with respect to the use of each technique)
  - Useful for comparing one test effort with another
For example:
  Project A: 60% equivalence partitions, 50% boundaries, 75% branches
  Project B: 40% equivalence partitions, 45% boundaries, 60% branches
Types of Systematic Technique
• Static (non-execution): examination of documentation, source code listings, and so on.
• Functional (Black Box): based on the behaviour/functionality of the software.
• Structural (White Box): based on the structure of the software.
Test Techniques (taxonomy)
• Static
  - Reviews: Inspection, Walkthroughs, Desk-checking, etc.
  - Static Analysis: Control Flow, Data Flow, Symbolic Execution, etc.
• Dynamic
  - Structural: Statement, Branch/Decision, Branch Condition, Branch Condition Combination, Definition-Use (data flow), Arcs, LCSAJ, etc.
  - Behavioural
    - Functional: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing, State Transition, Random, etc.
    - Non-functional: Performance, Usability, etc.
Black Box versus White Box?
Test levels: Acceptance, System, Integration, Component.
• Black Box testing is appropriate at all levels, but dominates the higher levels of testing (acceptance and system).
• White Box testing is used predominantly at the lower levels (integration and component) to complement Black Box testing.
Black Box Test Design and Measurement Techniques
• Techniques defined in the software component testing standard BS 7925-2 include:
  - Equivalence Partitioning (EP)
  - Boundary Value Analysis (BVA)
  - Decision table testing
  - State transition testing
Equivalence Partitioning
- Divides (partitions) the inputs, outputs, and so on, into areas that are equivalent (expected to be treated the same)
- Assumption: if one value in a partition works, all will work
- One value from each partition is better than all values from one partition
Example (valid range 1 to 100): invalid partition (0 and below) | valid partition (1 to 100) | invalid partition (101 and above)
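The partitioning above can be sketched in a few lines of Python. The `accept` function is a hypothetical stand-in for the system under test, and the representative values are arbitrary picks from each partition, not mandated ones:

```python
# Equivalence partitioning sketch for an input whose valid range is 1 to 100.
# `accept` is a stand-in for the system under test.

def accept(x):
    """Assumed behaviour: accept values from 1 to 100 inclusive."""
    return 1 <= x <= 100

# One representative value per partition is enough under the EP assumption.
partitions = {
    "invalid low (0 and below)":    -5,
    "valid (1 to 100)":             50,
    "invalid high (101 and above)": 250,
}

for name, value in partitions.items():
    print(f"{name}: accept({value}) -> {accept(value)}")
```

Three test values cover all three partitions; under the EP assumption, any other member of a partition would behave the same way.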
Boundary Value Analysis
- Faults tend to lurk near boundaries
- Boundaries are a good place to look for faults
- Test values on both sides of each boundary
Example (valid range 1 to 100): test 0 and 1 at the lower boundary, 100 and 101 at the upper boundary
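The same 1-to-100 range can illustrate boundary value analysis; again `accept` is an assumed stand-in for the system under test, and the test exercises both sides of each boundary:

```python
# Boundary value analysis sketch for a valid range of 1 to 100:
# exercise the values on both sides of each boundary.

def accept(x):
    """Assumed behaviour: accept values from 1 to 100 inclusive."""
    return 1 <= x <= 100

# (value, expected result) pairs straddling the two boundaries.
boundary_tests = [(0, False), (1, True), (100, True), (101, False)]

for value, expected in boundary_tests:
    assert accept(value) == expected, f"possible fault near boundary: {value}"
print("all four boundary tests passed")
```

A boundary fault such as writing `1 < x` instead of `1 <= x` would be caught by the test at value 1, which mid-partition values would miss.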
Example - Loan Application
Input fields:
  Customer name:          2-64 characters
  Account number:         6 digits, 1st digit non-zero
  Loan amount requested:  £500 to £9000
  Term of loan:           1 to 30 years
  Monthly repayment:      minimum £10
Output fields: Term, Repayment, Interest rate, Total paid back
Customer Name
Number of characters: 1 (invalid) | 2 to 64 (valid) | 65 (invalid)
Valid characters: A-Z, a-z, hyphen, apostrophe, space; any other character is invalid

Condition       Valid partitions   Invalid partitions   Valid boundaries   Invalid boundaries
Customer name   2 to 64 chars      < 2 chars            2 chars            1 char
                valid chars        > 64 chars           64 chars           65 chars
                                   invalid chars                           0 chars
Account Number
First character: non-zero (valid), zero (invalid)
Number of digits: 5 (invalid) | 6 (valid) | 7 (invalid)

Condition        Valid partitions   Invalid partitions   Valid boundaries   Invalid boundaries
Account number   6 digits           < 6 digits           100000             5 digits
                 1st non-zero       > 6 digits           999999             7 digits
                                    1st digit = 0                           0 digits
                                    non-digit
Loan Amount
Amount: 499 (invalid) | 500 to 9000 (valid) | 9001 (invalid)

Condition     Valid partitions   Invalid partitions   Valid boundaries   Invalid boundaries
Loan amount   500 - 9000         < 500                500                499
                                 > 9000               9000               9001
                                 0
                                 non-numeric
                                 null
Condition Template

Conditions       Valid partitions (tag)   Invalid partitions (tag)   Valid boundaries (tag)   Invalid boundaries (tag)
Customer name    2 - 64 chars    (V1)     < 2 chars       (X1)       2 chars    (B1)          1 char     (D1)
                 valid chars     (V2)     > 64 chars      (X2)       64 chars   (B2)          65 chars   (D2)
                                          invalid char    (X3)                                0 chars    (D3)
Account number   6 digits        (V3)     < 6 digits      (X4)       100000     (B3)          5 digits   (D4)
                 1st non-zero    (V4)     > 6 digits      (X5)       999999     (B4)          7 digits   (D5)
                                          1st digit = 0   (X6)                                0 digits   (D6)
                                          non-digit       (X7)
Loan amount      500 - 9000      (V5)     < 500           (X8)       500        (B5)          499        (D7)
                                          > 9000          (X9)       9000       (B6)          9001       (D8)
                                          0               (X10)
                                          non-integer     (X11)
                                          null            (X12)
Design Test Cases

Test case 1
  Inputs:   Name: John Smith; Acc no: 123456; Loan: 2500; Term: 3 years
  Expected: Term: 3 years; Repayment: 79.86; Interest rate: 10%; Total paid: 2874.96
  New tags covered: V1, V2, V3, V4, V5, ...

Test case 2
  Inputs:   Name: AB; Acc no: 100000; Loan: 500; Term: 1 year
  Expected: Term: 1 year; Repayment: 44.80; Interest rate: 7.5%; Total paid: 537.60
  New tags covered: B1, B3, B5, ...
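As a sketch of how the derived values exercise an implementation, the three field conditions could be validated as below. The function names, character set, and error handling are illustrative assumptions, not the course's specification; the assertions replay boundary values from the tables above:

```python
# Hypothetical validators for the loan-application fields; names and
# rules are illustrative stand-ins for the specified conditions.
import string

def valid_name(name):
    """2 to 64 characters, drawn from A-Z, a-z, space, hyphen, apostrophe."""
    allowed = set(string.ascii_letters + " -'")
    return 2 <= len(name) <= 64 and all(c in allowed for c in name)

def valid_account(acc):
    """Exactly 6 digits, first digit non-zero."""
    return len(acc) == 6 and acc.isdigit() and acc[0] != "0"

def valid_loan(amount):
    """Whole number of pounds from 500 to 9000."""
    return isinstance(amount, int) and 500 <= amount <= 9000

# Valid boundaries should be accepted...
assert valid_name("AB") and valid_name("A" * 64)
assert valid_account("100000") and valid_account("999999")
assert valid_loan(500) and valid_loan(9000)
# ...and invalid boundaries rejected.
assert not valid_name("A") and not valid_name("A" * 65)
assert not valid_account("99999") and not valid_account("1000000")
assert not valid_loan(499) and not valid_loan(9001)
print("boundary checks behave as specified")
```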
Why do both EP and BVA?
• If you test boundaries only, you have covered all the partitions as well
  - Technically correct, and may be OK if everything works correctly!
  - If a test fails, is the whole partition wrong, or is a boundary in the wrong place? You have to test mid-partition anyway
  - Testing only extremes may not give confidence for typical use scenarios (especially for users)
  - Boundary values may be harder (more costly) to set up
White Box Test Design and Measurement Techniques
• Techniques defined in the software component testing standard BS 7925-2 include:
  - Statement testing
  - Branch/Decision testing
  - Data flow testing
  - Branch condition testing
  - Branch condition combination testing
  - Modified condition decision (MCDC) testing
  - Linear Code Sequence and Jump (LCSAJ) testing
Statement Coverage
Statement coverage is normally measured by a software tool.
It is the percentage of executable statements exercised by a test suite:

  statement coverage = (number of statements exercised / total number of statements) x 100%

• For example:
  - Program has 100 statements
  - Tests exercise 87 statements
  - Statement coverage = 87%
• Typical ad hoc testing achieves 60 - 75%
Example of Statement Coverage

Statement numbers:
  1  read(a)
  2  IF a > 6 THEN
  3    b = a
  4  ENDIF
  5  print b

Test case 1: input 7, expected output 7.
As all 5 statements are covered by this test case, we have achieved 100% statement coverage.
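A coverage tool's bookkeeping can be imitated by hand-instrumenting the example in Python. This is a sketch only: real tools hook the interpreter or instrument the compiled code rather than editing the source.

```python
# The 5-statement example instrumented to record which statement
# numbers execute when the test case runs.

executed = set()

def program(a):
    executed.add(1)      # 1: read(a)
    executed.add(2)      # 2: IF a > 6 THEN
    b = None
    if a > 6:
        executed.add(3)  # 3: b = a
        b = a
    executed.add(4)      # 4: ENDIF
    executed.add(5)      # 5: print b
    return b

program(7)               # the single test case: input 7
coverage = len(executed) / 5 * 100
print(f"statement coverage: {coverage:.0f}%")  # 100% - all 5 statements ran
```

Running the same instrumented program with input 3 instead would skip statement 3 and report 80%, which is why the choice of test input matters even for this one-test example.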
Decision Coverage (Branch Coverage)
Decision coverage is normally measured by a software tool.
It is the percentage of decision outcomes (the True and False outcome of each decision) exercised by a test suite:

  decision coverage = (number of decision outcomes exercised / total number of decision outcomes) x 100%

• For example:
  - Program has 120 decision outcomes
  - Tests exercise 60 decision outcomes
  - Decision coverage = 50%
• Typical ad hoc testing achieves 40 - 60%
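Applied to the earlier 5-statement example, the difference from statement coverage shows up immediately. In this sketch the `else` branch exists only to record the False outcome of the decision:

```python
# Decision coverage of the earlier example: the single decision (a > 6)
# has two outcomes, so one passing test gives only 50%.

outcomes = set()

def program(a):
    if a > 6:
        outcomes.add("True")   # decision a > 6 taken
        b = a
    else:
        outcomes.add("False")  # decision a > 6 not taken
        b = None
    return b

program(7)                                            # True outcome only
print(f"decision coverage: {len(outcomes) / 2:.0%}")  # 50%
program(3)                                            # now False as well
print(f"decision coverage: {len(outcomes) / 2:.0%}")  # 100%
```

The test case that achieved 100% statement coverage (input 7) achieves only 50% decision coverage, so decision coverage is the stronger of the two measures.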
Non-Systematic Test Techniques
• Trial and error / ad hoc
• Error guessing / experience-driven
• User testing
• Unscripted testing
A testing approach that is only rigorous, thorough and systematic is incomplete.
Error Guessing
• Is always worth including
• Is applied after systematic techniques have been used
• Can find some faults that systematic techniques miss
• Is a 'mopping up' approach
• Supplements systematic techniques
• Is not a good approach to start testing with
Error Guessing - Deriving Test Cases
• Consider the following:
  - Past failures
  - Intuition
  - Experience
  - Brainstorming
  - "What is the craziest thing we can do?"
Summary
Key Points
• Test techniques are considered 'best practices' because they help to find faults.
• Black Box techniques are based on behaviour.
• White Box techniques are based on structure.
• Error Guessing supplements systematic techniques.
Thank you