Test Inventory

• A “successful” test effort may include:
– Finding “bugs”
– Ensuring the bugs are removed
– Showing that the system, or parts of the system, works
• Goal of testing (Hutcheson’s):
– Establish a responsive, dependable system which satisfies and delights the users. (How do we define and measure “satisfies and delights”?)
– Perform the above within the agreed-upon constraints of:
• budget,
• schedule, and
• other resources
How do we achieve that goal? – use a process
[Flowchart: the test process]
Plan test → organize resources → establish test cases → execute test cases.
• If a test finds no bug: record the success.
• If a test finds a bug: record the failure, report the problem to the developers, and receive the response from the developers.
– If the developers will fix the bug: “wait”; once the bug is fixed, record that the problem is fixed, then integrate the fix and prepare for rebuild and retest.
– If the bug will not be fixed: record the no-fix reason.
Throughout, record data and produce reports: by test coverage, by test results, by fix results, etc. (A sketch of this loop in code follows.)
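A minimal sketch, in Python, of the execute/report/retest loop in the flowchart above. All names here (TestCase, report_to_developers, the "fix"/"no-fix" responses) are illustrative assumptions, not part of any standard framework.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TestCase:
    name: str
    run: Callable[[], bool]          # returns True when actual == expected
    log: list = field(default_factory=list)

def report_to_developers(case: TestCase) -> str:
    # Stand-in for the bug-reporting step; the assumed protocol is that
    # developers answer "fix" or "no-fix". Here they always agree to fix.
    return "fix"

def process(cases: list[TestCase]) -> None:
    for case in cases:
        if case.run():
            case.log.append("success")              # record success
            continue
        case.log.append("failure")                  # record failure
        if report_to_developers(case) == "fix":     # receive response from dev.
            # "wait" for the fix, integrate it, and prepare for rebuild;
            # the retest happens on the next pass through this loop
            case.log.append("fix integrated for rebuild and retest")
        else:
            case.log.append("no-fix reason recorded")

cases = [TestCase("adds", lambda: 1 + 1 == 2),
         TestCase("fails", lambda: 1 + 1 == 3)]
process(cases)
for c in cases:
    print(c.name, c.log)                            # report by test results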
Test Planning
Test Planning Coverage = # of test cases designed / # of scenarios
– Planning is mostly based on the requirements document.
– Test cases are designed from the requirements and design docs.
Test Execution Coverage = # of test cases run / # of designed test cases
– How do we decide how much to run?
– Why wouldn’t we run all the designed test cases?
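The two ratios above as a quick script; all counts are invented example numbers.

scenarios = 250      # scenarios identified from requirements/design docs
designed = 200       # test cases actually designed
executed = 150       # test cases actually run

planning_coverage = designed / scenarios     # 200/250 = 0.80
execution_coverage = executed / designed     # 150/200 = 0.75

print(f"Test planning coverage:  {planning_coverage:.0%}")
print(f"Test execution coverage: {execution_coverage:.0%}")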
A Real Problem Is Getting All the Bugs Fixed
• In large, complex systems that require several steps before one reaches the actual test case, a failure may not always be reproducible!
– This makes debugging difficult when the developer reruns the test and it executes cleanly! (Consider an internal queue-size problem: when the queue is full, some external inputs get dropped, but you may not be able to fill the queue quickly; see the sketch below.)
• Under the gun of the schedule, not all problems can get fixed in time for rebuild and retest. (Low-priority ones get delayed and eventually forgotten!)
Products get released with both “known” bugs and some “unknown” bugs!
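A tiny sketch of the queue scenario, with an invented queue size, burst rate, and consumer speed. Inputs are lost only when a rare burst fills the internal buffer, which is exactly why the failure is hard to reproduce on demand.

import queue, random

q = queue.Queue(maxsize=8)      # small internal buffer (invented size)
dropped = 0
random.seed(1)

for tick in range(10_000):
    burst = 12 if random.random() < 0.002 else 1   # rare burst of inputs
    for event in range(burst):
        try:
            q.put_nowait((tick, event))            # external input arrives
        except queue.Full:
            dropped += 1                           # the bug: input silently lost
    for _ in range(2):                             # consumer usually keeps up,
        if not q.empty():                          # so the queue is almost
            q.get_nowait()                         # never full

# Most ticks drop nothing; only the rare bursts lose inputs.
print(f"dropped {dropped} inputs in 10,000 ticks")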
Successful Testing Needs
• Good test plan
• Good test execution
• Good bug fixing
• Good fix integration
• Good “accounting” of problems found, fixed, integrated, and released.
Keeping a “List” or Table of Test Cases
• We must keep a quantitative list of test cases so we can ask:
– How many items are on the list?
– How long does it take to execute the complete list?
– Where are we in terms of the list (test status)?
– Can we prioritize the list?
– Can we arrange the list to show coverage in tabular form?
Test Case   Funct. 1   Funct. 2   Funct. 3   ...
#1                     X          X
#2                                X
.
.
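One simple way to hold such an inventory in code is a mapping from each test case to the functions it covers; the cases, function names, and marks below are invented.

# A test-case inventory: test case -> set of functions it covers.
inventory = {
    "#1": {"Funct. 2", "Funct. 3"},
    "#2": {"Funct. 3"},
    "#3": {"Funct. 1", "Funct. 3"},
}

print(len(inventory), "items on the list")       # how many items?

covered = set().union(*inventory.values())       # tabular coverage view:
print("functions covered:", sorted(covered))     # which functions are hit?

# Prioritize: run the cases that cover the most functions first.
for case in sorted(inventory, key=lambda c: -len(inventory[c])):
    print(case, "covers", len(inventory[case]), "function(s)")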
How Do We Measure a Test?
• Much like how we measure code: LOC?
– Number of lines of test script written in some language?
• A test case may be measured by the number of steps involved in executing the test; e.g.
– Step 1: input field x
– Step 2: press submit
– Step 3: choose from displayed options
– Step 4: press submit
– (Note that not all 4-step test cases are the same, much like not all 4 LOC are the same; see the sketch below.)
• A test case is a comparison of actual versus expected result, no matter how many steps are needed to get the result.
– These may be vastly different in the test time required.
• Every keystroke and every mouse movement should be counted!
Your thoughts?
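As a sketch (all names invented), a test case could be recorded as its steps plus one expected-versus-actual comparison, so its size is simply the step count while its verdict is independent of that count.

from dataclasses import dataclass

@dataclass
class ScriptedTest:
    name: str
    steps: list[str]        # keystrokes/clicks, each one counted
    expected: object
    actual: object

    def size(self) -> int:          # measure: number of steps
        return len(self.steps)

    def passed(self) -> bool:       # the test itself: actual vs. expected
        return self.actual == self.expected

tc = ScriptedTest("submit form",
                  ["input field x", "press submit",
                   "choose from displayed options", "press submit"],
                  expected="order confirmed", actual="order confirmed")
print(tc.size(), "steps;", "pass" if tc.passed() else "fail")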
Some Typical Types of Test
• Unit Test – testing at the “chunks of code” level; done by the module author
– Small number of keystrokes and mouse movements (a minimal example follows this list)
• Functional Test – testing a particular application function that usually maps to a requirement statement.
– Often tested as a “black box” test
• “System” Test – testing the system internals and internal structures. (Not to be confused with the total application system test.)
– Often tested as a “white box” test
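For instance, a unit test at the “chunks of code” level might look like this, using Python’s built-in unittest; discount() is an invented stand-in for the unit under test.

import unittest

def discount(price: float, pct: float) -> float:
    # Unit under test (invented): apply a percentage discount.
    return round(price * (1 - pct / 100), 2)

class TestDiscount(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(discount(100.0, 10), 90.0)   # actual vs. expected

if __name__ == "__main__":
    unittest.main()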
An interesting comparison of 2 tests

item                                       Prod. 1    Prod. 1.1
# of test scripts (actual)                 1,000      132
# of user functions (actual)               236        236
# of verifications / test script           1          50
Total # of verifications performed         1,000      6,600
Average # of times a test is executed      1.15       5
Total # of tests attempted (computed)      1,150      33,000
Average duration of test (known #)         20 min.    4 min.
Total time running the test (from log)     383 hrs    100 hrs
# of verifications / hr of testing         2.6        66
Some more interesting numbers
• Efficiency = work done / expended effort = verifications / hr of testing
– For Prod. 1, efficiency was 2.6; for Prod. 1.1, it was 66.
• Cost is the inverse of efficiency: expended effort / work done
– For Prod. 1: 383 p-hrs / 1,000 verifications = 0.383 p-hrs/verification
– For Prod. 1: 383 p-hrs / 236 functions = 1.6 p-hrs/function verified
• How big is the test? The number of test scripts identified.
• Size of test set – the number of test scripts that will be executed to completion
• Size of test effort – the total time required to perform all the test activities: plan, analyze, execute, track, retest, integrate fixes, etc.
• Cost of total test effort – the size of the test effort in person-hours multiplied by dollars per person-hour (see the sketch below)
* The test schedule should be built from historical information on past efforts and an estimate of the current effort.
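The efficiency and cost arithmetic above, redone as a small script using the numbers from the comparison table; the dollars-per-person-hour rate is an invented example.

verifications = {"Prod. 1": 1_000, "Prod. 1.1": 6_600}
hours         = {"Prod. 1": 383,   "Prod. 1.1": 100}
functions = 236
rate = 75.0            # assumed dollars per person-hour (example only)

for prod in verifications:
    eff = verifications[prod] / hours[prod]     # verifications per hour
    cost = hours[prod] / verifications[prod]    # p-hrs per verification
    print(f"{prod}: efficiency {eff:.1f} verif/hr, "
          f"cost {cost:.3f} p-hrs/verification, "
          f"total effort ${hours[prod] * rate:,.0f}")

print(f"Prod. 1: {hours['Prod. 1'] / functions:.1f} p-hrs/function verified")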
How do we create a test inventory?
• Data collected from inspections/reviews of requirements/design/etc.
• Known analytical methods:
– Path analysis
– Data analysis
– Usage statistics profile
– Environmental catalog (executing environments)
• Non-analytical:
– Experts’ gorilla test
– Customer support’s past experience; intuition/brainstorming