Different Perspectives: Approaches to Testing (Chapter 1)

Approaches to Testing Software
• Some of us “hope” that our software works as opposed to “ensuring” that our software works. Why?
– Just foolish
– Lazy
– Believe that it’s too costly (time, resources, effort, etc.)
– Lack of knowledge
– DO NOT use the “I feel lucky” or “I feel confident” approach to testing, although you may feel that way sometimes.
• Use a methodical approach to testing to back up the “I feel lucky/confident” feeling
– Methods and metrics utilized must show VALUE
– Value, unfortunately, is often expressed in negative terms
• Severe problems that cause loss of life or of the business
• Problems that cost more than the testing expense and effort
Perspective on Testing
• Today we test because we know that systems have problems: we are fallible.
1. To find problems and find the parts that do not work
2. To understand and show the parts that do work
3. To assess the quality of the overall product (a major QA and release management responsibility)
You are asked to do this as part of your Assignment 1 – Part II report.
Some Talking Definitions
(based on IEEE terminology)
• Error
– A mistake made by a human
– The mistake may be in the requirements, design, code, fix,
integration, or install
• Fault
– A defect or defects in the artifact that resulted from an error
– There may be defects, caused by errors made, that may or may not be detectable (e.g., an error of omission in a requirement)
• Failure
– The manifestation of faults when the software is “executed”
• Running code (includes errors of omission and “no-code?”)
• May show up in several places
• May be non-code related (e.g., a reference manual) (not in the text)
• Incident
– The detectable symptom of failures
Example? (bank account)
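To make the four terms concrete, here is a minimal, hypothetical Python sketch (the function name and the bank-account values are invented for illustration): a human error while coding leaves a fault in the artifact, the fault manifests as a failure only when that code executes, and the incident is the symptom a user can observe and report.

# ERROR: a human mistake -- the programmer misread the spec
# and typed '-' where '+' was intended.
def apply_deposit(balance, amount):
    return balance - amount   # FAULT: the defect now sits in the artifact

# FAILURE: the fault manifests only when the faulty code is executed.
new_balance = apply_deposit(100.00, 25.00)

# INCIDENT: the detectable symptom of the failure that a user reports.
if new_balance != 125.00:
    print(f"Incident: balance is {new_balance}, expected 125.00")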
Testing in the “Large”
• Testing is concerned with all of these, but may not be able to detect all:
– Errors
– Faults
– Failures
– Incidents
• Testing utilizes the notion of test cases to perform the activities of testing:
– Inspection of non-executables
– Executing the code
– Analyzing results and formally “proving” the non-executables and the executables in a business workflow (or user) setting
Software Activities and Error Injection, Fault Passing, and Fault Removal
[Diagram: errors committed during requirements, design, coding, and fixing inject faults into the artifacts; inspection and testing remove some faults, while undetected faults pass downstream to the next activity.]
Note that in “fixing” faults/failures, one can commit errors and introduce new faults.
Specification vs Implementation
[Venn diagram: two overlapping sets, Specification (expected behavior) and Implementation (actual behavior).]
The ideal region is where the expected and the actual “match”; the other areas are of concern, especially to testers.
Specification vs Implementation vs Test Cases
[Venn diagram: three overlapping sets, Specification (expected), Implementation (actual), and Test Cases (tested), dividing the picture into numbered regions 1–7.]
What do these numbered regions mean to you?
Black Box vs White Box code testing
• Black box testing (functional testing)
– Looks mainly at the inputs and outputs
– Mainly uses the specification (requirements) as the source for designing test cases
– The internals of the implementation are not included in the test case design
– Hard to detect “missing” specification
• White box testing (structural testing)
– Looks at the internals of the implementation
– Designs test cases based on the design and code implementation
– Hard to detect “extraneous” implementation that was never specified
We Need Both: Black Box and White Box Testing (a small sketch of each follows)
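As a hypothetical illustration (the shipping-fee function and its “spec” are invented), the black-box cases below are derived only from the stated specification, while the white-box cases are chosen to force both sides of the branch that the implementation actually contains:

def shipping_fee(weight_kg):
    """Spec: fee is $5 for weight <= 1 kg, otherwise $5 plus $2 per extra kg."""
    if weight_kg <= 1:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1)

# Black-box (functional) tests: inputs/outputs from the specification alone.
assert shipping_fee(0.5) == 5.0
assert shipping_fee(3) == 9.0

# White-box (structural) tests: exercise both sides of the
# `weight_kg <= 1` branch, including the exact boundary the code tests.
assert shipping_fee(1) == 5.0      # boundary: takes the `<=` branch
assert shipping_fee(1.5) == 6.0    # takes the other branch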
A Sample “Document-Form” for Tracking Each Test Case
• Test case number
• Test case author
• A general description of the test purpose
• Pre-condition
• Test inputs
• Expected outputs (if any)
• Post-condition
• Test case history:
– Test execution date
– Test execution person
– Test execution result(s)
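One way to realize this form in code (a sketch only; the field names simply mirror the slide, and the record type itself is not from the text):

from dataclasses import dataclass, field

@dataclass
class TestExecution:
    date: str
    person: str
    result: str                    # e.g., "Pass" or "Fail"

@dataclass
class TestCase:
    number: str
    author: str
    description: str               # general description of the test purpose
    precondition: str
    inputs: list
    expected_outputs: list
    postcondition: str
    history: list = field(default_factory=list)   # list of TestExecution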
Recording Test Results
• Use the same “form” describing the test case (see the earlier slide on the “document-form” test case) and expand the “results” to include:
– State Pass or Fail on the execution result line
– If “failed”:
1. Show output or some other indicator to demonstrate the fault or failure
2. Assess and record the severity of the fault or failure found
Fault/Failure Classification (Tsui)
• Very high severity – brings the system down, or a function is non-operational and there is no workaround
• High severity – a function is not operational, but there is a manual workaround
• Medium severity – a function is partially operational, but the work can be completed with some workaround
• Low severity – minor inconveniences, but the work can be completed
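If test results are recorded programmatically, Tsui’s four levels map naturally onto an enumeration (a sketch, not from the text); a failed execution on the earlier “document-form” could then carry one of these values alongside its Pass/Fail result:

from enum import Enum

class Severity(Enum):
    VERY_HIGH = 4   # system down / function non-operational, no workaround
    HIGH = 3        # function not operational, manual workaround exists
    MEDIUM = 2      # partially operational, work completes with workaround
    LOW = 1         # minor inconvenience, work can be completed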
Fault Classification (Beizer)
(in increasing severity)
• Mild – misspelled word
• Moderate – misleading or redundant info
• Annoying – truncated names; billing for $0.00
• Disturbing – some transactions not processed
• Serious – lose a transaction
• Very serious – incorrect transaction execution
• Extreme – frequent and very serious errors
• Intolerable – database corruption
• Catastrophic – system shutdown
• Infectious – shutdown that spreads to others
IEEE list of “anomalies” (faults)
• Input/output faults
• Logic faults
• Computation faults
• Interface faults
• Data faults
Why do you care about these “types” of faults (results of errors made)? Because they give us some ideas of what to look for in inspections and in designing future test cases.
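For instance (a hypothetical sketch; the average function and its cases are invented), several of the IEEE categories each suggest a different kind of test for the same small routine; logic and interface faults would similarly suggest branch-condition and call-boundary tests:

def average(values):
    """Spec: return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

# Input/output fault: probe the degenerate input the spec forgot to cover.
try:
    average([])                     # empty input raises ZeroDivisionError
except ZeroDivisionError:
    print("I/O fault candidate: empty input is not handled")

# Computation fault: compare the arithmetic against a hand-computed value.
assert average([2, 4, 6]) == 4

# Data fault: extreme data exposes overflow in sum() before the divide.
print(average([1e308, 1e308]))      # prints inf, not 1e308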
Different Levels of Testing
[Diagram: levels of testing. Unit testing exercises individual program units (A through T); functional testing exercises functions (1 through 8) built from those units; component testing exercises components (1 through 3) built from the functions; system testing exercises the whole system.]
Still Need to Demonstrate Value of Testing
• “Catastrophic” problems (e.g., life- or business-ending ones) do not need any measurements, but others do:
– Measure the cost of problems found by customers:
• Cost of problem reporting/recording
• Cost of problem re-creation
• Cost of problem fix and retest
• Cost of solution packaging and distribution
• Cost of managing the customer problem-to-resolution steps
– Measure the cost of discovering and fixing the problems prior to release:
• Cost of planning reviews (inspections) and testing
• Cost of executing reviews (inspections) and testing
• Cost of fixing the problems found and retesting
• Cost of inserting fixes and updates
• Cost of managing problem-to-resolution steps
– Compare the above two costs AND include the loss of customer “good-will” (a toy calculation follows)
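As a purely hypothetical calculation of that comparison (every dollar figure below is invented for illustration):

# Hypothetical per-problem costs when the customer finds the problem:
post_release = {
    "reporting/recording": 200,
    "re-creation": 500,
    "fix and retest": 1500,
    "packaging and distribution": 800,
    "problem-to-resolution management": 400,
}

# Hypothetical per-problem costs when review/testing finds it pre-release:
pre_release = {
    "planning reviews and testing": 150,
    "executing reviews and testing": 300,
    "fix and retest": 600,
    "inserting fixes and updates": 100,
    "problem-to-resolution management": 150,
}

goodwill_loss = 1000   # invented estimate of lost customer good-will

cost_after = sum(post_release.values()) + goodwill_loss
cost_before = sum(pre_release.values())
print(f"per problem: ${cost_after} after release vs ${cost_before} before")
# With these made-up numbers: $4400 after release vs $1300 before.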
Goals of Testing?
• Test as much as time allows
– Execute as many test cases as the schedule allows?
• Validate all the “key” areas
– Test only the designated “key” requirements?
• Find as many problems as possible
– Test all the likely error-prone areas and maximize the test problems found?
• Validate the requirements
– Test all the requirements?
• Test to reach a “quality” target
– Quality target?
State your goal(s) for testing: what would you like people to say about your system? Your goals may dictate your testing process.