ECE 453 – CS 447 – SE 465
Software Testing &
Quality Assurance
Lecture 20
Instructor
Paulo Alencar
Overview
Test Planning
Test Plan Document
Test Case Life Cycle
Test Case Design
System Test Execution
Ref: This lecture is based on Prof. S. Naik’s notes.
Test Plan Objectives and Motivation
• Objective: To get ready and organized for test execution
• Test plan documents
– Provide guidance for the executive management to support the test
project and release the necessary resources
– Provide a foundation for the system testing part of the overall
project
– Provide assurance of test coverage by creating a requirements
traceability matrix
– Outline an orderly schedule of events and test milestones to be
achieved
– Specify the personnel, financial, and facility resources required to
support the system testing part.
Test Plan Document Structure
• The structure of a Test Plan document usually is:
1. Introduction
2. Features to be tested and features not to be tested
3. Assumptions
4. Testing Approach (methodology)
5. Test Suite Structure
6. Item pass/fail criteria
7. Test Environment
8. Test Execution Strategy
9. Suspension criteria and resumption requirements
10. Test deliverables
11. Other issues: environmental needs, responsibilities, staffing and
training needs, schedule, risks and contingencies, approvals
Test Plan Document Contents
• For each major set of features:
– specify the major activities,
– techniques,
– and tools which will be used to test them.
• The specification should be in sufficient detail to permit
estimation and scheduling of the associated tasks.
• Specify techniques which will be used to assess:
– the comprehensiveness of testing and
– any additional completion criteria.
• Specify the techniques used to trace requirements.
• Identify any relevant constraints.
Test Plan Introductory Sections
• Introduction Section
– What we intend to accomplish with this test plan
– Names of the approvers
– Summary of the rest of the document.
• Feature Description Section: Summary of system,
features to be tested.
• Assumption Section
– Identify areas or features for which test cases will not be designed
in this plan:
• Lack of equipment to perform scalability testing
• Unable to procure 3rd-party equipment/software to perform
interoperability testing
• May not be possible to perform compliance regulatory tests.
Test Approach Section
• Discuss important lessons learned from past projects and how to
utilize them (e.g., customers encountered memory leaks;
– Action: think about detecting memory leaks by using tools)
– If there are any outstanding issues to be tested differently, discuss
these (e.g., a fix for an outstanding defect that requires a specific
hardware and software configuration)
• Identify if there are existing test cases to be re-used
• Discuss the management of traceability matrix
• Identify the first level of test categories likely to apply to
the present situation.
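The memory-leak action item above can be prototyped with Python's standard tracemalloc module; the leaky_append function here is a hypothetical stand-in for the code under test:

```python
import tracemalloc

_cache = []

def leaky_append(n):
    # Hypothetical leaky function: grows a module-level list on every call.
    _cache.extend(range(n))

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(100):
    leaky_append(1000)
after, _ = tracemalloc.get_traced_memory()
growth = after - before
# Steady memory growth across repeated calls hints at a leak.
print(growth > 0)  # → True
```

A real test would take snapshots between iterations and compare allocation statistics, but the idea is the same: make leak detection an explicit, tool-supported test objective.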
Test Suite Structure
• Identify the detailed groups and subgroups of test
cases
• Create test objectives based on requirements and
functional specifications
• Create a traceability matrix to associate requirements with test
objectives, to provide the highest degree of confidence.
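A traceability matrix of this kind can be sketched as a simple mapping; the requirement and test-case IDs are invented for illustration:

```python
# Minimal sketch of a requirements-to-tests traceability matrix.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-201"],
    "REQ-003": [],  # no test objective yet: a coverage gap
}

def uncovered(matrix):
    """Return requirements with no associated test cases."""
    return [req for req, tcs in matrix.items() if not tcs]

print(uncovered(traceability))  # → ['REQ-003']
```

Keeping the matrix in machine-readable form makes the coverage-assurance check a one-liner that can run on every build.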
Test Environment (1)
• For effective system test, plan for a test bed.
• Budget limitations make it a challenge to design a test bed with far
less equipment than is used in real life; the challenge is the need to
do more with less
• Plan for using simulators, emulators, and traffic generation tools
• Create multiple test environments for different categories of system
tests for two reasons:
– Due to the different natures of different test categories
– To reduce the length of system testing time.
• Preparing a test environment for a distributed system is a challenging
task:
– There is a need to put together a variety of equipment (computers,
servers, routers, base stations, authentication server, billing server, …)
– There is a need for careful planning, procurement of test equipment,
and installation of that equipment
Test Environment (2)
• A central issue is the justification of equipment. A good
justification can be made by answering the following
questions:
– Why do we need this equipment?
– What will be the impact of not having this equipment?
– Is there an alternative to procuring this equipment?
• To answer the above questions, the leader of the SQA team must
embark on a fact-finding mission as follows:
– Review the system requirements and the functional specifications
– Take part in the review process to better understand the system and
raise potential concerns about migrating the system from the
development environment to a deployment environment
– Obtain information about the customer deployment architecture,
including hardware, software, and manufacturer information.
Test Case Life Cycle
• Main idea: test cases as products. Therefore test cases have
a life cycle.
[Figure: Test case life cycle — Create → Draft → Review → Released → Update;
a test case may also move to the Deleted or Deprecated state.]
• Create: The create phase enters the following information:
– Test case ID
– Requirements IDs
– Title
– Originator group
– Creator
– Test category
• Draft: A test engineer enters the following information:
– Author of a test case
– Test objective
– Environment
– Test steps
– Clean up
– Pass/Fail criteria
– Candidate for automation
– Automation priority
• Review: Here, the creator is the owner
– The owner invites test engineers and developers to
review the test case
– Ensure that the test case is executable and Pass/Fail
criteria are clearly stated
– Changes may occur
– Once the test case is approved, it is moved to
“Released” state.
• Released
– The test case is ready for execution
– Here, the owner is the test organization
– Review the test case for re-usability
– If there is a need to update the test case, move it to
“Update” state.
• Update
– Strengthen the test case as the system functionality or environment
changes
– By executing the test case two or three times, one may update the test
case to improve reliability
– One gets an idea about its automation potential
– A major update calls for a review of the test case.
• Deleted: If the test case is not a valid one.
• Deprecated: If a test case is obsolete, move it to this state.
A test case becomes obsolete for several reasons:
– System functionality has changed, but test cases have not been
properly maintained
– Test cases have not been designed with re-usability in mind
– Test cases have been carried forward due to carelessness; long after
their original justification has disappeared.
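The life-cycle states and the legal moves between them can be sketched as a transition table; the exact transition set below is one reading of the slides, not an official definition:

```python
# Test-case life cycle as a transition table (inferred from the slides).
TRANSITIONS = {
    "create":     {"draft"},
    "draft":      {"review"},
    "review":     {"released", "deleted"},
    "released":   {"update", "deprecated"},
    "update":     {"review", "released"},
    "deleted":    set(),
    "deprecated": set(),
}

def advance(state, new_state):
    """Move a test case to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A normal life: create, draft, review, release, then a major update
# that sends the case back to review.
state = "create"
for nxt in ["draft", "review", "released", "update", "review"]:
    state = advance(state, nxt)
print(state)  # → review
```

Encoding the life cycle this way lets a test-management tool enforce, for example, that a major update always triggers a new review.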
Creation of a Test Suite
Test suite: this is a set of test cases organized in a hierarchical
manner for a certain software release. Two characteristics of
a test suite:
• Test cases in a test suite relate to a certain release of the
system. This is important because not all functionalities are
supported in all the different versions of a system.
• Test cases are organized in a hierarchical manner for three
reasons:
– To obtain a balanced test suite
– To prioritize their execution
– To monitor the progress of system testing.
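The hierarchical organization can be modelled as nested groups; counting cases per group supports the balance and progress-monitoring goals above. Group and case names here are invented:

```python
# Hypothetical hierarchical test suite for one release.
suite = {
    "Basic": {"TC-1", "TC-2"},
    "Functionality": {
        "Local": {"TC-10"},
        "Long Distance": {"TC-20", "TC-21"},
    },
    "Robustness": {"TC-30"},
}

def count_cases(node):
    """Count test cases in a (sub)group, recursing into subgroups."""
    if isinstance(node, set):
        return len(node)
    return sum(count_cases(child) for child in node.values())

print(count_cases(suite))  # → 6
```

Comparing per-group counts against the release's feature set is one simple way to judge whether the suite is balanced.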
Test Suite Organization
[Figure: Example test suite hierarchy — top-level groups such as Basic,
Functionality, Robustness, and Performance; subgroups such as User, Admin,
and Maint.; and leaf groups such as Local, Long Dist., Intl., Pre-paid,
and Credit Card.]
State Transition Diagram of a Test
Case Result
• Passed: execution is successful and the pass criteria are verified.
• Failed: the test case executed successfully, but the pass criteria
were not achieved. Report the bug.
• Blocked: a bug prevents the execution of the test case. Report the bug.
• Invalid: the test case is not applicable for this release.
• Untested: the test case has not been executed yet (initial state).
Drafting a Test Case
• There are three classes of techniques for generating the input/output
behavior of test cases for functional testing:
– Analyzing the requirement specification (informal description) and
the functional specification (formal description)
– Analyzing the input and output domains
– Analyzing the source code (for completeness).
• For the other categories of test cases, test engineers have to be creative.
And, to be creative, remember the basic structure of a test objective:
Verify that the system performs X correctly.
• Now you can modify the above objective to identify another objective:
Verify that the system performs X correctly in a given environment Y.
• Here is a pictorial view of the idea of test generation
Test Generation
[Figure: Test generation — a requirement drives both the input and the
environment supplied to the system, and determines the expected outcome.]
Test Execution Strategy
• Decide how the test execution should proceed. In other
words, identify an appropriate sequence of test-related
work activities by considering the following:
– In what order should the individual (sub)groups of test cases within a
test suite be executed?
– How many times do you execute the test suite?
– Executing it once is called a test cycle. The major reason for using
a number of test cycles is to ensure that there is no collateral
damage to the system after fixes are applied.
– When do you start the first cycle? This is answered by considering
the idea of entrance criteria.
Entrance Criteria
• Project plan and systems requirements documents are complete (from
marketing)
• 0D (zero defect) hardware version is available to the SQA group (from
hardware)
• From software:
– Functional specifications and design documents are provided to SQA
– All code complete and frozen
– Engineering performance and scalability benchmark specifications are in place
– A bug forecast plan is in place
– Unit test plan(s) reviewed and approved by SQA
– 100% unit tests executed and passed or defined
– SIT test plan(s) reviewed and approved by SQA
– 100% of SIT test cases executed and 95% passed
– Endurance test completed
• From Tech. publication:
– Test cases are completed and approved by all parties
– Cross-referencing between test cases and requirements is in place
Characteristics of Test Cycles
• Goals: the SQA team sets its own goals in terms of the level of quality
to be achieved, by maximizing the number of passing test cases
• Assumptions: how builds are picked up for system testing
• Revert criteria: when we must prematurely terminate a test cycle because
of failures in a particular test group. The cycle can restart after some
conditions are satisfied. The idea in reverting a test cycle is this:
there is no point in continuing with a certain cycle if the system is of
poor quality
• Action: give the developers an early warning when a failure threshold is
reached, before giving them the worst information. Developers perform
root cause analysis (RCA)
• Exit criteria: when does a test cycle complete? Mere execution of all
test cases is not enough; some quality metrics must be observed.
Test Cycle 1
• Goals:
– All the test cases shall be executed
– All the failed test cases will be re-run as soon as fixes are
available.
• Revert criteria:
– At any instant, the cumulative failure count reaches 20% of the total
number of test cases to be executed.
• Exit criteria:
– All test cases are executed at least once
– 95% of test cases pass
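The revert and exit criteria above can be written as small predicates; the counts in the usage lines are invented for illustration:

```python
def revert(failed, total):
    """Revert the cycle if cumulative failures reach 20% of planned cases."""
    return failed >= 0.20 * total

def exit_ok(executed, passed, total):
    """Exit when every case ran at least once and at least 95% passed."""
    return executed == total and passed >= 0.95 * total

print(revert(50, 200))         # → True  (50 failures is 25% of 200)
print(exit_ok(200, 192, 200))  # → True  (all executed, 96% pass rate)
```

Automating these checks makes the revert decision objective rather than a judgment call in the middle of a test cycle.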
Test Effort Estimation
• Testing effort means how much work is required
to be done. In concrete terms, this work has two
components:
– The number of test cases created by one person per day
– The number of test cases executed by one person per
day.
• Other factors included in test efforts are:
– The effort needed to create a test environment
– The effort needed to train test engineers on the project
– Availability of test engineers as and when they are needed.
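A back-of-the-envelope estimate combining the two work components above; the productivity rates and head count are assumptions for illustration, not figures from the slides:

```python
# Assumed project parameters (illustrative only).
cases_to_create = 300
cases_to_execute = 300
create_rate = 5    # test cases created per person per day (assumed)
execute_rate = 15  # test cases executed per person per day (assumed)
engineers = 3

# Effort in person-days, then calendar days given the team size.
person_days = cases_to_create / create_rate + cases_to_execute / execute_rate
calendar_days = person_days / engineers
print(person_days, round(calendar_days, 1))  # → 80.0 26.7
```

Environment setup and training effort would be added on top of this figure, as the list above notes.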
System Test Execution
• Metrics to monitor
• Getting ready to start
• Measuring test effectiveness
• Test data adequacy
• Oracle assumption
Metrics to Monitor
For large projects, test execution is monitored on a
• Weekly basis in the beginning, and
• Daily basis towards the end of testing
The following metrics are of importance to all the concerned parties:
• Defect trend
– The number of test cases in each category in different states
(states = passed, failed, blocked, invalid)
– The number of defects in different states (states = open, resolved,
irreproducible, hold, postponed, FAD (functions as designed), closed)
• Test case execution trend: how many test cases are executed on a weekly basis?
• Status of the work related to bug fixing
– Bug resolution rate (weekly rate of bug fixing)
– Percentage of the fixes that do not work on re-test
– Turn-around time to fix defects (defect aging)
• Total number of test cases designed for the project: as testing
progresses, this keeps increasing.
The goals of monitoring these factors
• To see if a revert criterion can be applied
• To see if the activity is progressing according to the plan.
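Two of the bug-fixing metrics above (defect aging and the percentage of fixes that fail re-test) computed over a hypothetical defect log with invented dates and counts:

```python
# Hypothetical weekly defect log.
defects = [
    {"id": 1, "opened_week": 1, "resolved_week": 2, "fix_worked": True},
    {"id": 2, "opened_week": 1, "resolved_week": 4, "fix_worked": False},
    {"id": 3, "opened_week": 2, "resolved_week": 3, "fix_worked": True},
]

resolved = [d for d in defects if d["resolved_week"] is not None]

# Defect aging: average turn-around time (in weeks) from open to resolved.
aging = sum(d["resolved_week"] - d["opened_week"] for d in resolved) / len(resolved)

# Percentage of fixes that did not work on re-test.
bad_fix_pct = 100 * sum(not d["fix_worked"] for d in resolved) / len(resolved)

print(round(aging, 2))        # → 1.67
print(round(bad_fix_pct, 1))  # → 33.3
```

Tracking these weekly, as the slide suggests, shows whether the fix pipeline is keeping pace with the defect trend.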
Getting Ready for Test Execution
• The “entry criteria” for the first cycle must be satisfied.
• The final item of the entrance criteria is the test execution
working document. An outline of such a document is as follows:
– Test engineers (names, availability, expertise, training)
– Test case allocation to engineers (based on expertise and interest)
– Test bed allocation (availability, number, distribution)
– Progress of test automation (track it)
– Projected execution rate (on a weekly basis)
– A strategy for the execution of failed test cases
– Development of new test cases
– Trial of system image (to get acquainted with the system, to verify
test beds)
– Schedule of defect review meetings.
Measuring Effectiveness
• Assuming we have also obtained the number of
new defects reported, we are interested in the
following metric:

1 − (# of defects found in testing) /
(# of defects found in testing + # of defects not found)
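For hypothetical counts, this metric (the fraction of defects that escaped testing) works out as follows; the numbers are invented for illustration:

```python
# Invented defect counts.
found = 90      # defects found during system testing
not_found = 10  # defects that escaped, e.g. reported later by customers

# The metric from the slide: the fraction of defects NOT found in testing.
escape_ratio = 1 - found / (found + not_found)
print(round(escape_ratio, 2))  # → 0.1
```

A lower value indicates more effective testing; the denominator is only fully known after field data arrives, so this metric is computed retrospectively.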
Test Data Adequacy
[Figure: Test data adequacy cycle — generate test suite T; execute T
with P; if T reveals bugs, fix the bugs and execute again; otherwise ask
whether T is an adequate test: if yes, stop; if not, generate additional
tests T′ such that T = T ∪ T′ and repeat.]
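The adequacy cycle can be sketched as a runnable toy loop; the program P, its seeded bug, and the "generate additional tests" step are all invented, and the adequacy question is simplified away: the loop just grows T until it reveals the bug:

```python
def P(x):
    # Hypothetical program under test, with a seeded bug at x == 3.
    return x * 2 if x != 3 else 0

def expected(x):
    # Oracle for the toy program.
    return x * 2

T = {0, 1, 2}
while True:
    failing = [x for x in T if P(x) != expected(x)]
    if failing:
        break  # "Fix bugs" would happen here, then T is re-executed
    # T did not reveal bugs and is not yet adequate:
    # generate additional tests T' and set T = T ∪ T'
    T = T | {max(T) + 1}

print(sorted(T))  # → [0, 1, 2, 3]
```

In practice the "is T adequate?" decision is made with a coverage or adequacy criterion (such as the two on the next slide), not by exhaustively extending the input set.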
Evaluating Test Adequacy
Two practical metrics of evaluating test adequacy:
• Error seeding
– Deliberately implant bugs in the program
– Run the bugged P with T
– If k% of the implanted bugs are revealed, it is assumed that k% of the
original bugs have been found
– For the above to hold, the types and distribution of the bugs planted
must be the same as those of bugs occurring unintentionally.
• Program mutation (idea: construct a class of most likely errors)
– Make a series of minor changes to P, creating a set of programs called
mutants:
µ(P) = µb(P) ∪ µe(P)
where µb(P) contains the buggy mutants and µe(P) the mutants
equivalent to P.
– T is considered to be adequate if every mutant in µb(P) produces an
incorrect answer in response to T.
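A toy illustration of mutation adequacy, with an invented program P and two hand-written buggy mutants (a real mutation tool would generate the mutants automatically):

```python
def P(a, b):
    # Program under test.
    return a + b

# Minor changes to P, as in the slide: two members of µb(P).
mutants = [
    lambda a, b: a - b,  # operator changed: + to -
    lambda a, b: a * b,  # operator changed: + to *
]

T = [(1, 2), (0, 5)]

def killed(m):
    """A mutant is killed if some test in T exposes its wrong answer."""
    return any(m(a, b) != P(a, b) for a, b in T)

# T is mutation-adequate if it kills every buggy mutant.
adequate = all(killed(m) for m in mutants)
print(adequate)  # → True
```

Note that (1, 2) alone kills both mutants here; a test like (2, 2) would miss the a * b mutant, which is exactly the kind of gap mutation analysis exposes.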
Oracle Assumption
The most difficult part of test design is predicting the
expected outcome:
• An oracle is any entity (person, program, process, or body
of data) that specifies the expected outcome of a set of tests
• What is the oracle assumption?
– It is a belief that “The tester is able to determine whether or not the
test outcome is correct”
• Partial oracle assumption
– Frequently, the tester is able to state with assurance that a result is
incorrect without actually knowing the correct answer.
• Sources of oracle?
– Specifications, existing program, expert.
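The partial oracle assumption can be illustrated with a property check: for a sorting routine, a tester can reject an output as incorrect without independently computing the one correct answer. The function name here is invented:

```python
def is_plausible_sort(inp, out):
    """Partial oracle for sorting: reject any output that is not ordered
    or is not a permutation of the input, without needing the exact
    expected result in advance."""
    ordered = out == sorted(out)
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

print(is_plausible_sort([3, 1, 2], [1, 2, 3]))  # → True
print(is_plausible_sort([3, 1, 2], [1, 2]))     # → False (element lost)
```

This is weaker than a full oracle (a wrong-but-plausible output could slip through for other programs), but it is often all a tester can state with assurance.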
Test Deliverables
• Test Plan Document
– Test Design Specification
– Test Case Specification
• Test Input Data and Output Data
• Test Procedure Specification
• Test Incident Report
– Test Log
– Test Summary Report
– Test Metrics, Coverage Analysis
• Program Quality Estimate
• Test Tool(s)