Test Plan
Binary Search Tree Software Project
By: Lahary Ravuri
Mansi Modak
Rajesh Dorairajan
Rakesh Teckchandani
Yashwant Bandla
June, 2003
Table of Contents
1 Introduction
2 Test Items
3 Features To Be Tested
4 Features Not To Be Tested
5 Approach
o 5.1 Unit Testing
o 5.2 System Testing
6 Item Pass/Fail Criteria
7 Suspension Criteria and Resumption Requirements
8 Test Deliverables
9 Testing Tasks
10 Environmental Needs
11 Responsibilities
12 Staffing and Training Needs
13 Schedule
14 Risks and Contingencies
15 Approvals
To Do List
Revision History
Appendix
1 Introduction
This document sets forth a testing framework for a Binary Search Tree program. The
project involves a graphical user interface, developed in Java, that communicates with
the user and a binary tree repository to add, delete, or update integers stored as a sorted
binary tree. The main features of the Tree Applet are:
- Adding new nodes
- Deleting nodes
- Searching for a node
- Loading a tree
- Saving a tree
- Traversing a tree in:
o Pre-Order
o In-Order
o Post-Order
The tentative UI of the Binary Tree applet is indicated in the figure (Fig. 1) below; it is
subject to change as the project starts evolving.

[Fig. 1: Mock-up of the Binary Tree applet, showing a sample tree (nodes 09, 13, 27, 36,
38, 45, 61, 72, 76) and the Insert, Delete, Find, Traverse In-Order, Load Tree..., and
Save Tree... controls]
This document is a procedural guide for designing test cases and test documentation. To
achieve our goal of 100% correct code, we shall employ both black-box and white-box
testing techniques. These techniques will enable us to design test cases that validate the
correctness of the system/module with respect to the requirements specification. We will
also provide a procedure for unit and system testing and identify the documentation
process and test methods for each.
2 Test Items
Testing will be conducted at the unit and system level. At the unit level, the following
functions will be tested:
- Adding new nodes
- Deleting nodes
- Searching for a node
- Loading a tree
- Saving a tree
- Traversing a tree in:
o Pre-Order
o In-Order
o Post-Order
3 Features To Be Tested
The following criteria will be applied to test various modules of the Binary Search Tree
Applet:
Adding new nodes:
- Test the basic "Add" functionality, that is, the ability to add a node to the tree
- Test adding a node to an empty tree
- Test adding a node to a non-empty tree
- Test to make sure that the node added is in the right position
- Test for valid input from the user, that is, only integers can be added to the tree
Deleting nodes:
- Test the basic "Delete" functionality, that is, the ability to delete a node from the tree
- Test deleting a node from an empty tree
- Test deleting a node from a non-empty tree
- Test to make sure that if a parent node is deleted, its children are taken care of (see
the sketch at the end of this section)
- Test to make sure that if a child node is deleted, the tree is still sorted
- Test for valid input from the user, that is, only the nodes that are present in the tree
can be deleted
Searching for a node:
- Test the basic "Find" functionality, that is, the ability to find a node in the tree
- Test finding a node in an empty tree
- Test finding a node in a non-empty tree
- Test for valid input from the user, that is, only the nodes that are present in the tree
can be found
Clearing the tree:
- Test the basic "Clear" functionality, that is, the ability to clear the tree
Tree traversal:
- Test to make sure that the user can select a traversal from Pre-Order, In-Order, and
Post-Order
- Test to make sure that, depending on the chosen traversal order, the user can view
the nodes sorted in the selected order
- Test the basic "Pre-Order" functionality
- Test the basic "Post-Order" functionality
- Test the basic "In-Order" functionality
Load:
- Test the "Load" functionality, that is, the ability to load a previously saved tree
Save:
- Test the "Save" functionality, that is, the ability to save a tree so that it can be loaded
at a later time
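The sketch below illustrates how the deletion and traversal criteria above might be
expressed as an executable check. It is a minimal sketch only: the BinaryTree class and
its insert, delete, find, and inOrder methods are assumed names for illustration, not the
applet's actual API.

    import junit.framework.TestCase;

    public class BinaryTreeDeleteTest extends TestCase {

        public void testDeleteParentKeepsChildrenAndOrder() {
            // Assumed BinaryTree API: insert(int), delete(int), find(int),
            // and inOrder() returning the stored values as an int[].
            BinaryTree tree = new BinaryTree();
            int[] values = { 45, 36, 72, 13, 38 };   // 36 becomes the parent of 13 and 38
            for (int i = 0; i < values.length; i++) {
                tree.insert(values[i]);
            }

            tree.delete(36);                          // delete an interior (parent) node

            assertFalse(tree.find(36));               // the deleted node is gone...
            assertTrue(tree.find(13));                // ...but its children are taken care of
            assertTrue(tree.find(38));

            int[] sorted = tree.inOrder();            // in-order traversal must stay sorted
            for (int i = 1; i < sorted.length; i++) {
                assertTrue(sorted[i - 1] < sorted[i]);
            }
        }
    }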
4 Features Not To Be Tested
N/A.
5 Approach
This section outlines the overall approach that we will adopt to develop an error-free
Binary Tree Applet.
5.1 Unit Testing
The unit test cases shall be designed to validate the program's correctness. Both
white-box and black-box testing will be used to test the modules and the procedures that
support them.
White-box testing will be achieved by using different test coverage techniques, including:
- Branch Testing
- Basis-Path Testing
- Condition Testing
Black-box testing will be achieved by using:
- Boundary Value Analysis
- Equivalence Partitioning
Test case designers shall generate cases that will force execution through different paths
and sequences:
- Each decision statement in the program shall take on a true value and a false
value at least once during testing.
- Each condition shall take on each possible outcome at least once during
testing.
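As a minimal sketch of how such unit test cases might look in JUnit (the framework
chosen in Section 10.3), the two tests below force the "empty tree" and "non-empty tree"
branches of an insert operation to each execute at least once. The BinaryTree class and
its insert, find, and size methods are assumed names, not the applet's actual API.

    import junit.framework.TestCase;

    public class BinaryTreeInsertTest extends TestCase {

        private BinaryTree tree;

        protected void setUp() {
            tree = new BinaryTree();        // every test starts from an empty tree
        }

        public void testInsertIntoEmptyTree() {
            tree.insert(45);                // empty-tree branch: 45 becomes the root
            assertTrue(tree.find(45));
            assertEquals(1, tree.size());
        }

        public void testInsertIntoNonEmptyTree() {
            tree.insert(45);
            tree.insert(36);                // non-empty branch: descends left of 45
            tree.insert(72);                // non-empty branch: descends right of 45
            assertEquals(3, tree.size());
        }
    }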
5.2 System Testing
System Testing verifies that the system meets the previously agreed-upon requirements.
To ensure that defects found during testing are corrected efficiently, making the most of
each test cycle, a clearly defined set of objectives is established. They are as follows:
- Verify that the system satisfies the functional requirements
- Verify internal system interfaces
- Verify external interfaces with external systems
The system test case designer(s) shall design appropriate test cases to validate the
correctness of the system. Black-box testing techniques shall primarily be used for this
purpose. Specifically, an Equivalence Partitioning/Boundary Value technique shall be
implemented. The four primary types of inputs, illustrated in the sketch after this list,
will be:
1. A Simple Case - A test value that establishes the basic correctness of the process.
This shall be the starting point of all system and maintenance tests.
2. Legal input values - Test values within the boundaries of the specification
equivalence classes. This shall be input data the program expects and is
programmed to transform into usable values.
3. Illegal input values - Test values in equivalence classes outside the boundaries of
the specification. This shall be input data the program may be presented with, but
that will not produce any meaningful output.
4. Special Cases - Input cases that are not identified by the specification, such as
irrelevant data (e.g., negative numbers or characters), depressing several input keys
at once, or any other test case the test designer feels has a good chance of
exposing an error.
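As an illustration, the four input types above might translate into concrete test data as in
the sketch below. The InputValidator class and its parse method are assumptions made
for the example; the applet's real validation hook may look different.

    import junit.framework.TestCase;

    public class InputPartitionTest extends TestCase {

        public void testSimpleCase() {
            // Type 1: a simple case establishing basic correctness.
            assertEquals(45, InputValidator.parse("45"));
        }

        public void testLegalBoundaryValues() {
            // Type 2: legal values at the boundaries of the int range.
            assertEquals(Integer.MAX_VALUE, InputValidator.parse("2147483647"));
            assertEquals(Integer.MIN_VALUE, InputValidator.parse("-2147483648"));
        }

        public void testIllegalAndSpecialValues() {
            // Types 3 and 4: illegal and special input (samples from Appendix i).
            String[] bad = { "", "0.5", "String", "3+4", "3//4" };
            for (int i = 0; i < bad.length; i++) {
                try {
                    InputValidator.parse(bad[i]);
                    fail("Expected rejection of: " + bad[i]);
                } catch (NumberFormatException expected) {
                    // The applet should show "Please input a valid integer."
                }
            }
        }
    }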
6 Item Pass/Fail Criteria
The results of each test will be compared to the pre-defined expected results, as
documented in the Test Plan. If the actual result matches the expected result, the test
case will be marked as passed. A test case will be considered a failure if the actual
result produced by its execution does not match the expected result, in which case the
actual results are logged in the Test Results. The source of a failure may be the applet
under test, the test case, the expected results, or the data in the test environment. Test
case failures will be logged regardless of the source of the failure.
7 Suspension Criteria and Resumption Requirements
See Section 9.
8 Test Deliverables
The following deliverables will be produced:
No   Deliverable
1    Test Plan
2    Program Functional Specs
3    Program Source Code
4    System Testing Specification
5    Unit Testing Specifications
6    Results of Unit Testing
7    System Testing Results
8    Final Report
Test reports will be published at the end of each testing cycle and will be made available
in a reports directory on our website. The reports will summarize the test cases executed
and the results of the testing exercise. The ‘bugs’ found during the testing exercise will
be logged in the ‘bug database’ with a reference to when each defect was uncovered and
the steps to reproduce the behavior. We hope that by adopting the above methodology we
will be able to deliver a ‘zero-defect’ product at the end of our testing cycle. Please refer
to the Appendix for the templates of the test case and the test results we plan to adopt
during our testing.
9 Testing Tasks
9.1 Software Testing Process
We shall adopt the following release/testing process:
- Release Process
o Once a developer adds a feature or fixes a bug, he/she will check the code into
the source code repository, currently located on our group website.
o A notification will be sent to all team members by e-mail whenever new code
is checked into the repository.
- Testing Process
o The test engineer downloads a copy of the working code into his/her working
environment.
o He/she then compiles the code using the software development environment
available on his/her machine.
o He/she then runs the pre-defined test cases on the compiled code to verify that
the software meets all normal, boundary, and illegal test specifications (a
sample compile-and-run session is sketched at the end of this subsection).
o Any new issues discovered during the testing process are reported into the bug
repository, which is also located on our group website.
- Problem Resolution
o Once a bug is reported into the repository, we will use the defect reporting
mechanism described in Section 9.2 below to address and resolve the bugs
reported by the test engineers.
The repositories for source code, bug database, and all documents are located at our
group website at: http://xxx.xxx.xx
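Assuming the JDK 1.4.x and JUnit 3.8.1 setup from Section 10, a test engineer's
compile-and-run session on a Windows machine might look like the following sketch
(file and class names are the illustrative ones used elsewhere in this plan):

    REM Compile the applet sources and the unit tests against junit.jar
    javac -classpath .;junit.jar BinaryTree.java BinaryTreeInsertTest.java

    REM Run the tests with JUnit's text-based runner
    java -classpath .;junit.jar junit.textui.TestRunner BinaryTreeInsertTest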
9.2 Defect Reporting
Defect tracking allows for early detection and subsequent monitoring of problems that
can affect a project’s success. Defect tracking incorporates the following:
- Identifying project issues
- Logging and tracking individual defects
- Reporting defect status
- Ongoing resolution of defects
9.2.1 Severity of Defect
Each defect is assigned a severity code. The severity of a defect will determine the
assignment of resources for its resolution and the actions to be taken if any conflicts
prevent resolution of the defect before its target date. The severity code is established
based on the defect's impact on the system/interface.
Defect severity is classified as "Critical", "High", "Medium", or "Low". The
characteristics of a defect in each severity category are as follows:
Severity       Characteristic
Critical – 1   Stops the test team from being able to move forward with any other test
               cases.
High – 2       One or more of the requirements for the test case cannot be tested due to
               functional defects.
Medium – 3     A workaround can be used to continue the test case, but specific
               functionality of the test case cannot be verified.
Low – 4        Cosmetic in nature; all functionality of the test case can be verified.
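As a sketch of how a logged defect might carry these severity codes in the bug database,
the class below encodes the table above as constants. The class and field names are
assumptions for illustration; the actual bug database schema may differ.

    public class Defect {

        // Severity codes from the table in Section 9.2.1.
        public static final int CRITICAL = 1;  // blocks all further test cases
        public static final int HIGH     = 2;  // requirement(s) cannot be tested
        public static final int MEDIUM   = 3;  // workaround exists, feature unverified
        public static final int LOW      = 4;  // cosmetic only

        private int id;                        // defect number in the bug database
        private int severity;                  // one of the codes above
        private String stepsToReproduce;       // logged by the original tester (Section 9.2.3)

        public Defect(int id, int severity, String stepsToReproduce) {
            this.id = id;
            this.severity = severity;
            this.stepsToReproduce = stepsToReproduce;
        }

        // Section 9.2.3: severity 1 defects are dealt with immediately.
        public boolean needsImmediateAction() {
            return severity == CRITICAL;
        }
    }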
9.2.2 Type of Defect
Each defect is assigned to one or more categories; a category is a collection of events
that are tracked to identify the trend and impact of defects on existing and future work.
The tester who discovered the defect will assign it to its category or categories. The
tester will then inform the test manager, who assigns the defect to the system developer.
9.2.3 Communication of Defects
Logging and resolving defects is a multi-step process. The process starts
with executing a test case and finding that the actual results do not match the expected
results. The tester will then bring the defects to the attention of the team for evaluation of
the risk and the priority of the defect.
The individual tester, with the aid of the above definitions, will then log the defect into
the defects tracking database in an online defect tracking system. The original tester will
document in the defects database the steps needed to reproduce the error. The defect will
be assigned to the tester and to a developer to find a solution for the error. Subsequently,
the same tester will retest the issue during the formal system-testing event.
Once a tester and a developer have been assigned to the defect, the severity level will
determine the timeline for resolution. Any defect with a severity rating of 1 is dealt with
immediately. Defects with a severity rating of 2 will have a same-day turnaround.
Severity 3 and 4 defects will be handled at a weekly status meeting between the test
team and the developers.
To ensure that all defects are being worked on and that no defects slip through the
cracks, a weekly meeting will be conducted between the developers and the test team.
10 Environmental Needs
10.1 Hardware
The following hardware resources will be used:
- Intel Pentium-based PCs
10.2 Software
The following software resources will be used:
- Browser: IE (5.x, 6.x) / Netscape (4.7, 7.0)
- JDK 1.4.x / JRE 1.4.x
- JUnit 3.8.1
- TextPad
10.3 Testing Tool & Environments
We will use JUnit (http://www.junit.org) as our testing framework. JUnit is open source
software, released under IBM's Common Public License Version 1.0 and hosted on
SourceForge. It is a regression-testing framework used to implement and execute unit
tests in Java. JUnit has been chosen because it is widely regarded as the standard
unit-testing tool for extreme programming.
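As a sketch of how the test classes from the earlier examples might be bundled into a
single regression run under JUnit 3.8.1 (the class names are the assumed ones from
those sketches):

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class AllTests {

        // Aggregates the example test classes into one regression suite,
        // runnable with: java junit.textui.TestRunner AllTests
        public static Test suite() {
            TestSuite suite = new TestSuite("Binary Tree Applet regression tests");
            suite.addTestSuite(BinaryTreeInsertTest.class);
            suite.addTestSuite(BinaryTreeDeleteTest.class);
            suite.addTestSuite(InputPartitionTest.class);
            return suite;
        }
    }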
Testing will also be conducted in the following environments:

Priority   Browser/Environment   Version   Comments
1          Internet Explorer     6.0
2          Internet Explorer     5.0
3          Netscape              7.0
4          Netscape              4.7
5          JRE Appletviewer      1.4.0     The JRE applet will be tested only on the
                                           Windows platform
11 Responsibilities
To complete the project, the team intends to adopt the principles of the extreme
programming method, in which team members typically work in pairs to achieve
synergy. The following is a list of the team members and the roles they play in the
project:

Team Member           Role
Lahary Ravuri         Developer
Mansi Modak           Developer
Rajesh Dorairajan     Test Engineer
Rakesh Teckchandani   Test Manager
Yashwant Bandla       Test Engineer
12 Staffing and Training Needs
N/A.
13 Schedule
Listed below are some of the major deliverables along with the deadlines:
No   Deliverable                    Owner                     Time (Days)   Planned Date   Date Completed
1    Test Plan                      Test Manager              14            03/06/03       03/01/03
2    Program Functional Specs      Developer                 21            03/13/03
3    Program Source Code            Developer                 21            03/13/03
4    System Testing Specification   Test Manager/Engineer     21            03/13/03
5    Unit Testing Specifications    Test Manager/Engineer     21            03/13/03
6    Results of Unit Testing        Developer/Test Engineer   28            03/20/03
7    System Testing Results         Test Engineer             28            03/20/03
8    Final Report                   Everyone                  35            04/03/03
14 Risks and Contingencies
N/A.
15 Approvals
Approved by John Howard, Program Manager.
To Do List
See Section 13.
Revision History
No.   Date       Name     Modifications
1.    02/25/03   Rakesh   First Draft
2.    03/03/03   Rakesh   Revised Test Plan
3.    03/05/03   Rajesh   Final Test Document
4.    03/12/03   Rajesh   Test Criteria, Problem Report Format, Updated Work Schedule
Appendix
i. Test Case Template
Test Case Name:         Applet Test
T/C #:                  26
CR#(s):                 N/A
Description:            The following test data is meant for testing the invalid test data.
Start Conditions:       The test data is the starting condition in this case.
Overall Pass Criteria:  Buttons should operate correctly.
Comments:               The test cases below are just a sample; many more need to be
                        implemented.

Test Information:
Name of Tester:  Lahary Ravuri
Build Number:    1.5.32
Status:          Not Started
Date:            4-Mar-2003
Time:            10:21 PM
O/S:             WINDOWS 2000
Browser:         IE V 5.5
Number of steps complete by status:  Pass: 0   Fail: 0   % Complete: 0

Step   Test Data Used   Expected Result                 P/F    Comments
1      "" (nothing)     Please input a valid integer.   Pass   Sample comments
2      0.5              Please input a valid integer.   Fail
3      String           Please input a valid integer.   Fail
4      3+4              Please input a valid integer.   Fail
5      4-Mar            Please input a valid integer.   Pass
6      3//4             Please input a valid integer.   Fail
7      3!               Please input a valid integer.   Fail
8      3@               Please input a valid integer.   Pass
9      3#               Please input a valid integer.   Fail

End of Test Case
ii. Test Results Template
Summary Totals:  1 of 1 test cases completed (100%)

Test Case:                                   Test1 (Basic Test)
# of Test Cases Comp. / Total:               1 / 1
# of Test Cases Passed / Failed:             1 / 0
Total # of Steps / Count of Steps Executed:  25 / 25
Count of Steps Passed / % Steps Passed:      25 / 100%
Count of Steps Failed / % Steps Failed:      0 / 0%
Steps Failed:
Date Tested:
P/F:                                         Passed
iii. Problem Reporting format
Number of Steps:  25
Steps Executed:   25
Steps Passed:     25
Steps Failed:     0
Date Tested:      1/0/1900
Bld Ver.:         0
Tested By:
Notes: