Estimating the Value of Inspections and Early Testing for Software Projects

Louis A. Franz and Jonathan C. Shih
A return-on-investment model is developed and applied to a typical
software project to show the value of doing inspections and unit and
module testing to reduce software defects.
The software inspection process has become an important part of the software development cycle and is being used with varying levels of success within Hewlett-Packard. One of the main reasons for this success is that catching defects early has a large impact on reducing the cost of dealing with software defects later in the development cycle. One HP entity used metrics data from several software projects and an industry profit and loss model to show the cost of finding and fixing defects late in the development cycle and during postrelease testing, and to estimate the value (return on investment) of investing time and effort in early defect detection activities.
("( $ $+$)%'. )'!$ &'% ) +%"+ '%#
(&') $))+( . (+'" '$) '%*&( $ (
$))+( '$) % )+( *) "" '" %$ "#$)(
% ) (# ) %#&*)' "' ("( $ $+$)%'. "+"(
% &'%*)( % (#&". ) %"")%$ &'%(($ $
()%' % )( ) ) ,( )% ') $)'"/
(.()# )% %*( ) ) "" ) &&")%$( )) ,%*"
$ )% *( )( $%'#)%$ %*" (( ) $)'" (.()# )( (%,( ) %*' # %' #%*"( ))
#! *& ) (.()# $ ) # %' ) ()%'( ((
. ) (.()# ( #%*"( ,"" ''$ )'%*%*)
)( &&' " &'%+( (*##'. % ) # %' ))'*)(
% ) (.()#
( &&' ('( ) #)%( , + *( )% $)')
$(&)%$( $ &''"( )()$ $)% ) +"%&#$) % $
$%'#)%$ )$%"%. (%),' &'% ) #)'( %"0
") $ ) )%%"( , *( )% %"") ) #)'( ) %$
)( &'% ) ' (' $"". , (' $ &&'%
)% *($ ) #)'( ) %"") *'$ $(&)%$( $
Fig. 1. Major components of the sales and inventory tracking (SIT) system: module 1.0 Maintain Dealer Information, 2.0 Maintain Field Information, 3.0 Maintain Product Information, and 4.0 Load Dealer Sales and Inventory Files, together with the sales outlet, customer, sales representative, contract, product, and SIT databases and the dealers that supply the sales and inventory data.
Table I
Summary of the Sales and Inventory Tracking System

System type: Batch
Hardware platform: HP 3000 Series 980 running the MPE/XL operating system
Language: COBOL
DBMS: HP AllBase/SQL (relational)
Data communications (used for the electronic transfer of data between the dealer and HP): Electronic Data Interchange (EDI) ANSI standard transaction sets (file formats): 867 product transfer and resale report; 846 inventory inquiry and advice
Data access and manipulation tools: Cognos, PowerHouse (Quiz), Supertool, KSAM, and VPLUS
Code: 25K total lines of source code (13.6 KNCSS)*

* KNCSS = Thousands of noncomment source statements
We used the traditional HP software life cycle for our development process. The basic steps of this process include:
• Investigation
• Design
• Construction and Testing
• Implementation and Release
• Postimplementation Review.
The inspection and prerelease testing discussed in this paper
occurred during the design, construction, and testing phases.
The objectives of the inspection process are to find errors
early in the development cycle, check for consistency with
coding standards, and ensure supportability of the final
code. During the course of conducting inspections for the
SIT project, we modified the HP inspection process to meet
our specific needs. Table II compares the recommended HP
inspection process with our modified process.
Step 3 was changed to Issue and Question Logging because we found that inspectors often only had questions about the document under inspection, and authors tended to feel more comfortable with the term issue rather than defect. The term defect seemed to put the author on the defensive and severely limited the effectiveness of the inspection process.
Table II
Comparison of Inspection Processes

Step  HP Inspection Process   Our Inspection Process
0     Planning                Planning
1     Kickoff                 Kickoff
2     Preparation             Preparation
3     Defect Logging          Issue and Question Logging
4     Cause Brainstorming     Cause Brainstorming
5     Rework                  Question and Answer
6     Follow-up               Rework
7     (none)                  Follow-up
The Question and Answer step was added because inspectors
often had questions or wanted to discuss specific issues durĆ
ing the Issue and Question Logging session. These questions
defocused the inspection and caused the process to take
longer.
In the Planning step the author and the moderator plan the
inspection, including the inspection goal, the composition of
the inspection team, and the meeting time. The Kickoff step
is used for training new inspectors, assigning the roles to the
inspection team members, distributing inspection materials,
and emphasizing the main focus of the inspection process.
During the Preparation step, inspectors independently go
through the inspection materials and identify as many defects
as possible. In the Cause Brainstorming step, inspection team
members brainstorm ideas about what kind of global issues
might have caused the defects and submit suggestions on
how to resolve these issues. During the Rework step, the
author addresses or fixes every issue that was logged during
step 3. Finally, in the Follow-up step, the moderator works
with the author to determine whether every issue was
addressed or fixed.
Along with the modified process, a one-page inspection process overview was generated as the training reference material for the project team. This overview was a very convenient and useful guideline for the project team because it
helped to remind the team what they were supposed to do
for each inspection.
Because of time and resource constraints, not all of the project's 13 source programs and 29 job streams could be inspected. The project team decided to use the risk assessment done for the master test plan, which uses the operational importance and complexity of a module as a basis for deciding which programs and job streams to inspect. The risk assessment used for the master test plan is described later. Fig. 2 shows the results of this selection process in terms of inspection coverage and relative level of complexity of the programs and job streams.
Three forms were used to collect inspection metrics: the
inspection issue log, the inspection summary form, and the
inspection data summary and analysis form. Fig. 3 shows an
inspection log and an inspection summary form.
Fig. 2. Inspection coverage for the major modules in the SIT system based on the criteria used in the master plan.

Module  Test Plan and           Percent of   Percent of Job   Relative
        Test Script Inspected   Programs     Streams (JCL)    Complexity
                                Inspected    Inspected
1.0     Yes                     67%          100%             Medium
2.0*    No                      100%         100%             Low
3.0     Yes                     60%          33.3%            Medium
4.0     Yes                     100%         100%             High

* This module consisted of only one JCL and one program.
Fig. 3. (a) Inspection issue log. (b) Inspection summary form. The issue log records, for each issue, the page and line number, issue type (specification, design logic, data, standards deviation, miscommunication, oversight, resources, or other), severity (critical, noncritical, or enhancement), description, and resolution. The summary form records the system or project name, document inspected, type of inspection (investigation, ES, IS, coding, documentation, prototype, test plan, installation, release, or other), inspection and kickoff dates, preparation time and meeting start and finish times for the moderator, reader, author, and inspectors, the approximate document pages or program lines (NCSS) inspected, counts of critical and noncritical issues and enhancements by type, how many times the document was inspected or postponed and why, how many people declined to participate, the total time the author spent to fix or address all defects, and the total time the moderator spent to follow up with the author.
The inspection issue log is used for logging the issues observed by the inspectors during the issue and question logging session. The inspection summary form describes the document inspected, the inspectors' preparation time, the type of inspection, the number of pages and lines inspected, the number and types of defects identified, and the total time used to fix or address all the defects. The inspection data summary and analysis form is a spreadsheet that was used to collect the data entries required to calculate inspection efficiency, inspection effectiveness, total time saved, and the return-on-investment value described later in this article. Table III lists the data collected in the data summary and analysis form for each item inspected.

We selected four key inspection metrics to measure our inspection effort: the number of critical defects found and fixed, the number of noncritical defects found and fixed, the total time used by inspections, and the total time saved by inspections.
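The issue log data lends itself to simple tooling. The sketch below is our own illustration, not part of the SIT project's actual toolset; it shows one way an issue-log entry and the tallies needed for the summary form might be represented, using the type and severity categories from the forms.

from collections import Counter
from dataclasses import dataclass

# Issue types and severities as used on the inspection issue log.
ISSUE_TYPES = {"S": "Specification", "DL": "Design Logic", "D": "Data",
               "SD": "Standards Deviation", "M": "Miscommunication",
               "OV": "Oversight", "R": "Resources", "O": "Other"}
SEVERITIES = {"C": "Critical", "N": "Noncritical", "E": "Enhancement"}

@dataclass
class Issue:
    page: int          # page number in the inspected document
    line: int          # line number on that page
    issue_type: str    # one of ISSUE_TYPES
    severity: str      # one of SEVERITIES
    description: str

def summarize(issues):
    """Tally issues by severity and by (severity, type), as on the summary form."""
    by_severity = Counter(i.severity for i in issues)
    by_type = Counter((i.severity, i.issue_type) for i in issues)
    return by_severity, by_type

# Example: two hypothetical issues logged in one inspection session.
log = [Issue(3, 12, "DL", "C", "Loop exit condition never reached"),
       Issue(7, 4, "SD", "N", "Variable name violates naming standard")]
print(summarize(log))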
Our testing process included test planning, unit testing, module testing, and system testing. Test planning involved creating a master test plan and doing a risk assessment to determine where to focus our testing time.

Master Test Plan. The master test plan was created during the design phase, when the test strategy for the project was outlined. The master test plan included the test plan design and the types of tests to perform. For design purposes, the system was divided into logical modules, with modules broken down into programs and specific functions, and test design was also oriented around this division.
Fig. 4. Master test plan organization. The master test plan drives the unit, module, and system test plans; these lead to a unit test case worksheet, a module test script, and a system test script, which in turn drive the unit test, module test, and system test and the resulting module and system test reports.
Fig. 5. Risk ratings by submodule.

Module  Submodule  Operational  Technical   Overall Risk
                   Importance   Difficulty  Rating
1.0     1.1        5            1           6
        1.2        5            3           8
        1.3        5            3           8
        1.4        1            1           2
        1.5        4            3           7
        1.6        3            2           5
2.0     2.1        5            2           7
        2.2        5            3           8
3.0     3.1        5            4           9
        3.2        5            2           7
        3.3        3            2           5
        3.4        5            2           7
        3.5        4            5           9
        3.6        1            1           2
4.0     4.1        5            3           8
        4.2        5            5           10
        4.3        5            4           9
        4.4        5            3           8
        4.5        3            2           5
        4.6        2            4           6
        4.7        1            3           4

Table III
Inspection Summary and Analysis Data

Metric: Units or Source of Data
Inspection time: Preparation and meeting time in hours
Defects: Number of critical and noncritical defects
Documentation type: Code, requirements and design specifications, manuals, test plans, job streams (JCL), and other documents
Size of document: KNCSS for code and number of pages for other documents
Amount inspected: KNCSS or number of pages inspected
Preparation rate: = (number of pages) × (number of people)/preparation time
Logging rate: = (critical + noncritical defects)/(hours/number of people)
Moderator follow-up time: Hours
Time to fix a defect: Hours
Total time used: = inspection time + time to fix + follow-up time
Time saved on critical defects: *
Time saved on noncritical defects: *
Total time saved: *
Return on investment: *

* Defined later in this article.
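As a rough illustration of the rate metrics defined in Table III, the following sketch (ours, not the project's actual spreadsheet) computes the preparation and logging rates for a hypothetical inspection, using the formulas exactly as given above. All of the input values in the example are invented for illustration.

def preparation_rate(pages: float, people: int, prep_hours: float) -> float:
    # Preparation rate = (number of pages) x (number of people) / preparation time
    return pages * people / prep_hours

def logging_rate(critical: int, noncritical: int, meeting_hours: float, people: int) -> float:
    # Logging rate = (critical + noncritical defects) / (hours / number of people)
    return (critical + noncritical) / (meeting_hours / people)

# Hypothetical inspection: 20 pages, 4 inspectors, 6 hours of total preparation,
# a 2-hour logging meeting, 3 critical and 5 noncritical issues logged.
print(preparation_rate(20, 4, 6.0))   # 13.33...
print(logging_rate(3, 5, 2.0, 4))     # 16.0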
The primary and secondary features to be tested were also
included in the master test plan. The primary features were
tested against the product specifications and the accuracy
of the data output. Secondarily, testing was performed to
ensure optimum speed and efficiency, ease of use (user
interface), and system supportability.
Risk Assessment. A risk analysis was performed to assess the relative risk associated with each module and its components. This risk analysis was used to help drive the schedule and lower-level unit and integrated tests. Two factors were used in assessing risk: operational importance to overall system functionality, and technical difficulty and complexity.
Each module was divided into submodules and rated against each risk factor. For example, the proper execution of the logic in submodule 4.2 was critical to the success of the system as a whole, while submodule 1.4 merely supplied additional reference data to the database. Accordingly, submodule 4.2 received a higher operational importance rating. Similarly, submodule 4.2 also received a higher complexity rating because of the complexity of the coding task it entailed. Each rating was based on a scale of one to five, with one being the lowest rating and five being the highest rating. Ratings for each risk factor were then combined to get an overall risk rating for each submodule (Fig. 5).
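The overall ratings in Fig. 5 are consistent with simply adding the two factor ratings (for example, submodule 4.2 scores 5 + 5 = 10). A minimal sketch of that combination follows; the high-risk cutoff used to flag submodules is our assumption for illustration, not a threshold stated in the article.

# (submodule, operational importance 1-5, technical difficulty 1-5), from Fig. 5
ratings = [("4.2", 5, 5), ("3.1", 5, 4), ("1.4", 1, 1)]

def overall_risk(importance: int, difficulty: int) -> int:
    # Overall risk rating = operational importance + technical difficulty
    return importance + difficulty

HIGH_RISK = 8  # illustrative cutoff for "inspect and test first"
for name, importance, difficulty in ratings:
    risk = overall_risk(importance, difficulty)
    print(name, risk, "high risk" if risk >= HIGH_RISK else "lower risk")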
Unit Testing. Unit testing, as in most software projects, was
performed for all programs and job streams. For the purpose
of this project, each individual program and job stream was
considered a unit. Since programs were often embedded in
job streams, program unit tests were often synchronized with
job stream unit tests to conserve time and effort.
Because of the small size of the project team, almost all tests
were performed by the program or job stream author. To
minimize the impact of this shortcoming, a simple testing
review process was established. A series of standard forms
were created to document each test and facilitate review by
the designer, users, and the project lead at different points in
the unit testing process (see Fig. 6).
Module Testing. An integrated test of all programs and job streams within each module was conducted to test the overall functionality of each of the four system modules. Since each program or job stream had already been tested during unit testing, the primary focus of module testing was on verifying that the units all functioned together properly and that the desired end result of the module's processing was achieved.
A brief integrated test plan document was created for each
module. This test plan listed the features to be tested and
outlined the approach to be used. In addition, completion
criteria, test deliverables and required resources were
documented (Fig. 7).
Detailed test scripts were used to facilitate each module test and make it easy to duplicate or rerun a particular test.
Fig. 6. Unit test forms. The unit test case worksheet lists the valid and invalid input conditions for a program, screen, or job (for example, module unit 4.2, program SIT4023S), and the unit test script records the case description, input conditions, step-by-step procedure, expected output conditions, resources needed (programs, screens, database tables), and special requirements, along with the script number, date, and pass/fail result.
Feature to Be Tested
• Module processes run correctly when run in production order
• Module processes run correctly with actual production data
• All programs, JCLs, screens pass data to next process on timely basis
Approach
• Set up production test environment
• Stream JCLs, run programs, and execute entry screens in production order
• Use production data
• Verify subsets of key table data after each process
Completion Criteria
• All programs, JCLs, screens run without error or abort
• End result of process is correct data loaded into Product_mix, Product_exhibit,
Channel_products, Customer_Products, and Customer_Exhibits Tables
Test Deliverables
• Test script
• Test summary report
Resources
• Staff – Lou, Shripad, Jill
• Environment
Jupiter – SIT account
Blitz – Patsy DB (PATSY##.PATDTA.MAS):Mode 5
– Prime DB (PRIME3##.PR3DTA.MAS):Mode 6
$#&(#" # " "(&( ('( $ "
!& + ((.& #)&"
Each test script document included a listing of the resources needed for the test, such as supporting databases, tables, file equations, and programs, and a step-by-step procedure for executing the test. Where appropriate, specific data to enter into the system was included in the procedure. Also included were the expected results of each test step and the overall test.
" ( "(&( !#) ('(' " #!$ ( ').
'') - #(" (' (## '*& (&(#"' &$#&( ( "
( ('( &') (' +' &( "(&( ('( &$#&( #).
!"( ( ")!& # (!' ( ('( +' &)" *& ((
( #!$ (#" &(& +& !( '( ( ")!& # &(.
" "#"&( (' (( " *& (( ( " ,
System Testing. System testing was conducted in tandem with a pilot test. Data and different values were entered for the general-purpose tests and for the dealers involved in the pilot test, and routines were used to control the job streams that executed the system.
Testing Metrics. Two kinds of metrics were selected to measure our testing effort: total number of critical defects and
total testing time for each test phase. Table IV summarizes
our test metrics for unit and module testing. One critical
defect was found during system testing and the total system
test time was 19 hours.
Table IV
Testing Metrics

Module  Size*   Unit Test                Module Test
                Tc     Nc    (ATT)c      Tc     Nc    (ATT)c
1.0     4489    73     4     18.25       41     7     5.86
2.0     639     26     0     0           **     **    **
3.0     3516    31.5   3     10.5        46     2     5.11
4.0     3738    35     21    1.67        36     13    2.77

* Noncomment source statements (NCSS)
** Not applicable
Tc = Total testing time for critical defects in hours
Nc = Total number of critical defects
(ATT)c = Average testing time per critical defect = Tc/Nc ((ATT)c is defined as zero when Nc is zero)
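For example, module 1.0 required 73 hours of unit testing time for 4 critical defects, giving (ATT)c = 73/4 = 18.25 hours per critical defect. A minimal sketch of this calculation (our own illustration):

def average_testing_time(tc_hours: float, nc_defects: int) -> float:
    """(ATT)c = Tc / Nc, defined as zero when Nc is zero."""
    return 0.0 if nc_defects == 0 else tc_hours / nc_defects

# Unit-test figures from Table IV.
print(average_testing_time(73, 4))   # module 1.0 -> 18.25
print(average_testing_time(26, 0))   # module 2.0 -> 0.0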
It is now generally accepted that inspections can help software projects find defects early in the development cycle. Similarly, the main purpose of unit and module testing is to detect defects before system or pilot testing. Questions that often come up regarding these defect finding efforts include how much project time they will consume, how effective they are, and how we can measure their value. Most of these issues have been addressed in different ways in the literature.4,5

In this section we present a return-on-investment (ROI) model we used to measure the value of inspections and early testing in terms of time saved (and early to market). The whole idea behind this kind of measurement is that it should take longer to find and fix a defect at system test than it does to find the same defect during inspection or unit testing. This means that for every defect found during an inspection or at an earlier stage of the testing phase, there should be a time savings realized at the system test phase.

First we define the Prerelease ROI as:

Prerelease ROI = Total Time Saved / Total Time Used    (1)

where:

Total Time Saved = Total Time Saved by Inspection + Total Time Saved by Unit and Module Testing

and

Total Time Used = Total Time Used by Inspection + Total Time Used by Unit and Module Testing.

We calculate the individual ROI for inspection and testing, respectively, as follows:

Inspection ROI = Total Time Saved by Inspection / Total Time Used by Inspection    (2)

Testing ROI = Total Time Saved by Unit and Module Testing / Total Time Used by Unit and Module Testing.    (3)

We wanted to measure not only how much time was being spent on inspection and testing but also how much time was being saved as a result of the defects found during inspections and unit and module testing.

For inspections, we defined the total time saved and the total time used as:

Total Time Saved by Inspection = Time Saved on Critical Defects + Time Saved on Noncritical Defects

Total Time Used by Inspection = Inspection Time + Time to Fix and Follow Up for Defect Resolution

where the critical defects are defects that affect functionality and performance and noncritical defects are all other defects. The time used during an inspection includes the sum of the total inspection time spent by each team member, the time spent by the author on fixing the defects, and the time spent by the moderator following up on defect resolution with the author.

The time spent finding and fixing a critical defect at system test is called BBT (black box testing time). The time to find and fix a critical defect during system test at HP ranges from 4 to 20 hours; we used 20 hours in our ROI calculations. Therefore, for every critical defect found before system test, the total time saved can be calculated as follows:

Time Saved on Critical Defects = BBT × Number of Critical Defects - Total Time Used.

The model we used to measure noncritical defects is based on the assumption that noncritical defects could be found by inspection but would not be detected by testing. The noncritical defects will become supportability issues after manufacturing release. We defined a new variable called MTTR (mean total time to rework) to measure the time spent on noncritical defects:

MTTR = Time to Find Defect + Time to Fix Defect + Time to Release to Production.

We used an average time of 6 hours for MTTR in our calculations. Thus,

Time Saved on Noncritical Defects = MTTR × Number of Noncritical Defects.

For testing metrics we wanted to measure not only how much time was being spent on unit and module testing, but also how much time was being saved as a result of the defects found during these tests. Thus, we defined the total time saved and total time used for testing as:

Total Time Saved = Time Saved on Critical Defects

Total Time Used = Time to Design and Build a Test + Time to Execute + Time to Find and Fix a Defect.
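As a quick sanity check of these definitions, the sketch below (our own illustration, not code from the SIT project) evaluates the testing side of the model with the assumptions above of BBT = 20 hours per critical defect and MTTR = 6 hours per noncritical defect, and then computes the ROI percentages reported later in Table VI using the published time-saved and time-used figures.

BBT = 20    # hours to find and fix a critical defect at system test (HP range is 4-20 hours)
MTTR = 6    # mean total time to rework a noncritical defect, in hours

def saved_on_critical(n_critical: int, time_used: float) -> float:
    # Time Saved on Critical Defects = BBT x Number of Critical Defects - Total Time Used
    return BBT * n_critical - time_used

def saved_on_noncritical(n_noncritical: int) -> float:
    # Time Saved on Noncritical Defects = MTTR x Number of Noncritical Defects
    return MTTR * n_noncritical

def roi_percent(saved: float, used: float) -> float:
    # ROI = Total Time Saved / Total Time Used, expressed as a percentage
    return 100.0 * saved / used

# Unit and module testing on the SIT project: 51 critical defects, 310 hours used.
testing_saved = saved_on_critical(51, 310)            # 20*51 - 310 = 710 hours
print(roi_percent(testing_saved, 310))                # about 229%

# Inspection and prerelease totals, using the published inspection figure of
# 708 hours saved against 90 hours used (Tables V and VI).
print(roi_percent(708, 90))                           # about 787%
print(roi_percent(708 + testing_saved, 90 + 310))     # prerelease total, about 355%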
The defect data and time data for our sales and inventory
tracking project are summarized in Tables V and VI.
Table V
Defect Summary for the SIT Project

                                During      During    Total Prerelease
                                Inspection  Testing   Defects
Number of Critical Defects
Found and Fixed                 12          51        63
Number of Noncritical Defects
Found and Fixed                 78          0         78

With the code size equal to 13.6 KNCSS, the prerelease defect density = 141/13.6 = 10.4 defects/KNCSS.

Using the model described above, we can calculate the ROI values shown in Table VI:

Total Time Saved by Inspection = (20 hours × 12) + (6 hours × 78) - 90 hours = 708 hours
Total Time Saved by Testing = 20 hours × 51 - 310 hours = 710 hours

From equations 1, 2, and 3:

ROI for Inspections = 708/90 = 787%
ROI for Testing = 710/310 = 229%
Prerelease Total ROI = 1418/400 = 355%.

Table VI
Time Data and Return on Investment Results

                  Total Time     Total Time      Return on
                  Used (hours)   Saved (hours)   Investment (%)
Inspection        90             708             787
Testing           310            710             229
Prerelease Total  400            1418            355
Fig. 8 shows that with the exception of module SIT4.0 the average testing time per critical defect decreased from unit test to module test for the system's major modules. The reason that it took 19 hours per critical defect at system test is mainly the time it took to find and fix one defect that was overlooked during inspection. Had the project team not overlooked one particular issue related to product structure that resulted in this defect, the average testing time per critical defect at system test would have been significantly lower. Module SIT4.0 went through the most thorough inspection, including a design inspection, since it is the most complex of the four modules. We believe our efforts paid off because it took less time at unit test and module test in terms of average testing time per critical defect for module SIT4.0 than for the other three modules.
Fig. 8. Testing time by test phase and module: average hours per critical defect at unit test, module test, and system test for modules SIT1.0, SIT2.0, SIT3.0, and SIT4.0.
Fig. 9 is a plot of the ROI column in Table VI. It shows that inspections have resulted in more than three times the ROI of testing. This reinforces the notion that a great deal of money and time can be saved by finding defects early in the software development cycle.

Fig. 9. Relative impact of inspections and testing: return on investment (%) for inspection, testing, and the prerelease total.
The inspection and testing processes we used for the SIT
project are not very different from other software projects in
HP. However, we did put more emphasis on early defect
detection and collected a lot of metrics. The following are
some of the lessons we learned from our efforts during this
project.
Project Management. Some of the lessons we learned about
project management include:
• Setting aside time for inspections and thorough testing does
pay off in the long run. Management approval may be diffiĆ
cult to get, especially when under intense time pressure.
One should get commitment to delivering a quality product,
then present inspections and testing as part of delivering on
this commitment.
• Keep high-level test plans short and simple while still providing enough direction for the lower-level plans. By keeping these plans short and simple, time will be saved and the project team can still get adequate direction.
• Adequate follow-up to inspection and testing activities is important to make sure all issues are resolved. The inspection log, testing error logs, and integration test reports helped the project lead keep up on the status of each issue and ensure that each issue was resolved.
• Establish coding standards at the beginning so that minimal
time is spent during code inspections questioning points
of style. The focus of code inspections should be code
functionality.
Inspection. Since the inspection process is the most important tool for defect-free software, many adjustments were made here.
• Attitude is the key to effective inspections. No one writes error-free code, but many people think they do. Authors must realize that they make mistakes and take the attitude that they want inspectors to find these errors. The inspectors, on the other hand, can destroy the whole process by being too critical. The inspectors must keep the author's ego intact by remaining constructive. Perhaps the best way to keep people's attitudes in line is to make sure they know that they may be an inspector now, but at a later date, their role and the author's will be reversed. For this reason, implementing an inspection process for most or all of a project's code is likely to be much more effective than random inspection of a few programs.
• No managers should be involved in the inspection of code. Having a manager present tends to put the author on the defensive. Also, depending on the person, the inspector either goes on the offensive or withdraws from the process entirely.
• In addition to finding defects (or "issues"), which helps to save testing and rework time, the inspection process has other, more intangible, benefits:
Increased teamwork. Inspections provide an excellent forum for the team to see each other's strengths and weaknesses and gain a new respect for each other's unique abilities. By adding the question and answer session to the inspection process, we provided a forum for the team to discuss issues and creatively solve them together.
Support team education. Including members of the team that would eventually support the SIT system allowed these people to become familiar with the system and confident that it would be supportable.

Testing. The lessons learned from unit and module testing include the need for expanded participation in testing and the value of test scripts.
• Unit testing. On small project teams it is difficult to coordinate testing so that someone other than the author tests each unit. Establishing a process that includes the designer, other programmers, and users helps tremendously towards ensuring full test case coverage.
• Module testing. Integration test scripts are invaluable. The effort expended to create the scripts for the SIT project was significant, especially the first one. However, the reward, in terms of time saved and rework, more than justified the effort. Furthermore, these scripts have been very useful to the support team for performing regression testing when the programs or job streams are modified.

Success Factors. The SIT product was released to production in early March 1992. Since that time the product has been relatively defect-free. In reviewing what has been done, we observed some key factors that contributed to our success. These success factors can be summarized as follows:
• Strong management support. We had very strong management support for the inspection and testing process and the time commitment involved. This was the most important and critical success factor for the implementation of inspections and metrics collection.
• Team acceptance. The SIT project team accepted the quality concept. We agreed on our quality goals and understood how the inspection and testing processes would help us to achieve those goals.
• Focus. The SIT project was selected as the pilot project to implement the inspection process. Our initial focus was on code inspection. After the project team felt comfortable with doing inspections, other documents such as test scripts and test plans were also inspected.

Acknowledgments
We would like to thank the following people for their help and support in developing this paper: Jennifer Hansen, Tadd Koziel, Bruce Morris, Patti Hutchison, John Robertson, Debra Vent, and Kelley Wood.

References
1. M.E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development," IBM Systems Journal, Vol. 15, no. 3, 1976, pp. 182-211.
2. M.E. Fagan, "Advances in Software Inspections," IEEE Transactions on Software Engineering, Vol. SE-12, no. 7, July 1986, pp. 744-751.
3. T. Gilb, Principles of Software Engineering Management, Addison-Wesley Publishing Co., 1988, Chapter 12.
4. F.W. Blakely and M.E. Boles, "A Case Study of Code Inspections," Hewlett-Packard Journal, Vol. 42, no. 4, October 1991, pp. 58-63.
5. W.T. Ward, "Calculating the Real Cost of Software Defects," Hewlett-Packard Journal, Vol. 42, no. 4, October 1991, pp. 55-58.