ESE Module 7-3
Software Reviews
Have you ever typed an important letter with a glaring typo that went unnoticed until after the letter was
sent? You proofed the letter twice and you still didn't
see it. Why? Because you read what you meant, not
what you typed!
We encounter the same problem when we develop computer software. Often, the software engineer
who develops a program or a document, a data
structure or a utility, sees what she meant, not what
she did. And as a result, errors creep into the product.
The error can be as simple as a typo or as complex as a
combination of misinterpretation, erroneous logic and
general sloppiness. The problem is that you just can't see it.
What's the solution to this all-too-common problem? Use two or more sets of eyes and minds! Have
others review the work product with the explicit intent
of uncovering latent errors that you missed.
The formal technical review (FTR), often referred
to as a walkthrough or an inspection, is a technical
meeting specifically organized to uncover errors in
software. To be successful, the FTR must be planned,
must have strong leadership, requires preparation,
and must avoid the pitfalls that cause so many
technical meetings to fail. We'll explore how to conduct the review meeting in a way that leads to a
successful outcome: a determination by your colleagues that errors exist or, alternatively, that errors
have not been found and the work product may proceed to the next software engineering task.
Formal Technical Reviews: What Are They?
The IEEE (ANSI/IEEE standard 729-1983) defines
inspections (a form of formal technical review) as “a formal
evaluation technique in which software requirements,
design or code is examined in detail by a person or group
other than the author to detect faults, violations of
standards and other problems…”
It turns out that the word “formal” takes on a number of
different interpretations. Inspections (the type of review the
IEEE was referring to) are the most formal of all FTRs,
demanding a rigorous format, defined roles for participants
and detailed record keeping. They are also the type of
review that is most likely to uncover an error. And because
you get what you pay for, inspections are the most
expensive of all reviews. Walkthroughs are somewhat less
rigorous but they remain an extremely effective approach.
Before we begin our video presentation, let's conduct a
mini-review.
Exercise 7-5
Conducting a Mini-Review
The following pseudocode describes the actions
required to open a door. It should be reviewed to
ensure that sufficient detail is provided for a person
who has no idea how to perform this operation. Make
a list of all outright errors, ambiguities, and omissions
that you uncover. At the same time, give the description to three or four of your colleagues and have
them do the same thing.
Compare notes. We'll bet that the combined list
of problems is longer than any individual list. If this is
the case, you've experienced your first example of
"review synergy."
procedure: openDoor
    walk to door;
    determine openingMechanism;
    case of (openingMechanism)
        openingMechanism = knob;
            reach for knob with dominant hand;
            grasp knob and turn counterclockwise;
            if knob does not not turn
                then reach for keys;
                select proper key;
                place key in lock;
                turn key clockwise;
            endif;
            determine swingDirection of door;
            if swingDirection is "away"
                then push the door;
                else pull the door;
        openingMechanism = push bar;
            push down on push bar;
            push open door;
        openingMechanism = mag card
            look for mag card reader;
            place mag card into reader and slide through;
            listen for door to click open;
            push the door open;
        .
        .
        .
endproc
Readings
The discussion of formal technical reviews in the video
portion of this module was designed to introduce the
FTR as a pivotal SQA mechanism. The following
excerpt, adapted from Software Engineering: A
Practitioner's Approach, discusses the players and the
mechanics for formal technical reviews.
A formal technical review (FTR) is a software quality assurance activity that is performed by software engineering
practitioners. The objectives of the FTR are: (1) to uncover
errors in function, logic or implementation for any representation of the software; (2) to verify that the software
under review meets its requirements; (3) to assure that the
software has been represented according to pre-defined
standards; (4) to achieve software that is developed in a
uniform manner; and (5) to make projects more manageable. In addition, the FTR serves as a training ground,
enabling junior engineers to observe different approaches
to software analysis, design and implementation. The FTR
also serves to promote backup and continuity, because a
number of people become familiar with parts of the software that they may not have otherwise seen.
The FTR is actually a class of reviews that includes
walkthroughs, inspections, round-robin reviews and other
small-group technical assessments of software. Each FTR is
conducted as a meeting and will be successful only if it is
properly planned, controlled and attended. In the paragraphs that follow, guidelines similar to those for a walkthrough [1, 2, 3] are presented as a representative formal
technical review.
The Review Meeting
Regardless of the FTR format that is chosen, every review
meeting should abide by the following constraints:
•  Between three and five people (typically) should be involved in the review.
•  Advance preparation should occur but should require no more than two hours of work for each person.
•  The duration of the review meeting should be less than two hours.
Given the above constraints, it should be obvious that an
FTR focuses on a specific (and small) part of the overall
software. For example, rather than attempting to review an
entire design, walkthroughs are conducted for each module or small group of modules. By narrowing focus, the
FTR has a higher likelihood of uncovering errors.
The focus of the FTR is on a product: a component of
the software (e.g., a portion of a requirements specification,
a detailed module design, a source code listing for a module). The individual who has developed the product (the
producer) informs the project leader that the product is
complete and that a review is required. The project leader
contacts a review leader, who evaluates the product for
readiness, generates copies of product materials and distributes them to two or three reviewers for preparation.
Each reviewer is expected to spend between one and two
hours reviewing the product, making notes and otherwise
becoming familiar with the work. Concurrently, the review
leader also reviews the product and establishes an agenda
for the review meeting, which is typically scheduled for the
next day.
The review meeting is attended by the review leader,
all reviewers and the producer. One of the reviewers takes
on the role of the recorder, that is, the individual who
records (in writing) all important issues raised during the
review. The FTR begins with an introduction of the agenda
and a brief introduction by the producer. The producer
then proceeds to "walk through" the product, explaining
the material, while reviewers raise issues based on their
preparation. When valid problems or errors are discovered,
the recorder notes each.
At the end of the review, all attendees of the FTR must
decide whether to (1) accept the product without further
modification, (2) reject the product due to severe errors
(once corrected, another review must be performed) or (3)
accept the product provisionally (minor errors have been
encountered and must be corrected, but no additional
review will be required). The decision made, all FTR attendees sign off, indicating their participation in the review
and their concurrence with the review team's findings.
Review Reporting & Recordkeeping
During the FTR, a reviewer (the recorder) actively records
all issues that have been raised. These are summarized at
the end of the review meeting and a review issues list is
produced. In addition, a simple review summary report is
completed. A review summary report answers three questions:
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
In general, this single-page (with possible attachments)
form becomes part of the project historical record and may
be distributed to the project leader and other interested
parties.
The review issues list serves two purposes: (1) to identify problem areas within the product and (2) to serve as an
action item checklist that guides the producer as corrections are made.
It is important to establish a follow-up procedure to
ensure that items on the issues list have been properly corrected. Unless this is done, it is possible that issues raised
can “fall between the cracks.”
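To make the recordkeeping concrete, here is a minimal sketch in Python of how a recorder's issues list and summary report might be represented. It is illustrative only, not a prescribed SQA tool; the class and field names are our own, and the three decision values mirror those described earlier.

    from dataclasses import dataclass, field

    # Illustrative values for the three possible review decisions.
    ACCEPT, ACCEPT_PROVISIONALLY, REJECT = ("accept", "accept provisionally", "reject")

    @dataclass
    class Issue:
        description: str
        corrected: bool = False        # updated during follow-up

    @dataclass
    class ReviewSummaryReport:
        product: str                   # 1. What was reviewed?
        reviewers: list[str]           # 2. Who reviewed it?
        decision: str                  # 3. What were the findings and conclusions?
        issues: list[Issue] = field(default_factory=list)

        def open_items(self) -> list[Issue]:
            # Follow-up check: anything still open has "fallen between the cracks."
            return [i for i in self.issues if not i.corrected]

An issues list that can be queried this way doubles as the producer's action-item checklist while corrections are made.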
Review Guidelines
Guidelines for the conduct of formal technical reviews
must be established in advance, distributed to all reviewers, agreed upon and then followed. A review that is
uncontrolled can often be worse than no review at all.
The following represents a minimum set of guidelines
for formal technical reviews:
1. Review the product, not the producer. An FTR involves
people and egos. Conducted properly, the FTR should
leave all participants with a warm feeling of accomplishment. Conducted improperly, the FTR can take on the aura
of an inquisition. Errors should be pointed out gently; the
tone of the meeting should be loose and constructive; the
intent should not be to embarrass or belittle. The review
leader should conduct the review meeting to ensure that
the proper tone and attitude are maintained and should
immediately halt a review that has gotten out of control.
2. Set an agenda and maintain it. One of the key maladies
of meetings of all types is drift. An FTR must be kept on
track and on schedule. The review leader is charged with
the responsibility for maintaining the meeting schedule
and should not be afraid to nudge people when drift sets in.
3. Limit debate and rebuttal. When an issue is raised by a
reviewer, there may not be universal agreement on its
impact. Rather than spending time debating the question,
the issue should be recorded for further discussion off-line.
4. Enunciate problem areas, but don't attempt to solve every
problem noted. A review is not a problem-solving session.
The solution of a problem can often be accomplished by the
producer alone or with the help of only one other individual. Problem solving should be postponed until after the
review meeting.
5. Take written notes. It is sometimes a good idea for the
recorder to make notes on a wallboard, so that wording
and prioritization can be assessed by other reviewers as
information is recorded.
6. Limit the number of participants and insist that they prepare. Two heads are better than one, but 14 are not necessarily better than four. Keep the number of people
involved to the necessary minimum. However, all review
team members must prepare. Written comments should be
solicited by the review leader (providing an indication that
the reviewer has reviewed the material).
7. Develop a checklist for each product that is likely to be
reviewed. A checklist helps the review leader to structure
the FTR meeting and helps each reviewer to focus on
important issues. Checklists should be developed for
analysis, design, code and even test documents. [A set of
representative review checklists is discussed in the next
reading.]
8. Allocate resources and schedule time for FTRs. For
reviews to be effective, they should be scheduled as a task
during the software engineering process. In addition, time
should be scheduled for the inevitable modifications that
will occur as the result of an FTR.
9. Conduct meaningful training for all reviewers. To be
effective, all review participants should receive some formal training. The training should stress both process-related issues and the human psychological side of reviews.
Freedman and Weinberg [1] estimate a one month learning
curve for every 20 people who are to participate effectively
in reviews.
10. Review your early reviews. Debriefing can be beneficial
in uncovering problems with the review process itself. The
very first product to be reviewed might be the review
guidelines themselves.
[1] Freedman, D. P. and Weinberg, G. M. (1990) Handbook of Walkthroughs, Inspections and Technical Reviews, 3/e, Dorset House.
[2] Yourdon, E. (1989) Structured Walkthroughs, 4/e, Yourdon Press.
[3] Gilb, T. and Graham, D. (1993) Software Inspection, Addison-Wesley.
In the video presentation, Dr. Pressman mentions that
conducting an effective FTR is a lot like conducting an
effective meeting. Software organizations that learn
to conduct good reviews invariably also conduct
good meetings, a small perk associated with doing
good SQA!
Review Checklists
As we've seen, formal technical reviews can be conducted throughout the software process. You can
review requirements modules and specifications,
design modules and documents, source code, test
plans and procedures, user documentation and myriad other deliverables that are produced as you do
software engineering. But what information should
you look at and what questions should you ask? The
answers lie in the reading segment that follows.
Readings
The following excerpt has been adapted from
Software Engineering: A Practitioner's Approach. It
provides a checklist of questions and issues that
should be addressed as formal technical reviews are
conducted throughout the software engineering
process.
Formal technical reviews can be conducted during each
step in the software engineering process. In this section, we
present a brief checklist that can be used to assess products
that are derived as part of software development. The
checklists are not intended to be comprehensive, but rather
to provide a point of departure for each review.
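One way to keep such checklists from gathering dust is to store them as data rather than prose. The Python sketch below is hypothetical; the dictionary keys and sample questions are drawn from the checklists in this reading, and a review leader might use something like it to pull the right questions into a meeting agenda.

    # Checklists as data, keyed by the type of product under review.
    CHECKLISTS = {
        "system engineering": [
            "Are major functions defined in a bounded and unambiguous fashion?",
            "Are interfaces between system elements defined?",
        ],
        "software project plan": [
            "Is software scope unambiguously defined and bounded?",
            "Are resources adequate for scope?",
        ],
    }

    def review_agenda(product_type):
        # Fall back to a reminder rather than reviewing without any focus.
        return CHECKLISTS.get(product_type, ["No checklist exists yet; draft one before the review."])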
System Engineering. The system specification allocates
function and performance to many system elements.
Therefore, the system review involves many constituencies
that may each focus on their own area of concern. Software
engineering and hardware engineering groups focus on
software and hardware allocation, respectively. Quality
assurance assesses system-level validation requirements,
and field service examines the requirements for diagnostics. Once all reviews are conducted, a larger review meeting, with representatives from each constituency, is conducted to ensure early communication of concerns. The following checklist covers some of the more important areas
of concern.
System Engineering Deliverables
•  Functional allocation
•  System-level schematics
•  Hardware scope
•  Hardware interface specification
•  Timing requirements
•  Software scope
•  Software interface specification
•  System design constraints
•  Validation criteria
•  User-level constraints and requirements.
Questions and Issues
1. Are major functions defined in a bounded and unambiguous fashion?
2. Are interfaces between system elements defined?
3. Have performance bounds been established for the
system as a whole and for each element?
4. Are design constraints established for each element?
5. Has the best alternative been selected?
6. Is the solution technologically feasible?
7. Has a mechanism for system validation and verification been established?
8. Is there consistency among all system elements?
Software Project Planning. Software project planning
develops estimates for resources, cost and schedule based
on the software allocation established as part of the system
engineering activity. Like any estimation process, software
project planning is inherently risky. The review of the
Software Project Plan establishes the degree of risk.
Software Project Plan Deliverables
•  Statement of software scope
•  Functional decomposition
•  Project estimates
•  Resources and responsibilities
•  Risk analysis data
•  Task breakdown
•  Project schedule
•  Risk management and monitoring information
•  SQA/SCM plans.
Questions and Issues
1. Is software scope unambiguously defined and bounded?
2. Is terminology clear?
3. Are resources adequate for scope?
4. Are resources readily available?
5. Have risks in all important categories been defined?
6. Is a risk management plan in place?
7. Are tasks properly defined and sequenced? Is parallelism reasonable, given available resources?
8. Is the basis for cost estimation reasonable? Has the cost
estimate been developed using two independent methods?
9. Have historical productivity and quality data been
used?
10. Have differences in estimates been reconciled?
11. Are pre-established budgets and deadlines realistic?
12. Is the schedule consistent?
Software Requirements Analysis. Reviews for software
requirements analysis focus on traceability to system
requirements and consistency and correctness of the analysis model. A number of FTRs are conducted for the requirements of a large system and may be augmented by reviews
and evaluation of prototypes as well as customer meetings.
Analysis Deliverables
•  Graphical models
•  Results of FAST/QFD
•  Data flow diagrams
•  Entity-relationship diagrams
•  State transition diagrams
•  Schematics and representations
•  Specifications
•  Processing narratives
•  Data dictionary entries
•  Description of constraints
•  Description of external interfaces
•  Validation and performance criteria
•  Supporting information
•  Prototypes.
Questions and Issues
1. Is information domain analysis complete, consistent
and accurate?
2. Is problem partitioning complete?
3. Are external and internal interfaces properly defined?
4. Does the data model properly reflect data objects, their
attributes and relationships?
5. Are all requirements traceable to system level?
6. Has prototyping been conducted for the user/customer?
7. Is performance achievable within the constraints
imposed by other system elements?
8. Are requirements consistent with schedule, resources
and budget?
9. Are validation criteria complete?
Software Design. Reviews for software design focus on
data design, architectural design and procedural design. In
general, two types of design review are conducted. The
preliminary design review assesses the translation of
requirements to the design of data and architecture. The
second review, often called a design walkthrough, concentrates on the procedural correctness of algorithms as they
are implemented within program modules.
Software Reviews ·· 7-3.5
Software Design Deliverables
•  Graphical models
•  Data design representations
•  Software architectural diagrams
•  Procedural diagrams
•  Interface representations
•  Specifications
•  Algorithm descriptions (PDL)
•  Interface description
•  Timing considerations
•  Supporting information.

Code Deliverables
•  Module interface
•  Traceability to design
•  Consistency with other "connected" modules
•  Data representation
•  Data typing
•  Data structures
•  Correctness
•  Internal documentation
•  Module prologue
•  In-line comments
•  Source code.
Questions and Issues--Preliminary Design Review
1. Are software requirements reflected in software architecture?
2. Is effective modularity achieved? Are modules functionally independent?
3. Is the program architecture factored?
4. Are interfaces defined for modules and external system elements?
5. Is the data structure consistent with information
domain?
6. Is data structure consistent with software requirements?
7. Has maintainability been considered?
8. Have quality factors been explicitly assessed?
Questions and Issues--Design Walkthroughs
1. Does the algorithm accomplish desired function?
2. Is the algorithm logically correct?
3. Is the interface consistent with architectural design?
4. Is the logical complexity reasonable?
5. Have error handling and "antibugging" been specified?
6. Are local data structures properly defined?
7. Are structured programming constructs used throughout?
8. Is design detail amenable to implementation language?
9. Are operating system or language-dependent features used?
10. Is compound or inverse logic used?
11. Has maintainability been considered?
Coding. Although coding is a mechanistic outgrowth of
procedural design, errors can be introduced as the design
is translated into a programming language. This is particularly true if the programming language does not directly
support data and control structures represented in the
design. A code walkthrough can be an effective means for
uncovering these translation errors. The checklist that follows assumes that a design walkthrough has been conducted and that algorithm correctness has been established as
part of the design FTR.
1. Has the design been properly translated into code?
[The results of the procedural design should be available
during this review.]
2. Are there misspellings and typos?
3. Has proper use of language conventions been made?
4. Is there compliance with coding standards for language style, comments, module prologue?
5. Are there incorrect or ambiguous comments?
6. Are data types and data declaration proper?
7. Are physical constants correct?
8. Have all items on the design walkthrough checklist
been re-applied (as required)?
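As a small illustration of items 2, 6 and 7, consider the deliberately flawed (and entirely hypothetical) Python fragment below. Each comment marks the kind of translation error a code walkthrough is well suited to catch.

    # Deliberately flawed, for illustration only.
    def circumfrence(radius):          # item 2: the identifier is misspelled
        pi = 3.41                      # item 7: the physical constant is wrong (3.14159...)
        diameter = str(2 * radius)     # item 6: improper data type; the line below fails
        return pi * diameter           # multiplying a float by a string raises TypeError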
Software Testing. Software testing is a quality assurance
activity in its own right. Therefore, it may seem odd to discuss reviews for testing. However, the completeness and
effectiveness of testing can be dramatically improved by
critically assessing any test plans and procedures that have
been created.
Testing Deliverables
•  Test Plan
   -  Strategy and sequence for tests
   -  Special test resources
   -  Special test software
   -  Special test hardware
   -  Specialized environments
•  Test Procedure
   -  Test cases
   -  Completeness
   -  Uncover classes of errors
   -  Explore "corners and boundaries"
   -  Test case design methods used
   -  Special requirements
Questions and Issues--Test Plan
1. Have major test phases been properly
identified and sequenced?
2. Has traceability to validation criteria / requirements
been established as part of software requirements analysis?
3. Are major functions demonstrated early?
4. Is the test plan consistent with the overall project plan?
5. Has a test schedule been explicitly defined?
6. Are test resources and tools identified and available?
7. Has a test record keeping mechanism been established?
8. Have test drivers and stubs been identified, and has
work to develop them been scheduled?
9. Has stress testing for software been specified?
Questions and Issues--Test Procedure
1. Have both white- and black-box tests been specified?
2. Have all independent logic paths been tested?
3. Have test cases been identified and listed with expected results?
4. Is error handling to be tested?
5. Are boundary values to be tested?
6. Are timing and performance to be tested?
7. Has acceptable variation from expected results been specified?
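To tie questions 3 and 5 to something concrete, here is a minimal sketch of test cases that pair boundary-value inputs with their expected results. The function under test is hypothetical.

    import unittest

    # Hypothetical function under test: clamp a reading to the range 0..100.
    def clamp_percent(value):
        return max(0, min(100, value))

    class ClampBoundaryTests(unittest.TestCase):
        # Each pair lists an input and its expected result (question 3); the
        # inputs sit on and just beyond the boundaries (question 5).
        CASES = [(-1, 0), (0, 0), (1, 1), (99, 99), (100, 100), (101, 100)]

        def test_boundary_values(self):
            for value, expected in self.CASES:
                self.assertEqual(clamp_percent(value), expected)

    if __name__ == "__main__":
        unittest.main()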
In addition to the formal technical reviews and review
checklists noted above, reviews (with corresponding checklists) can be conducted to assess the readiness of field service mechanisms for product software; to evaluate the
completeness and effectiveness of training; to assess the
quality of user and technical documentation, and to investigate the applicability and availability of software tools.
Maintenance. The review checklists for software development are equally valid for the software maintenance phase.
In addition to all of the questions posed in the checklists,
the following special considerations should be kept in
mind:
1. Have side effects associated with change been considered?
2. Has the request for change been documented, evaluated and approved?
3. Has the change, once made, been documented and
reported to interested parties?
4. Have appropriate FTRs been conducted?
5. Has a final acceptance review been conducted to
ensure that all software has been properly updated, tested
and replaced?
The quality activities defined in the excerpt establish
the overall scope of SQA activities. You will ask many
other questions as you conduct FTRs on your software
project, but the ones noted in this reading segment
should provide you with a starting point.

Exercise 7-6
Conducting a "Live" Formal Technical Review
1. Select an actual deliverable that has recently
been produced as part of a software project at your
location.
2. Define a review team comprising people who
have viewed this ESE module video and read the ESE
workbook.
3. Assign roles for a review leader, a producer, a
recorder and others.
4. Distribute the "work product" and conduct all
preparatory activities for a review. For example, the
review leader should establish an agenda.
5. Conduct a review following the guidelines presented in this ESE module.
6. Review your review, addressing the following
questions:
a. Did everyone prepare?
b. Were the agenda and overall review time
appropriate?
c. Did the review leader do his/her job--that is,
did the leader keep the review on track?
d. Did you find any latent errors?
e. What classes of error did you uncover?
f. What about the review didn't seem to work well?
What can you do to correct it?
Post-Test, Module 7-3
This post-test has been designed to help you assess
the degree to which you've absorbed the information
presented in this ESE Module.
Software Reviews
1. Which of the following is not an objective or outcome of a formal technical review?
a. finding errors
b. determining compliance with project
schedule
c. serving as a training ground
d. all are objectives of FTRs
2. The following review is the most formal of all FTRs:
a. walkthroughs
b. round-robin reviews
c. inspections
d. peer reviews
3. The job of the review leader is to:
a. establish an agenda
b. control discussion that drifts
c. ascertain whether reviewers have prepared
d. all of the above
e. none of the above
4. How much preparation is recommended prior to
the review?
a. 20-30 minutes
b. 1-2 hours
c. 45 minutes
d. preparation is not always necessary
5. An issues list is:
a. something prepared by each reviewer before
the review
b. something prepared by the recorder during
the review
c. something prepared by the producer during
the review
d. something prepared the day after the review
6. Problem solving is something that:
a. should be encouraged during reviews
b. should be avoided during reviews
c. should be conducted by a subgroup during
the review
d. should be performed only by the review
leader
7. The job of the recorder is to:
a. take detailed notes of all discussion during the
review
b. make a list of all participants
c. record all action items
d. none of the above
8. The Technical Review Summary Report:
a. is produced monthly and summarizes all
reviews conducted
b. is produced at the conclusion of each review
c. is produced by the producer after all
reviews of his/her products
d. is produced automatically using automated
tools
9. When pointing out an error, a sensitive reviewer
should:
a. ask a question
b. note it directly, without commentary
c. tell the review leader before the review and
ask the leader to mention it
d. not mention it until after the review
10. The following is a sign that a reviewer has not prepared:
a. asking lots of questions
b. reading from notes on the edge of the
product pages
c. in-depth reading of the product pages
d. talking to a colleague
11. In general, the review team can come to n different decisions at the end of the review, where n is:
a. 2
b. 3
c. 4
d. the team need not come to a decision
12. The following is a primary difference between
walkthroughs and inspections:
a. producer presents the product
b. reviewers prepare
c. written notes are taken throughout the review
d. a defined leader coordinates the review
Copyright 1995 R.S. Pressman & Associates, Inc.
No part of this material may be electronically copied,
transmitted, or posted without the express written
permission of R.S. Pressman & Associates, Inc.