Validation, Verification, and Testing for the Individual Programmer

Good programming demands sound testing, verification, and validation
techniques throughout the life cycle, even if the development
environment has limited support and resources.
Martha A. Branstad
John C. Cherniavsky
National Bureau of Standards
W. Richards Adrion
National Science Foundation
"
I
.1
P,
-
0
lo
W
p- 10
-
I 7
0'0
I' IP, Do
11
xt
The unique situation of the individual programmer
warrants a specialized guide to verification and testing.
The programmer who works alone performs all management tasks, and lacks independent internal or external
quality assurance groups. However, his tasks are usually
of an intellectually manageable size, and he does not face
many of the problems encountered in larger systems; coordination among programmers and massive integration, for example.
Although medium- and large-system verification and
validation approaches do not always apply in the domain
of the single programmer, much of the guidance herein is
applicable to larger projects. Still, there are significant
differences: for instance, tool development is stressed for
large and medium systems, but time and cost restrictions
prohibit specialized tool development in very small programs. Although this article assumes limited resources,
tools which exist locally should not be ignored; a skilled craftsman uses the best tools available.
We concentrate here on inexpensive verification and
testing techniques to assure the quality of work. For a
more comprehensive treatment of techniques and tools,
the reader is referred to a forthcoming NBS Special Publication, Verification, Validation, and Testing.1
Since we believe verification should be performed
throughout development, most of this article concentrates on verification and testing appropriate to each of
the four life-cycle stages for software development: requirements, design, construction, and operation and
maintenance. But first, let's review verification and planning techniques employed throughout the life cycle, and
discuss testing in general.
Verification throughout the life cycle
Problem solving is a creative process that generally
follows a five-step paradigm: problem definition,
hypothesis of a trial solution, scrutiny and exercise of the
trial solution, revision of the problem definition or trial
solution, and scrutiny and exercise of the revised trial
solution. When the problem solver is satisfied (and it may
take many iterations of the paradigm), the solution is
prepared in final form and subjected to final testing.
If the original problem is so difficult that no trial solution is obvious, two other steps are necessary: identify the
problem as a particular case of a problem whose solution
is known; solve a specific instance of the problem in the
hope that it will lead to the general solution. Then proceed
as above.
Computer programming is a form of problem solving,
and its solution paradigm is the development life cycle.
There are many life-cycle charts in the literature,2,3 but no one current chart incorporates all views of the development process. The one given in the left column of Table 1
is representative. Problem definition takes place during
the requirements stage, solution hypotheses are formed
and data process structures are organized during the
design stage, and program modules are coded, debugged,
integrated, and their interfaces debugged in the construction phase, when the program is exercised over the test
data selected during this and earlier stages. In the operation and maintenance phase, the program is used and
maintained.
There is a significant difference between the general
problem solution paradigm and the development life cycle: the life-cycle activity is a straight-line solution,
whereas the general paradigm emphasizes iteration
toward a solution. Anyone who has ever programmed
knows that iteration is an inevitable result of the verification process and error assessment. Unless the correct
problem is stated and its solution achieved on the first try,
modification and iteration occur.
The verification process should occur in each phase of
the cycle rather than in a single isolated stage following
construction because problem statement or design errors
discovered that late may exact an exorbitant price. Not
only must the original error be corrected, the structure
built upon it must be changed as well.
The program development paradigm becomes more
productive if each development stage is viewed as a subproblem. Table 1 presents the verification activities that
accompany each development stage.
The verification activities shown in Table 1 will help
locate errors when they are introduced and also partition
the test set construction.
Table 1.
Life-cycle verification activities.

REQUIREMENTS:
  Determine correctness and consistency
  Generate functional test data

DESIGN:
  Determine correctness
  Determine consistency, both internal and with previous stages
  Generate structural and functional test data
  Refine, redefine test sets generated earlier

CONSTRUCTION:
  Determine correctness
  Determine consistency, both internal and with previous stages
  Generate structural and functional test data
  Apply test data
  Refine, redefine test sets generated earlier

OPERATION & MAINTENANCE:
  Retest
  Refine, redefine test sets generated earlier
General concepts in testing
Great strides have been made in formal verification
techniques based on ideas of formal semantics and proof
techniques, but these promising research avenues are not
easily applied without supporting tools (verifiers).
Automated verifiers are expensive, not widely available,
and limited in application. For the individual programmer, testing is still the most easily applied verification
technique. Testing shows the presence of errors, but
(unless it is exhaustive) cannot demonstrate the absence
of errors, and is limited in its ability to demonstrate correctness.
A program may be viewed as a function that takes
elements from one set (the domain) and transforms them
into elements of another set (the range). The testing process ensures that the implementation (the program)
faithfully realizes the function. Since programs are frequently given inputs that are not in the domain, they
should act reasonably on such elements. Thus a program
which realizes the function 1/x should not fail catastrophically when x = 0, but instead should generate an error message. We call elements within the functional domain valid inputs and those outside the domain invalid inputs.
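
To illustrate, here is a minimal sketch in C (the routine and its name are our own illustration, not taken from the article): a routine realizing 1/x rejects the invalid input x = 0 with an error message instead of failing catastrophically.

#include <stdio.h>

/* Computes 1/x for valid (nonzero) inputs; reports an error for the
   invalid input x = 0 instead of failing catastrophically. */
int reciprocal(double x, double *result)
{
    if (x == 0.0) {
        fprintf(stderr, "error: reciprocal undefined for x = 0\n");
        return 1;               /* invalid input: signal failure */
    }
    *result = 1.0 / x;
    return 0;                   /* valid input: success */
}

int main(void)
{
    double r;
    if (reciprocal(4.0, &r) == 0)   /* valid input */
        printf("1/4.0 = %g\n", r);
    reciprocal(0.0, &r);            /* invalid input: error message */
    return 0;
}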
The goal of testing is to reveal errors not removed during debugging. The testing process consists of obtaining a
valid or invalid value, determining the expected (correct)
value, running the program on the given value, observing
the program's behavior, and comparing it with the expected (correct) behavior. The comparison is successful if
the testing process reveals no errors; it is unsuccessful if
errors are revealed.
An important phase of testing lies in planning: how to apply test data, how to observe the results, and how to compare the results with desired behavior is not always a straightforward task. Extensive analysis is often required to determine adequate tests for design components, and code must frequently be instrumented to provide observation. Determining the correct behavior to compare with observed results is very difficult.
Terms defined
Since certain common terms used in this article appear with
slightly different meanings elsewhere in the literature, the following definitions are provided.
Validation: determination of the correctness of the final program or
software produced from a development project with respect to the
user needs and requirements. Validation is usually accomplished
by verifying each stage of the software development life cycle.
Certification: acceptance of software by an authorized agent,
usually after the software has been validated by the agent, or after
its validity has been demonstrated to the agent.
Verification: in general, the demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of development. Requirements are verified; successive iterations of the design specifications are verified internally, both against the requirements and the earlier iterations;
modules are unit-tested and verified against the design specifications; and the system undergoes intensive verification during integration at the construction stage. For small programs, verification is the process of determining the correspondence between a
program and its specification. Static analysis, dynamic analysis,
and program testing are used during verification.
Testing: examination of the behavior of a program by executing it
on sample data sets.
Proof of correctness: use of techniques of mathematical logic to infer that a relation between program variables assumed true at program entry implies that another relation between program variables
holds at program exit.
Program debugging: the process of correcting syntactic and
logical errors detected during coding. Debugging occurs before the
program or module is fully executable and continues until the programmer feels the program or module is sound and executable.
Testing then exercises the code over a sufficient range of data to
verify its adherence to the design and requirements specifications.
To test a program, we
must have an "oracle" to provide the correct responses
and to represent the desired behavior. This is a major role
of a requirements specification. By providing a complete
description of how the system is to respond to its environment, a good requirements specification forms a basis for
constructing such an oracle. In the future, executable requirements languages may provide this capability directly, but currently we must be content with more ad hoc
techniques, e.g., intuition, hand calculation, simulation
(manual and automated), and alternate solution to the
same problem.
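
One inexpensive oracle is an alternate solution to the same problem. The sketch below in C is a hypothetical illustration (the function names and test values are ours): a closed-form summation is checked against a straightforward loop whose answers we are willing to trust.

#include <stdio.h>

/* Program under test: closed-form sum 1 + 2 + ... + n. */
static long sum_closed_form(long n) { return n * (n + 1) / 2; }

/* "Oracle": an independent, straightforward alternate solution
   whose answers we trust enough to compare against. */
static long sum_oracle(long n)
{
    long s = 0;
    for (long i = 1; i <= n; i++)
        s += i;
    return s;
}

int main(void)
{
    long test_values[] = {0, 1, 2, 100, 1000};   /* sample test set */
    int failures = 0;
    for (int i = 0; i < 5; i++) {
        long n = test_values[i];
        long got = sum_closed_form(n);
        long expected = sum_oracle(n);
        if (got != expected) {
            printf("FAIL: n=%ld got=%ld expected=%ld\n", n, got, expected);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}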
Although we have been discussing testing at the coding
level, the validation methods apply to any stage in the life
cycle. Testing in its narrowest definition is performed
during the construction stage and later, as revisions are
made during operation and maintenance. Derivation of
test data and test planning, however, should cover the entire life cycle.
In its broader sense, testing is an important activity in
each stage, subsuming a large portion of the verification
process. Simplified walkthroughs, code reading, and
most forms of requirements and design analysis can be
thought of as test procedures.
The main problem with testing is that it reveals only the
presence of errors, making a complete program validation obtainable only by testing for every element of the
domain. Since this process is exhaustive, the absence of errors guarantees the validity of the program. This is
the only dynamic analysis technique with this guarantee;
unfortunately, it is not practical. Frequently, program
domains are infinite, or so large as to make the testing of
each element infeasible. There is also the problem of
deriving, for an exhaustive input set, the expected (correct) responses, a task at least as difficult as (and possibly
equivalent to) writing the program itself. The goal of a
testing methodology is, therefore, to reduce the potentially infinite exhaustive testing process to a finite testing process. This is done by choosing a test data set (a subset of
the domain) to exercise features of the problem or of the
program. Thus, the crux of the testing problem is finding
a test data set that covers the domain yet is small enough
for practical use, an activity that must be planned and carried out in each life-cycle stage. The following are sample
criteria for the selection of data for test sets.
* The test data should reflect special properties of the
domain, such as extremal or ordering properties or
singularities.
* The test data should reflect special properties of the
function that the program is supposed to implement,
such as domain values leading to extremal function
values.
* The test data should exercise the program in a specific manner, e.g., cause all branches or all statements to be executed.
The properties that test data sets reflect are classified
according to their dependence on either the internal structure or the function of the program. In the first two
bulleted items above, the test data reflect functional properties; in the third, structural properties. Structural
testing helps compensate for the inability to do exhaustive
functional testing.
While criteria for an adequate structural test set are
often simple to state (such as branch coverage), the
satisfaction of those criteria usually can only be determined by measurement. Due to the lack of analytical
methods for deriving test data to satisfy structural criteria, most structural test sets are obtained heuristically.
The major current difficulties in functional analysis
techniques lie in the specification of such vague terms as
extremal or exceptional value. Further, for some techniques it may not be possible to obtain a functional
description from a requirement or specification statement. Thus, again a substantial amount of effort in test
set construction is heuristic.
Testing during each life-cycle stage
Requirements stage. Why bother with formal requirements for a small program written by a single programmer? Because a clear, concise statement of the problem to be solved will facilitate construction, communication, error analysis, and test data generation.
Invest in analysis at the beginning of the project. What
should be included in the requirements or problem
statement? The following lists information that should be
recorded and decisions that should be made at this stage.
(1) The program's function: what it is to do.
(2) What the input will be like: its form, format, data types, and units.
(3) The form, format, data types, and units for the output.
(4) How exceptions, errors, and deviations are to be handled.
(5) For scientific computations, the numerical method or at least the required accuracy of the solution.
(6) The computer environment required or assumed, e.g., the machine, operating system, and implementation language.
Making and recording decisions on the issues listed
above are only part of the requirements-stage tasks. In addition, the core of the test data set should be established
and error analysis performed.
Start developing the test set at the requirements stage.
The requirements state the program's function. Data
should be generated which will determine whether or not
the requirements have been met. To do this, the input domain should be partitioned into classes of values that the
program will treat similarly. A representative element of
each class should be included in the test data. Since
boundary values (elements at the edge of the input class)
frequently require special treatment by the program, and
are thus likely sources of error, they should be included in
the set. In addition, any nonextremal input values that
will require special handling by the program should be included. Output classes should also be covered by input
which causes output at each class boundary and within
each class. Invalid input values require the same analysis
provided for valid values.
Expected values should be determined for all elements
in the test data set. That is, how the program should treat
each of the elements should be decided and recorded. The
test data set and accompanying expected values form the
core of the test set that will be refined for use with the implementation code. The test set generated during the requirements stage will address the functional aspects of the
program; data to test the structural aspects will be
generated during the design and implementation stages.
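
As a hypothetical illustration (the requirement, the routine, and its name are assumed here, not taken from the article), suppose the program must classify an integer exam score in 0 through 100 as pass (60 or above) or fail, and reject anything outside that range. A requirements-stage test set in C, pairing a representative of each input class and its boundary values with the expected result, might look like this:

#include <stdio.h>
#include <string.h>

/* Hypothetical requirement: classify an integer exam score.
   Valid domain is 0..100; scores of 60 or above pass; anything
   outside 0..100 is invalid input and must be rejected. */
const char *classify(int score)
{
    if (score < 0 || score > 100) return "invalid";
    return score >= 60 ? "pass" : "fail";
}

/* Requirements-stage test set: one representative per input class,
   plus the boundary values of each class, with expected results. */
struct test_case { int input; const char *expected; };

int main(void)
{
    struct test_case tests[] = {
        { -1,  "invalid" },  /* just below the valid domain    */
        {  0,  "fail"    },  /* lower boundary of valid domain */
        { 30,  "fail"    },  /* representative failing score   */
        { 59,  "fail"    },  /* boundary just below passing    */
        { 60,  "pass"    },  /* boundary of the passing class  */
        { 85,  "pass"    },  /* representative passing score   */
        { 100, "pass"    },  /* upper boundary of valid domain */
        { 101, "invalid" },  /* just above the valid domain    */
    };
    int n = sizeof tests / sizeof tests[0];
    for (int i = 0; i < n; i++) {
        const char *got = classify(tests[i].input);
        printf("%4d -> %-7s (expected %-7s) %s\n",
               tests[i].input, got, tests[i].expected,
               strcmp(got, tests[i].expected) == 0 ? "ok" : "FAIL");
    }
    return 0;
}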
Generating test data at this stage also insures that the
requirements are testable. Requirements for which it is
impossible to define test data or determine the expected
value are ineffective and should be reformulated.
Are you solving the correct problem? At the implementation stage the cost of requirements modification is high
and rework is difficult. Therefore, the requirements
should be analyzed at this stage for correctness, consistency and completeness. Too frequently, the results of such
analysis aren't obtained until the program is executed
with test data. The problem itself, and the nature of the
solution should also be considered: is the problem the correct one? Should the solution be made more general?
More specific? These analyses and questions should be
part of the requirements-stage verification process.
Design stage. Even for small projects, the programmer
should spend the time and energy to produce an explicit
design statement, for such a document will aid construction, communication, error analysis, and test data
generation. The requirements and design documents
together should describe the problem to be solved and the
organization of the solution. They should convey enough
information so that the reader can understand the nature
of the program, its task, and operation without resorting
to code reading.
The following information should appear in the design
statement, regardless of whether the design technique is
original or one of the many appearing in recent
literature.4
* Principal data structures should be specified, since
their organization frequently shapes the structure of
the entire program.
* Functions, algorithms, heuristics, and special processing techniques should be recorded.
* The basic program organization, its subdivisions or
modules, and its internal and external interfaces
should be specified.
* Additional information may be needed for particular projects.
Verification activity is important in the design stage
because faults are often more expensively and less
satisfactorily fixed after the program is coded. At the
design stage verification activities are expanded to reflect
an increased knowledge of structure and a refinement of the requirements.
The design should be analyzed for completeness and
consistency. The effectiveness of individual algorithms,
heuristics, and special techniques should be checked (this could involve hand calculations) in representative test cases. The module interfaces should be examined to
assure that calling routines provide the information and
environment required by the called routines in the form,
format, and units assumed. The data structure should be
examined for inconsistencies or awkwardness. I/O handling (a frequent source of error) should be carefully
analyzed.
The design should be analyzed to determine whether or
not it satisfies the requirements. For example, the constraints specified by the requirements should be reflected
in the design, the design should match the form, format,
and units for input and output stated in the requirements,
and all functions listed in the requirements should be included in the design. Selected test data generated during
the requirements should be hand simulated to determine
whether the design will produce the expected values.
Design-based test data should be generated for data
structures, functions, algorithms, heuristics, and the
general program structure. Standard, extremal, and
special values for the data structures should be included in
the test data set, and boundary value analysis should be
applied to both the structure and the values of the data
structures. For example, if array size can vary, single-element, maximum-size, and special-valued arrays
should be candidates for the test data set. Although test
data for externally visible functions is generated at the requirements stage, it should also be generated for the internal functions introduced during design. The I/O analysis
suggested earlier should generate the test data for these
functions. Unless they are part of the existing test data,
new tests should be generated to exercise the modular
structure of the design. Expected values should be
calculated for all the data in the test set. The test data
generated during the design stage should test both the
structure and the internal functions of the design.
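
A brief sketch in C (the routine, its size limit, and the data values are assumed purely for illustration) shows design-based test data of this kind applied to a hypothetical internal function:

#include <stdio.h>

#define MAXN 100   /* assumed design limit on array size */

/* Hypothetical internal function introduced at design time:
   returns the largest element of a nonempty array of at most MAXN items. */
double largest(const double *a, int n)
{
    double best = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > best)
            best = a[i];
    return best;
}

int main(void)
{
    /* Design-based test data: boundary-value analysis applied to the
       structure of the array as well as to the values it holds. */
    double single[1] = { 7.5 };                 /* single-element array */
    double maximal[MAXN];                       /* maximum-size array   */
    double special[3] = { -5.0, 0.0, -5.0 };    /* special (zero and negative) values */
    for (int i = 0; i < MAXN; i++)
        maximal[i] = (double)i;

    printf("single  -> %g (expect 7.5)\n", largest(single, 1));
    printf("maximal -> %g (expect %d)\n", largest(maximal, MAXN), MAXN - 1);
    printf("special -> %g (expect 0)\n", largest(special, 3));
    return 0;
}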
The test set and expected values generated during the
requirements stage should be reexamined in view of
design-stage decisions, since it may be possible to refine
them and make them more exact.
Many programmers and managers rush to the coding
stage early in a new project, but the time spent in careful
analysis during the first two life-cycle stages can increase
the quality of the total project. Design-stage verification
procedures should be executed by a colleague when possible, because independent analysis is more effective than
self-check.
Particular emphasis should be given to design/requirement consistency and to completeness of generated test data.
Construction stage. The construction or coding stage is
what most people think of as programming. However, if
the requirements and design have been done carefully, the
coding will be straightforward, almost mechanical. Since
so much has been written about software engineering,
structured programming, and various coding techniques,
we will assume the use of good programming techniques.5
Check for consistency with design. The first step in
verification during the construction stage is to determine
that the code is consistent with the design, that both have
the same modular structure and module interfaces. Both
should use the same data structures and implement the
same functions with the same algorithms. I/O handling in
the code should be consistent with the decisions made
during the design stage.
Keep your tests. Testing is often slow and ineffective
because the programmer manufactures test data on the
spot and runs tests as the spirit moves him, with his
memory as the only test record. Such random testing produces random results. Even good test data sets compiled
systematically throughout the life cycle are dependent
upon orderly execution.
Effective tests are organized and systematic: runs are
dated, annotated, and filed; code pieces and modules are
tested in a rational, scheduled order. If errors are found
and changes made to the program, any previous tests involving the erroneous segment must be rerun.
Ask a colleague for assistance. Manual analysis of the
code should precede the first test runs. Desk checking, in
which a programmer reads his code looking for errors, is
not a particularly effective verification technique. Programmers are poor at finding their own errors in fundamental logic and missing cases because they tend not to
question something they thought correct at an earlier
stage-if it seemed right when it was designed, it will seem
right later. As for syntactic errors, the programmer tends
to see what was meant rather than what was coded.
Inspection, an exercise in disciplined error hunting,
and walkthrough, a form of manual simulation, are formalized manual techniques based on desk checking. In
both techniques, a team of programmers examines the
code (or design or requirements) and looks for errors in an
organized manner.
A technique between desk checking and inspection/walkthrough is appropriate for small programs. An
independent party should analyze the development product at each stage. The programmer should explain the
product to this colleague while the colleague, armed with
a checklist of likely errors, questions the logic and searches for errors. This devil's advocate procedure provides a
fresh perspective from which to see errors invisible to the
programmer.
Use available tools. At last the code is ready to be run.
The programmer should be familiar with the various compilers and interpreters on his system for the language he is
using. If more than one exists, they undoubtedly differ in
their error analysis and code generation capabilities.
Some processors check syntax only with no code generation; others perform extensive (e.g., array-bound) error
checking and provide aids such as cross reference tables.
Debugging compilers or interactive debuggers are very
helpful and should be used.
Apply stress to the program. Testing requires as much
creativity as design; it should exercise and stress the program structure, the data structures, the internal functions, and the externally visible functions. Both valid and
invalid data should be in the test set, which consists of test
data and expected values generated during the requirements and design stages. This test set may require
refinement or redefinition in order to be applied to the
code, and additional data may be required to test structural and functional details visible only in the code. The
test data should conform to the general test organization
to be used during construction. Intermediate values may
be required in order to test incomplete pieces of code or
individual modules.
Test one at a time. "Divide and conquer" is an effective
strategy in many tasks, including testing. Since it is easier
to isolate errors when the number of potential interactions is small, pieces of code, individual modules, and
small clusters of modules should be checked separately
and found error-free before they are integrated into the
total program.
In order to test the smaller units of code, additional
code may be required. If the testing is done bottom-up, drivers must be written. For small programs, this may only entail producing code to call a procedure to allow
testing of the interface and control transfer. Construction
proceeding top-down requires stubs, which are incomplete or dummy procedures constructed to test flow
of control and argument passing before the entire program has been constructed. Programmers are often reluctant, with small programs, to generate any "unnecessary" code solely for testing. However, it is wise to
organize the testing so small pieces of code are exercised
and deemed error-free before integrating them into larger
units.
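
A small sketch in C (all names invented for illustration) suggests what this "unnecessary" code looks like: a throwaway driver exercises a unit bottom-up, while a stub stands in for a procedure that has not yet been written during top-down testing.

#include <stdio.h>

/* Module under test (bottom-up): a small unit we want to exercise
   before integrating it into the full program. */
int scale(int value, int factor)
{
    return value * factor;
}

/* Stub (top-down): a dummy version of a procedure that has not been
   written yet, returning a fixed value so that control flow and
   argument passing in the caller can be tested. */
int lookup_factor(int key)
{
    printf("stub lookup_factor called with key=%d\n", key);
    return 10;   /* canned answer */
}

/* Driver: throwaway code whose only purpose is to call the module
   under test with known arguments and show the results. */
int main(void)
{
    printf("scale(3, lookup_factor(7)) = %d\n", scale(3, lookup_factor(7)));
    printf("scale(0, 5) = %d\n", scale(0, 5));    /* boundary case */
    return 0;
}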
Instrumentation, the insertion of code into the program solely to measure various program characteristics,
can be useful for program verification. Automatic instrumentation tools are often employed in medium- and
large-scale projects, but the small-project programmer
can do his own. Checking of loop control variables and array bounds, determining whether key data values are
within permissible ranges, tracing execution, and counting the executions of a group of statements are all functions of instrumentation.
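
A minimal sketch of hand-inserted instrumentation in C (the routine and its limits are assumed for illustration) combines range checks on a loop bound and an array index with a counter for a group of statements:

#include <assert.h>
#include <stdio.h>

/* Instrumentation sketch: extra statements inserted only to observe
   the program, checking that key values stay in permissible ranges
   and counting how often a group of statements executes. */
static long inner_loop_count = 0;   /* execution counter */

double mean(const double *x, int n)
{
    assert(n > 0 && n <= 1000);     /* check loop bound is in range   */
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        assert(i >= 0 && i < n);    /* check array index is in bounds */
        sum += x[i];
        inner_loop_count++;         /* count executions of this group */
    }
    return sum / n;
}

int main(void)
{
    double data[] = {1.0, 2.0, 3.0, 4.0};
    printf("mean = %g\n", mean(data, 4));
    fprintf(stderr, "trace: inner loop executed %ld times\n",
            inner_loop_count);      /* report instrumentation at exit */
    return 0;
}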
Are you ever through? When is testing complete? Unfortunately, this fundamental question has no clear-cut
answer. If errors occur in each program execution, testing
should continue. Since errors tend to cluster, those modules that appear particularly error-prone should
receive special scrutiny. Even if the entire test set has run
against the program with no more errors, it does not mean
the program is error-free; the test set could be incomplete.
The metrics used to measure testing thoroughness include statement testing, branch testing, and path testing.
Statement testing requires that each statement in the program be executed at least once; branch test metrics determine whether each exit from each branch in the program
has been executed at least once; and path testing requires
that all logical paths through the program be covered,
possibly requiring repeated execution of various
segments. Each succeeding metric subsumes the preceding ones.
Since the number of paths grows exponentially with the
number of decision points, path testing tends to be too
costly in time and money and is often impossible to perform. Statement testing, although theoretically insufficient, is the coverage metric most frequently used because
it is relatively simple to implement.
If no dynamic analysis tool for test coverage measurement is available on the system, a straightforward exercise
can be performed to instrument a program to calculate statement coverage. Identify each program point into which control can be transferred and establish an array with as many slots as there are transfer points. At each transfer point insert a statement which will increment the appropriate slot in the array; the final values in the array can be printed at program exit. This technique will not only provide an indication of test coverage, but will also
show the portions of the program with the heaviest usage.
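
The following sketch in C follows that recipe (the program under measurement and its four transfer points are assumed for illustration); any slot still at zero after the test set has run marks code the tests never reached:

#include <stdio.h>

#define NPOINTS 4
static long hits[NPOINTS];              /* one slot per transfer point */
#define MARK(i) (hits[(i)]++)           /* inserted at each transfer point */

/* Example program under measurement: classify a number's sign. */
const char *sign(int x)
{
    MARK(0);                            /* function entry */
    if (x > 0) {
        MARK(1);                        /* then-branch */
        return "positive";
    } else if (x < 0) {
        MARK(2);                        /* else-if branch */
        return "negative";
    }
    MARK(3);                            /* fall-through: x == 0 */
    return "zero";
}

int main(void)
{
    /* Run the test set, then print the counters at program exit:
       with this sample test set the x == 0 path is never exercised,
       and its counter stays at zero. */
    printf("%s %s\n", sign(5), sign(-3));
    for (int i = 0; i < NPOINTS; i++)
        printf("point %d executed %ld time(s)\n", i, hits[i]);
    return 0;
}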
Statement testing will not guarantee a correct program,
but it is a frequently used testing minimum; if code has
never been exercised, it is difficult to know that it is error-free.
It should be emphasized that the amount of testing
depends upon the cost of an error. Critical programs or
code segments require more thorough testing than less
significant functions.
Operation and maintenance stage. Why test during
operation and maintenance? For many production programs, 50 to 85 percent of the life-cycle costs are accrued during the maintenance stage, which is coincident with operation. These costs do not imply error-studded programs that take forever to fix, but evolving programs that
are constantly undergoing the modifications, corrections,
and extensions bound to occur even in small programs.
All these changes require testing.
The systematic approach recommended in this article
facilitates regression testing (i.e., testing during
maintenance) by completing its groundwork in the earlier
stages. The test set, test plan, and test results for the
original program are at hand, ready for modification to
accommodate program changes. All portions of the program affected by the modifications must be retested.
After regression testing is complete, the program and
test documentation must be updated to reflect the
changes. To ignore the need to maintain consistent and
current documentation on both programs and completed
tests is to insure increasing instability of the programs at
each successive modification.
Summary and general rules
We have proposed a general approach to developing
software with limited resources, in which verification
takes place throughout the development process. The
programmer's task of demonstrating that he has produced reliable and consistent code can be mnuch simplified
by early introduction of verification activities, even in
small, noncritical projects.
Our approach relies heavily on testing as the main
verification technique, supported by code reading and inspection. Test data sets are derived throughout the
development process, beginning at the requirements stage
when the data can be derived from functional analysis of
the system requirements. Inconsistency in these requirements is often detected during this analysis. In the
design stage, test data which stress the program control
and data structures are generated; when the final code
generation and construction stage begins, the test data set
is nearly complete. The remaining test data should be
chosen to exercise those special functions used by the programmer in the actual implementation. Care should be
taken to insure that the data and control structures
represented by the final programs are covered as extensively as possible by the test data.
The actual testing should proceed in an organized manner: small pieces are tested individually, then aggregated and retested. Test coverage metrics, useful for establishing a confidence level for the test data, can be implemented by instrumenting the code. Since testing is aimed at discovering errors, each design or code modification
resulting from such a discovery necessitates retest. The
process becomes an iterative one, whose goal is a high
level of confidence in the finished product.
Inspection, close reading and analysis of the code, and
manual simulation are important verification techniques
throughout development. Thorough analysis and hand
simulation are employed during both requirements and design in order to discover errors early. Each stage must be analyzed for consistency with prior stages and for internal consistency. Independent review of the work by a
colleague can reveal design and code errors.
Quality can be improved through planning, a staged
development, and the use of sound verification techniques at each stage, as summarized in these concluding
guidelines:
(1) Generate test data at all stages, especially in the early stages.
(2) Develop a means for calculating the expected
values for test data to compare with test results.
(3) Inspect requirements, design, and code for errors
and consistency.
(4) Be systematic in your approach to testing.
(5) Test pieces and then aggregate them.
(6) Save, organize, and annotate test runs.
(7) Concentrate testing on modules that exhibit the
most errors, and on their interfaces.
(8) Retest when modifications are made.
(9) Discover and use tools available on your system.
References
1. W.R. Adrion, M.A. Branstad, and J.C. Cherniavsky, Verification, Validation, and Testing, NBS Institute for Computer Science and Technology, Special Publication, Washington, D.C., scheduled for release in January 1981.
2. Guidelines for Documentation of Computer Programs and Automated Data Systems, NBS FIPS Pub. 38, Washington, D.C., Feb. 1976.
3. Automated Data Systems Documentation Standards,
Dept. of Defense Standard 7935.1-S, Washington, D.C.,
Sept. 1977.
4. "Program Design Techniques," EPP Analyzer, Vol. 17,
No. 3, Canning Pub., Inc., Mar. 1979.
5. M.V. Zelkowitz, "Perspectives on Software
Engineering," Computing Surveys, Vol. 10, No. 2, June
1978, pp. 197-216.
Selected bibliography
Goodenough, J.B., and S. Gerhart, "Toward a Theory of Test Data Selection," IEEE Trans. Software Eng., Vol. 1, No. 2, June 1975, pp. 156-173.
Hetzel, W.C., ed., Program Test Methods, Prentice-Hall, Englewood Cliffs, N.J., 1973.
Kernighan, B.W., and P.J. Plauger, The Elements of Programming Style, McGraw-Hill, New York, 1978.
Miller, E., and W.E. Howden, Tutorial: Software Testing and
Validation Techniques, 1978.*
Myers, G. J., The Art of Software Testing, John Wiley & Sons,
New York, 1979.
Yeh, R.T., ed., Current Trends in Programming Methodology, Vol. 2 (includes annotated bibliography on validation), Prentice-Hall, Englewood Cliffs, N.J., 1977.
*This tutorial is available from the IEEE Computer Society Publications
Office, 5855 Naples Plaza, Suite 301, Long Beach, CA 90803.
Martha Branstad is currently the manager
of the software quality program in the Institute for Computer Science and
Technology at the National Bureau of
Standards. Her current research interests
include software testing, software tools,
and program development environments.
Before joining NBS, Branstad was in the
computer research division at National
Security Agency. She has been an assistant
professor in the Computer Science Department at Iowa State
University and has taught courses for the University of
Maryland. She received her BS and PhD from Iowa State
University and her MS from the University of Wisconsin.
W. Richards Adrion is currently Program Director, Special Projects, in the Division of Mathematical and Computer Sciences
of the National Science Foundation. His
research interests include software
engineering, software tools, program
development environments, and software
testing. Before returning to the NSF,
Adrion was the leader of the Software
Engineering Group at the National Bureau of Standards. He has been a full-time faculty member at Oregon
State University and the University of Texas at Austin, and has
taught courses at The American and George Washington
Universities.
He received his BS and MEE from Cornell University and his
PhD from the University of Texas at Austin. Adrion is a member
of IEEE, ACM, SIAM, Phi Kappa Phi, Sigma Xi, and the New
York Academy of Sciences.
John C. Cherniavsky is an associate professor of computer science at the State
University of New York at Stony Brook. He is currently on leave and acting as program director for Theoretical Computer
Science at the National Science Foundation. He also holds a guest appointment at
the National Bureau of Standards. His interests include software quality assurance,
theoretical foundations of program
verification, and software metrics. He received his BS in
mathematics from Stanford University in 1969, and MS and
PhD degrees in computer science in 1971 and 1972 from Cornell
University. His professional memberships include SIAM, ACM, IEEE Computer Society, AMS, and ASL.