
Comput. Lang. Vol. 17, No. 1, pp. 75-82, 1992
Printed in Great Britain. All rights reserved
0096-0551/92 $3.00 + 0.00
Copyright © 1991 Pergamon Press plc
SPECIFICATION AND TESTING OF ABSTRACT DATA TYPES
PANKAJ JALOTE
Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur-208016, India
(Received 21 January 1990; revision received 6 December 1990)
Abstract--Specifications are a means to formally define the behavior of a software system or a module, and
form the basis for testing an implementation. The axiomatic specification technique is one of the methods to
formally specify an abstract data type (ADT). In this paper we describe a system called SITE that is
capable of automatically testing the completeness of an ADT specification. In addition, an implementation
of the ADT can also be tested, with SITE providing the test oracle and a number of testcases. For
achieving these two goals, the system generates an implementation of the specifications and a set of
testcases from the given specifications. For testing the completeness of specifications, the generated
implementation is tested with the generated testcases. For testing a given implementation, the generated
and given implementations are executed with the generated testcases and the results compared. One way
to utilize the system effectively is to first test the specifications and modify them until they are complete,
and then use the specifications as a basis for prototyping and testing. The system has been implemented
on a Sun workstation.
Abstract data types    Axiomatic specifications    Completeness of axioms    Software testing
1. INTRODUCTION
Software specifications form a critical component in developing reliable software. Specifications
provide the means to properly communicate the functionality of the system to be developed, and
form the basis for testing the software. Correct, complete and unambiguous specifications are
essential for developing error-free software. Formal specifications have been proposed as a means
for making the communication about the functionality of the system precise, and less prone to
errors due to mis-interpretations [1].
An important use of specifications is during the testing of software developed to implement the
specifications. During testing, the consistency between the behavior of actual software and the
specification is checked for the given testcases. With formal specifications, a desirable goal is to
automate, as much as possible, the testing of software against its specifications.
However, formal specifications need not always be complete. That is, the specifications, though
formally stated, may not completely specify the behavior of the software. Incompleteness provides
room for misinterpretations, and is likely to cause problems when the software is tested against
the specifications. Hence, another desirable goal with formal specifications is to test the completeness of the specifications themselves. It has been argued that testing specifications early is quite
important for reducing the cost of software construction [2], as detecting problems in the
specifications later may result in a considerable amount of wasted effort.
In this paper, we focus on abstract data types (ADTs). An ADT supports data abstraction, and
comprises a group of related operations that act upon a particular class of objects, with the
constraint that the behavior of the objects can only be observed through applications of the operations
[1]. Data abstraction is extremely useful for information hiding and supporting high-level
abstraction. Many languages like Simula [3], CLU [4], Euclid [5], and Ada [6] support data
abstraction.
Abstract data types have frequently been used for formal specifications, and various specification
techniques have evolved for specifying them [1]. One of the methods for specifying ADTs is the
axiomatic specification technique [7-11]. Axiomatic specification techniques employ axioms to
specify the behavior of the operations of the ADT.
In this paper we describe the Specification and Implementation TEsting (SITE) system. There
are two major goals of the system. First, to automatically test the completeness of a given set of
axioms. This testing process is similar to testing programs for correctness, and does not prove
completeness. The second goal of SITE is to test an implementation of the data type whose
specifications are provided.
The SITE system consists of two major components: an implementation generator and a testcase
generator. The implementation generator translates the specifications into an executable program,
such that if the specifications are not complete, the implementation is not complete and the
behavior of all the sequences of valid operations on the ADT is not defined. The testcase generator
produces a set of testcases which is used for testing completeness of specifications, as well as for
testing implementations. For testing the completeness of specifications, the generated implementation is executed with the generated testcases. For testing an implementation, the user implementation and the generated implementation are executed with the set of generated testcases and the
results compared.
In the next section we describe our specification language, and an example is given. We also
discuss the notion of completeness in more detail. The system and its components are described
in Section 3. In Section 4 we consider the testing of specifications, and in Section 5 testing of
implementations is described.
2. AXIOMATIC SPECIFICATION
The specification language we employ has two major components--syntactic specifications and
semantic specifications. The syntactic part defines the name of the data type, its parameters, the
type of the input parameters and the result for operations, and variables. The semantic part
specifies the axioms for operations. Axioms attach meaning to operations by specifying the
relationship between operations. The construction of axioms for an ADT is beyond the scope of
this paper and the reader is referred to [9, 11]. As an example, the specifications of the abstract
type queue are shown in Fig. 1.
s1.  queue [ Item ]
     declare
s2.      newq ( ) -> queue ;
s3.      addq ( queue , Item ) -> queue ;
s4.      deleteq ( queue ) -> queue ;
s5.      emptyq ( queue ) -> boolean ;
s6.      appendq ( queue , queue ) -> queue ;
s7.      frontq ( queue ) -> Item U undefined ;
     var
s8.      q , r : queue ;
s9.      i : Item ;
     forall
a1.      emptyq ( newq ( ) ) = true ;
a2.      emptyq ( addq ( q , i ) ) = false ;
a3.      frontq ( newq ( ) ) = undefined ;
a4.      frontq ( addq ( q , i ) ) = if emptyq ( q ) then i
                                     else frontq ( q ) ;
a5.      deleteq ( newq ( ) ) = newq ( ) ;
a6.      deleteq ( addq ( q , i ) ) = if emptyq ( q ) then newq ( )
                                      else addq ( deleteq ( q ) , i ) ;
a7.      appendq ( q , newq ( ) ) = q ;
a8.      appendq ( r , addq ( q , i ) ) = addq ( appendq ( r , q ) , i ) ;
     end

Fig. 1. Specifications of the type queue.
In SITE axioms are considered as rewrite rules [12, 13] and we require that the set of axioms
have the properties of finite-termination and unique termination [13]. That is, for any expression
no infinite rewriting is possible, and the rewriting always terminates in a unique expression. Finite
termination ensures that during execution the implementation generated by SITE will not get into
an infinite loop. With unique termination, the axioms can be applied in any order by the
implementation. No assumption is made about the structure of axioms, and the expression of
operations in the axioms need not be limited to, for example, the structure that results from the
axiom construction technique given in [9].
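To make the rewrite-rule view concrete, the following is a minimal Python sketch (a toy stand-in for SITE's generated C machinery; the tuple representation and the name `rewrite` are our own, not from the paper). Terms are nested tuples, and the queue axioms a1-a8 of Fig. 1 are applied innermost-first until no axiom matches, relying on the finite- and unique-termination assumptions stated above.

```python
# Terms are nested tuples, e.g. ('addq', ('newq',), 5).
def rewrite(term):
    """Apply the queue axioms a1-a8 as rewrite rules until none applies."""
    if not isinstance(term, tuple):
        return term                                   # an Item, e.g. 5
    op, *args = term
    args = [rewrite(a) for a in args]                 # reduce sub-terms first
    if op == 'emptyq':
        (q,) = args
        if q == ('newq',): return True                # a1
        if q[0] == 'addq': return False               # a2
    elif op == 'frontq':
        (q,) = args
        if q == ('newq',): return 'undefined'         # a3
        if q[0] == 'addq':                            # a4
            _, r, i = q
            return i if rewrite(('emptyq', r)) else rewrite(('frontq', r))
    elif op == 'deleteq':
        (q,) = args
        if q == ('newq',): return ('newq',)           # a5
        if q[0] == 'addq':                            # a6
            _, r, i = q
            return ('newq',) if rewrite(('emptyq', r)) else \
                   ('addq', rewrite(('deleteq', r)), i)
    elif op == 'appendq':
        q, p = args
        if p == ('newq',): return q                   # a7
        if p[0] == 'addq':                            # a8
            _, p2, i = p
            return ('addq', rewrite(('appendq', q, p2)), i)
    return (op, *args)                                # normal form: no axiom applies

print(rewrite(('frontq', ('addq', ('addq', ('newq',), 5), 6))))  # 5
```

Because unique termination is assumed, the order in which the branches fire does not affect the final normal form.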
Two important issues for a set of axioms are completeness and consistency. In this paper we
restrict attention to the completeness issue. Consistency means that the semantics specified by the
different axioms are not contradictory. Determining consistency is, in general, an undecidable
problem, but in practice it is often relatively simple to demonstrate the consistency of axioms [11].
The Knuth-Bendix algorithm [12] can also be used.
For defining completeness, the operations of a data abstraction are divided into two categories--those that return the ADT under consideration, and others which return some other type. The latter
type of operations are called behavior operations. We refer to the former as non-behavior
operations. In the example of the queue, the operations emptyq and frontq are the only behavior
operations.
In this paper, for completeness we use the notion of sufficient completeness [9, 11]. A set of
axioms specifying an abstract type is considered sufficiently complete if, and only if, for every
possible instance of the abstract type (i.e. one that can be created by some sequence of non-behavior
operations), the result of all the behavior operations of the type is defined by the specifications
[9, 11].
This notion of sufficient completeness is from the external viewpoint, i.e. the external behavior
of the ADT should always be specified. The specifications for the type queue given earlier can be
shown to be sufficiently complete. To show incompleteness we need to have an instance such that
the result of some behavior operation is not defined. This is the approach we take during testing.
However, the general problem of determining if a set of axioms is sufficiently complete is
undecidable [9].
The most obvious reason for incompleteness is that some of the axioms are not provided.
Though completeness is defined from the point of view of the externally observable behavior of the
type, incompleteness will result even if the missing axioms are for non-behavior operations, as this
could lead to the construction of an instance on which the result of some behavior operation is not
defined. In this paper, we only consider the incompleteness caused by missing axioms.
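The following hypothetical sketch (function and representation are ours, not SITE's) illustrates how a missing non-behavior axiom surfaces through a behavior operation. Suppose axiom a6 (deleteq over addq) is missing: the instance deleteq(addq(addq(newq(),5),6)) then has no reduced form, and applying frontq to it falls through to a default case.

```python
def frontq(q):
    """frontq defined by axioms a3 and a4 only; terms are nested tuples."""
    op = q[0]
    if op == 'newq':
        return 'undefined'                           # a3
    if op == 'addq':                                 # a4 (emptyq inlined)
        return q[2] if q[1][0] == 'newq' else frontq(q[1])
    raise LookupError('no axiom defines frontq on %s(...)' % op)  # default

stuck = ('deleteq', ('addq', ('addq', ('newq',), 5), 6))  # a5/a6 missing
try:
    frontq(stuck)
except LookupError as exc:
    print('incompleteness detected:', exc)
```

On complete queue instances such as addq(addq(newq(),5),6), the same function happily returns 5; the incompleteness is visible only because the unreduced deleteq term reaches a behavior operation.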
3. THE SITE SYSTEM
The SITE system is implemented in C on a Sun Workstation and has two major components--the implementation generator and the testcase generator. The implementation generator is like a
compiler that takes in the specifications of the data type, and produces an implementation in C
for the specifications. As the axioms are tested during the completeness testing, the testcase
generator produces a set of testcases that depend largely on the syntactic part of the specifications.
This ensures that the testcases themselves are not affected by the incompleteness of axioms.
3.1. Implementation generator
In order to be able to generate an implementation of an arbitrary abstract type, we need to have
a general structure capable of representing instances of different data types. For this trees are
suitable. An instance of the data type is represented as a general tree, and the operations on the
ADT as tree manipulation functions. Each node in the tree contains, besides pointers to access the
parent node and different sub-trees, information about the operation that created that node,
including the parameters and name of the operation. This representation is quite general and also
simplifies the translation of operations into tree manipulation functions. Tree representation has
been used for representing expressions in interpretive systems [8, 10, 14]. The tree representation
in SITE is similar to others, though it is used to actually represent an instance of an ADT, and
is manipulated by operations on the ADT. In interpretive systems, the tree is used by the
interpreter. In addition, in SITE, the input specifications need not be complete and the constructors (constructors are a subset of the non-behavior operations using which any instance of the
ADT can be constructed) of the ADT need not be known (as is the case in the representation
proposed in [10]).
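A hypothetical sketch of this tree representation (class and method names are ours): each node records the operation that created it, its parameters and sub-trees, and a pointer to its parent, so an instance is simply the history of non-behavior operations applied to it.

```python
class Node:
    def __init__(self, op, *children):
        self.op = op                        # operation that created this node
        self.children = list(children)      # sub-trees and scalar parameters
        self.parent = None
        for c in self.children:
            if isinstance(c, Node):
                c.parent = self

    def traversal(self):
        """Preorder traversal string, of the kind the matching function
        compares against axiom left-hand sides."""
        kids = ','.join(c.traversal() if isinstance(c, Node) else str(c)
                        for c in self.children)
        return '%s(%s)' % (self.op, kids)

q = Node('addq', Node('addq', Node('newq'), 5), 6)   # the queue holding 5, 6
print(q.traversal())  # addq(addq(newq(),5),6)
```

The traversal string makes the "last operation performed" directly inspectable, which is what the matching function described below exploits.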
Two sets of functions are generated for defining the operations on the ADT as tree manipulation
functions. First, functions are generated from the axioms. Generating these functions is relatively
straightforward. The function for an axiom is simply the right side of the axiom, with each variable
replaced by a reference string in the tree. The reference string for a variable is determined by the
left side of that axiom, and is the symbolic address in the trees representing the instances of the
data type on which the operation is performed.
The functions from the axioms of an operation are special case functions. These can be used to
implement the operation itself, if it can be decided which of these functions to invoke when the
operation is performed. For example, if emptyq is applied on an instance whose tree is such that
the last operation is newq, then the function corresponding to the axiom a1 should be invoked.
To invoke the appropriate axiom function, a function for each operation is generated (which bears
the name of the operation).
These functions employ a "matching function" to decide which of the axioms, if any, is
applicable. During implementation generation, an encoding of the traversal strings of the parse
trees of the parameters in the left side of the axiom are also generated. The matching function
compares these strings with the traversal strings of the instances on which the operation is being
performed to determine which axiom is applicable. If an axiom is applicable, the matching function
returns the number of that axiom, which is used to invoke the function corresponding to that
axiom. If no axiom is applicable, the default condition is invoked.
A new node is created for the tree whenever a default condition is invoked for a non-behavior
operation. With the new node and the trees representing the instances on which the operation is
performed, a new tree is created. This tree represents the instance resulting from applying the
operation.
For the behavior operations the default condition has a different meaning. If the specifications
are complete, the axioms should specify the result of applying any behavior operation on any
instance of the data type. Hence, the default condition means that the set of axioms is incomplete.
This is the property that is used to test for incompleteness. The goal for detecting incompleteness
is to construct an instance of the abstract type on which the result of performing some behavior
operation is not defined. Further details about the implementation generation can be found in [15].
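The dispatch described above can be sketched as follows (the table and names are our own simplification, not SITE's generated code: the "matching function" here inspects only the last operation recorded in the instance, whereas SITE compares full traversal strings). Only the emptyq axioms are tabulated, so any other behavior operation illustrates the default condition.

```python
AXIOMS = {
    'emptyq': [('newq', lambda q: True),     # a1: emptyq(newq()) = true
               ('addq', lambda q: False)],   # a2: emptyq(addq(q,i)) = false
}

def apply_behavior(op, tree):
    """tree is a tuple such as ('addq', ('newq',), 5); tree[0] names the
    operation that created the instance."""
    for last_op, axiom_fn in AXIOMS.get(op, []):
        if tree[0] == last_op:               # matching function: axiom applies
            return axiom_fn(tree)
    # default condition for a behavior operation => specification incomplete
    raise RuntimeError('incomplete: %s undefined on %s(...)' % (op, tree[0]))

print(apply_behavior('emptyq', ('addq', ('newq',), 5)))  # False
```

For a non-behavior operation the default branch would instead create a new tree node, as described above; only for behavior operations does it signal incompleteness.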
3.2. Testcase generator
The effectiveness of any testing process is highly dependent on the choice of testcases. The need
for proper testcases is even more important when testing for incompleteness. Completeness is
defined from the point of view of external behavior, and can only be detected by showing that
there is an instance of the data type for which the specifications do not define the value for some
behavior operation. In other words, incompleteness can only be detected by applying a behavior
operation. Furthermore, only the result of a behavior operation can be meaningfully displayed and
compared. For displaying or comparing the values of abstract data type instances, special functions
on that type will be needed. For these reasons, each testcase is considered to be an expression of
the form
behavior_operation (an ADT instance)
As our goal is to detect the incompleteness of the set of axioms, it is important that the testcase
generation strategy does not depend heavily on the axioms. The testcase generator in our system
generates the testcases largely from the syntactic part of the specifications. This approach is
different from the testcase generation strategy proposed in [16], where the testcase generation
depends on the structure of the axioms.
The basic scheme for generating testcases is simple. From the syntactic specifications, all the valid
expressions that produce an ADT instance, up to a given maximum nesting depth of operations,
are generated. Testcases are generated by applying the different behavior operations on these
instances.
 1. emptyq(newq())                          17. emptyq(deleteq(deleteq(q1)))
 2. frontq(newq())                          18. frontq(deleteq(deleteq(q1)))
 3. emptyq(addq(newq(),90))                 19. emptyq(addq(deleteq(q1),81))
 4. frontq(addq(newq(),90))                 20. frontq(addq(deleteq(q1),81))
 5. emptyq(deleteq(newq()))                 21. emptyq(appendq(deleteq(q1),q2))
 6. frontq(deleteq(newq()))                 22. frontq(appendq(deleteq(q1),q2))
 7. emptyq(appendq(newq(),q2))              23. emptyq(appendq(q1,deleteq(q1)))
 8. frontq(appendq(newq(),q2))              24. frontq(appendq(q1,deleteq(q1)))
 9. emptyq(addq(addq(q1,75),84))            25. emptyq(addq(appendq(q1,q2),74))
10. frontq(addq(addq(q1,75),84))            26. frontq(addq(appendq(q1,q2),74))
11. emptyq(deleteq(addq(q1,75)))            27. emptyq(deleteq(appendq(q1,q2)))
12. frontq(deleteq(addq(q1,75)))            28. frontq(deleteq(appendq(q1,q2)))
13. emptyq(appendq(addq(q1,75),q2))         29. emptyq(appendq(appendq(q1,q2),q2))
14. frontq(appendq(addq(q1,75),q2))         30. frontq(appendq(appendq(q1,q2),q2))
15. emptyq(appendq(q1,addq(q1,75)))         31. emptyq(appendq(q1,appendq(q1,q2)))
16. frontq(appendq(q1,addq(q1,75)))         32. frontq(appendq(q1,appendq(q1,q2)))

Fig. 2. Testcases generated for the type queue.
The immediate question is what should be the depth for which the expressions are generated.
In SITE the testcases up to a depth d are generated, where d is the maximum nesting depth of any
axiom. In the case of queue, the maximum nesting depth of the axioms is 2 (i.e. d = 2). Depth is
the only property of the axioms considered in testcase generation. Usually, many axioms may have
depth d, and the omission of a few axioms will not affect d. For example, in the type queue, all the axioms
have a depth of 2. Some of the testcases generated for the type queue are shown in Fig. 2.
The intuitive reason for selecting d as the depth up to which the expressions are generated is that,
since expressions of up to depth d are considered in the axioms, the expressions up to that
depth are the most significant, and the behavior of different expressions up to depth d will depend on
the particular composition of operations. The depth of axioms for most data types is not less than
2, because the left side of the axioms usually contains some composition of operations. In fact,
if we follow the heuristics given in [9] for writing axioms, then the left side of all the axioms will
have depth 2. Depth of 2 or more ensures that the testcases generated by our scheme will exercise
different compositions of the operations. This is another reason for selecting our testcase generation
criteria. Furthermore, this strategy also produces testcases that exercise the "boundary conditions"
like frontq(newq()), deleteq(newq()) etc. Boundary condition testcases are generally regarded as
"high yield" testcases for program testing [17]. The usefulness of the generated testcases for testing
specifications and implementations is discussed in the next sections.
4. TESTING SPECIFICATIONS
For testing the completeness of specifications, the generated implementation and testcases are
compiled together and executed. The system configuration for testing completeness is shown in
Fig. 3.
[Figure 3: the specifications are fed to SITE, which produces the generated implementation and the testcases; executing the implementation on the testcases yields the testing results.]

Fig. 3. Testing completeness of specifications.
In SITE, incompleteness can be detected only by applying a behavior operation on an instance
of the ADT. This is one reason why the testcases have the structure described above. For testing
incompleteness of specifications, the choice of testcases is very critical, and if proper testcases are
not provided, the incompleteness of axioms may go undetected. For example, suppose that the two
axioms for the deleteq operation are missing, and we give the following testcase for testing.
frontq (addq (addq (newq (), 5), 6))
For this testcase the generated implementation will return the correct answer (which is 5), and the
incompleteness will not be detected. This happens because the functions corresponding to the
missing axioms were not needed during the execution.
To evaluate the effectiveness of our system, we performed some experiments. In our experiments
we only consider incompleteness caused by missing axioms. For the type queue, we formed eight
different incomplete specifications by deleting one axiom at a time. These specifications were then
used to test if the system can detect incompleteness. Clearly, if the omission of any one axiom can
be detected, the omission of a set of them can also be detected by the same set of testcases. The set
of testcases shown earlier was used (q1 and q2 were set to newq()). The testcases were able to detect
the incompleteness in all the eight sets of axioms.
Similar experiments were also performed on many standard ADTs like stacks, trees, sets, strings
etc., and in all the cases incompleteness caused due to missing axioms was detected.
The information about which testcases detect the incompleteness can be quite useful in
determining what axioms are missing. Hence, the system not only detects the incompleteness, but
provides a good feedback about the source of incompleteness. This property of the system can be
used as an aid for writing complete specifications. Further details about testing of specifications
are given in [18].
5. TESTING IMPLEMENTATIONS
The SITE system can also be used for automating the testing of user implementations of data
types. The concept of using specifications for testing implementations of data abstraction is also
used in DAISTS [19, 20]. The DAISTS approach is to use the user implementation to evaluate the
left and right side of all axioms for some testcases, and then compare the results. This requires an
equality function for determining if two ADT instances are equal. DAISTS requires the user to
provide the equality function as well as the testcases.
The SITE approach for testing implementations is different from DAISTS. The generated
implementation and the implementation under testing are executed with the same set of testcases.
The output of the two implementations is then compared. Since the structure of testcases is such
that the outermost operation is a behavior operation, no equality function is needed to compare
instances of two ADTs. The system configuration for implementation testing is shown in Fig. 4.
The testcases generated by SITE can be used for testing implementations. The user can also
augment the set of testcases, if desired. However, the structure of the testcases must be as described
earlier.
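The back-to-back comparison can be sketched as follows. The two "implementations" here are hypothetical stand-ins (dictionaries mapping testcase text to a result); in SITE they would be the generated and the user programs. No ADT equality function is needed, because each testcase's outermost operation is a behavior operation returning an ordinary printable value.

```python
def compare(oracle, candidate, testcases):
    """Run both implementations on every testcase; report mismatches."""
    failures = []
    for tc in testcases:
        expected, actual = oracle(tc), candidate(tc)
        if expected != actual:
            failures.append((tc, expected, actual))
    return failures

# hypothetical stand-ins for the generated (oracle) and user implementations
oracle    = {'emptyq(newq())': True, 'frontq(addq(newq(),5))': 5}.get
candidate = {'emptyq(newq())': True, 'frontq(addq(newq(),5))': 6}.get  # buggy

print(compare(oracle, candidate, ['emptyq(newq())', 'frontq(addq(newq(),5))']))
# -> [('frontq(addq(newq(),5))', 5, 6)]
```

Each failure names the testcase and both outputs, which is the feedback the tester uses to localize the error.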
[Figure 4: SITE takes the specifications and produces the generated implementation and the testcases; the generated and user implementations are both executed on the testcases, and a comparator produces the testing results.]

Fig. 4. Testing an implementation.
Testing implementations is more complex than testing for incompleteness, in that the types of
errors in implementations are limited only by the ingenuity of the implementor, and cannot be
predicted. The goal of software testing, in general, is not to show that there are no errors (which
can only be done in very special circumstances), but to uncover as many errors as can be
detected with a reasonable number of testcases.
The cost of testing software can be quite high and is often more than the cost of designing and
implementing the software. The high cost is largely due to the cost of generating testcases and the
cost of interpreting the result of the testcases. In order to decide if the results produced by an
implementation are correct, testing requires a "test oracle" that can tell the correct output for the
testcases. Frequently, the test oracle is a human being who works out by hand the correct result
of the testcases. This makes testing unreliable and expensive.
In SITE the testcases are automatically generated and a reliable test oracle is also produced,
thereby drastically reducing the effort for testing. To determine the coverage of the testcases
generated by SITE, we performed some experiments. Two data types were chosen--a priority queue
with some added operations to make it more interesting, and a binary tree. A set of 6 sophomore
students was selected to implement each data type. Each of the students had knowledge of Pascal,
but had not taken the data structures course.
The variables in the testcases (like q1 and q2) were initialized by the operation to create an
ADT instance (like newq). Each implementation was first tested with a set of testcases with depth
of 1. The author of the program had to correct any errors detected by these testcases, and then
repeat this process. All the errors detected were reported. Errors introduced during error-correction
were not included. Once an implementation successfully executed all the testcases for depth 1, it
was then tested with testcases of depth 2. A similar process for correcting and reporting errors was
performed. Finally, extensive testing was done with a mixture of testcases generated for depth
3 and some extra testcases. The variables were also assigned different initial values for extensive
testing.
We found that depth 1 testcases, though they detected on average 0.8 errors, left some errors
undetected. With testcases of depth 2 an additional 2 errors were detected, on average. And
extensive testing detected an additional 1 error. Further details of the experiments are given in [21].
Clearly, depth 1 testcases are insufficient. The depth 2 cases are also insufficient if the variables
are initialized with the operation that creates an ADT instance. However, we found that if these
variables were given a more "interesting" initial value, more errors were detected. By assigning an
expression consisting of many operations to initialize the variables, we effectively increase the depth
of the testcases, without the combinatorial explosion which occurs by merely increasing the
depth in the testcase generator. In most of the experiments, these testcases were able to detect all
the errors that were detected by extensive testing.
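The effect of "interesting" initial values can be illustrated with a small sketch (the expressions are hypothetical examples in the queue notation used throughout): substituting a composed expression for the free variable q1 raises the effective depth of a depth-2 testcase without enlarging the generated set.

```python
shallow = 'frontq(deleteq(addq(q1,75)))'     # a generated depth-2 testcase
q1_init = 'addq(addq(newq(),1),2)'           # an "interesting" initial value
deep = shallow.replace('q1', q1_init)        # effective depth is now 4
print(deep)  # frontq(deleteq(addq(addq(addq(newq(),1),2),75)))
```

One substituted value deepens every testcase that mentions q1 at once, whereas raising d in the generator multiplies the number of testcases.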
Though reducing the number of testcases is not a goal of SITE, as the testcases as well as the
test oracle are both automatically generated, we would still like to keep the number of testcases low,
without sacrificing the effectiveness of testing. From the experiments we believe that in most
situations testcases with depth 2, with carefully selected initial values, will detect most of the
implementation errors. However, if desired, the depth of the testcases can be increased and testcases
can also be added by the tester.
6. CONCLUSION
Abstract data types are frequently used for formal specifications. One of the methods for
formally specifying ADTs is the axiomatic specification technique, which employs axioms to specify
the behavior of the operations of the ADT. An important issue with axiomatic specifications is whether
the set of axioms is complete. Determining the completeness of an arbitrary set of axioms is
undecidable, and frequently, formal verification on a case-by-case basis is done to determine if a
set of axioms is complete.
In this paper we describe the SITE system, which can automatically test the completeness of a
given set of axioms. The testing process is similar to testing programs for correctness, and does
not prove completeness. Another property of SITE is that it can test a given implementation of
the data type, from the specifications of the ADT.
To provide these services the SITE system has an implementation generator and a testcase
generator. The implementation generator faithfully translates the specifications into an executable
program, such that if the specifications are not complete, the implementation is not complete and
the behavior of a sequence of valid operations on the ADT is not always defined. The testcase
generator produces a set of testcases which is used for testing completeness of specifications, as
well as for testing implementations.
The generated implementation is executed with the generated testcases to test the completeness
of specifications. In our experiments we found that incompleteness caused due to missing axioms
was detected for a variety of data type specifications. For testing an implementation, the given and
generated implementations are executed with the set of testcases and the results compared. We
found that in most cases the testcases generated by the testcase generator are capable of detecting
all the errors that we were able to detect by extensive testing, if the initial values assigned to the
variables used in the testcases are properly chosen by the tester.
We believe that a system like SITE can be very useful in software development, as it can be used
for writing formal specifications, rapid prototyping and software testing. Due to automated testing,
SITE can considerably reduce the cost for testing, and improve software reliability.
REFERENCES
1. Liskov, B. H. and Zilles, S. N. Specification techniques for data abstractions. IEEE Trans. Softw. Engng SE-1: 7-19;
1975.
2. Kemmerer, R. A. Testing formal specifications to detect design errors. IEEE Trans. Softw. Engng SE-11: 32-43; 1985.
3. Dahl, O. J., Myhrhaug, B. and Nygaard, K. The SIMULA 67 common base language. Norwegian Computing Center,
Oslo, Publication S-22; 1970.
4. Liskov, B. H., Snyder, A., Atkinson, R. and Schaffert, C. Abstraction mechanisms in CLU. Commun. ACM 20:
564-576; 1977.
5. Chang, E. and Zilles, S. N. Abstract data types in Euclid. In Notes on Euclid (Edited by Elliot, W. D. and Barnard,
D. T.), Technical Report 82. Computer Systems Research Group, Department of Computer Science, University of
Toronto; 1977.
6. U.S. Department of Defense. Reference Manual for the Ada Programming Language. Berlin: Springer; 1983.
7. Gehani, N. Specifications: formal and informal--a case study. Softw. Pract. Exper. 12: 433-444; 1982.
8. Goguen, J. and Tardo, J. J. An introduction to OBJ: a language for writing and testing formal algebraic program
specifications. In Proceedings of Specifications of Reliable Software, pp. 170-189; 1979.
9. Guttag, J. V. and Horning, J. J. The algebraic specification of abstract data types. Acta Inform. 10: 27-62; 1978.
10. Guttag, J. V., Horowitz, E. and Musser, D. R. Abstract data types and software validation. Commun. ACM 21:
1048-1063; 1978.
11. Guttag, J. Notes on type abstraction (version 2). IEEE Trans. Softw. Engng SE-6: 13-23; 1980.
12. Knuth, D. E. and Bendix, P. B. Simple word problems in universal algebras. In Computational Problems in Abstract
Algebra (Edited by Leech, J.), pp. 263-297. New York: Pergamon Press; 1970.
13. Musser, D. R. Abstract data type specification in the AFFIRM system. IEEE Trans. Softw. Engng SE-6: 24-32; 1980.
14. O'Donnell, M. J. Equational Logic as a Programming Language. The MIT Press; 1985.
15. Jalote, P. Synthesizing implementations of abstract data types from axiomatic specifications. Technical Report
UMIACS-TR-87-16, University of Maryland, College Park; 1987.
16. Bougé, L., Choquet, N., Fribourg, L. and Gaudel, M. C. Test sets generation from algebraic specifications using logic
programming. Université de Paris-Sud, LRI, No. 240; 1985.
17. Myers, G. J. The Art of Software Testing. New York: Wiley; 1979.
18. Jalote, P. Testing the completeness of specifications. IEEE Trans. Softw. Engng 15: 526-531; 1989.
19. Gannon, J., McMullin, P. and Hamlet, R. Data abstraction implementation, specification and testing. ACM Trans.
Progr. Lang. Syst. 3: 211-223; 1981.
20. McMullin, P. R. and Gannon, J. D. Combining testing with formal specifications: a case study. IEEE Trans. Softw.
Engng SE-9: 328-335; 1983.
21. Cabalero, M. G. A case study in automatic test case generation for data abstraction. Project Report, Program Library,
Department of Computer Science, University of Maryland.
About the Author--Pankaj Jalote got his B.Tech. in Electrical Engineering from the Indian Institute of
Technology Kanpur in 1980, an M.S. in Computer Science from the Pennsylvania State University in 1982,
and a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1985. From
1985 to 1989 he was an Assistant Professor in the Department of Computer Science at the University
of Maryland at College Park. Since 1989 he has been an Assistant Professor in the Department of
Computer Science and Engineering at the Indian Institute of Technology Kanpur. His research interests
are Software Engineering, Fault-tolerant Computing Systems and Distributed Systems.