GETTING THE REQUIREMENTS RIGHT
A PROFESSIONAL APPROACH
This document is a copy of a paper presented at the 1997 IEEE conference on Software Technology and
Engineering Practice (STEP ’97), held at the Holiday Inn, Kings Cross, London, UK, 14-18 July 1997.
ABSTRACT
System engineers and system analysts are continually inundated with demands to
adopt this design methodology or that implementation support tool. There is no
shortage of options. However, unless it is very clear what it is that is supposed to be
designed and/or implemented, such techniques and tools are likely to be wastefully
employed, producing the wrong thing. It is incumbent upon all professional engineers,
before committing other people's money and resources, to be able to confirm that they
are setting to work with a good requirement specification.
What is a 'good' requirements specification and how may we ensure that we have one?
This paper describes a radically different approach to producing re-usable
requirements specifications that achieves levels of clarity and precision hitherto
unattainable. Above all the approach provides the ability to demonstrate clearly to a
client the actual content of a specification as opposed to its supposed content - in an
information sense. Such ability is vital if the professional engineer is to be able to
carry his client with him or if the client is to be convinced when changes are required
- before any serious commitment to consequential or candidate design is made.
LBH/lld/3/04-10-1997
1
CSA/030155.299/46w
1 INTRODUCTION
All project-related activity should proceed from a clear, quality-assessed statement of
requirements. When embarking upon the procurement of a system whose requirements need
to be formally stated - whether as a basis for inviting tenders, for responding to tenders or as
the starting point for design or implementation - one cannot overestimate the importance of
producing a good requirements specification document, of ensuring its completeness,
consistency and correctness, and of recognising the impact of all three attributes on long-term
costs and productivity. The Standish Group, in its definitive report CHAOS [see Ref. 11],
draws attention to the fact that the single biggest problem besetting project managers is the
inability to produce an adequate specification of requirements.
Errors in requirements are pervasive, dangerous and costly [Faulk, 1995]. It is
beginning to be accepted that the majority of errors are introduced during the
requirements definition phase of any project [GAO, 1992]. There is also growing
evidence that errors in requirements can be the cause of serious accidents. A 1992
study concluded that the major source of software-related errors in NASA's Voyager
and Galileo spacecraft was errors in functional and interface requirements [Lutz,
1993]. If requirements errors are not corrected until the system has been implemented,
the cost can be extremely high. It has been estimated [Boehm, 1981 and Fairley,
1985] that correcting requirements errors during implementation can cost up to 200
times as much as if they were corrected during requirements definition.
Given the high incidence of requirements errors, the seriousness of failures that may
result and the high cost of late correction, techniques for improving the quality of
requirements documents, and for early detection of requirements errors, are crucial.
This paper describes some of the fundamental concepts that underlie the Generic-Model Approach to Requirements Capture (G-MARC). These concepts were
developed and integrated into a requirements engineering methodology [Hunt, 1992]
by a team of professional system and information engineers for use by the same kind
of people. This methodology is now commercially available and is supported by a
PC-based software package whose use ensures adherence to the methodology. Without
such support the manual workload is impossibly complex.
The importance of the investment involved in the production of a Requirements
Specification is almost invariably underestimated - to the ultimate detriment (in both
financial and performance terms) of the system concerned. With modern, highly
complex systems the need for properly structured, carefully controlled specifications,
which are internally correct, consistent and complete and which adequately satisfy the
requirement, is vital. Furthermore there is a need to ensure that such specifications
can be readily modified and re-used without incurring too much consequential effort
and/or delay or without introducing spiraling increases in cost and complexity. In
order to approach this virtual Nirvana, computer aided support is essential. This is the
case primarily because of the enormous decrease in routine manual calculation and
information sorting that is thereby achieved. The human participants in the process
are correspondingly relieved of the error-prone, tedious and, more often than not,
fruitless series of trial solutions that are nearly always necessary in order to generate
an adequate context for balanced judgments to be made during the development of a
system specification.
2 PROBLEM AREAS
Aside from the usual project management issues there are a number of other areas of
consideration that need to be taken into account by the professional engineer when
setting up the conceptual framework that defines a given project. Three of the most
important of these areas are:
- Psychological
- Philological
- Acceptance
Firstly, Psychological considerations involve a proper understanding of the limitations
to which everyone's thinking is subject. These limitations become manifest when
attempting to describe a new idea during the process of passing it on to someone else - as in a requirements specification. We are all constrained in this context by our
previous experience. As a consequence we tend to understand and describe things in
terms of our previous experience and, in so doing, we inadvertently close the door to
other possibilities. The professional engineer needs to be aware of such issues in
order to be able to take appropriate precautionary measures during the process of
interpreting incoming information.
Another area of psychological difficulty is the trouble everyone has in confining any
given set of statements to a single level of detail. Again and again, in almost any
technical description that may be selected, it will be found that there are passing
comments and observations which properly belong to some other level of detail. The
presence of such passing comments, far from improving clarity, only serves to divert
attention from the issues that should be addressed at the current level of
detail.
Secondly, Philological considerations include all those difficulties that arise when
setting down natural-language descriptions of anything. Formal, mathematically
provable languages are not sufficiently useful here, because it has been
demonstrated that mathematically provable languages cannot be applied to open
systems [Hogan, 1995]. Open systems are those that incorporate non-deterministic
processes - i.e. all natural systems!
We are all encouraged to make our documents 'readable'. Thus, it is hoped, others
will find it easy to read such documents and will thereby be encouraged to learn more
about the subject matter. The problem with readable documents is that they
encourage a narrative style that leads to compound and complex sentences. An
example of a compound sentence in this context is:
The system will exchange invoices, receipts and special billing
instructions with regional offices.
This sentence is compound because it can readily be rephrased in the form of three
simple sentences; one referring only to 'invoices', one to 'receipts' and one to 'special
billing instructions'. Such simple sentences are referred to as 'atomic' sentences.
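Decomposing a compound sentence into atomic sentences is mechanical once the coordinated objects have been identified. The Python sketch below illustrates the idea; it assumes the analyst supplies the list of objects (identifying them automatically is a hard natural-language problem), and the function name `to_atomic` is invented for illustration.

```python
def to_atomic(sentence: str, objects: list[str]) -> list[str]:
    """Rewrite a compound requirement as one atomic sentence per object.

    `objects` lists the coordinated noun phrases in the sentence, in the
    order they appear; they are supplied by the analyst, not detected.
    Assumes at least two objects joined as 'a, b and c'.
    """
    # Reconstruct the coordinated list exactly as it appears in the text.
    joined = ", ".join(objects[:-1]) + " and " + objects[-1]
    # Substitute each object in turn for the whole coordinated list.
    return [sentence.replace(joined, obj) for obj in objects]

compound = ("The system will exchange invoices, receipts and "
            "special billing instructions with regional offices.")
for atomic in to_atomic(compound,
                        ["invoices", "receipts",
                         "special billing instructions"]):
    print(atomic)
```

Each resulting sentence refers to exactly one object and can therefore carry contractual liability on its own.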
Complex sentences on the other hand are not capable of being so easily decomposed.
Complex sentences typically need to be completely rephrased in order to make their
meaning unambiguous and in order to be able to decompose them into atomic
components.
Non-atomic sentences can be, and indeed are, difficult to assign contractual liability to
in a precise manner. This is because their meaning is typically obscured by
grammatical and semantic complexity. Professional engineers should employ their
utmost endeavors to eliminate spurious complexity of this kind if they are to minimise
and manage the real complexity of any consequential design. By so doing the
opportunity for error is minimised and long-term costs are significantly reduced. All
of this must surely be in the interests of the client and therefore of the professional
engineer.
Another philological problem is the cavalier over-use of certain words to which we
are all prone. We all use the same word to mean many different things (e.g. the word
'system') and we often use different words to refer to the same thing. Such looseness
inevitably introduces confusion into descriptions that need to be, by virtue of the
financial consequences of misinterpretation, as precise and unambiguous as we can
possibly make them.
Thirdly and finally there are Acceptance considerations. Every specification
document that the author of this paper has examined to date has been found to contain
a large number of subjective requirements. These are requirements whose intention is
inevitably subject to interpretation by the reader. Examples of such requirements are:
- The system must be user friendly.
- The hardware must be easy to maintain.
- Rapid response is essential.
The ultimate realisation of requirements of this kind, in the system that is being
specified, is incapable of being verified in any objective manner without recourse to
other more definitive information. If such information is not provided, or referred to,
then uncertainty will exist and there is potential for error. The above examples are
necessarily rather obvious - in order to make the point. In practice much more subtle
forms exist to plague the activities of the professional engineer intent on identifying a
particular need.
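Obvious subjective wording of this kind can at least be flagged mechanically, leaving only the subtler forms to the engineer's judgment. The Python sketch below is illustrative only: the word list is an invented sample rather than part of any real methodology, and a practical checker would draw on a curated glossary.

```python
# Hypothetical sample of subjective terms; a real checker would use a
# maintained glossary agreed with the client.
SUBJECTIVE_TERMS = {"user friendly", "user-friendly", "easy", "rapid",
                    "fast", "flexible", "adequate", "efficient", "robust"}

def flag_subjective(requirement: str) -> list[str]:
    """Return, sorted, the subjective terms found in a requirement."""
    text = requirement.lower()
    return sorted(term for term in SUBJECTIVE_TERMS if term in text)

for req in ["The system must be user friendly.",
            "The hardware must be easy to maintain.",
            "Rapid response is essential.",
            "Any record must be retrieved within 5 seconds."]:
    hits = flag_subjective(req)
    status = "SUBJECTIVE" if hits else "verifiable"
    print(f"{status:>10}: {req}")
```

Only the last requirement above states an objectively testable acceptance figure; the other three would be flagged for rework.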
Another important aspect of Acceptance is that associated with the dynamic viability
of the system being specified. This issue is associated with the problem of
demonstrating that the given specification will lead to the realisation of a system that
will perform in the correct manner. That is: does the requirement call for something
which is dynamically viable - and if not why not?
All the above issues - and more - are capable of being addressed and resolved prior to
any commitment to design [Collins, 1994]. Therefore, if the ability exists, it must
surely be in the interests of the client (and therefore the professional engineer) to carry
out such resolution - at least to some extent.
3 THE METHODOLOGY

3.1 General
The previous section has outlined just a few of the problems that beset the
professional engineer's attempts to obtain a clear and objective statement of
requirements. In this section the G-MARC approach to resolving such problems is
introduced.
All good engineers know that, when first presented with any new body of information,
a valuable first step is to attempt to identify patterns and/or structure in the raw
information. The presence of structure invariably improves the ability to process and
relate information content - one component with another - and, consequently, to
identify omissions, inconsistencies and errors as well as to carry out many other
processes of a similar nature.
How should we go about identifying structure and content in a statement of
requirements? In order to answer this question we need firstly to have identified some
form of generalized framework - upon which each application can be hung - and
secondly we need to identify the components of the given application in order that we
can categorize them using the framework. Once they have been categorized, the
components can thereby be readily compared and related both to each other and to
generic forms. Such generic forms develop with time and constitute the application
area expertise repository or knowledge base. The professional engineer often has an
informal knowledge base of this kind in his head, where it is invisible to others. G-MARC provides the ability to make such knowledge explicit by capturing it into a
generic database, thereby making it available to other practitioners [Peltu, 1995]. As
a result the opportunity for mutual benefit and consensus is dramatically improved.
3.2 The Basic Framework
The research that formed part of the G-MARC development work led to the
conclusion that there is a generally applicable and fundamental framework into which
all requirements information can be sorted prior to identifying and analyzing the
structure of the system being specified. In order to get a feel for the nature of this
framework let us imagine that we can define a three-dimensional array where the
three dimensions are:
- Application Aspect.
- Support Layer.
- Detail Level.
Figure 3.2:1 overleaf illustrates a simple example where these three dimensions have
each been assigned three category names. As can be seen the category names can be
interpreted as defining an array of small rectangular cells, each such cell
corresponding to a particular combination of category names.
Having identified a framework, let us now turn to the components that we wish to
arrange within this framework. Firstly let us imagine that all the sentences in a given
specification document have been reduced to a set of atomic requirements - as
indicated in Section 2 above. Every member of this set of atomic requirements may
be categorized by assigning to it a three-digit code (where each digit is either a 0, 1
or 2) and where each digit indicates a category name (attribute) in the corresponding
dimension. Thus the code 112, for example, would be assigned to any atomic
requirement that has the three attributes:
- Function,
- Capabilities,
- Component.
When each requirement has been assigned an appropriate categorization code, it will
have been effectively assigned to one of the small rectangular cells illustrated in
Figure 3.2:1. Some of the cells will be empty, some will have a few entries and some
will have a lot. The 'density distribution' of requirements in the array will then be
found to reveal some very interesting facts about the document - but see Section 4
below.
Figure 3.2:1 THREE DIMENSIONAL ARRAY
It has been established in fact that there are four dimensions rather than three, the
fourth one being 'Viewpoint'. The corresponding four-dimensional array is illustrated
in Figure 3.2:2 below. Therefore, to fully categorize each atomic requirement, we
actually need to attach to it a four-digit code of the form 0112. This particular code
means any requirement that has the following attributes:
- Operations,
- Function,
- Capabilities,
- Component.
Figure 3.2:2 FOUR DIMENSIONAL ARRAY
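The categorization scheme can be sketched in a few lines of Python. Only the attribute names the paper itself quotes (Operations, Purpose, Function, Performance, Capabilities, Component, User, Infrastructure) are taken from the text; the remaining category names, and the example codes, are placeholders chosen for illustration.

```python
from collections import Counter

# One (dimension name, category names) pair per dimension. Indices
# match the digits of a categorization code; names not quoted in the
# paper are illustrative placeholders.
DIMENSIONS = [
    ("Viewpoint",          ["Operations", "Engineering", "Management"]),
    ("Application Aspect", ["Purpose", "Function", "Performance"]),
    ("Support Layer",      ["User", "Capabilities", "Infrastructure"]),
    ("Detail Level",       ["System", "Subsystem", "Component"]),
]

def decode(code: str) -> dict[str, str]:
    """Map a four-digit code such as '0112' to the attribute it
    names in each dimension."""
    return {dim: names[int(digit)]
            for (dim, names), digit in zip(DIMENSIONS, code)}

# Assigning a code to every atomic requirement yields the density
# distribution: a count of requirements per cell of the array.
codes = ["0112", "0112", "0101", "0112", "0211"]   # placeholder data
density = Counter(codes)

print(decode("0112"))   # Operations / Function / Capabilities / Component
print(density["0112"])  # number of requirements sharing that cell
```

With this representation an empty cell is simply a code that never occurs, which is exactly what the density matrix of Section 4 makes visible.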
We are able, of course, to define any number of category names in each dimension
and Figure 3.2:3 below indicates a possible framework in which six category names
are employed in each case.
There is no reason why the number of category names needs to be the same in each
dimension and, in practice, it is often the case that they are indeed different as will be
illustrated later in this paper.
Figure 3.2:3 GENERIC FRAMEWORK
3.3 Evolution
Perhaps one of the most important aspects of any specification is the extent to which it
is re-usable. All professional engineers should be concerned to maximize the re-usability
of the results of their efforts. This applies to specifications just as much as it does to
designs and implementations.
Any set of requirements can be divided into two subsets: those that identify goals and
those that identify constraints. The Goals are the aspirations of the system specifier.
They refer to things that need to be achieved - even though some of them may come
into conflict with each other. The Constraints are the limitations imposed upon the
Goals by virtue of real world, environmental or implementation considerations. It is
the constraints that realize a particular instance (i.e. an application) of a given set of
Goals. Different sets of constraints realize different instances of the set of concepts,
or pivotal model, identified by the Goals. Any pivotal model can be used to generate
a multitude of variants - each variant dependent upon its own particular set of
constraints.
As an example of a goal consider the following requirement:
'Any record must be able to be retrieved from the database within 5 seconds'.
An example of an associated constraint could be:
'All database records must be held on magnetic tape units - type xyz'.
These two requirements would doubtless come into conflict with each other for
databases of any useful size. All such clashes would eventually need to be detected
and reconciled - preferably before embarking on any consequential design.
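A goal/constraint clash of this kind becomes detectable as soon as both sides carry a comparable figure. The Python sketch below shows the idea; the timing figures, field names and the `clashes` check are all invented for illustration and form no part of the methodology itself.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    text: str
    max_retrieval_s: float   # quantified acceptance figure for the goal

@dataclass
class Constraint:
    text: str
    access_time_s: float     # assumed worst-case access time of the
                             # mandated storage medium

def clashes(goal: Goal, constraint: Constraint) -> bool:
    """A constraint clashes with a goal when the mandated medium
    cannot meet the goal's retrieval deadline."""
    return constraint.access_time_s > goal.max_retrieval_s

goal = Goal("Any record must be able to be retrieved from the "
            "database within 5 seconds.", max_retrieval_s=5.0)
constraint = Constraint("All database records must be held on magnetic "
                        "tape units - type xyz.", access_time_s=90.0)

print(clashes(goal, constraint))  # the clash is detected before design
```

Holding the goals and the constraints in separate, comparable structures is what makes this check possible at specification time rather than during debugging.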
The set of goals for any system (the pivotal model) constitutes the only truly portable
(re-usable) part of the system by virtue of its constraint-free nature. Figure 3.3:1
illustrates the separation of Goals and Constraints into separate locations for the
purpose of being able to identify and possibly reconcile any adverse effects of one
upon the other. The extent to which a set of constraints causes a pivotal model to
change is the extent to which the resulting system is not re-usable under different
circumstances.
Figure 3.3:1 SEPARATING GOALS FROM CONSTRAINTS
The Pivotal Model should initially be produced in isolation in order to obtain a
constraint-free view of the structure of the system being specified. If we consider
only one layer at a time, starting at the highest, then the constraints should be applied
to each layer, one detail level at a time (top down) to produce a constrained
specification for the layer. At each level of detail, in each layer, it will be possible to
make informed decisions as to which course to pursue as the reconciliation process
unfolds - modifying as necessary or, at least, drawing attention to consequences in
each case.
The major advantage that results from the production of the pivotal model first, and
then the application of the constraints to it, is the fact that the impact of the constraints
becomes highly visible and, in particular, those constraints that cause unacceptable
changes to the pivotal model can be rapidly identified and, if necessary, removed or
modified. It is precisely these constraints that impair re-usability. Of course it is not
possible to eliminate the effects of all constraints - otherwise the system would not be
realizable. But we frequently have the power to change their form or to ameliorate
their influence when it impacts on the re-usability of any module.
The above argument applies equally well to each one of the support layers and thus,
as we progress down the development path, and each support layer moves into the
foreground of consideration, we are able to review re-usability in the above way and
in the light of knowledge gained from preceding layers. When every layer of the
specification has been processed we not only have a complete and well-structured
specification, we also have one in which the re-usable components have all been
clearly identified.
As an illustration of what happens during conventional progressive evolution, let us
imagine the all-too-typical situation where the goals have not been separated from the
constraints at specification time. Therefore the impact of either set, on the behavior
and fabric of the resulting system, cannot be separately identified. Let us imagine
also that the first implementation has been a success. So much so that the
manufacturer (or the client) is encouraged to develop a variant for another situation.
Let us imagine finally that the goals for the second variant remain more or less the
same. Thus it is only the constraints that change.
What actually happens in practice is that the system, as designed and built for the first
variant, is modified - to save time and resources. The outcome is that the fabric and
the behavior of the new variant have influences built into them which result not only
from the set of system goals and the new set of constraints, but also from the old, and
now out-of-date, constraints of the first variant (V1). These out-of-date constraints come
into conflict with the new V2 constraints in quite invisible ways and, consequently,
debugging of development changes takes longer than expected - because the system
reacts to such changes in an unpredictable manner. Nevertheless, let us assume the
resulting V2 performs acceptably well and the client (or manufacturer) is encouraged
to consider investing in a further (V3) variant.
V3 suffers from similar debugging difficulties to those experienced when producing
V2. However in this case the problem is much more pronounced and debugging takes
much longer. We now have three sets of constraints built invisibly into the fabric and
behavior of V3 - two of them inappropriate!
If the above process is continued, with each new variant utilizing the previous one (to
take advantage of the most recent thinking and technology), it will not be long before
the debugging process becomes so lengthy that everyone involved will become afraid
to make any change at all. This is because each change causes so many unpredictable
consequences that the time required in order to arrive at a stable state each time
becomes unacceptably high and the associated resource costs too great.
The history of system development is littered with examples of situations of the above
kind ranging from mainframe computer operating systems, through stock exchange
applications, to fighter aircraft autopilots. After a number of evolutionary steps, these
systems became so fragile that all development had to be terminated. It is believed
that a major contribution to this result was, in each case, that rapid obsolescence was
unavoidably, and invisibly, built into the development process by a failure to
disassociate constraints from goals at specification time. Re-usability considerations
must not be left until design time. If they are, it will be too late.
4 A REAL APPLICATION
In this section we take a brief look at the results of subjecting a real requirements
specification document to the classification process described in Section 3.2 above.
The application chosen to exemplify the process is a military air traffic control
system. The document used was part of an Invitation To Tender that was put out to a
worldwide bidding contest by the air force of a European country. Figure 4.1:1 below
contains part of the result of subjecting the document to G-MARC classification.
Only the Operations viewpoint is presented in Figure 4.1:1 and, as can be seen, only
four support layers, five application aspects and six levels of detail were employed.
The numbers in the boxes represent the number of goals and the number of
constraints that were assigned the corresponding set of attributes. Thus, for example,
in the User layer, in the Function column and at the fourth level of detail, there are 91
goals and 48 constraints, i.e. 139 requirements in total.
Figure 4.1:1 REQUIREMENTS DENSITY MATRIX
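The cells quoted in the text can be sketched as a small slice of such a matrix in Python. Only the figures the paper states are used; the goal/constraint split of the 3 Performance entries is an assumption, since the paper gives only their total.

```python
# Slice of the requirements density matrix for the Operations viewpoint,
# keyed by (support layer, application aspect, detail level) and holding
# (goals, constraints). The full matrix of Figure 4.1:1 has
# 4 layers x 5 aspects x 6 levels; only two cells are filled in here,
# and the 2/1 Performance split is assumed.
density = {
    ("User", "Function", 4):    (91, 48),
    ("User", "Performance", 4): (2, 1),
}

def total(cell):
    """Total requirements (goals + constraints) in a cell; cells absent
    from the dictionary are empty."""
    goals, constraints = density.get(cell, (0, 0))
    return goals + constraints

print(total(("User", "Function", 4)))            # 139 requirements
print(total(("Infrastructure", "Function", 2)))  # 0 - an empty cell
```

Empty cells, such as the whole Infrastructure layer discussed below, then stand out immediately as either deliberate omissions or genuine oversights.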
The most obvious benefit arising from Figure 4.1:1 is that it provides a succinct
overview of the coverage of the subject provided by the document. In addition
attention is drawn to a multitude of features of the information in the document that
would otherwise not be evident. For example:
It is clear that the thinking of the person (or persons) who wrote the document is
dominated by previous experience. We can see this because the largest proportion of
the requirements is positioned in the Function Aspects column, with very few in the
Purpose Aspects column. This implies that the specifier knows, from previous
experience, 'how' the system should work but not 'why'. Inevitably, therefore, we are
impelled to ask: how can it be verified that the system is appropriate to the current
task?
Similarly, it is clear that the specifier is a low-level-detail type of person. In the User
layer (as mentioned above) there are 139 function requirements at the fourth level of
detail but not a single one at either the zero level or the first level of detail. This
implies that the writer does not know what the overall function of an
Air Traffic Control System is - nor, indeed, what the functions of the major
subsystems are. This conclusion is hardly credible in a document of this stature.
Nevertheless this is the actual content of the document.
We may also ask why there are no requirements at all in the Infrastructure layer. This
situation may be perfectly valid and the subject may have been intentionally left for
one of the other viewpoints to handle. On the other hand there may be a genuine
oversight here.
In addition, it is rather alarming to note - with so much functionality having been
specified - that so few Performance Aspects have been specified. Referring to the 139
Function Aspects, we are able to see that there are only 3 associated Performance
Aspects.
There are many more questions of a similar nature that it is possible to raise as a result
of the way that the information is presented in Figure 4.1:1 - and this is just one
viewpoint. The essential point, that this paper draws attention to, is the huge increase
in visibility that results from the production of a G-MARC requirements density
matrix.
Having separated the requirements, in each location of the requirements density
matrix, into goals and constraints - for say the Function aspects - it is then possible to
group similar requirements together to identify re-usable objects. Figure 4.1:2
illustrates an object hierarchy that could typically be developed from say the four
levels of functional goals that are present in the User layer of Figure 4.1:1. A similar
hierarchy could then be developed for the constraints. Overlaying the two hierarchies
will draw attention to any obvious differences and/or conflicts. Furthermore, each
level of detail in the hierarchy can be regarded as a model of that aspect of the system
being specified. The G-MARC methodology's support tool enables its users to
create and animate such models automatically in order to explore the dynamic
viability of the specification at each level of detail. In fact the G-MARC support tool
contains many more facilities of this kind - including knowledge accumulation and
direct elicitation of knowledge from the minds of its users. However these are
subjects of other papers in this series.
Figure 4.1:2 OBJECT HIERARCHY
5 CONCLUDING COMMENTS
This paper aims to draw attention to the fact that it is possible to considerably improve
the quality of a requirement specification prior to its formal adoption by the associated
project. In particular it is suggested that the professional engineer has a duty to his
client to ensure the production of such high quality specifications prior to the
commitment of resources to design and implementation. There is a great deal that it is
possible to achieve in this context. However there is - perhaps understandably - a
detectable reluctance to expend resources on specification improvement. There is
reluctance by procurement organizations, who wish to pass on to their suppliers as
much responsibility as possible, and there is reluctance by supplier organizations, who
wish to leave the door open for possible price increases (as a result of changes) at
a later date. In addition, few will want to spend time improving a specification until
they have been awarded the contract. After contract award all the pressure is then
devoted to producing the system as quickly as possible. Such pressures have always
held back the development of high quality specifications. Nevertheless, with the
increasing move towards fixed price contracts, it is becoming more important to
initiate projects with specifications that are consistent, complete, coherent and correct
if overall risk and cost is to be minimized.
It is also important to recognize that, if unconstrained evolutionary potential is ever to
be achievable, the professional engineer must encourage his clients, when specifying a
variant of a system, to recognize the need to return each time to the basic set of the
goals before applying a new set of constraints. This process will not only encourage
the re-use of requirements information, it will also lead to the identification of really
re-usable design components and really re-usable products [Hugo, 1995].
Finally this paper draws attention to the fact that, due to the rising perception of the
need to produce improved specifications, appropriate tools are now being developed.
The professional engineer should be aware of these tools and their capabilities. At
this point in time there are very few tools that directly address the issue of re-usability
of requirements. G-MARC is one of the first and, currently, its capability is unique.
6 REFERENCES
1. Boehm, B.W. (1981), Software Engineering Economics, Prentice Hall, Englewood Cliffs,
N.J.
2. Collins, A. (1994), Systems Disasters can be avoided; Computer Weekly, 08-12-1994.
3. Faulk, S.R. (1995), Software Requirements: A Tutorial. Tech. Rep. NRL-7775, Naval
Research Laboratory, Washington D.C.
4. Fairley, R. (1985), Software Engineering Concepts, McGraw-Hill, New York.
5. GAO (1992), Mission Critical Systems: Defense attempting to address major software
challenges, Tech. Rep. GAO/IMTEC-93-13, U.S. General Accounting Office, Washington,
D.C., Dec.
6. Hunt, L.B. (1992), G-MARC - Final Report of Project IED4/1/1151. Produced under the
auspices of the Department of Trade & Industry's Information Engineering Advanced
Technology Programme (IEATP) of the Joint Framework for Information Technology (JFIT).
7. Hugo, I. (1995), Back in circulation; Computing, 17-08-1995, page 16.
8. Hogan, J. (1995), From Complexity to Perplexity; Scientific American, June 1995, pages
74-79.
9. Lutz, R.R. (1993), Targeting safety related errors during software requirements analysis. In
Proceedings of the First ACM SIGSOFT Symposium on the Foundations of Software
Engineering (Los Angeles, Cal., Dec.), ACM, New York.
10. Peltu, M. (1995), Make yourself re-useful; Computing, 09-02-1995, page 40.
11. The CHAOS Report - see http://www.InformEng.com/documents/CHAOS.htm