The Quest for Software Quality

Kamesh Pemmaraju
The goal of every commercial software development project is to ship a high-quality product on time and within
budget. According to a recent Standish Group research study [1], only 16.2% of software projects
complete on time and on budget. Companies developing commercial software (especially software that is not subject
to strictly enforced standards and contractual requirements) will seriously consider spending precious resources and
time on a quality assurance program only if they believe the approach will provide a substantial return
on investment without eroding expected profits. Companies also expect quality assurance to mitigate the risks of
deploying a software product of questionable quality without impacting the schedule and budget of
their projects.
This article focuses on the dilemma of defining and assessing software quality and the rapid advances being made in
the software industry to address these issues.
The definitions of software quality
Software quality means different things to different people. This makes the entire concept highly context-dependent.
For example, in the context of automobiles, a Mercedes Benz or a Cadillac may be symbols of high quality, but how
many of us can really afford to buy one of these fine vehicles? Given a somewhat less ambitious budget, a Toyota or
a Chevy might serve most of our needs with adequate quality. Just as there is no one vehicle that satisfies everyone’s
needs, so too there can be no one universally-accepted
definition of quality. Even so, it is important to formalize your
definition of software quality so that everyone understands your
priorities and relates your sense of quality to their own.
The IEEE, ISO, DoD, and several other agencies and individuals
have offered definitions for software quality. Some of these
definitions, such as "conformance to requirements", "meeting users'
composite expectations", "value to some person", and "fitness
for use", are useful but extremely vague, because there is no
definite way to state whether or not the final software product
conforms to them.
In an attempt to impart formalism and to provide a systematic
definition for software quality, ISO 9126 defines 21
attributes that a quality software product must exhibit. These
attributes (shown in Table 1) are arranged in six areas:
functionality, reliability, usability, efficiency, maintainability,
and portability. Recent advances in software quality
measurement techniques allow us to measure some of these
attributes (shown in bold in Table 1). However, there is still
no straightforward way to measure the remaining attributes, let
alone derive a metric for overall quality from them. Without
clear methods and measures, we are back to square one, with no
way to say anything quantitative about the final software product's quality.

Table 1 ISO 9126 quality attributes
In the end, we have only a vague idea of how to define software
quality, but nothing concrete. We have some idea of measuring
it, but no clearly defined methods. What we are left with is a grab bag of diverse standards and methods. Faced with
this bewildering array of definitions and standards, software projects, already under heavy schedule and budget
pressures, confront a dilemma: how to create and assess the quality of software.
The dilemma of assessing software quality
Ever since the birth of software engineering some thirty years ago, the methods and tools for achieving quality have
been based on less-than-sound scientific principles. Software quality practitioners have looked to other
engineering disciplines and hardware systems for help, adopting quality assurance approaches well established in
those fields. In particular, software quality people focus on the quality of the manufacturing "process" (SEI CMM, ISO
9001, etc.), a concept essentially borrowed from industrial-age manufacturing. The assumption here is
that there is basically no difference between building software and building other physical systems such as bridges,
automobiles, radios, or digital circuits. Another premise is that using good processes will always produce good-quality
code. Most existing software quality assurance practices are based on these assumptions. However,
as the following sections show, software is different, and we need to revisit some current quality
practices in light of this fact.
Developing software is not like building bridges
In 1986, Alfred Spector, president of Transarc Corporation, co-authored a paper comparing bridge building to
software development. The paper explores why bridges are normally built on time and on budget and do not
usually fall down, and why the same cannot be said of software development. (This is not to say that bridge building
has not had its share of failures; what deserves attention is the far higher rate of software failures compared
to bridge failures.)
Engineers involved in the design of bridges are guided and constrained by the natural laws of the materials
with which their designs must be implemented. Software engineers, on the other hand, work with abstract materials
that have no natural limits imposed by the external world. Because software is infinitely flexible and malleable,
its limits are determined purely by human mental abilities. Today, we perhaps do not understand the human mind
as well as we understand the physics of physical structures. Based on the conclusions of the paper by Alfred
Spector and our knowledge of today's software systems, the major differences between building bridges and
developing software are summarized in Table 2.
Bridge Building | Software Development
3,000 years of experience in bridge building | 30 years of experience in software development
Solid theoretical understanding of the physics of bridge structures | Some theoretical understanding of software structures and logic; however, real-world software is too complex to allow for proof of correctness
Accurate and detailed modeling and design based on well-established and mathematically proven laws and well-defined specifications | There are hardly ever well-defined specifications or a frozen design before the software is built; software is viewed as "flexible" enough to keep up with changing requirements
Knowledge of tolerances and load capacities of bridges is gathered before building the bridge | Difficult to determine resource and time consumption of software before it is built
Bridge failures are thoroughly investigated and causes of failure are reported | Software failures are covered up, ignored, and/or rationalized

Table 2 Bridge building and software development
Software does not "break" like physical systems
Physical systems tend to degrade with time and might eventually break due to aging; they might also break down due
to wear. For example, the timing belt in your car wears out after a certain number of miles. Manufacturers can
predict the approximate number of miles (for example, 60,000 miles for a Toyota) that the timing belt will last
because they understand its physical characteristics very well. Timing belts may sometimes even break down under
excessive stress well before the predicted time. On the other hand, software does not "break", nor does it degrade
with time. You can run a program any number of times without wearing it out and without noticing any degradation
in its performance (although the program might behave differently in response to changing environmental stimuli).
In fact, software reliability may actually improve with time as bugs are fixed.
Software is digital
Software is digital and discrete, not analog and continuous [2]. In continuous analog systems, it is possible to predict
the immediate future behavior based on historical data. For example, if a car is travelling due north on a freeway at any
point in time t, there is reasonable assurance (barring a major accident) that it will continue to travel in the
same direction at time t + Δt. Physical constraints and the laws of physics ensure that the car will not
suddenly change its direction from north to south. Now imagine a virtual car controlled by software.
Assume the direction in which the car is traveling is represented internally by a Boolean variable DIRECTION, whose
binary value is 0 (false) when the car is going north and 1 (true) when
the car is going south. All it takes for the car to change direction from north to south is one computer instruction
that flips that single bit from 0 to 1. What if there is a bug in the software that flips the bit unexpectedly under
certain conditions? Suddenly, your virtual car and your virtual world turn upside down. At one instant the car is
travelling north, and the next instant it is going south!
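The failure mode above can be sketched in a few lines of code. The `VirtualCar` class and the sensor condition below are hypothetical, invented purely for illustration; the point is that a single XOR instruction, triggered by one unanticipated input, reverses the car with no physical law to stop it:

```python
NORTH, SOUTH = 0, 1  # the two values of the DIRECTION bit

class VirtualCar:
    """Hypothetical software-controlled car from the example above."""

    def __init__(self):
        self.direction = NORTH

    def update(self, sensor_reading):
        # Latent bug: an unexpected sensor value flips the DIRECTION
        # bit, instantly reversing the car's direction.
        if sensor_reading == 0xFF:   # unanticipated input
            self.direction ^= 1      # single bit flip: NORTH -> SOUTH

car = VirtualCar()
car.update(0x10)
print(car.direction)  # prints 0: still heading north
car.update(0xFF)      # one bad input, one instruction...
print(car.direction)  # prints 1: suddenly heading south
```

No amount of history (the car has been heading north for hours) protects against that one instruction, which is exactly why analog intuitions about continuity do not transfer to software.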
This unique nature of software has two implications for software quality and reliability. First, traditional
reliability models for physical or hardware systems, and process ideas from the manufacturing world, are not
directly applicable to software. Second, no matter how good a process we apply to software development, a
single bug in the product may invalidate everything else that works correctly. As the car direction example
above illustrates, a single latent implementation bug can unexpectedly and suddenly lead to a serious malfunction
of a physical system.
A good development process does not guarantee good product
Companies are championing the "process improvement" mantra as if it were a magic solution to all their software
development and quality problems. A process improvement methodology is based on establishing, and following, a
set of software standards. Most of these standards neglect the final product itself and instead
concentrate mainly on the development process. As mentioned earlier, this is a direct result of the
manufacturing view of quality, where the focus is on "doing it right the first time". The emphasis in all these process
standards is on conformance to processes rather than to specifications. A standard process is certainly necessary, but
it is not sufficient. There is no hard evidence that conformance to process standards guarantees good products, and
many companies are finding out the hard way that good processes do not always result in a quality product [2].
Product assessment
Whereas the manufacturing view examines the process of producing a product, a product view of quality looks at the
product itself. The product assessment advocates stress the fact that in the end, what runs on a computer is not the
process, but the software product.
There are two ways of directly examining the product—static analysis and dynamic analysis.
A static view considers the product's inherent characteristics. This approach assumes that measuring and controlling
internal product properties (metrics) will result in improved quality of the product in use. Unfortunately, this is not
always true. Much of the legacy code developed over 20 years ago is still functioning correctly and reliably today.
Legacy code was often written in the unstructured languages of the 70s (e.g., Fortran, Basic) and made extensive use
of the much-maligned GOTO construct; such software is sometimes termed spaghetti software. Modern structural
metric tools, applied to spaghetti software, would probably turn out some pretty dismal structural quality
measures, a quality assessment far removed from reality.
A dynamic view considers the product's behavior. Dynamic analysis requires executing the software, and testing is its
most widely recognized form. The easiest way to assess software quality would be to test the product with
every conceivable input and ascertain that it produces the expected output every time. Such exhaustive testing has
long been shown to be both practically and theoretically impossible. Real-world testing, constrained by
schedule and budget pressures, may exercise only a limited portion of the input space or cover a limited portion of
the software. Quality assessments based on the results of such testing will again be far removed from reality.
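The impossibility of exhaustive testing is easy to quantify. A back-of-the-envelope calculation for a trivial function taking just two 32-bit integer arguments (the throughput figure is an assumption, and an optimistic one):

```python
# Input space of a function f(a, b) where a and b are 32-bit integers.
inputs_per_arg = 2 ** 32
total_cases = inputs_per_arg ** 2           # all (a, b) pairs = 2**64

tests_per_second = 1_000_000_000            # assume a billion tests/second
seconds_per_year = 60 * 60 * 24 * 365
years = total_cases / (tests_per_second * seconds_per_year)

print(f"{total_cases} cases, about {years:.0f} years to run")
# about 585 years, for one tiny two-argument function
```

Real programs take far richer inputs than two integers, so the gap between what testing covers and what the software can encounter is many orders of magnitude worse than this toy case suggests.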
The path to the future
The discussion in the previous section points to one irrefutable fact: assessing software quality is extremely
hard and expensive, and the state of practice falls short of expectations. As noted, some inherently difficult
problems with software quality assessment still defy solutions. However, all is not lost. The field of software
assurance is advancing rapidly and a number of software quality/reliability groups both in the academic and
commercial worlds are actively researching next generation techniques and tools to better create and assess software
quality. There are dozens of software quality and testing companies producing various tools for software metrics,
GUI testing, coverage analysis, and test management. Some companies are producing tools that support the entire
development process while integrating testing and quality assurance processes. This is a rapidly expanding market
and it is estimated that the burgeoning test tool industry is on pace to hit one billion dollar revenue by the year 2000.
A detailed listing of the categories of tools available in the market today, and some leading companies supplying
them, is available online.
While all these test tools, methodologies, and theories are useful, they will not be effective unless software
projects focus on quality right from the beginning and everyone works toward creating a product of the highest
possible quality. Bringing in fancy tools alone will not solve the poor-quality problem.
Software systems are increasingly being deployed in areas that affect our daily lives. The implications of
failures of such critical software systems for consumers can range from mere annoying inconveniences to serious
life-threatening situations. Existing software quality practices are proving inadequate for the complex
quality problems of today's software applications. However, as a result of several research initiatives and an
explosion in commercial tools, software quality techniques are gradually gaining credibility, becoming integrated
into the development process, and contributing value throughout the software development life-cycle.
References
[2] Jeffrey Voas and Gary McGraw, Software Fault Injection: Inoculating Programs Against Errors, Wiley, 1998.
About the author
Mr. Kamesh Pemmaraju is a senior consultant at Reliable Software Technologies (RST). Prior to joining RST, he worked
as a Software Quality Manager at Siemens A.G. in Germany. He holds an MS degree in Computer Science from the Indian
Institute of Science, Bangalore, India. His areas of interest include fault-tolerant systems, embedded real-time systems, and software
quality/reliability. He can be reached at