Software Technology Management:
"Worst" Problems and "Best Solutions
by
Michael A. Cusumano
WP 1972-88
Revised Sep. 1988
CHAPTER TWO
SOFTWARE TECHNOLOGY MANAGEMENT:
"WORST" PROBLEMS AND "BEST" SOLUTIONS
Contents:
Introduction
Characteristics of the Product and Process Technology
The "Worst Problems" in Software Development
"Best-Practice" Solutions from Software Engineering
"Rationalization" Obstacles at the Company Level
Summary
INTRODUCTION
Software consists of programs, or instructions, that guide computer hardware through a series of tasks. Because there are so many different tasks and machines that require programming to operate, software is not a single type of product or market, nor should there be a single "best" approach for product development, construction, and marketing.
Process notions, as discussed in
Chapter One, such as customization versus reusability and standardization, or
product innovation and functionality versus development costs, thus need to
consider the specific characteristics of applications and markets in which the
software appears, and the different competitive strategies, corporate cultures,
and organizational structures of the software producers.
This chapter centers on the set of problems that have consistently
recurred in large-scale software development efforts since the 1960s, and some
of the major solutions generated in the field of software engineering.
The recurrence of particular problems, mainly in design and project management, appears to relate to the still unsettled state of hardware and software technology, and user requirements. In addition, companies that have tried to "rationalize" the process of software development have also faced obstacles more of a management, organizational, or cultural nature, such as an inability to focus development efforts on specific types of products, or an emphasis on highly skilled employees who may be good at product innovation but difficult to manage.
This chapter begins by examining some basic characteristics of software product technology and the development or production process. The next two sections review some of the most frequent or "worst problems" in software development over the past two decades, and some of the "best-practice" solutions proposed to deal with these. The fourth section examines the following question: If tools and techniques exist to solve problems in software development, why don't firms use them more systematically?
Discussion of this issue focuses on three company examples -- International Business Machines (IBM), General Telephone and Electronics (GTE), and Digital Equipment Corporation (DEC). The conclusion reinforces the argument that tackling software problems effectively requires more than ad hoc approaches. An alternative that some firms have chosen is the software factory, which appears useful as a mechanism for bringing together "good" tools and techniques, optimizing them for different products and markets, combining these tools and techniques with extensive training, and thus introducing a more systematic approach to software development.

CHARACTERISTICS OF THE PRODUCT AND PROCESS TECHNOLOGY
Software is usually divided into two general types: systems and applications. Systems (or "basic") software serves to control the basic operations of computer hardware, and includes operating systems, database management systems, telecommunications monitors, computer-language translators, and utilities such as program editors. Applications software sits, at least figuratively, on top of the basic operating system, and performs specific "user-oriented" tasks. Applications include standardized or "packaged" programs for tasks such as accounting or spreadsheet analysis, as well as customized or semi-customized programs specific to the task or industry application, such as on-line banking programs. Most of these types of programs operate on different size machines, ranging from small personal computers to large mainframes.
Software products can thus be characterized by function (system or
application)
as well as by machine size, although increasing portability of
programs as small computers become more powerful is making machine size less
of a distinguishing characteristic.
One might also characterize software by
producers, which include computer manufacturers, independent software houses,
systems integrators, telecommunications firms, semiconductor manufacturers,
publishing houses, and corporate information systems departments. Any of these
firms might supply applications programs. Basic systems software, such as operating systems, is usually developed by computer manufacturers and independent software firms working with the manufacturers.1
The degree of standardization of product technology differs with product
types, and has several meanings in the context of software.
A program is
constructed by writing lines of code in a particular language, such as COBOL or
FORTRAN, and usually for a particular operating system, such as MVS (IBM), UNIX (AT&T), or VMS (DEC).
There are often different versions of languages
for different machines, and portability or reusability of a program (or parts of
it) across different machines is usually constrained by the operating system and
features of the machine hardware.
Computer vendors such as IBM, DEC, AT&T,
and Honeywell/NEC have developed their own operating systems standards, with
IBM and AT&T (Unix) systems adopted widely by other hardware manufacturers.
There are also efforts underway within and among firms, such as IBM's
proposed Systems Application Architecture, and the international Open Systems
Interconnection protocols, to standardize interfaces so that software can be
more easily moved from machine to machine,
and machines and peripheral
equipment can be more easily linked together.
Progress in this area, while
slow, holds great promise for relieving the shortage of software programmers in
that programs written for individual computers could be used more widely.
Many companies are also shifting programming operations to the C language and
adopting the Unix operating system, both of which make it easier to transfer
programs across different hardware.
On the other hand, differences in machine architectures exist because they optimize certain applications, such as transactions processing, data base management, scientific calculations, or graphics simulations. Too much standardization of hardware and software, therefore, while desirable from a production viewpoint, may not meet the different customer needs that exist.
Another aspect of standardization is product features, such as how an
operating system or applications program works internally or from the
perspective of the user. In the personal computer industry, programs have been
shorter than for larger machines, and thus development times for new products
and the apparent rates of product innovation have been faster.
This has been spurred on by relatively rapid development of new hardware, leading to significant changes in operating systems and applications programs. In contrast, software programs for larger mainframes or integrated systems, such as hardware and software for a factory or power plant, often take many months or years to develop and test. Therefore, it is not surprising that life cycles for new systems or applications products have been longer than for software designed for personal computers.2
With rapidly changing standards,
hardware capabilities, and customer
requirements, as well as with competition that focuses on best-selling packages,
as it does in the personal-computer business, opportunities and needs for
software producers to pursue factory-type process rationalizations should be
few.
On the other hand, the somewhat more stable environment of mainframes
and minicomputers,
as well as markets that rely less on packages, should
present more opportunities for "rationalized" or factory-type approaches in
development, testing, or maintenance operations.
Building a complex software program for the first time requires considerable abstract thinking, as programmers break down a problem or task into successively smaller problems that can then be expressed in a computer language. One way to view this is as a translation process: translating a particular problem to design specifications (design phase), design specifications to "source code" written in a "high-level" (more English-like) programming language (implementation phase), and then the high-level program to a lower-level machine-language program called "object code," consisting of zeros and ones that serve as instructions for the computer hardware (compilation).3 A typical software development life cycle has six steps:4
(1) Requirements Analysis: The first step is to understand the "system requirements" -- the task or problem that the software is supposed to solve. This may have been defined by a customer or may be constrained by an existing system.
(2) Specification: Next, software designers attempt to define what specifications are needed to meet the requirements defined earlier. These specifications describe what the software system is supposed to do, as viewed from the outside.
(3) Design: The main focus here is on the internal structure of the program, specifically, the design of separate parts or modules that, operating in combination, will fulfill or "implement" the specifications. Modules may be broken down into smaller and smaller parts, sometimes referred to as subroutines and procedures, to make coding easier, since many software programs are quite long.
(4) Implementation: The objective in this phase is to code each module in a language suited to the particular application or system. (This phase is similar to construction or manufacturing in a typical hardware development process.) Higher-level languages then are usually "compiled" to run the computer hardware, and the more like a natural language they appear, the farther away from the machine-readable code one gets. Higher-level languages are easier to write, read, test, and modify, and involve considerable improvements in programming productivity. On the other hand, they often make it difficult to utilize particular features of the hardware and may result in code that is longer than in lower-level languages.
(5) Testing: This activity, as in other production processes, attempts to
discover and fix any errors or faults ("bugs") in the software.
Modules
are usually tested separately and then in combination, to make certain the
parts interact correctly.
It appears that most errors detected are the
result of mistakes made in the requirements specification and design stages
of development; various data have also shown these errors to be about 100
times more expensive to fix than simple coding errors.
(6) Maintenance:
This final phase begins with the delivery of the software
product to customers and refers to the process of fixing bugs in the field,
or adding additional functions.
In terms of total costs for developing a software product, excluding those incurred during operations and maintenance (fixing bugs or adding features), testing is usually considered the most labor intensive, followed by implementation (detailed design and coding) and high-level design. Over the entire lifetime of a product that continues in service and continues to be modified or repaired, maintenance is by far the most costly endeavor, consuming perhaps 70% of total expenditures (Table 2.1). This breakdown reflects the importance of writing programs in a standardized, "structured" manner, with accurate documentation, so the software can be more easily modified or corrected.

Table 2.1: SOFTWARE LIFE CYCLE COST BREAKDOWN (%)5

Key: A = Excluding Operations/Maintenance
     B = Including Operations/Maintenance

Phase                         A        B
Requirements Analysis         9        3
Specification                 9        3
High-Level Design            15        5
Implementation (Coding)      21        7
Testing                      46       15
A Total                     100%
Operations/Maintenance                67
B Total                              100%
THE "WORST PROBLEMS" IN SOFTWARE DEVELOPMENT
Perhaps the most predictable events in a large software project are that
management problems and errors of various sorts will occur.
Even without a
shortage of skilled personnel, developing large programs, such as operating
systems for mainframe computers and even new personal computers, or real-time
control systems for factories or banking networks, can present enormous managerial and technical difficulties simply because they often require
hundreds of designers, programmers, and testing personnel working two or three
years to write hundreds of thousands to several million lines of code.
Despite
extensive research on tools and techniques, formal surveys and other literature
suggest that software developers for both systems and applications products
have encountered problems in every phase of development, with some more
serious than others, and that these difficulties have remained distressingly
similar over the past two decades.
One of the earliest published discussions on problems in software engineering was the 1968 NATO Science Committee conference report, in which McIlroy of AT&T proposed components factories, cataloging, and reusability as
one approach to developing software systems more efficiently. This report
discussed all aspects of software development and was remarkably comprehensive
in the problems identified.
The largest number of topics discussed fall into the
categories of design and project (production) management, as listed in Table 2.2.
Additional issues discussed included labor supply, quality control (reliability),
hardware constraints, and maintenance.
The conference members also began a
discussion of methods and solutions, which later authors have continued as the
field of software engineering has grown.
Table 2.2: NATO SCIENCE COMMITTEE REPORT (1968)6

Design and Project Management

o Lack of understanding in system requirements, first on the part of users, who often do not understand their own needs, and then designers, who do not understand user needs.

o Large gaps between estimates of costs and time with actual expenditures for large software projects, due to poor estimating techniques, failure to allow time for changes in requirements, and division of programming tasks into blocks before the divisions of the system are well-enough understood to do this properly.

o Inability to calculate in advance the costs of developing software for new applications.

o Large variations, as much as 26:1 in one study, in programmers' productivity levels.

o Difficulty of dividing labor between design and production (coding), since design-type decisions must still be made in coding.

o Lack of ability to tell how much progress has been made in reaching the end of a software project, since "program construction is not always a simple progression in which each act of assembly represents a distinct forward step."

o Rapid growth in the size of software systems.

o Poor communication among groups working on the same project, exacerbated by too much uncoordinated or unnecessary information, and a lack of automation to handle necessary information.

o Large expense of developing on-line production control tools.

o Difficulty of measuring key aspects of programmer and system performance in software development.

o A tradition among software developers of not writing systems "for practical use," but trying to write new and better systems, so that they are always combining research, development, and production in a single project, which then makes it difficult to predict and manage.

Labor Supply and Product Demand

o Rapid growth in the need for programmers and insufficient numbers of adequately trained and skilled programmers.

Quality Control

o Difficulty of achieving sufficient reliability (reduced errors and error tolerance) in large software systems.

Hardware Technology Constraints

o Dependence of software on hardware makes standardization of software difficult across different machines.

o Lack of inventories of reusable software components, to aid in the building of new programs.

The Cost of Maintenance

o Software maintenance costs often exceeding the cost of the original system development.
Another well-known discussion of software engineering was published in
1975, The Mythical Man-Month, by Frederick P. Brooks, Jr., of IBM and later
the University of North Carolina at Chapel Hill.
In developing the operating
system for IBM's 360 family of mainframes in the mid-1960s,
which required
some 2000 people over a period of several years, Brooks encountered difficulties
in scheduling and cost control similar to those noted at the NATO conference.
He attributed these to huge differences in productivity among programmers,
absence of good coordination mechanisms for large numbers of software
developers, inability to deal easily with design changes, and general quality
problems.
Some of the major issues Brooks raised, as well as a few solutions
he found useful, are summarized in Table 2.3.
Table 2.3: BROOKS IN THE MYTHICAL MAN-MONTH (1975)7

Scheduling:    Poorly developed estimating techniques and methods to
               monitor schedule progress
  Solution:    Consideration of sequential constraints and the number of
               independent sub-tasks; use of concrete milestones, PERT
               charts or critical path analysis

Productivity:  Wide discrepancy in the performance even of experienced
               programmers; difficulty of increasing individual programmer
               productivity
  Solution:    Team approach; software tools such as high-level languages

Integrity:     Difficulty of achieving "efficiency and conceptual integrity"
               in a large project with many participants
  Solution:    Precise written specifications, formal definitions, system of
               formal conferences and other techniques to improve
               communication

Changes:       Too frequent alterations in specifications delayed progress in
               development and testing.
  Solution:    Use of pilot systems and systems designed for change
               through careful modularization using top-down design,
               extensive sub-routines and definition of inter-module
               interfaces, complete documentation, standard calling
               sequences and table-driven techniques, use of very-high-level
               languages and self-documenting techniques

Quality:       Difficulties in system debugging and program maintenance
  Solution:    Better testing techniques and tools to reduce initial bugs,
               such as top-down design and structured programming

The NATO committee report,
and then Brooks' essay, helped set the
agenda for managers and researchers on software development at IBM and many
firms around the world.
Problems became more and more acute due to the rising capabilities of hardware, and the demand for increasingly sophisticated programs, with "real-time" processing and other capabilities. System Development Corporation, after facing a series of increasingly complex applications requirements from different customers, identified five fundamental problems recurring in its projects, and then launched the Software Factory (discussed in Chapter Three) to address them:
Table 2.4: SDC SOFTWARE FACTORY (1975)8

o Lack of discipline and repeatability across different projects

o Lack of development visibility in project management

o Difficulty of accommodating changes in specification requirements

o Lack of good design and program verification tools

o Lack of software reusability across different projects
Evidence that similar problems continued through the 1970s comes from a
frequently cited doctoral dissertation by Richard Thayer of the University of
California at Santa Barbara.
In surveying industry practitioners as well as 60
software projects in the aerospace industry, Thayer identified seven major
problems, six under the area of planning and another under control.
Again, all
resembled the difficulties NATO, Brooks, and SDC had noted:
Table 2.5: THAYER STUDY (1979)9

o Requirements: incomplete, ambiguous, inconsistent, or unmeasurable requirements specifications

o Project: poor project planning

o Cost: poor cost estimation techniques

o Schedule: poor scheduling estimation techniques

o Design: lack of decision rules to help in selecting the correct design techniques, equipment, and other design aids

o Test: lack of decision rules to help in selecting the correct procedures, strategies, and tools for software testing

o Programmers: lack of standards and techniques for measuring the quality of performance and the quantity of production expected from programmers.
Yet another example of this type of categorization can be found in the
Computer Science and Engineering Research Study (COSERS),
funded by the
National Research Council and published in 1980 by MIT Press (Table 2.6).
This report blamed growing program complexity as the major reason why the software field had reached a "crisis" stage by the late 1960s.
Before the
appearance of FORTRAN and other high-level languages beginning in the late
1950s,
programming was done in the machine languages unique to each
computer.
The processing and memory capacities of computers were also rather
small until the mid-1960s, and only required programs that were relatively short
and simple, compared to those of later years.
With high-level languages, it became possible to train a programmer in a few weeks, at least to an elementary level. As a result, the COSERS authors claim, managers and programmers did not have to pay much attention to readability, program documentation, or even formal "correctness."
The lack of systematic thinking about design techniques, tools, work environments, and managerial control proved to be a problem as computers became more powerful, and programmers took on larger and more complex problems:
Systems which people felt would be easy, but tedious to build -- the command control system, the information management system, the airline reservation system, the operating system, the world champion chess player -- turned out to be extremely difficult or even impossible. Major undertakings often took 2 to 10 times their expected effort, or had to be abandoned... What happened was complexity growing out of scale.10
Increases in product demand thus led to personnel shortages so severe that
it became common for managers with no first-hand knowledge of programming
to be managing large projects, and programmers with only a few weeks training
to be maintaining highly complex programs.
With so little experience, it became
frequent for managers to underestimate how difficult programming actually was,
exacerbating the inadequate training of programmers and wide-spread tolerance
of what the COSERS' study called "sloppy and wasteful practices."
In addition,
the duplication of similar but incompatible products even within the same firm,
and increasing amounts of manpower spent on maintaining these programs, was
bringing the industry closer to the state where "maintenance
eventually
precludes all possible future development."
COSERS also noted what Brooks had called the "n-square law" effect.
Since n objects have n(n-1) possible interactions, programs and projects become
almost impossible to manage as programs grow, and more and more people are
added to handle their development.
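To put this in concrete terms, a program or project with 10 components has 10 x 9 = 90 possible interactions under this formula, while one with 100 components has 100 x 99 = 9,900 -- roughly a hundredfold increase for a tenfold growth in size.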
Many large software programs had even reached the point where it was impossible to correct one bug without introducing another. Finally, it was also the case that, while users welcomed and promoted innovations in hardware, every advance -- even extremely useful features such as interrupts, virtual memory, and better addressing schemes -- created "a whole new set of problems for the software developer."
Table 2.6: COSERS STUDY (1980)11

o Demand exceeding supply leading to the widespread use of unskilled personnel as both managers and programmers.

o The inherent difficulty of programming and misconceptions that it is easy, resulting in sloppy practices.

o Continual growth in program size and complexity, leading to enormous resources expended on maintenance of existing programs rather than new development.

o The scale of software projects creating problems in that, for example, n components or people have n(n-1) possible interactions.

o Continual development and innovations in hardware evoking new problems for software developers.
The NATO conference, Brooks, SDC, Thayer, and COSERS identified a
range of specific problems
stemming from the nature of software as a product
technology, as a process technology, and as an industry that was still evolving.
Some of these underlying problems seem amenable to solution, perhaps through
factory approaches, while others can, at best, be addressed only partially.
One fundamental problem, leading to difficulties in project scheduling and
budgeting, and quality control, is the variability of individual performance in
software projects -- often estimated to be nearly 30 times between the worst
and best programmers.
With this degree of discrepancy, it is a wonder that managers can even come close to accurate estimates. Yet it is possible to reduce the uncertainty by keeping detailed data on the performance of specific individuals in different types of projects, as well as the average performance of individuals with certain years of experience or training. Standardized training, which would guarantee that programmers had learned a set of methods; management controls, which would guarantee that programmers used these methods; and standardized tools, which might boost the performance of less skilled individuals, should also help reduce drastic or unpredictable variations in the work output of individuals and the groups in which they are organized.
Reuse of designs and code is another way to lessen this variability by cutting
the amount of new software that must be written, although even reused designs
and code must still be reread and retested.
These solutions --
systematic
collection and utilization of individual performance data; standardized training,
controls, and tools; and reusability -- appear frequently in factory approaches
in other industries.
The obverse is organization of development by projects
formed and disbanded with each work assignment.
The inherent complexity of large software programs and development projects is a second fundamental problem leading to a host of difficulties, such
as scheduling and budgeting, management, changes delaying project progress,
and quality control.
Software engineers have developed high-level languages,
modularization and other structured design and programming techniques, and a
variety of project-management tools, as partial solutions.
Some firms have also
tried reducing the number of programming personnel to ease the management
tasks, which appears to work to some extent.
Yet, for very large software systems that must be written as quickly as possible, development by a small number of highly-skilled people may take too much time. The people may also be too expensive, or unavailable when needed. While there seems no simple way
to eliminate this problem, some firms have resorted to factory-type solutions:
standardizing methods and tools, providing support for less-experienced people,
and developing sophisticated on-line project control systems, which, along with
standardized tools and procedures, appear to facilitate communication among
large groups.
Again, reusability may also help by eliminating the need to write similar complex software more than once.
A third fundamental problem would seem to be the variability of user requirements and applications. This hinders reusability and repeatability of procedures, tools, or code from project to project, and has reduced the incentives for firms to invest in standardized tools or reuse systems. To a large extent this variability is caused by different, incompatible hardware architectures, across which it is difficult to redeploy tools and code. If a firm operates as a job shop and accepts a wide variety of customers desiring programs for a wide variety of machines, then managers will continue to face this problem, and factory-type standardization will probably prove wasteful and too constraining. On the other hand, if potential volumes are high enough, a firm may create facilities -- for example, focused and somewhat flexible software factories -- that specialize in certain types of programs, and reuse tools, designs, and code, and perhaps limit the types of machines they write software for.
A fourth fundamental problem might be termed the still-evolving state of
the hardware and software technology.
In segments of the industry that are
rapidly changing, such as the personal computer field, firms often appear
reluctant to invest in methods, tools, or program libraries of reusable code,
because these might soon become outdated by new developments, especially in
hardware. There are clearly limits to what firms can do to deal with this issue,
although some factory concepts might still be useful.
It should be possible, for
example, to identify where there is some stability in the development process or
product components -- such as system designs, or procedures for requirements
analysis or project management --
and attempt to exploit existing "good
practices," technology, or experience as much as possible.
Engineers can also design tools to be updated. In fact, periodic reevaluation of all procedures, tools, and reuse libraries can help a firm progress as technology or markets progress, while still achieving some economies of scope or scale, at least temporarily.
Many of the problems faced by software developers also appear in other
industries, suggesting they go along with activities such as design and testing,
or complex project management, in general.
Problems in schedule estimation,
dealing with changes in specifications, improving productivity and quality in
design, or maintaining effective communication among large numbers of project
members or subgroups, are commonly reported in other product development
efforts.
How to improve reusability of components or tools is a topic that surfaces in any engineering and manufacturing organization attempting to eliminate redundant work and build new products at least in part from "off-the-shelf" components. Even the lack of "development visibility," which SDC and NATO conference members cited as a problem due to software's supposedly intangible nature, appears in any design process that transpires in the minds of people and memory banks of computer-aided tools, rather than on a laboratory workbench or a shop floor.12
While it is distressing that problems which surfaced in the 1960s continue
to appear in the 1980s, research in universities, government laboratories, and
companies has helped firms deal with these difficulties, in a general attempt to
move beyond unpredictable craft-like methods and tools to where software development more closely resembles a science, engineering, or even manufacturing discipline. Numerous textbooks and journal articles describe the
tools and techniques of software engineering in far more detail than is possible
here.
The following section, however, summarizes some of the major concepts
and tools, and how they evolved, as background for later discussions of the
obstacles companies have encountered in introducing these approaches, both in
non-factory settings, and in the software factory efforts treated in Part Two.
"BEST-PRACTICE" SOLUTIONS FROM SOFTWARE ENGINEERING
In the late 1960s, IBM's development of the operating system for the 360
family of computers,
and other large-scale projects at other firms and
institutions, prompted the first serious considerations of applying engineering-type practices to software, such as seen in the 1968 NATO Science conference.
The evolution of the field now known as software engineering,
in the
interpretation of R. Goldberg, a consultant at the IBM Software Engineering
Institute, also followed a particular pattern that, in recent years, has led to
greater emphasis on what the Japanese have for years termed
"factory"
approaches.
In the early years of programming, in Goldberg's recollection, managers
focused on improving performance of the individual engineer or programmer,
while technological research focused on developing experimental techniques such
as structured programming.
There was a gradual shift during the 1970s, to the extent that experimental techniques were refined enough to become formal methodologies serviceable as company standards, such as step-wise development or structured analysis. At this time, recalls Goldberg, managers began to direct more of their attention to understanding better the processes involved in each step of the software life cycle, from basic design through maintenance. There has been yet another shift in the 1980s, as IBM and other companies have recognized the need to integrate tools and methods with improvements in software production environments. Accordingly, managers have shifted their concerns more toward process-management issues.13
Driven by the needs of individual managers and programmers, and by the
simple accumulation of experience and research at universities, companies, and
other institutions, software engineering thus evolved into a major discipline by
the 1980s,
covering tools, techniques,
and programming environments.
A
substantial and growing literature supports the field, with most research dealing
with measurements of software quality and productivity; program specification
and language development; automation of aspects of the development process;
and human factors in software development.14
Specific tools and techniques
have included structured programming, project management, software engineering
economics, requirements definition, new languages, modular programming, data
structures, quality and productivity metrics, team programming concepts and
tools, software maintenance, cost estimation, life-cycle control, configuration
management, large-program management, documentation, flow diagrams, testing,
reusability, and prototyping.15 Tool development, essentially the writing of
software programs that aid in the writing of other software, has focused on six
main areas: source program analysis and testing; software management, control,
and maintenance;
requirements/design
specification and analysis; program
construction and generation; software modeling and simulation; and software
support system/programming environments.16
"Structured" programming techniques for analysis and design have done
much to improve both program designs and the software development process.
These are generally thought to include the disciplining or systematization of
coding, design, program verification (testing of logical correctness), and project
organization, including the avoidance of "go to" statements in programs (which
make them difficult to follow and test); and the use of "top-down," modularized
design structures. Structured design and programming are important because they have facilitated programmer productivity through clearer, compartmentalized structures that make testing and maintenance easier; in some cases they have facilitated division of labor among high-level design, coding, and testing.
Some date the origin of structured programming to the work of D.V.
Schorre at UCLA in 1960.17
The earliest publication on the subject was a 1965 conference paper by E. Dijkstra, who suggested that, since programmers were humans and thus had limited abilities, they were better off following a mathematically structured "divide and conquer" approach to programming rather than unsystematic methods. In 1969, at the second NATO Science conference, Dijkstra argued in a paper titled "Structured Programming" that program logic "should be controlled by alternative, conditional, and repetitive clauses and procedure calls [IF-THEN-ELSE and DO-WHILE], rather than by statements transferring control to labelled points" [GO-TO statements]. To suggest what is now called top-down design, he also used the analogy of a program constructed as "a string of ordered pearls, in which a larger pearl describes the entire program in terms of concepts or capabilities implemented in lower-level pearls."18 Most programming at the time was done in a "bottom-up" fashion, where program units were written and then integrated into subsystems that were in turn integrated at higher and higher levels into the final system.19
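To make the contrast concrete, the following fragment (an illustrative sketch in C++, not an example drawn from Dijkstra's papers or the other sources cited here) expresses the same table search first with control transferred among labelled points, and then with only a conditional test and a repetitive clause:

    #include <cstdio>

    // Unstructured version: control jumps among labelled points, so the
    // flow of logic is hard to follow, test, and reason about.
    int find_unstructured(const int *table, int size, int key) {
        int i = 0;
    loop:
        if (i >= size) goto not_found;
        if (table[i] == key) goto found;
        i = i + 1;
        goto loop;
    found:
        return i;
    not_found:
        return -1;
    }

    // Structured version: the same logic expressed only with a repetitive
    // clause (the while loop) and a conditional clause (the final test).
    int find_structured(const int *table, int size, int key) {
        int i = 0;
        while (i < size && table[i] != key) {
            i = i + 1;
        }
        return (i < size) ? i : -1;
    }

    int main() {
        const int table[] = {4, 8, 15, 16, 23, 42};
        std::printf("%d %d\n",
                    find_unstructured(table, 6, 15),
                    find_structured(table, 6, 15));  // both print index 2
        return 0;
    }

The two functions compute the same result, but the second can be read from top to bottom as a single ordered sequence of steps.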
Structured programming was considered more an academic topic than a
practical concept until the 1972 publication of an article by Terry Baker in the
IBM Systems Journal.
In this same year, Baker published another article
describing the development of an on-line information system using structured
programming and top-down design and testing techniques.
A 1974 article in
IBM Systems Journal by Stevens, Myers, and Constantine, titled "Structured
Design," also described how to break down a program into smaller parts, or
what have come to be known as "modules."20 Another influential article written in the mid-1970s was "Software Engineering," published in 1976 by Dr. Barry Boehm, currently of TRW. Boehm
defined the field as "the practical application of scientific knowledge in the
design and construction of computer programs and the associated documentation required to develop, operate, and maintain them." Following the lead of the 1968 NATO conference, he also recommended analyzing and controlling the process by viewing it as a "life cycle" consisting of distinct phases: system requirements; software requirements; preliminary design; detailed design; code and debug; test and pre-operations; operations and maintenance. Boehm offered the further observation that the cost to fix bugs or errors in a software program was one hundred times greater in the operating stage than in earlier phases, due to program interactions. In addition, he noted that complete,
consistent, and unambiguous specifications were essential, otherwise top-down
design and modularization would be impossible (for lack of a clear "top" to the
program); testing would be impossible (for lack of a clear idea of what to test);
users would not be able to participate in designing the program; and managers
would lose control, because there would be no clear statement of what was
being produced.
Overall, as of 1976,
Boehm complained that software
requirements specifications were usually ambiguous and error ridden, based on
"an ad hoc manual blend of systems analysis principles and common sense.
(These are the good ones; the poor ones are based on ad hoc manual blends of
politics, preconceptions, and pure salesmanship.)"21
Boehm's article is especially useful because he attempted to summarize the
"state of the art" by contrasting current practice with the "frontier technology"
-- the most advanced practices for various aspects of software development
(Table 2.7).
For requirements
specifications, in contrast to the ad hoc
techniques normally used, Boehm thought the frontier consisted of problem
statement languages and analyzers or system design support tools, as well as
automatic programming research.
For software design, current practice was
mainly manual, bottom-up, and error-prone, whereas the frontier consisted of
top-down design techniques, modularization, and design representations such as
flow charts.
In programming, much of then-current practice consisted of unstructured code. The frontier technology was the development of versions of computer languages and new languages that facilitated structured code, such as Pascal. For software testing and reliability, 1976 practice consisted of enormous amounts of wasted effort due to the lack of advance testing plans. The frontier consisted of reliability models, analysis of software error data, and automated tools such as for static code analysis, test-case preparation, or test monitoring.
Table 2.7: BOEHM IN "SOFTWARE ENGINEERING" (1976)22

Function               Current Practice           "Frontier" Solutions

Requirements           Ambiguous, Errors,         Problem Statement Languages
Specifications         Ad-hoc                     and Analyzers; Design Support
                                                  Tools; Automatic Programming

Design                 Manual, Bottom-Up,         Top-Down, Modularization,
                       Error-Prone                Flow Charts and Other
                                                  Representations

Implementation         Unstructured Code          Structured Programming
(Programming)

Testing                Unplanned, Much            Reliability Models, Automated
                       Wasted Effort              Tools

Maintenance            Unsystematic, Costly       Structured Programming,
                                                  Automated Formatting and
                                                  Documentation Tools,
                                                  Modularization, Program
                                                  Libraries, Testing Tools

Project Management     Poor planning, control,    Management guidelines;
                       resource estimation,       Integrated development or
                       accountability, and        "factory" approaches combining
                       success criteria           design, process, and tool
                                                  standards

Boehm, as others before and after him, cited software maintenance (corrections, modifications, or updates) as a major area of concern, since it was enormously costly and increasing every year. He discussed three functions: (1) understanding the existing software, which implied a need for good documentation, traceability between requirements and code, and well-structured code; (2) modifying existing software, which implied the need for structures that were easy to expand, plus up-to-date documentation; and (3) revalidation of modified software, which implied the need for software structures that facilitated selective retesting. The state of industry practice in 1976 was for 70% of all software costs to be consumed by maintenance, based on a survey of several firms, yet there was little systematic study of how to improve efficiency for this activity. The frontier of the technology consisted of structured programming, automated formatting or documentation tools, modularization, program support libraries, and testing tools.
Boehm concluded that management,
rather than technology, was key to
improving software development: "There are more opportunities for improving
software productivity and quality in the area of management than anywhere
else."
He attributed this conviction to the nature of the problems he listed as
most commonly faced in program development:
poor planning, resulting in large
amounts of wasted or redundant effort; poor control; poor resource estimation;
poorly trained management personnel; poor accountability structure (too diffuse
delineation of responsibilities); inappropriate success criteria (for example,
minimizing development costs may result in programs that are hard to maintain);
procrastination in key areas of project development.
The frontier for software project management in 1976, in Boehm's view,
consisted of clear management
guidelines and integrated development
approaches, combining management objectives with design and process standards,
automated tools, reporting systems, budgeting procedures, documentation guides, and other elements. Although he noted that it was too early to tell if this and similar approaches would be successful, Boehm cited the "SDC Software Factory" as one example of an integrated approach, consisting of "an interface control component... which provides users access to various tools and data bases: a project planning and monitoring system, a software development data base and module management system, top-down development support system, a set of test tools, etc."23 In a later article, Boehm also listed conclusions from several other studies, using a model he developed for software cost-productivity estimating, which supported the factory model as a long-term approach to improving the process and environment for software development:
(1) Significant productivity gains require an integrated program of initiatives in several areas, including tools, methods, work environments, education, management, personal incentives, and reusability.

(2) An integrated software productivity program can produce productivity gains of factors of two in five years and factors of four in ten years, with proper planning and investment.

(3) Improving software productivity involves a long, sustained effort and commitment (italics added).
Almost a decade after Boehm's article on software engineering, a group
from the University of California at Berkeley, led by C.V. Ramamoorthy, offered
another interpretation of the "state of the art." In a 1984 article in Computer,
they concluded that considerable progress had been made, and identified several
new directions (Table 2.8).
The main advance in requirements specifications
seemed to be formal specification languages and systems to replace natural
languages, which are often ambiguous.25 These include systems such as Software Requirements Engineering Methodology and SADT (Structured Analysis and Design Technique), which simplify the development of specifications and
permit these to be analyzed for internal consistency and accuracy.
The specifications include "functional" (input-output behavior) as well as "nonfunctional" (any other system features) characteristics, and are usually written following one of two "models": control flow or data flow. It is useful to be able to represent the specifications in multiple ways, for example, in reports including text and graphs, which help potential users and programmers visualize the problem before the writing of detailed designs. The objective of "executable specifications" is to allow programmers to take an initial input-output structure and run it on a computer to simulate the performance of the system before it is completed.
The design process seemed particularly difficult to rationalize, however.
As suggested earlier, this involved breaking down the specifications into parts
that can later be coded.
One problem was that, to do this, programmers used a
variety of criteria -- such as functionality, data flow, or data structures. Then,
in partitioning the resulting "procedures"
into modules, they might take into
account whatever nonfunctional requirements the user has specified (such as
maximizing maintenance, or reliability, performance, reusability, memory, etc.).
The final step in this phase is to design the actual data structures and
algorithms, and transcribe them into text or graphic forms.
No current design
methodologies were totally satisfactory, according to Ramamoorthy,
in dealing
with complex control structures and real-time computer applications, or new
distributed processing requirements.
Nonetheless, specification languages and program generators, which automatically produced code from formal requirements documents, were promising areas of research and experimentation.
Table 2.8: RAMAMOORTHY ET AL. (1984)26

Function               New Directions

Requirement            Executable specifications; Multiple representation;
Specifications         Report generation

Design                 Knowledge-based automatic design (program generators)

Testing                Automatic test input generators and verifiers; Testing
                       at requirement and design levels

Maintenance            Configuration management systems; System upgrading

Quality Assurance      Alternate test strategies; Metrics for design and
                       quality control

Project Management     Cost estimation based on requirement complexity

Prototyping            Enhanced methods; More systematic executable
                       specifications

Reusability            Software libraries; Design methodologies; Program
                       generators
In testing, the key task is to select a number of test cases to run on the
program so that a large portion of it is "verified" as logically correct and
meeting user requirements.
This is a labor-intensive, expensive process,
although programmers have developed numerous tools, including some that
automatically produce test data, locate errors, and verify results.
These tools
perform operations such as static analysis (analysis of program structure),
dynamic analysis (analysis of a program while in operation), and inter-module
interface checks.
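As a simple illustration of what a test case amounts to at the module level (a hypothetical C++ fragment, not taken from the article under discussion), selected inputs are run against a module and the outputs checked against the results the specification requires:

    #include <cassert>
    #include <cstdio>

    // Module under test: converts a temperature from Celsius to Fahrenheit.
    double celsius_to_fahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }

    int main() {
        // Each test case pairs an input with the output required by the
        // specification; a failed assertion reveals an error in the module.
        assert(celsius_to_fahrenheit(0.0) == 32.0);
        assert(celsius_to_fahrenheit(100.0) == 212.0);
        assert(celsius_to_fahrenheit(-40.0) == -40.0);
        std::printf("all test cases passed\n");
        return 0;
    }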
As many authors have stated, maintenance problems appeared to be the
most costly.
According to Ramamoorthy, these stemmed primarily from four
sources: insufficient documentation; inconsistency between documents and code;
designs difficult to understand, modify, and test; and insufficient records of
past maintenance.
This article's authors viewed management enforcement of
more precise development methodologies,
planning for maintenance in the
development stage, and better documentation as important solutions, including
the use of configuration management databases to store programs and project
management information.
Other tools that analyzed source code and generated
information such as the control flow or data flow were also useful.
For quality assurance and project management,
various "metrics" or
measurement systems were under development, covering phase productivity, cost,
and quality.
Quality measures generally included factors like correctness,
modifiability, reliability (mean time between failures), testability, performance,
complexity, and maintainability.
For project management, there were various
estimation models derived from empirical data for different phases in the
development cycle, although these were not particularly accurate in the earlier stages.27
In general,
productivity, quality, and costs were difficult to measure
precisely in software and especially hazardous to compare across different
projects and firms.
In the area of productivity and costs, for example, a
program written in one month by a highly skilled and highly paid engineer
might implement a set of functions with, say, 1000 lines of source code.
The
same set of functions, using the same computer language, might take a less
experienced, lower-paid programmer 1500 lines of code, and 1.2 months.
The
nominal productivity of the second programmer is higher -- 1250 lines per man-month -- and costs per line of code would probably be much lower.
But the
first programmer's code would run faster on a computer and might have fewer
defects --
common measures of higher quality.
On the other hand, if the
defect numbers are similar, and speed is not a high priority, the longer, less
costly program might be preferable for the customer,
and the producer.
Defects, furthermore, are difficult to compare because definitions vary and different companies or projects within the same firm may not be using similar definitions.28
Data and procedural abstraction, as well as "object-oriented programming,"
were other concepts being adopted by software producers of all sizes, including
factory-type facilities in Japan, because of their usefulness in making software
more modular as well as easier to test, modify, reuse, and maintain. The basic idea of abstraction was quite simple: design modules hierarchically and so that they operate with generically defined data types or procedures.
Specific data
types should be hidden from higher-level modules, placed in a program only at
lower levels, and contained in "objects" with distinct names.
Although it was
possible to design programs in this way with a variety of computer languages,
new languages such as Ada, completed in 1980 and supported by the U.S.
Department of Defense, were devised specifically to facilitate data abstraction.29
For example, one might write a generic program for entering any type of
data for a series of objects; at lower levels, one might then define the objects
as, say, students in a particular class, and define the data to be entered as
grades for a particular term.
This type of program could be used for a variety
of data entry applications.
In contrast, one could also write a program
specifying initially that it is for entering grades for Mary, John, and Joan.
This might be shorter, simpler, and cheaper to write, but would not be useful
except for this one application.
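A small, hypothetical sketch in C++ (not drawn from the sources cited) of the contrast just described: the generic routine below works for any kind of object and any kind of data, and only at the point of use is it specialized to students and grades:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Generic data-entry routine: neither the kind of object nor the kind
    // of data to be entered is fixed here.
    template <typename Object, typename Datum>
    std::map<Object, Datum> enter_data(const std::vector<Object> &objects) {
        std::map<Object, Datum> records;
        for (const Object &obj : objects) {
            std::cout << "Enter value for " << obj << ": ";
            std::cin >> records[obj];
        }
        return records;
    }

    int main() {
        // Only at this lower level do the objects become students in a
        // particular class and the data become grades for a particular term.
        std::vector<std::string> students = {"Mary", "John", "Joan"};
        std::map<std::string, double> grades =
            enter_data<std::string, double>(students);
        for (const auto &entry : grades)
            std::cout << entry.first << ": " << entry.second << "\n";
        return 0;
    }

A version that named Mary, John, and Joan directly in the code would be shorter, but could not be reused for any other data-entry task.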
While abstraction in complex programs generally required considerable skill on the part of the programmer, object-oriented programming, supported by languages such as Smalltalk and C++, represented a further step toward simplification as well as facilitation of software modification, customization, and reusability. This approach required software developers to define "objects," which could represent numbers, text, subprograms, or anything desired, rather than abstract procedures or data types. The objects were isolatable modules in the sense that they included both instructions for manipulating data and the actual data; thus it was possible to change programs simply by redefining, adding, or subtracting objects, rather than rewriting complex procedures and redefining individual sets of data every time changes were made, as conventional languages required, including Ada.30 Even though most firms used languages that did not have specific features to support object-oriented programming, languages and tools based on this concept were becoming more available.31 It was also possible to incorporate at least some object-oriented programming concepts, as well as abstraction techniques, in many conventional programming applications.
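A minimal, hypothetical C++ illustration of the point (again, not taken from the sources cited): the object below bundles the grade data together with the instructions that manipulate them, so a change in how grades are handled is confined to this one object rather than spread across the procedures that use it:

    #include <iostream>
    #include <string>
    #include <vector>

    // An "object" in the sense described above: it contains both the actual
    // data (the recorded grades) and the instructions for manipulating them.
    class GradeBook {
    public:
        void record(const std::string &student, double grade) {
            students_.push_back(student);
            grades_.push_back(grade);
        }
        double average() const {
            if (grades_.empty()) return 0.0;
            double sum = 0.0;
            for (double g : grades_) sum += g;
            return sum / grades_.size();
        }
    private:
        std::vector<std::string> students_;  // data hidden inside the object
        std::vector<double> grades_;
    };

    int main() {
        GradeBook book;
        book.record("Mary", 92.0);
        book.record("John", 85.0);
        book.record("Joan", 88.0);
        std::cout << "class average: " << book.average() << "\n";
        return 0;
    }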
There were two other
issues that traditional models of software
management did not address easily.
One is the lack of feedback from customers
into the process before the actual coding stage, by which time making changes
was usually difficult and expensive, at least relative to making changes in
earlier phases.
Another is poor facilitation of recycling existing specifications,
designs, code, documents, or tools, to eliminate having to "reinvent the wheel"
on more than one occasion.
Rapid prototyping and reusability techniques both
deal with these problems.
Prototyping is essentially a feasibility study or model of a program.
Some
people argue prototypes are too expensive to build, while others claim they
provide much better views of the program, from the users point of view, than
conventional requirements specifications.
In general, the modeling involves
outlining the major functions or interfaces in a computer language, which can
then be run on a computer, and added to or modified as needed, before the
programmer builds the complete system.
It is also possible to combine the idea
of prototyping with reuse of code and program generators, which take existing code and produce a rough model of a program from a specification document or menu.
As in any mass-production engineering and production of final goods from
standardized components,
"reusability"
offers a great deal of potential for
improving both productivity and quality in software development.
Following Boehm's cost breakdown cited earlier, reusing requirement and design specifications (as opposed to actual code) had the potential of saving a great deal of effort, since they accounted for about 35% of development costs prior to maintenance, compared to about 20% for coding. There could also be large savings in testing, nearly 50% of costs prior to maintenance, if the reused code was thoroughly tested before recycling. In terms of overall productivity, a recent article even argued that, since research on software tools and
methodologies has brought only limited improvements for each of the various
phases in the software life cycle, reusability was perhaps the only way to
achieve major advances:
[S]oftware development tools and methodologies have not continued to
dramatically improve our ability to develop software. Although in the
field of computer hardware fabrication, new methods continue to raise
productivity, in software we have not experienced even an order of
magnitude improvement over the past decade... [O]ne concept which we believe has the potential for increasing software productivity by an order of magnitude or more...has come to be known by the phrase "reusable software."32
But while reuse of standardized components in different products is
elementary logic to managers
of engineering and manufacturing in most
industries, it is a topic of major debate in software.
One disadvantage is that
programs containing reused code may be longer, or not as good functionally,
compared to software optimized by a highly skilled programmer for a specific
job. This is, however, an issue of price versus product performance faced in all industries. The balance a firm aims at should depend on its overall strategy and organizational capabilities. Not all companies, for example, need to produce the equivalent of Rolls Royce automobiles or supercomputers. More standardized and less expensive products will probably suffice in many applications.
It can be argued further that, if software products are best produced in
"lots of one," since each program (excluding mass-replicated software packages)
should be unique, then reusability would be a bad process strategy compared to
job-shop production.
But several studies at individual firms and of a variety of
programs indicate that somewhere between 40% and 85% of all code written in a
given year consists of similar functions that "could be standardized into a fairly
small set" of reusable modules.33 (The surveys discussed in Appendices A and B
confirmed these studies of high reuse rates in some firms, indicating that
project managers, especially in Japan, reused large amounts of code as well as
designs, for all types of products.)
Reusability is also possible for different
portions of a software product, in addition to code and designs, such as data,
system architectures, and documentation.34 To facilitate reusability,
standardization of module interfaces was particularly important, so that
components could fit together. Other necessary tools were libraries to store
programs and documents, and reusable-parts coding and retrieval systems.
Reusable data refer to the standardization of data interchange formats, to
facilitate both reuse of programs and sharing of data.
There is no universally
accepted way of interchanging data, so this in large part depends on individual
and intra-company efforts.
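A minimal sketch of the point, assuming a present-day language and invented field names (none of this depicts an actual interchange standard): once producing and consuming programs agree on a single record layout, both the data and the programs that handle it can be shared.

    # Hypothetical illustration of a standardized data interchange format:
    # two otherwise unrelated programs can share records only because they
    # agree in advance on the field names, their order, and their meaning.
    import csv
    import io

    FIELDS = ["part_number", "description", "unit_cost"]  # agreed-upon layout

    def write_parts(records):
        # Producer: emits records in the shared format.
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(records)
        return buffer.getvalue()

    def read_parts(text):
        # Consumer: any program that knows FIELDS can reuse this data.
        return list(csv.DictReader(io.StringIO(text)))

    if __name__ == "__main__":
        shared = write_parts([{"part_number": "A-100",
                               "description": "bearing",
                               "unit_cost": "2.50"}])
        print(read_parts(shared))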
Reusable architectures refer to research on defining
a set of generic functions that could be assembled to create any type of
program.
Most of this work is focused on well-defined applications software
and involves designing all data descriptions, constants, and input/output controls
to be external to the modules intended for reuse.
Estimates of the number of
generic functions needed to build all potential applications from standardized
parts range from less than 100 to over 1500.
Reusable designs involve developing standard patterns to serve as
references or as the actual basis for new programs.
For example, a 1983 survey
identified standard references for construction techniques for only a few types
of programs:
assemblers/compilers, databases, operating systems, sorts, and
graphics processing.
Many other common types of applications software lacked such
detailed reference literature,
which could significantly facilitate the design
process.
Reusable programs
or systems
refer to the idea of utilizing more
standardized programs or packaged software in place of customized programs.
This appears to be an increasingly common practice, especially with personal
computers, but is hampered by factors such as machine and data incompatibility,
and lack of standardized data interchange and function formats.
A 1983 study
of 11 Fortune 500 companies indicated that about 70% of functions in the areas
of finance, personnel, and sales reporting were supported by standardized
programs, although most software supporting manufacturing and engineering
functions was not from standardized systems.
Reusable modules refer to the recycling of software parts rather than
whole programs.
The simplest is a common subroutine or function, such as
square root extractions, date conversions, or statistical routines, that could be
linked into a program without having to be coded more than once.
This
practice has been common since the 1950s, and high-level languages, as well as
operating systems such as Unix, tend to facilitate this reuse of subroutines.
By
the 1980s, there were several collections of basic functions available in the
public domain, such as the 350 reusable functions in the Unix programmer's
workbench, or the standardized COBOL routines available through Raytheon's
ReadyCode Service, which allowed users generally to write semi-customized
programs, with only about 45% new code.
For internal company use, or for
internal production operations for outside customers, the Bank of Montreal,
Hartford Insurance, Toshiba, Hitachi, Hitachi Software Engineering, and many
other firms around the world relied on extensive libraries of reusable
modules.35
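A minimal sketch of such a common subroutine, in a modern language and with hypothetical function names (not taken from any library cited here): the date-conversion logic is written once, stored in a shared library, and simply imported by each program that needs it.

    # Hypothetical sketch of a reusable "common subroutine" kept in a shared
    # library: the date-conversion logic is coded once and imported by any
    # program that needs it, rather than rewritten project by project.
    import datetime

    def julian_to_date(year, day_of_year):
        # Convert a year and day-of-year to a calendar date.
        return datetime.date(year, 1, 1) + datetime.timedelta(days=day_of_year - 1)

    def date_to_julian(d):
        # Convert a calendar date back to (year, day-of-year).
        return d.year, (d - datetime.date(d.year, 1, 1)).days + 1

    if __name__ == "__main__":
        # Payroll, billing, or reporting programs could all reuse these two
        # routines instead of each coding its own conversion.
        print(julian_to_date(1988, 245))                   # 1988-09-01
        print(date_to_julian(datetime.date(1988, 9, 1)))   # (1988, 245)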
Another development of enormous potential in the software field was
program generators, which helped to automate the production of new programs.
At the time, these were mainly used for fairly standardized applications, such as
word processing, payroll accounting, or inventory control.
Some of the more
sophisticated tools automated the translation of designs into code, and were
commonly referred to as computer-aided software engineering (CASE)
tools.
These were integral tools in the software factories at Hitachi, NEC, and Fujitsu
(discussed in Part Two), as well as at other producers in Japan and the U.S.
Recent reports on these tools have claimed dramatic savings in total costs and
time required for development.36
There were various types of program generators. A simple type translated
flow-chart designs into code; all the Japanese factories introduced these types
of tools several years ago.
Another type primarily aided the software developer
in locating reusable modules from a reuse library.
The CASE tools attempted to
generate new code automatically from standardized design formats or menus.
Some of the program generators in use, such as Hitachi's EAGLE system, were
capable of all three program-generation functions.
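A toy sketch of the last type of generator, assuming a modern language and an invented specification format (it does not depict EAGLE or any other tool named here): a standardized, menu-like specification is translated into skeleton code that a programmer then completes.

    # Toy program generator (hypothetical; it does not represent EAGLE or any
    # tool named in the text): a menu-style specification is translated into
    # skeleton code, which a programmer would then complete or extend.

    SPEC = {
        "program": "inventory_report",
        "inputs": ["warehouse_id", "as_of_date"],
        "steps": ["read_stock_file", "summarize_by_item", "print_report"],
    }

    def generate(spec):
        lines = ["def %s(%s):" % (spec["program"], ", ".join(spec["inputs"]))]
        for step in spec["steps"]:
            # Each step in the specification becomes a call to a routine,
            # ideally drawn from a library of reusable modules.
            lines.append("    %s()  # fill in or reuse a library module" % step)
        return "\n".join(lines)

    if __name__ == "__main__":
        print(generate(SPEC))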
Notable among U.S. vendors selling design-automation tools were Index
Technology, Nastec, KnowledgeWare, Cadre, Texas Instruments, Cortex, and
Tarkenton Software.
A Texas Instruments product was one of the first to tackle the entire systems
development life cycle, including business objective analysis and data base
generation. Cortex's system, called the Application Factory, ran on Digital
Equipment Corporation's VAX minicomputers and automatically generated code
from specifications set by computer professionals.37
In addition to CASE products available for sale are those that firms
have made primarily for in-house use. In the U.S., for example, TRW designed
ARGUS as an advanced software engineering workbench and integrated approach
to improve quality while reducing the cost of software development.38 AIDES,
the Automated Interactive Design and Evaluation System developed and used at
the Hughes Aircraft Company, automates many of the manual procedures
accompanying a structured design methodology developed at IBM.39
The next
step in automation many firms are taking is to link CASE tools with artificial
intelligence or expert systems technology.
This was the focus of efforts at
Hitachi, NEC, and Fujitsu, as well as at U.S. firms such as KnowledgeWare, Inc.
and Tarkenton
Software,
where companies were either developing or using
workbenches and computer-aided tools for systems analysts that automated steps
from initial design through the generation of computer code.
"RATIONALIZATION" OBSTACLES AT THE COMPANY LEVEL
Despite progress in tool and technique development, at the company level,
full-scale process standardization and "rationalization" efforts, such as those
achieved decades ago in mass-production oriented industries, were still difficult
to achieve.
As discussed earlier, some of the problems software developers face
seem to defy comprehensive solutions, probably due to the newness of the
technology and its markets.
Yet practices common in the industry also seemed
to create obstacles to moving beyond "craft" approaches.
For example, to deal with the problems of managing large, complex
projects, managers have come to prefer projects with as few, and as excellent,
people as possible. This approach may indeed result in superb, innovative
products, but, from a process viewpoint, there are numerous difficulties.
If
managers must rely on a few highly-skilled people, they may have to turn down
work if the supply of these people is inadequate;
costs may also become
enormous as software systems grow in length; customers who require rapid
delivery may also defect to other suppliers.
Furthermore,
if employees do not
design and document products in a consistent manner, a company might suffer
huge maintenance costs after delivery, especially if the original developers move
on to other work.
This is critical in software, where maintenance costs can be
very high.
Part of the problem is the often-cited tendency of highly skilled software
developers --
craftsmen -- to prefer their own design methods and tools, to
dislike filling out detailed reports, and to build ever more sophisticated and
"innovative" systems, rather than reusing code and designs to make products
similar to previous systems. One of the more insightful observations made at
the 1968 NATO conference was on this very point, i.e., that software
developers, by preferring to write new systems, were constantly immersing
themselves in "research," thus reducing their ability to capture the benefits of
previous experience, investments in designs, code, and tools, or economies of
scale and scope in general:
Instead of trying to write a system which is just like last
year's, only better implemented, one invariably tries to write
an entirely new and more sophisticated system. Therefore you
are in fact continually embarking on research, yet your
salesmen disguise this to the customer as being just a
production job... This practice leads to slipped schedules,
extensive rewriting, much lost effort, large numbers of bugs,
and an inflexible and unwieldy product.
It is unlikely that
such a product can ever be brought to a satisfactory state of
reliability or that it can be maintained and modified... [T]his
mixing of research, development, and production is a root cause
of the 'software gap'...40
This scenario prevailed at many firms even in the 1980s.
One example is
Bell Communications Research (Bellcore), which develops data processing and
communications software for Bell telephone operating companies.
According to
Tom Newman, a manager responsible for developing one of the largest software
projects undertaken in the company, a tradition dating back to its origins as part
of AT&T's Bell Laboratories -- of hiring the best people managers could find --
has led to the situation where Bellcore had "too many chiefs and not enough
Indians."
In other words, too many software developers who wanted to work
only on interesting projects, and with extensive independence, made it difficult
to collect productivity and quality data systematically, analyze where their costs
were coming from, and reuse code across different projects by focusing less on
innovation and more on building simple, practical, and reliable systems.41
Another example is TRW, where a 1977 study to define a set of principles
to "guarantee"
a successful outcome to any software development effort
recommended that managers use better and fewer people in projects, as well as
practices such as management through a sequential life-cycle plan, performance
of continuous validations, maintenance of disciplined product control, use of
enhanced top-down structured programming, clear accountability for results, and
sustained commitments to improving the process continually.42 Yet TRW
managers, led by Barry Boehm, concluded that reliance on outstanding project
personnel, or on individual techniques such as top-down structured programming,
was itself insufficient to ensure a successful software development effort.
To achieve significant improvements in project management it seemed equally
important to maintain a commitment to continued learning and application of
software engineering
principles over the entire life-cycles of successive
projects.43
Another potential disadvantage of relying on highly-skilled individuals arises if
this preference leads to tool and method development on an ad hoc individual
or project basis, to support only the individual programmer or the small group,
rather than a larger part of the organization.
One might argue that, in a large
software project -- and all projects, even for personal computers, are becoming
larger with each new generation of computers -- organizational communication
and teamwork are equally or more important than individual performance.
Companies might thus benefit enormously from using corporate resources to
develop and train people in general-use tools and methods, to improve software
operations at the facility, division, or even firm level, where the combined
efforts and resources of an entire organization might be mobilized and focused
on specific goals, such as reusability or product reliability.
Such goals, even when pursued, have sometimes proved less than successful,
due to the continued variety of software products, applications, and object
machines (hardware), even within a single firm.
The existence of such variety
suggests diverse market needs, rapid technological change, as well as cultural
issues, such as the preference of engineers for particular machines or programs.
But while every new industry encounters difficulty in balancing product
innovation with process rationalization,
there is some evidence that even
software firms have found ways to rationalize by tailoring processes and
organizations to specific product-market segments, such as high-end custom
versus low-end standard goods, as well as by focusing on the development of
versatile tools, techniques, and workers that facilitate both efficiency and the
production of innovative or differentiated products.
In fact, the survey reported in Appendix A indicated that most managers
producing software for large computers emphasized control and standardization
over at least some aspects of the process and some tools.
Many managers, especially in Japan, also emphasized reuse of code.
The following examples
from several U.S. companies developing large amounts of software illustrate
some of the issues companies with diverse programming needs have faced in
attempting corporate-wide efforts at process improvement and standardization.
The cases in Part II continue this discussion in more detail and in the specific
context of attempts to move software development operations from job-shop or
craft stages to factory-type design and production systems.
International Business Machines
International Business Machines (IBM), with software and data-processing
revenues of over $5.5 billion in 1987, was by far the largest software producer
in the world.
Yet the company has continued to struggle with process issues,
due to great variety in the software needs of its customers, and in the
architectures of its hardware systems. Since software comprised only a few
percent of IBM's revenues, historically, the company may have devoted more
attention to managing its hardware and marketing operations than to software
production technology, although in the mid-1980s this was changing.
A major
step was the integration of applications development into a separate division in
1987. Even reusability, not usually emphasized at IBM, was becoming a
strategic process and product issue, at least in the company's German
laboratories.44 Yet continually vexing to IBM, and the subject of its Systems
Application Architecture initiative, were several problems resulting from the
diversity and structure of its product lines and operations.
First was the broad scope of programming operations within IBM
worldwide, for a variety of large, medium-size, and small computers.45 These
operations included nearly all types of business and technical applications
software, as well as systems software, for private and government customers,
many with their own standards and requirements.
Adopting one software
development process for so many different machines and customers clearly was
impossible for IBM and undesirable for its customers.
Second,
IBM has been limited in its ability to standardize software
development operations,
and reuse code across different machines,
by the
evolution, since the 1960s, of incompatible hardware lines, even within the
group of business computers.46
At one time this was an effective marketing
strategy, to meet different market segments with different products.
But it
created opportunities for DEC and other firms that offered compatible small and
larger machines.
IBM's approach also forced its personnel (and other software
vendors and users) to develop nearly identical applications and systems products
more than once, for different hardware lines.
Third,
due to the sheer size of IBM's
hardware development and
programming operations, and the desire of management to tap expertise around
the company and focus groups at different sites on specific missions, IBM has
distributed software development to several hardware sites.
This dispersion,
while unavoidable to some extent and beneficial in some respects, has made it
difficult to standardize practices and tools, independent of technical constraints
and even within a single business area or related areas, such as mainframe
software.
The Poughkeepsie, New York, site had over a thousand programmers
working on the IBM MVS operating system for IBM's largest mainframes. Nearby
Endicott developed intermediate mainframes and had 500 or so programmers
working on VM operating systems software.
The Raleigh, North Carolina site
had over a thousand programmers working on communications systems software.
Other sites in Rochester, Minnesota, Toronto, Canada, and in Germany developed
mid-range office computer systems hardware and software, while the San Jose
and Santa Teresa Laboratories in California developed database software.
Links
among these facilities, such as the sharing of tools, methods, and code, were
minimal.47
Fourth, beginning with Brooks, IBM managers have also found that too
many programmers in one site were difficult to coordinate, and that too much
emphasis on factory-type structures, such as a functional organization, made it
difficult to serve various customer groups effectively. In the late 1970s, IBM
had experimented with large functional groups in its Santa Teresa facility,
moving toward a large-scale factory-type organization separating design from
implementation, but then shifted to a less rigid structure based on products,
retaining specialized personnel within product groups.
According to Glenn Bacon, an IBM executive who presided over this shift
at Santa Teresa, IBM adopted the new structure for two main reasons.
One was that "the product manager organization was likely to be much more
responsive in making major strategy changes that we intended.
Second, the
separation between design and implementation impeded the kind of teamwork
and customer orientation that the marketplace required at that point."
Nonetheless, functional organization had brought noticeable improvements in
quality and productivity, and IBM continued to rely on this concept and
economies of scale through maintaining relatively large product-oriented groups,
as Bacon recalls:
The functional separation had been in place several years and had
made significant accomplishments in establishing a programming
process discipline and architecture. These were measurable as
consistent gains in both quality and productivity. These processes
and measurements were not changed when we went to a product
manager organization. Further, the product management groups were
still large enough to maintain specialists in all aspects of design and
production, thus I saw no loss in economies of scale. We also kept a
group dedicated to exploring process improvements.48
General Telephone and Electronics
General Telephone and Electronics (GTE) provides yet another case of a firm
encountering problems when attempting to introduce corporate standards and
general-use tools to diverse software-development operations.49
In its
Corporate Software Development Methodology, published in 1980, GTE brought
together ideas embodied in existing methodologies used in several GTE divisions.
According to William G. Griffin, director of the Computer Science Laboratory at
GTE Laboratories, Inc., the methodology consisted of several general objectives
that represented good practice and met little opposition within the company:
1. A specified software architecture for all systems, represented as a
hierarchy of logical components, each level of the hierarchy being a
decomposition of the preceding higher level
2. A detailed description of the development process based on the
software life-cycle model
3. A description of the necessary aspects of configuration management
4. A list of the necessary documentation for each phase of the
development process
5. A glossary of terms and prescribed structure chart symbols
Development of this manual was one of several steps GTE took following
the formation of a Software Steering Committee, whose tasks included studying
software engineering and management issues, examining the various software
activities within the corporation, and promoting the sharing of experiences and
software engineering tools.
The greatest benefit Griffin cited from this
centralized approach was the ability of software engineers and managers to
communicate through the use of common terms.
But another attempt to
standardize not just terminology but practices across the firm, called "STEP"
(Structured Techniques of Engineering Projects), met with much less success. A
major part of the reason was that STEP's objectives, listed below, were bolder:
1. Enforce a set of standards throughout program development
2. Provide an integrated tool set built around a portable data management
system for configuration management
3. Ensure that documentation was current
4. Provide relatively easy transportation across operating systems
5. Provide management reports on project status.
Griffin explained STEP's failure to gain widespread acceptance within GTE
as the result of several factors:
the decentralized nature of GTE's software
development activities and groups; the preference of software engineers for
familiar tools;
performance problems
users encountered with
STEP,
in
comparison to tools integrated with the data management services of a specific
operating system; STEP's narrow interpretation of the software development
methodology; the slow transfer of technology from the research phase to its use
in systems development; and the different perceptions among GTE business units
as to what their real economic needs were.
GTE managers continued to debate whether they should have a centralized
group to develop software tools and define methods, although, as Boehm would
recommend,
the company was apparently committed
to heavy,
long-term
investment in R&D to improve software quality and development productivity.
While Griffin expected high-performance personal work stations to provide the
bulk of GTE's future improvements in software development, he also believed
decentralized or distributed environments would greatly complicate configuration
management for large-scale product development.
The four dominant software engineering themes in GTE laboratories were
attempts to deal with these problems, recognizing the different needs of
different groups: techniques for specification accuracy; technologies for
integrated programming environments; promotion of reusable software
components; and the application of knowledge-based systems to the software
development process.
Digital Equipment Corporation
Digital Equipment Corporation (DEC),
founded in 1957, was the second
largest U.S. computer producer in 1986, with sales of nearly $7 billion, including
nearly $600 million in software revenues.
DEC divided software operations into
basic software and applications; the latter was primarily customized programs
written by the Software Systems Group.50 Because the applications group was
widely dispersed geographically at regional sales offices, with many programs
written at customer sites, historically there was "very little consistency between
individual groups."
DEC centralized most systems software development in a
large facility in Spitbrook, New Hampshire, and connected some dispersed groups
to Spitbrook through an on-line network.
As at many other companies,
including IBM and NEC, DEC organized its systems software groups by product,
with each product linked to specific hardware systems such as the VAX and
PDP-11 minicomputers, or personal computers.51
Anne Smith Duncan and Thomas J. Harris, managers in DEC's Commercial
Languages and Tools Group (CLT), offered in a 1988 in-house report a
remarkably candid account of software development practices at DEC, both
before and after efforts to introduce far more systematic management methods.
The picture they paint of DEC before the mid-1980s was of a typical
project-centered or job-shop structure, relying on highly skilled people with
great amounts of autonomy.
Members of each software project largely
determined their own standards and conventions.
The company offered no
centrally supported tools,
so tool development depended on uncoordinated
efforts at the project level.
Few activities or operations benefited significantly
from automation.
There was, for example, no system for collecting information
on bugs and analyzing this systematically on computer-based tools.
In addition,
projects often invested hundreds of man-hours writing similar code, with no
mutual knowledge or sharing.
As a result of these "craft-like" practices, each
group tended to view its problems, and its solutions, as unique.
This culture
left the company with no organizational capability to identify and transfer
good practices and technologies, redeploy existing investments in designs, code
or tools, or even compare current levels of performance with the past, as
Duncan and Harris recount:
The Digital engineering culture allows each software project team
substantial freedom to determine its own conventions, standards, and
infrastructure. In this culture, moving a successful "process" from
one completed project to a new one depended on the people who
moved between them.
In the 1970s and 1980s few supported tools
were available, and tool development was done at the project level, if
at all. Some processes were automated, most were not. Regression
testing (regression tests reveal whether something that previously
worked still does) was done by hand, bug lists were compiled on
blackboards, and debugging major integrations at base levels was
difficult and time consuming. The project members paid minimal
attention to tracing how and when things happened, and they
documented this activity on paper, if at all.
Another aspect of this culture was the sense that each project team
had to write all the code needed for that project. This attitude
meant that code to do common routines was duplicated from project
to project. Each team believed that its problem was unique, that it
could not share code with any other team. The belief was pervasive
that each problem was different and that each project team had
found the only appropriate techniques.52
How this "culture" contrasts with more "rationalized" approaches can best
be seen through comparisons of specific functions and practices at DEC with
those at organizations emphasizing factory structures and objectives.
The
factory examples are drawn from the cases in Part Two, especially Toshiba.
Product-Process Strategy: The software factories in Japan appear to have
distinct strategies underlying their "factory-like" organizational and
technological infrastructures, centered on process efficiency rather than product
innovation. At Toshiba, the main objectives were productivity improvement and
cost control, achieved through extensive reuse of code and tool support.
At Hitachi, NEC, and Fujitsu, the focus was on process and quality control,
achieved through extensive data collection and analysis, tool support, worker
training, program testing, and other policies or procedures.
At DEC, in contrast, the central concern underlying the organization of
software development by largely independent groups appeared to be the product
technology. DEC claimed to have a corporate strategy of "developing innovative
software products that demonstrate the capabilities of Digital hardware."
Another objective was to produce compatible software for its entire range of
machines, to support a highly successful "networking" capability. This required
coordination at the top, among groups developing different systems and
applications programs. But how people wrote programs was not the main
concern of management; what programs did, however, was critical.53
Tool and Process Standardization: In the software factories, there were
standardized sets of tools and processing routines for each phase of
development,
as well as functional divisions or sharing of managerial
responsibility.
Use of these tools and procedures was not optional, but was
essential to maintaining a consistent factory mode of production. Toshiba called
its integrated process and tool set the Software Workbench.
At Hitachi, the
factory revolved around tools and methodologies such as the Computer-Aided
Software Development System
(CASD),
Computer-Aided
Production Control
System (CAPS), Hitachi Phased Approach for High Productive Computer System
Engineering (HIPACE), and Effective Approach to Achieving High Level Software
Productivity (EAGLE).
NEC relied on Standardized Technology and Engineering
for Programming Support (STEPS), Hopeful Engineering for Advanced Reliability
Engineering
(HEART),
Software Engineering
Architecture
I (SEA-I),
and
Software Development and Maintenance System (SDMS).
DEC, prior to 1985, lacked a centralized support system and a set of
corporate tools available to all software engineers.54
There was an electronic
mail system through which employees could circulate DEC's library of languages
and programs to employees throughout the world.
The two DEC operating
systems, VMS for the VAX machines and RSX for the PDP-11s, also relied on
uniform user interfaces, which required some standardization in program design.
Other tools were available to all who wanted to use them.
In addition, the
Code Management System (CMS) served as a code library and helped in project
management by storing and sorting code as it was developed,
while the
Datatrieve tool served as a database management system for project data.
In
general, however, individual product groups developed their own tools, such as
program generators and test generators, and were responsible for their
updating.55
Individual product managers also had ultimate responsibility for each phase
of development, and there was no staff group that supervised the design or
production process.
Only by imposing controls on what final products were
supposed to look like did DEC management exert influence -- indirectly -- on
the procedures followed in the process of development.
Most important was a
set of corporate guidelines for software development outlined in a manual,
Phase Review Process (Table 2.9).
This listed each phase for review, based on
the life cycle model, and specific "milestones" in each phase a product had to
pass through before it could be moved on to the next phase.
The manual also
required documents attesting to completed work and their review at each phase.
Though DEC products may have met their functional objectives, company
managers and customers complained of budget or schedule overruns, maintenance
costs, and reliability problems.56 This suggests that DEC's phase review process
fell short of a disciplined, integrated factory-like design and production process.
One reason may be that the phase review process provided only guidelines and
checklists, to make sure that all related groups had input into the product
while it was being designed; it did not require any particular procedures to be
followed in development. Nor was there any central data base of production-
management data.
Second, while there was an independent Software Quality
Management group which tested completed products, and "knowledge of its
testing procedures does influence the early stages of product development," it
was not involved in monitoring the development process.
Third, without formal
monitoring, the phase reviews were "much looser" in practice than the controls
appeared to be at Toshiba,
Hitachi,
NEC, or Fujitsu factories.
Product
managers were responsible for monitoring themselves; they retained "the final
approval authority for moving on to a new phase."
Although these decisions
were often negotiated, only at Phase 0 did the product managers hold open
meetings,
which were
intended to gain agreement on the functional
specifications among other parties interested in the software product.57
Table 2.9: DEC PHASE REVIEW PROCESS58
LIFE CYCLE MANAGEMENT
Phase   Title
0       Strategy and Requirements
1       Planning
2       Design and Implementation
3       Qualification
4A      Engineering, Manufacturing, Transition
4B      Manufacturing Volume Production
5       Retirement
REQUIRED MILESTONES DOCUMENTATION
Phase 0
1. Business Plan Draft
2. Market Requirements
3. Product Requirements
4. Alternatives/Feasibility
5. Manufacturing Impact
Phase 1
1. Final Business Plan
2. Functional Specifications
3. Project Plan
4. Manufacturing Plan Draft
5. Customer Services Plan
Phase 2
1. Revised Business Plan
2. Marketing/Sales Plan
3. Detailed Design/Code Freeze
4. Verification Test Plans
5. Manufacturing Plan
6. Customer Services Plan
7. Final Prototype
8. Prototype Test Results
Phase 3
1. Final Business Plan
2. Product Announcement Criteria
3. Verification Tests Results
4. First Shippable Product Available
5. FCS Criteria Met
Phase 4
1. Post Partum Review
2. Manufacturing Process
Certification
3. Market Performance Review
Phase 5
1. Retirement Business Plan
2. Production Stopped
3. Marketing Stopped
4. Support Services Stopped
Division of Labor: As with hardware products, there was also some functional
separation of tasks such as requirements analysis, design, implementation,
production management, quality assurance, tool development, testing, library
maintenance,
and even management of reusable software parts libraries.
Specific structures varied at different firms. SDC, Toshiba, Hitachi-Omori, and
Fujitsu-Kamata, for example, relied on a matrix structure, where system
engineers served as product managers,
and then essentially handed off
requirement specifications to the factory to be produced, tested, and maintained.
Management of the process in this way allowed for specialization of workers
and tools, the movement of personnel to different areas as needed, and a focus
on maximizing productivity or quality once the requirements were passed on
from the initial development phase.
For systems software,
however, there
generally was less separation of product design from construction, except to the
extent that the more experienced personnel usually did the higher level design
work and the less experienced personnel the more routine coding operations.
At DEC,
not only were there few staff groups, there was no formal
separation of design from coding, despite some divisions of tasks according to
ability.
Software developers did their own design and coding:
"the corporate
culture is one of entrepreneurial independence and therefore encourages each
software employee to 'take ownership' (a favorite DEC term) of his piece of the
project... The prevalent attitude at DEC is that design and coding are two parts
of a single process; hence, the concept of improving "manufacturing"
productivity is absent."59
Quality Control: Japanese software factories all embraced extensive quality
control systems and treated the ability to guarantee product reliability --
minimal defects, within specified parameters -- as an essential component of
their marketing and production strategies.
Factory procedures called for the
monitoring of quality characteristics at each phase of the development process,
using checklists as well as testing of code.
Those responsible for the
reviews prior to final testing were usually the departments responsible for the
particular phase of production, although independent staff departments for
quality assurance or inspection conducted final tests and reported to the factory
managers, not to project managers.
Toshiba, NEC, and Fujitsu also organized
workers into quality circles to discuss quality improvement measures, as well as
other issues.
With the exception of quality circles, practices at pre-1985 DEC were
similar on the surface, though less standardized. DEC had an independent
Software Quality Management group (SQM) reporting to the vice-president in
charge of software engineering; this was responsible for final testing of
products as well as for dealing with customer claims.
Product managers were
responsible for testing during the development phases.
A major difference from
the factory procedures was that DEC managers appeared to vary considerably in
their testing strategies.
For example, some tested modules by functionality,
others wrote executable specifications or prototypes to test whole programs
early in the requirements definition stage, some used code reviews by outside
parties, and some did not.
SQM tests were "rigorous," and this provided a
"strong incentive to avoid any risk of errors..."60
But, as Duncan and Harris
noted, the process was not rigorous enough to meet the rising complexity
of programs, and the demands of customers for greater reliability.
Cost and Productivity Management:
Japanese software factories all carefully
measured programmer and machine productivity, development costs, and other
project-management elements, such as schedule lateness. Companies developed
"standard times" for different phases -- such as system design, coding, and
testing -- and used these to estimate as well as monitor a project's manpower
assignments, budgets, and schedule milestones.
In the case of Hitachi, this data
base of standard times for different types of programs dated back to the mid-1960s.
Those at NEC, Toshiba, and Fujitsu dated back to the mid- or late
1970s.
As can be seen most clearly at Toshiba, the purpose of measuring
performance so carefully was to control costs and identify ways to improve
productivity. Toshiba managers determined, first, their profit goals, and then
attempted to build the product agreed to with the customer by a set date.
To meet the cost and schedule goals, Toshiba project managers had to know
precisely the productivity potential of their workers; to control costs and meet
profit targets, they figured into the budget and schedule the reuse of large
amounts of code, documentation, test programs, and other factory "assets."
They then required programmers to produce more or less at standard rates, and
to reuse code and utilize all the factory tools that facilitated this.
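A simplified sketch of that estimating logic, with invented standard rates and reuse figures rather than Toshiba's actual data: planned reuse reduces the volume of new code, and standard productivity rates convert what remains into a manpower estimate for budgeting and scheduling.

    # Simplified sketch of estimating with "standard times" and planned reuse.
    # All rates and percentages are invented for illustration; they are not
    # Toshiba's actual figures.

    def estimate_man_months(total_loc, reuse_fraction, standard_rates):
        # Only new code must be written; reused code is treated as already paid for.
        new_loc = total_loc * (1 - reuse_fraction)
        effort = {}
        for phase, loc_per_man_month in standard_rates.items():
            effort[phase] = new_loc / loc_per_man_month
        return effort, sum(effort.values())

    if __name__ == "__main__":
        # Hypothetical standard rates (lines of code per man-month, by phase).
        rates = {"design": 4000, "coding": 2500, "testing": 3000}
        by_phase, total = estimate_man_months(100000, 0.5, rates)
        print(by_phase)          # effort by phase, in man-months
        print(round(total, 1))   # total man-months for budgeting and scheduling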
DEC operated with far less rigid procedures for managing productivity and
costs, although there still appeared to be considerable pressure on managers to
meet budget and schedule objectives.
After the approval of Phase 0, project
managers submitted budget requests to their supervisors, based on individual
estimates of how many man-years would be required to develop the system.
Project managers were then responsible for keeping track of costs and the
schedule after final approval of the budgets,
but there was no formal
measurement of productivity rates, nor of reuse rates.
Introducing products to
market quickly was highly valued in DEC, and managers clearly felt pressure to
develop projects on time.
But compensation for project managers was not
explicitly based on meeting schedule or budget deadlines.61
It was also common
in DEC for managers to prefer to overspend their budgets (and, therefore, cost
estimates) rather than deliver a late product.62
By the mid-1980s, however, according to Duncan and Harris, DEC
customers, and thus company engineers and managers, "started to pay much
more attention to software costs, as the costs of software development and
maintenance began to exceed the cost of hardware."
They claim that DEC was
thus forced to shift its management practices as customers wanted more
sophisticated and reliable software systems.
The systems were also becoming
more complex, and this required better cooperation among dispersed groups:
"Communications
between teams became increasingly difficult as the normal
communications paths became clogged." Advances in computer technologies were
also placing
new strains on software resources,
while maintenance of old
software products was consuming "human and hardware resources that should be
used to build new products," and shortages of skilled software engineers were
predicted to continue for the next 20 years.63
To deal with these changes in its competitive environment, DEC managers
resolved to improve quality and dependability of their software products while
reducing costs at the same time. Part of the strategy to accomplish this
involved the following goal: "to solve new problems in creative ways,
and...solve each problem only once." The effort led, first, to systematic studies
of programmer productivity within DEC, beginning in 1985. Duncan and Harris
recall:
"[W]e needed to understand how we were doing at a point in time
compared with how we had done in the past."
Process improvements that DEC
introduced between 1985 and 1988, or emphasized more systematically during
this period, included the following tools and systems:
VAX DEC/CMS and VAX DEC/MMS Module Management System -- This set
of tools automated the maintenance of different versions of source code,
the identification of modules that belong to particular base levels or
versions, and the building processes.
Previously, "the procedures for
building and controlling source code were usually listed on a blackboard, in
a notebook, or in someone's head."
VAX DEC/Test Manager --
This was a software program that simplified
regression analysis, for use in testing, bug analysis, or code optimizations,
on the assumption that a simplified process would encourage programmers
to run these tests more often.
VAX Notes System -- This was a "distributed conference tool," which
assisted "in automating and tracking project design discussions and
decisions."
VAX Common Run-Time Library --
This was a library of approximately
1000 commonly used software routines.
While this system first became
available in 1977, DEC was now encouraging "software engineers ... [to]
search for code, designs, additional tools, and documentation that can be
reused.
Both managers and engineers consider reused code as an
investment in design, programming, and testing that has already been paid
for.
Moreover, the support for that code has been planned and is in
place."
Results from the new emphasis on process appeared to be excellent,
although the projects Duncan and Harris selected for analysis were tools written
in DEC's Commercial Languages and Tools Group.
These have particularly high standards for quality, and may not be
representative of development
performance in other DEC groups.
Nonetheless, the data suggest what levels of
improvement are possible using more systematic approaches.
In terms of lines of code productivity, for a total of 14 products studied,
mean output per man-month rose from 792 lines during 1980-1984 to 2,169 lines
after January 1985.
This represented nearly a three-fold improvement, due
primarily to reuse rates ranging from 22 to 56 percent of the delivered code
across the different projects.
DEC also claimed that defect rates per 1000 lines
of code dropped from between 0.07 and 1.51 prior to 1985 to between 0 and
0.066 afterwards.
Data further indicated that DEC was delivering products at
lower costs than it did prior to 1985.
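The arithmetic behind these comparisons is simple to reproduce; the sketch below recomputes the same kinds of ratios using invented project figures (only the 792 and 2,169 lines-per-man-month means come from the text):

    # Recomputing the kinds of ratios reported above. The project figures below
    # are invented placeholders; only the 792 and 2,169 lines-per-man-month
    # means come from the text.

    def lines_per_man_month(delivered_loc, man_months):
        return delivered_loc / man_months

    def defects_per_kloc(defects, delivered_loc):
        return defects / (delivered_loc / 1000.0)

    if __name__ == "__main__":
        # Hypothetical project: 65,000 delivered lines over 30 man-months,
        # with 4 defects reported after release.
        print(round(lines_per_man_month(65000, 30)))    # about 2,167 lines
        print(round(defects_per_kloc(4, 65000), 3))     # about 0.062 per KLOC
        # Improvement factor implied by the means cited in the text:
        print(round(2169.0 / 792.0, 2))                 # about 2.74, nearly three-fold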
SUMMARY
This chapter discussed problems that have consistently reappeared in
software development efforts, along with tools and techniques proposed at
various points in time. The general discussion, and company examples, also
suggest there are numerous other obstacles -- market-related, as well as
technological, organizational, and cultural -- that have prevented firms from
consistently applying "best-practice" solutions to "worst problems."
Software development consists of a series of phases, from some sort of requirements
analysis to more detailed specification and design, implementation (coding), and
then testing and maintenance.
The phases seem to be straightforward and
linear; however, each has managerial and technical problems associated with it,
and the iterative nature of actual product development can exacerbate the
impact of problems in one phase on others.
Other larger problems affected the
ability of firms to manage software technology in an optimal manner, such as
greater demand than supply of skilled programmers, and the increasing length
and complexity of programs.
Yet, despite the technical and competitive hurdles,
firms and other
institutions have devoted considerable resources since the late 1960s to
developing tools and techniques.
To some extent, the focus of these efforts
seems to have shifted from individual tools and techniques meant for the
individual programmer, to more refined tools and methods that served as formal
company practices,
and
recently to integrating tools and methods
in
comprehensive software production environments. An early example of such an
integrated environment is the software factory approach followed at SDC in the
mid-1970s, and then pursued more successfully in Japan. Other recent tools and
concepts of particular importance, both in factory operations and non-factory
contexts,
included less ambiguous requirements
specification languages,
reusability technology, and automatic program generation.
IBM and GTE provide examples of the difficulties companies with a broad
range of internal and external software customers face in trying to impose one
approach on all development areas.
DEC presents yet another approach, which
falls in between a job-shop or craft organization, and a factory environment.
DEC offers programmers a "market basket" of tools and methods to use if they
choose, and a communications network to communicate,
if they choose; and
management puts all products through a rigorous design review process,
connecting requirements to final outputs to make sure programs come out as
they are supposed to, regardless of the process.
Yet, while this mode of operating has maximized product
innovation objectives, evidence from company sources indicates that in the past
it made it almost impossible to capture synergies or
economies of scope and scale across different project groups.
As a result, DEC
in 1985 began greater efforts at process and tool standardization, as well as
systematic data collection and analysis.
In reviewing the history of the software engineering field since the 1960s,
one striking observation is how persistent and similar the problems surfacing
have remained.
This suggests that there is no one set of comprehensive
solutions for these problems,
perhaps because it is impossible to specify,
predict, and automate all aspects of development for all types of software
applications and hardware, even within a single firm.
On the other hand,
problems at the company level seemed to center more on management and
organization than technology:
how to introduce the good tools and practices
that exist, develop better tools and practices on a continual basis, tailored to
specific customer needs, and then get an entire organization to adopt them
systematically.
While the tools and techniques of software engineering might be used in
job shops, to have sufficient continuity, resources, and authority to be effective,
tool and method development, as well as training programs, probably had to be
organized and implemented at the company, division, or at least facility level,
rather than simply on an ad hoc, project basis.
For this reason, approaches
such as "software factories" -- organizations embracing a combination of tool
and methodology development, standardization of procedures, tools, and inputs,
and coordination of process technology with formal structures and managerial
controls
--
appear to offer
a more
comprehensive
mechanism for solving
problems and seeking scope or scale economies in software development, at least
for products amenable to some process or tool standardization, or reuse of
designs or actual code.
REFERENCES
1. For a general description of software products and producers, see U.S.
Department of Commerce, A Competitive Assessment of the U.S. Software
Industry, Washington, D.C., International Trade Administration, 1984, especially
pp. 1-9.
2. A Competitive Assessment of the U.S. Software Industry, p. 10.
3. Bruce W. Arden, ed., What Can Be Automated?, Cambridge, MA, MIT Press,
1980, p. 564.
4. Similar versions of this life cycle, which has also been adopted by the U.S.
Department of Defense, can be found in Peter Naur and Brian Randell, eds.,
Software Engineering: Report on a Conference Sponsored by the NATO Science
Committee, Brussels, Scientific Affairs Division, NATO, January 1969, pp. 20-21;
Barry Boehm, "Software Engineering," IEEE Transactions on Computers, C-25
(1976); and numerous other sources, with minor variations. The following
discussion relies heavily on C. V. Ramamoorthy et al., "Software Engineering:
Problems and Perspectives," Computer (October 1984), pp. 192-193.
5. Derived from M. Zelkowitz et al., Principles of Software Engineering and
Design, Englewood Cliffs, N.J., Prentice-Hall, 1979, p. 9.
6. Naur and Randell.
7. Frederick P. Brooks, Jr., The Mythical Man-Month: Essays on Software
Engineering, Reading, MA, Addison-Wesley Publishing Company, 1975.
8. Harvey Bratman and Terry Court (System Development Corporation), "The
Software Factory," Computer, May 1975, pp. 28-29.
9. R.H. Thayer, "Modeling a Software Engineering Project Management System, "
unpublished Ph.D. dissertation, University of California at Santa Barbara, 1979,
cited in Tarek Abdel-Hamid, "The Dynamics of Software Development Project
Management: An Integrative Systems Dynamic Perspective," unpublished Ph.D.
dissertation, M.I.T. Sloan School of Management, 1984, pp. 48-57.
10. Arden, pp. 798-799.
11. Arden, pp. 799-804.
12. Some of the communication problems involved in projects with large
numbers of members are discussed in Thomas J. Allen, Managing the Flow of
Technology: Technology Transfer and the Dissemination of Technological
Information within the R&D Organization, Cambridge, MA, MIT Press, 1977. A
general discussion of difficulties in product development can be found in Glen
L. Urban and John R. Hauser, Design and Marketing of New Products,
Englewood Cliffs, NJ, Prentice-Hall, 1980, pp. 31-60. Other literature on this
topic includes Deborah Gladstein Ancona and David F. Caldwell, "Management
Issues Facing New-Product Teams in High-Technology Companies," Advances in
Industrial and Labor Relations, Vol. 4, 1987.
13. See R. Goldberg, "Software Engineering: An Emerging Discipline," IBM
Systems Journal, Vol. 25, Nos. 3/4, 1986, pp. 334-353. A discussion of this shift
in focus to environments can also be found in Horst Hunke, ed., Software
Engineering Environments (Amsterdam:
North-Holland, 1981).
14. Arden, pp. 793-794.
15. An excellent summary of this field and literature bibliography can be found
in Goldberg.
16. Raymond C. Houghton, Jr. (National Bureau of Standards), "Software
Development Tools: A Profile," Computer, May 1983, p. 64.
17. Edward Nash Yourdon, ed., Classics in Software Engineering, New York,
Yourdon Press, 1979, pp. 257, 262.
18. Yourdon, pp. 1-2, 35, 41-42.
19. F.T. Baker, "Chief Programmer Team Management of Production
Programming, " IBM Systems Journal, Vol. 11, No. 1 (1972), in Yourdon, p. 73.
20. Yourdon, pp. 63, 127-128, 137, 205.
21. B.W. Boehm, "Software Engineering," IEEE Transactions on Computers, Vol.
C-25, No. 12 (December 1976). This article is reproduced in Yourdon; this
quotation is from pp. 327-329.
22. Boehm (1976).
23. Boehm, "Software Engineering," in Yourdon, pp. 329-350.
24. Barry W. Boehm and Thomas A. Standish, "Software Technology in the
1990's: Using an Evolutionary Paradigm," IEEE Computer, November 1983, pp.
30-37.
25. The following elaboration on the table is based on Ramamoorthy et al.
26. Ramamoorthy et al., p. 205.
27. For a discussion of software errors and program reliability, see Martin
Shooman, Software Engineering:
Design, Reliability, and Management, New
York, McGraw-Hill, 1983, pp. 296-407.
28. For an excellent discussion of productivity measurement see Capers Jones,
Programming Productivity, New York, McGraw-Hill, 1986. Also, as a general
primer on software project management, see Barry W. Boehm, Software
Engineering Economics, Englewood Cliffs, NJ, Prentice-Hall, 1981.
29. See Barbara Liskov and John Guttag, Abstraction and Specification in
Program Development, Cambridge, MA., MIT Press, 1986, especially pp. 3-10,
316-318; and John Savage, Susan Magidson, and Alex Stein, The Mystical
Machine: Issues and Ideas in Computing, Reading, MA., Addison-Wesley, 1986,
pp. 225-228.
30. Savage et al., pp. 238-241.
31. Lawrence M. Fisher, "A New Approach to Programming," The New York
Times, 7 September 1988, p. D8.
32. Ellis Horowitz and John B. Munson, "An Expansive View of Reusable
Software," IEEE Transactions on Software Engineering, Vol. SE-10, No. 5
(September 1984), pp. 477-478.
33. T. Capers Jones, "Reusability in Programming: A Survey of the State of the
Art," IEEE Transactions on Software Engineering, Vol. SE-10, No. 5, September
1984, p. 488.
34. This discussion is based on Jones (1984), pp. 488-494.
35. For additional discussion of reusability along the same lines as presented
here, see Robert G. Lanergan and Charles A. Grasso, "Software Engineering with
Reusable Designs and Code," IEEE Transactions on Software Engineering, Vol.
SE-10, No. 5 (September 1984), pp. 498-501.
36. BUSINESS WEEK, "The Software Trap: Automate -- or Else," 9 May 1988,
pp. 142-154.
37. David H. Freedman, "In Search of Productivity", Infosystems, Nov. 1986, p.
12.
38. Leon G. Stucki, et al., "Concepts and Prototypes of ARGUS", in Hunke, ed.,
pp. 61-79.
39. R.R. Willis, "AIDES: Computer Aided Design of Software Systems-II," in
Hunke (ed.), pp. 27-48.
40. Naur and Randell, p. 123.
41. Interview with Tom Newman, Project Manager, Bell Communications
Research, 6/2/88.
42. Boehm, B. W., "Seven Basic Principles of Software Engineering," in Software
Engineering Techniques -- Invited Papers (Nicholson House: Infotech
International Ltd., 1977), p. 79.
43. Boehm et al., "A Software Development Environment for Improving
Productivity," IEEE Computer, June 1984, pp. 30-42.
44. See Manfred Lenz, Hans Albrecht Schmid, and Peter F. Wolf (IBM
Laboratories, West Germany), "Software Reuse through Building Blocks," IEEE
Software, July 1987, pp. 34-42.
45. Curt Monash, "Software Strategies," Computer, February 1984, pp. 171-172.
46. Sources on IBM's history include Richard Thomas DeLamarter, Big Blue:
IBM's Use and Abuse of Power, New York, Dodd, Mead & Company, 1986;
Franklin M. Fisher, James W. McKie, and Richard B. Mancke, IBM and the U.S.
Data Processing Industry, New York, Praeger, 1983.
47. Interviews with Frederick George, Manager Network Systems Design, IBM
Corporation, Raleigh, N.C., 1/6/87; and Mark Harris, Manager, Intermediate
Processor Development, IBM Corporation, Endicott, N.Y., 12 and 16 December 1986.
48. Letter from Glenn Bacon, IBM/Rolm Corporation, 25 April 1988.
49. William G. Griffin, "Software Engineering in GTE," IEEE Computer,
November 1984, pp. 66-72.
50. The following description of software operations at DEC is based primarily
on an unpublished paper written by a former DEC employee for my course at the
MIT Sloan School of Management, "Japanese Technology Management" (15.940):
Cynthia Schuyler, "The Software Development Process: A Comparison --
Toshiba vs. Digital Equipment," 11 December 1987.
Copies available upon
request; and interviews with Wendy McKay, former Manager, Educational
Software, Digital Equipment Corporation, December 1986. Statements about the
general factory approach are based on Schuyler and material in the case studies
of Hitachi, Toshiba, NEC, and Fujitsu, drawn from sources cited in the
references to these chapters.
51. Schuyler, pp. 16-17; McKay interview.
52. Anne Smith Duncan and Thomas J. Harris, "Software Productivity
Measurements," Commercial Languages and Tools Group, Digital Equipment
Corporation, 1988, p. 1.
53. Schuyler, pp. 17-18.
54. Schuyler, p. 22.
55. Schuyler, p. 23.
56. Duncan and Harris, p. 2.
57. Schuyler, pp. 19-20; McKay interview.
58. The source is Schuyler, Exhibits 5 and 6.
59. Schuyler, pp. 24-25.
60. Schuyler, pp. 26-27.
61. Schuyler, pp. 27-28.
62. Interviews with J. Grady, Engineering Manager, Digital Equipment
Corporation, April 1986.
63. The remainder of this section is based on Duncan and Harris, pp. 1-9.
Performance data cited is from pp. 6-8.