System Development Process: The Incremental Commitment Model

PRINCIPLES TO ACHIEVE SUCCESSFUL SYSTEM DEVELOPMENT
The ultimate goal of system development is to deliver a system that satisfies the needs of its operational stakeholders—users, operators, administrators, maintainers, interoperators, the general public—within satisfactory levels of the resources of its development stakeholders—funders, acquirers, developers, suppliers, others. From the human-system integration perspective, satisfying operational stakeholders’ needs can be broadly construed to mean a system that is usable and dependable; permits few or no human errors; and leads to high productivity and adaptability. Developing and delivering systems that simultaneously satisfy all of these success-critical stakeholders usually requires managing a complex set of risks, such as usage uncertainties, schedule uncertainties, supply issues, requirements changes, and uncertainties associated with technology maturity and technical design. Each of these areas poses a risk to the delivery of an acceptable operational system within the available budget and schedule. End-state operational system risks can be categorized as uncertainties in achieving a system mission, carrying out the work processes, operating within various constraints such as cost or personnel, satisfying operational stakeholders, or achieving an acceptable operational return on investment.
This chapter summarizes the study’s analysis of candidate system design, development, and evolution processes with respect to a set of study-derived critical success factor principles for support of human-intensive system development. It presents the results of synthesizing the contributions of these models along with key human factors processes into an Incremental Commitment Model that is used as a process framework for application of the study’s recommended processes, methods, and tools, and for illustrating their successful application in several human-system design case studies (see Chapter 5). The five critical success factor principles for human-intensive system development and evolution were evolved during the study and validated by analysis of the critical success factors of award-winning projects (see Tables 2-2 and 2-3) and application to the case studies in Chapter 5:
1. Stakeholder satisficing. If a system development process presents a success-critical operational or development stakeholder with the prospect of an unsatisfactory outcome, the stakeholder will generally refuse to cooperate, resulting in an unsuccessful system. Stakeholder satisficing involves identifying the success-critical stakeholders and their value propositions; negotiating a mutually satisfactory set of system requirements, solutions, and plans; and managing proposed changes to preserve a mutually satisfactory outcome.
2. Incremental growth of system definition and stakeholder commitment. This characteristic encompasses the necessity of incremental discovery of emergent human-system requirements and solutions via such discovery methods as prototyping, operational exercises, and use of early system capabilities. Requirements and commitment cannot be monolithic or fully pre-specifiable for complex, human-intensive systems; understanding, trust, definition, and commitment are achieved through a cyclic process.
3. Iterative system development and definition. The incremental and evolutionary approaches lead to cyclic refinements of requirements, solutions, and development plans. Such iteration helps projects to learn early and efficiently about operational and performance requirements.
4. Concurrent system definition and development. Initially, this includes concurrent engineering of requirements and solutions, and integrated product and process definition. In later increments, change-driven rework and rebaselining of next-increment requirements, solutions, and plans occurs simultaneously with development of the current system increment. This allows early fielding of core capabilities, continual adaptation to change, and timely growth of complex systems without waiting for every requirement and subsystem to be defined.
5. Risk management – risk-driven activity levels and anchor point milestones. The level of detail of specific products and processes will depend on the level of risk associated with them. If the user interface is considered a high-risk area, for example, then more design activity will be devoted to this component to achieve stakeholder commitments at particular design anchor points. On the other hand, if interactive graphic user interface (GUI) builder capabilities make it low-risk not to document evolving GUI requirements, much time-consuming effort can be saved by not creating and continually updating GUI requirements documents while evolving the GUI to meet user needs.
THE EVOLVING NATURE OF SYSTEM REQUIREMENTS

Traditionally, requirements have served as the basis for competitive selection of system suppliers and subsequent contracts between the acquirer and selected supplier. As such, they are expected to be prespecifiably complete, consistent, unambiguous, and testable. Frequently, progress payments and award fees are based on the degree to which these properties are satisfied.
However, particularly as systems depend more and more on being parts of network-centric, collaboration-intensive systems of systems, the traditional approach to system requirements has encountered increasing difficulties that the key ICM principles above have been evolved to avoid. These difficulties include:
• Emergent requirements. The most appropriate user interfaces and collaboration modes for a human-intensive system are not specifiable in advance, but emerge with system prototyping and usage. Forcing them to be prematurely and precisely specified generally leads to poor business or mission performance and expensive late rework and delays (Highsmith, 2000).
• Rapid change. Specifying current-point-in-time snapshot requirements on a cost-competitive contract generally leads to a big design up front, and a point-solution architecture that is hard to adapt to new developments. Each of the many subsequent changes then leads to considerable nonproductive work in redeveloping documents and software, and in renegotiating contracts (Beck, 1999).
• Reusable components. Prematurely specifying requirements that disqualify otherwise most cost-effective reusable components (e.g., hasty specification of a 1-second response time requirement when later prototyping showed that 4 seconds would be acceptable) often leads to overly expensive, late, and unsatisfactory systems (Boehm, 2000).
The key principles above focus more on incremental and evolutionary acquisition of the most important and best-understood capabilities first; concurrently engineering requirements and solutions; using prototypes, models, and simulations as ways of obtaining information to reduce the risk of specifying inappropriate requirements; and basing requirements on stakeholder negotiations once their implications are better understood.
The use of these principles works best when the stakeholders adopt a different vocabulary when dealing with requirements. The primary (Webster, 2006) definition of a requirement is, “something required, i.e., claimed or asked for by right and authority.” It is much easier to make progress toward a mutually satisfactory negotiated solution if the stakeholders use more negotiation-oriented terms such as “goals”, “objectives”, or “value propositions” rather than assuming that they are dealing with non-negotiable “requirements”. And when tradeoffs among cost, schedule, performance, and capabilities are not well understood, it is better to specify prioritized capabilities and ranges of mutually satisfactory performance, rather than to insist on precise and unambiguous requirements. However, following Principle 5 above on risk-driven level of product detail, it is important to converge on precise requirements where the risk of having them be imprecise is high. Some good examples are human-computer interaction protocols for safety-critical systems and interfaces among separately developed mission-critical subsystems.
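The shift from non-negotiable "requirements" to prioritized capabilities with ranges of mutually satisfactory performance can be made concrete. The following sketch is illustrative only; the class and field names are assumptions, not artifacts from the study. The numbers echo the response-time example above, where a hasty 1-second specification would have ruled out components that later prototyping showed to be acceptable at 4 seconds:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """A negotiable capability: a prioritized goal with a range of
    mutually satisfactory performance, rather than a fixed requirement.
    (Hypothetical structure, for illustration only.)"""
    name: str
    priority: int            # 1 = most important to stakeholders
    desired: float           # target performance level
    acceptable: float        # worst level stakeholders will still satisfice on
    negotiable: bool = True  # False only where imprecision is high-risk

    def satisfices(self, measured: float) -> bool:
        # Lower is better here (e.g., response time in seconds).
        return measured <= self.acceptable

# A reusable component measured at 2.5 s would be disqualified by a hasty
# 1-second spec, but still satisfices the negotiated acceptable range.
response_time = Capability("query response time (s)", priority=1,
                           desired=1.0, acceptable=4.0)
print(response_time.satisfices(2.5))  # → True
```

Keeping the desired and acceptable levels separate is what leaves room for the stakeholder negotiation the text describes; collapsing them into one number recreates the premature-specification problem.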
PRINCIPLES-BASED COMPARISON OF ALTERNATIVE PROCESS MODELS

The study included an analysis of candidate systems development process models with respect to the five critical success factor principles. The candidate models included the waterfall, V, spiral, and concurrent engineering process models discussed in the first two chapters of the Handbook of Systems Engineering and Management (Sage and Rouse, 1999; Patterson, 1999), plus emerging candidates such as agile methods (Beck, 1999; Highsmith, 2000), V-model updates (V-Modell XT, 2005), and 2001 extensions of the spiral model (Boehm and Hansen, 2001).
The analysis, summarized in Figure 2-8, indicated that all of the models made useful contributions, but exhibited shortfalls with respect to human factors considerations, particularly in explicit guidance for stakeholder satisficing. Pure-sequential implementations of the waterfall and V-models are not good matches for human-intensive systems. They are becoming less frequent, but are still often encountered due to imposition of legacy contracting clauses and standards. More recently, the V-Modell XT has adopted more risk-driven and incremental approaches that encourage more concurrent engineering (V-Modell XT, 2005), but it takes some skill to build in stakeholder satisficing and to avoid overly heavyweight implementations and difficulties in coping with rapid change. Risk-driven evolutionary development is better at coping with rapid change, but can have difficulties in optimizing around early increments with architectures that encounter later scalability problems. Concurrent engineering explicitly addresses incremental growth, concurrency, and iteration. It is compatible with stakeholder satisficing and risk management, but lacks much explicit guidance in addressing them.
Agile methods are even better at coping with rapid change, but can have even more difficulties with scalability and with mission-critical or safety-critical systems, in which fixing shortfalls in the next increment is not acceptable. There is a wide variety of agile methods; some, such as Lean and Feature-Driven Development, are better at scalability and criticality than others. The version of spiral development in Boehm and Hansen (2001), with stakeholder satisficing and anchor point milestones, covers all of the principles, but is unspecific about just how risk considerations guide iteration and incremental growth.
The analysis indicated primary shortfalls in support of human factors integration and unproven ability to scale up to the future process challenges involving emergent, network-centric, massively collaborative systems of systems (Maier, 1998; Sage and Cuppan, 2001). The study undertook to integrate human factors considerations into the Spiral 2005 process model (Boehm and Lane, 2006), a generalization of the WinWin Spiral Model being used in the Future Combat Systems system of systems (Boehm et al., 2004). The result is the Incremental Commitment Model, to be discussed next. It is not the only model that could be used on future human-intensive systems of systems, but it has served as a reasonably robust framework for explaining the study’s human-system integration concepts, and for evaluating these via the case studies in Chapter 5.
THE INCREMENTAL COMMITMENT MODEL: OVERVIEW

The Incremental Commitment Model (ICM) builds on early verification and validation concepts in the V-model, concurrency concepts in the Concurrent Engineering model, lighter-weight concepts in the Agile and Lean models, risk-driven concepts in the spiral model, the phases and anchor points in the Rational Unified Process (RUP) (Royce, 1998; Kruchten, 1999; Boehm, 1996), and recent extensions of the spiral model to address systems-of-systems acquisition (Boehm and Lane, 2006). In comparison to the software-intensive RUP, the ICM also addresses hardware and human factors integration. It extends the RUP phases to cover the full system life cycle: an Exploration phase precedes the RUP Inception phase, which is refocused on Valuation and investment analysis. The RUP Elaboration phase is refocused on Architecting; the RUP Construction and Transition phases are combined into Development; and an additional Operations phase combines operations, production, maintenance, and phase-out. An integration of the RUP and the ICM is being prepared for use in the open-source Eclipse Process Framework.
In comparison to the sequential waterfall (Royce, 1970) and V-model (Patterson, 1999), the ICM explicitly emphasizes concurrent engineering of requirements and solutions; establishes explicit Feasibility Rationales as pass/fail milestone criteria; explicitly enables risk-driven avoidance of unnecessary documents, phases, and reviews; and provides explicit support for a stabilized current-increment development concurrently with a separate change processing and rebaselining activity to prepare for appropriate and stabilized development of the next increment. These aspects can be integrated into a waterfall or V-model, enabling projects required to use such models to cope more effectively with systems of the future.
An overview of the ICM life cycle process is shown in Figure 2-1. It identifies the concurrently engineered life cycle phases; the stakeholder commitment review points and their use of feasibility rationales to assess the compatibility, feasibility, and risk associated with the concurrently engineered artifacts; and the major focus of each life cycle phase. There are a number of alternatives at each commitment point: (1) the risks are negligible and no further analysis and evaluation activities are needed to complete the next phase; (2) the risk is acceptable and work can proceed to the next life cycle phase; (3) the risk is addressable but requires backtracking; or (4) the risk is too great and the development process should be rescoped or halted. These risks are assessed by the system’s success-critical stakeholders, whose commitment will be based on whether the current level of system definition gives sufficient evidence that the system will satisfy their value propositions (see Box 2-1).
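The four risk-based alternatives at each commitment point can be sketched as a simple decision rule. The numeric risk scale and thresholds below are purely hypothetical assumptions, since the text leaves the assessment to stakeholder judgment; only the four outcomes come from the model itself:

```python
from enum import Enum

class Outcome(Enum):
    SKIP_TO_NEXT_PHASE = 1  # risk negligible: no further analysis needed
    PROCEED = 2             # risk acceptable: continue to the next phase
    BACKTRACK = 3           # risk addressable, but requires backtracking
    RESCOPE_OR_HALT = 4     # risk too great: rescope or halt the project

def commitment_decision(risk: float) -> Outcome:
    """Map an assessed risk level (0..1, an illustrative scale) to one of
    the four ICM commitment-point outcomes described in the text."""
    if risk < 0.05:
        return Outcome.SKIP_TO_NEXT_PHASE
    if risk < 0.3:
        return Outcome.PROCEED
    if risk < 0.7:
        return Outcome.BACKTRACK
    return Outcome.RESCOPE_OR_HALT

print(commitment_decision(0.2).name)  # → PROCEED
```

In practice the "risk" input is not a single number but the stakeholders' collective judgment of whether the current level of system definition provides sufficient feasibility evidence; the sketch only fixes the shape of the decision, not its inputs.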
The ICM commitment milestones correspond fairly closely with the Department of Defense acquisition milestones as defined in DoDI 5000.2 (U.S. DoD, 2003). For example, the ICM DCR-milestone commitment to proceed into Development based on the validated Life Cycle Architecture package (an Operations Concept Description, Requirements Description, Architecture Description, Life Cycle Plan, working prototypes of high-risk elements, and a Feasibility Rationale providing evidence of their compatibility and feasibility) corresponds fairly closely with DoD’s Milestone B commitment to proceed into the System Development and Demonstration phase.
VIEWS OF THE INCREMENTAL COMMITMENT MODEL

The following section provides multiple views of the Incremental Commitment Model, including a process model generator view, a concurrent levels of activity view, an anchor point milestone view, a spiral process view, and an incremental development view for incorporating rapid change and high assurance using agile and plan-driven teams. It concludes with a comparison of the Incremental Commitment Model with other often-used process models.
Process Model Generator View

As can be seen from the four example paths through the Incremental Commitment Model in Figure 2-2, the ICM is not a single monolithic one-size-fits-all process model. As with the spiral model, it is a risk-driven process model generator, but the ICM makes it easier to visualize how different risks create different processes.
In Example A, a simple business application based on an appropriately selected Enterprise Resource Planning (ERP) package, there is no need for a Valuation or Architecting activity if there is no risk that the ERP package and its architecture will not cost-effectively support the application. Thus, one could go directly into the Development phase, where an agile method such as a Scrum/Extreme Programming combination would be a good fit. There is no need for Big Design Up Front (BDUF) activities or artifacts because an appropriate architecture is already present in the ERP package. Nor is there a need for heavyweight waterfall or V-model specifications and document reviews. The fact that the risk at the end of the Exploration phase is negligible implies that sufficient risk resolution of the ERP package’s human interface has been done.
Example B involves the upgrade of several incompatible legacy applications into a service-oriented web-based system. Here, one could use a sequential waterfall or V-model if the upgrade requirements were stable and its risks were low. However, if for example the legacy applications’ user interfaces were incompatible with each other and with web-based operations, a concurrent risk-driven spiral, waterfall, or V-model that develops and exercises extensive user interface prototypes and generates a Feasibility Rationale (described below) would be preferable.
In Example C, the stakeholders may have found during the Valuation phase that their original assumptions about having a clear, shared vision and compatible goals with respect to the proposed new system’s concept of operation and its operational roles and responsibilities were optimistic. In such a case, it is better to go back and assure stakeholder value proposition compatibility and feasibility before proceeding, as indicated by the arrow back into the Valuation phase.
In Example D, it is discovered before entering the Development phase that a superior product has already entered the marketplace, leaving the current product with an infeasible business case. Here, unless a viable business case can be made by adjusting the project’s scope, it is best to discontinue it. It is worth pointing out that it is not necessary to proceed to the next major milestone before terminating a clearly non-viable project, although stakeholder concurrence in termination is essential.
Concurrent Levels of Activity View

The Concurrent Levels of Activity view shown in Figure 2-3 is an extension of a similar view of concurrently engineered software projects developed as part of the Rational Unified Process (Kruchten, 1999). As with the RUP version, it should be emphasized that the magnitude and shape of the levels of effort will be risk-driven and likely to vary from project to project. In particular, they are likely to have mini risk/opportunity-driven peaks and valleys, rather than the smooth curves shown for simplicity in Figure 2-3. The main intent of this view is to emphasize the necessary concurrency of the primary success-critical activity classes shown as rows in Figure 2-3. Thus, in interpreting the Exploration column, although system scoping is the primary objective of the Exploration phase, doing it well involves a considerable amount of activity in understanding needs, envisioning opportunities, identifying and reconciling stakeholder goals and objectives, architecting solutions, life cycle planning, evaluation of alternatives, and negotiation of stakeholder commitments. Many HSI best-practice tables confine each recommended practice within a single phase-activity cell. Experts treat these confinements as suggestions that need not be followed, but non-expert decision makers often follow such confinements literally, seriously reducing their effectiveness.
Table 2-1 shows the primary methods and work products involved in each activity class. The second column of Table 2-1 shows the primary HSI methods that are discussed in Part II of the report. The third column shows the primary corresponding systems engineering methods. See Table 3-A-1 in the Chapter 3 appendix for a more detailed presentation of the activities, methods, and best practices contained in ISO 18152.
The Development Commitment Anchor Point Milestone Review

Figure 2-3 suggests that there is a great deal of concurrent activity planned to occur within and across the various ICM phases. This gives rise to two main questions. First, more specifically than in Figures 2-2 and 2-3, what are the main concurrent activities that are going on in each phase? Second, how are the many concurrent activities synchronized, stabilized, and risk-assessed at the end of each phase?

Figure 2-4, an elaboration of Figure 2-2, provides the next-level answer to the first question.
The elaboration of the concurrent engineering and feasibility evaluation activities makes it clearer just what is being concurrently engineered and evaluated in each phase. For example, at the Development Commitment Review (DCR), the stakeholders and specialty experts review the Life Cycle Architecture (LCA) package for the overall system and for each increment to assure themselves that it is worthwhile to commit their human, financial, and other resources to the next phase of system Development.

During the Architecting phase, the project prepares for the DCR by concurrently engineering the system’s operational aspects into a detailed operational concept and set of system requirements; the various commercial off-the-shelf (COTS), custom, and outsourced capabilities into a compatible build-to architecture; and the business case and resource constraints into a set of compatible plans, budgets, and schedules for each phase and for the overall system.
The next-level answer to the second question, on synchronization, stabilization, and risk assessment, is provided by the contents of the ICM ACR and DCR anchor point milestone feasibility rationales referred to in Figure 2-4 and shown in Figure 2-5. The contents indicate that the project is responsible not just for producing a set of artifacts, but also for producing the evidence of their compatibility and feasibility. This evidence—from models, simulations, prototypes, benchmarks, analyses, etc.—is provided to experts and stakeholders in advance of the milestone review. Shortfalls in this evidence for compatibility and feasibility of the concurrently engineered artifacts should be identified by the system developer as potential project risks and addressed by risk management plans. Any further shortfalls in the evidence or the risk management plans found by the reviewers should be communicated to the developers in time for them to prepare responses to be presented at the DCR review meeting.

At the DCR milestone review meeting for the LCA package, the project then either provides adequate additional evidence of feasibility or additional risk management plans to address the risks. The stakeholders then decide whether the risks are negligible, acceptable, high but addressable, or too high and unaddressable, and the project proceeds in the direction of the appropriate DCR risk arrow in Figures 2-2 and 2-4.
The Other ICM Milestone Reviews

The Architecture Commitment Review (ACR) criteria and procedures are similar to, but less elaborate than, those of the DCR, as the degree of stakeholder resource commitment to support the Architecting phase is considerably lower than that for supporting the Development phase. The ACR and DCR review procedures are adapted from the highly successful AT&T Architecture Review Board procedures described in Marenzano et al. (2005). For the ACR, only high-risk aspects of the Operational Concept, Requirements, Architecture, and Plans are elaborated in detail. And it is sufficient to provide evidence that at least one combination of those artifacts satisfies the Feasibility Rationale criteria, as compared to demonstrating this at the DCR for a particular choice of artifacts to be used for development.
The review criteria and procedures for the Exploration Commitment Review (ECR) and the Valuation Commitment Review (VCR) are even less elaborate than those for the ACR milestone, as the commitment levels for proceeding are considerably lower. But they will similarly have a risk-driven level of detail and risk-driven stakeholder choice of review outcome. For the ECR, the focus is on a review of an Exploration phase plan, with the proposed scope, schedule, deliverables, and required resource commitment, by a key subset of stakeholders. The plan content is risk-driven, and could be put on a single page for a small and non-controversial Exploration phase. For the VCR, the risk-driven focus is similar; the content includes the Exploration phase results and a Valuation phase plan, and the review is by all of the stakeholders involved in the Valuation phase.
The Operations Commitment Review (OCR) is different, in that it addresses the often much higher operational risks of fielding an inadequate system. In general, stakeholders will experience a factor of 2-to-10 increase in commitment level in going through the sequence of ECR to DCR milestones, but the increase in going from DCR to OCR can be much higher. The OCR focuses on evidence of the adequacy of plans and preparations with respect to doctrine, organization, training, materiel, leadership, personnel, and facilities (DOTMLPF), along with plans, budgets, and schedules for production, fielding, and operations.
The series of ICM milestones has the advantage of reflecting other human life cycle incremental commitment sequences, such as those of getting married and raising a family. The ECR is similar to a nonexclusive commitment to go out on dates with a girlfriend or boyfriend. The VCR is similar to a more exclusive but informal commitment to “go steady,” and the ACR is similar to a more formal commitment to get engaged. The DCR is similar to an “until death do us part” commitment to get married: if one marries one’s Life Cycle Architecture package in haste, one will repent at leisure. The OCR is similar to having one’s first child: once it arrives, one’s lifestyle is changed by the need to maintain the health of this legacy entity.
Another relevant metaphor for the ICM is that of poker games such as Texas Hold’em. At each round of betting, each stakeholder looks at their own hole cards and the jointly visible community cards, and decides whether it is worth adding further resources to the pot of resources on the table, in order to see further community cards and to win the pot based on having the best poker hand constructible from one’s own hole cards and the community cards. With the ICM, however, there will be negotiations designed to create win conditions for each success-critical stakeholder.
The Spiral View

A simplified spiral model view of the ICM is provided in Figure 2-6 below. It avoids sources of misinterpretation in previous versions of the spiral model, and concentrates on the five key spiral development principles. Stakeholder satisficing is necessary to pass the stakeholder commitment review points or anchor point milestones. Incremental growth in system understanding, cost, time, product, and process detail is shown by the spiral growth along the radial dimension. Concurrent engineering is shown by progress along the angular dimension. Iteration is shown by taking several spiral cycles both to define and to develop the system. Risk management is captured by indicating that the activities’ and products’ levels of detail in the angular dimension are risk-driven, and by the risk-driven arrows pointing out from each of the anchor point commitment milestones.

These arrows show that the spiral model is not a sequential, unrollable process, but that it incorporates many paths through the diagram, including skipping a phase or backtracking to an early phase based on assessed risk. The fourth arrow, pointing toward rescoping or halting in Figures 2-2 through 2-4, is omitted from Figure 2-6 for simplicity; it would be pointing down underneath the plane of Figure 2-6. Other aspects of the spiral model, such as the specific artifacts being concurrently engineered and the use of the Feasibility Rationale, are consistent with their use in Figure 2-4 and the other figures, where they are easier to understand and harder to misinterpret than in a spiral diagram. Also for simplicity, the concurrent operation of increment N, development of increment N+1, and architecting of increment N+2 are not shown explicitly, although they are going on. This concurrency is explained in more detail in the next section.
Incremental Development for Accommodating Rapid Change and High Assurance

Many future systems and systems of systems will need to simultaneously achieve high assurance and adaptation to both foreseeable and unforeseeable rapid change, while meeting shorter market windows or new defense threats. Figure 2-7 shows an increment view of the Incremental Commitment Model for addressing such situations. It assumes that the organization has developed artifacts that have passed a Development Commitment Review, including:

• A best-effort definition of the system’s envisioned overall capability;

• An incremental sequence of prioritized capabilities culminating in the overall system capability;

• A Feasibility Rationale providing sufficient evidence for each increment and the overall system that the system architecture will support the increment’s specified capabilities; that each increment can be developed within its available budget and schedule; and that the series of increments will create a satisfactory return on investment for the organization and mutually satisfactory outcomes for the success-critical stakeholders.
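Read as a pass/fail criterion, the Feasibility Rationale above amounts to a conjunction of evidence checks over the sequence of increments and the overall system. A minimal sketch, with hypothetical class and field names introduced purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Increment:
    """One increment in the prioritized sequence of capabilities."""
    capabilities: list           # prioritized capabilities for this increment
    budget_ok: bool              # developable within its budget and schedule
    architecture_supports: bool  # evidence the architecture supports the capabilities

@dataclass
class FeasibilityRationale:
    """Evidence package reviewed at the DCR (structure is illustrative)."""
    increments: list
    roi_satisfactory: bool         # series of increments yields acceptable ROI
    stakeholders_satisficed: bool  # mutually satisfactory outcomes negotiated

    def passes_dcr(self) -> bool:
        # Pass only if every increment, and the overall system,
        # has sufficient evidence of feasibility.
        return (all(i.budget_ok and i.architecture_supports
                    for i in self.increments)
                and self.roi_satisfactory
                and self.stakeholders_satisficed)

core = Increment(["core capability"], budget_ok=True, architecture_supports=True)
growth = Increment(["growth capability"], budget_ok=True, architecture_supports=False)
rationale = FeasibilityRationale([core, growth], roi_satisfactory=True,
                                 stakeholders_satisficed=True)
print(rationale.passes_dcr())  # → False
```

A failing check of this kind corresponds to the evidence shortfalls that, per the earlier discussion of milestone reviews, should be identified as project risks and covered by risk management plans rather than silently waived.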
The solid lines in Figure 2-7 represent artifact flows. For example, the baselined operational concept, requirements, architecture, and development plans for Increment N enter the center box and guide the plan-driven development of Increment N, to be transitioned into operations and maintenance. This development is stabilized by accepting only changes that have been architecturally anticipated (or occasional exceptional show-stoppers). The corresponding baselines for future increments enter the top box, in which an agile team addresses unforeseeable changes and unavoidable content deferrals from Increment N into future increments. The agile team’s output is a rebaselined set of specifications and plans to be used in developing Increment N+1, and counterpart rebaselined specifications and plans for future increments to be updated during Increment N+1.
The dotted lines in Figure 2-7 represent cause-effect relationships. For example, the need to deliver high-assurance incremental capabilities on relatively short, fixed schedules (to avoid delivery of obsolete capabilities in an era of increasingly rapid change) means that each increment needs to be kept as stable as possible. This is particularly the case for very large systems of systems with deep supplier hierarchies (often 6 to 12 levels), in which a high level of in-process change-adaptation traffic can easily lead to the developers spending more time processing changes than doing development. In keeping with the use of the ICM as a risk-driven process model generator, the risks of destabilizing the development process make this portion of the project a build-to-specification subset of the concurrent activities, in which the only changes accommodated are potential show-stoppers or foreseeable changes that have already been accommodated in the increment's architecture. The need for high assurance of each increment also makes it cost-effective to invest in a team of appropriately skilled personnel to continuously verify and validate the increment as it is being developed, as shown in the lower box in Figure 2-7.
To avoid delays and shortfalls in getting Increment N+1 specifications and plans ready for development, the agile team concurrently assesses the unforeseen change traffic and rebaselines the next increment's LCA package and Feasibility Rationale, so that the stabilized build-to-spec team will have all it needs to hit the ground running in rapidly developing the next increment. More detail on this process, and its staffing and contracting implications, is provided in Boehm (2006).
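Read as a schedule, this means the build-to-spec, V&V, and agile teams work concurrently in every time slot, with the agile team always one increment ahead. A hypothetical sketch of that staggering (the names and layout are illustrative, not ICM-defined):

```python
def icm_pipeline(n_increments: int) -> list:
    # Build a slot-by-slot view of the three concurrent teams:
    # while increment k is developed and continuously V&V'd,
    # the agile team rebaselines the package for increment k+1.
    schedule = []
    for k in range(1, n_increments + 1):
        slot = {
            "build_to_spec": f"develop Increment {k} to its baselined specs",
            "continuous_v_and_v": f"verify and validate Increment {k}",
        }
        if k < n_increments:
            slot["agile_rebaselining"] = (
                f"rebaseline LCA package and plans for Increment {k + 1}"
            )
        schedule.append(slot)
    return schedule
```

The staggering is what lets the stabilized team "hit the ground running": by the time Increment N ships, the rebaselined specifications for Increment N+1 already exist.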
PROJECT EXPERIENCE WITH ICM PRINCIPLES
The Incremental Commitment Model uses the critical success factor principles to extend several current spiral-related processes, such as the Rational Unified Process, the Win-Win Spiral process, and the Lean Development process, in ways that more explicitly integrate human-system integration into the system life cycle process. A good source of successful projects that have applied the critical success factor principles is the annual series of Top-5 software-intensive systems projects published in CrossTalk (CrossTalk, 2002-2005).
The "Top-5 Quality Software Projects" are chosen annually by panels of leading experts as role models of best practices and successful outcomes. Table 2-2 summarizes each year's record with respect to usage of four of the five principles: concurrent engineering, risk-driven activities, and evolutionary and iterative system growth (most of the projects were not specific about stakeholder satisficing). Of the 20 Top-5 projects in 2002 through 2005, 14 explicitly used concurrent engineering, 12 explicitly used risk-driven development, and 14 explicitly used evolutionary and iterative system growth, while additional projects gave indications of their partial use. Table 2-3 provides more specifics on the 20 projects.
Evidence of successful results of stakeholder satisficing can be found in the annual series of USC e-services projects using the win-win spiral model, as described in Boehm et al. (1998). Since 1998, over 50 user-intensive e-services applications have used the win-win spiral model to achieve a 92% success rate of on-time delivery of stakeholder-satisfactory systems.
CONCLUSIONS
Future transformational, network-centric systems will have many usage uncertainties and emergent characteristics. Their hardware, software, and human factors will need to be concurrently engineered, risk-managed, and evolutionarily developed to converge on cost-effective system operations. They will need to be both highly dependable and rapidly adaptable to frequent changes.
The Incremental Commitment Model described in this chapter builds on experience-based critical success factor principles (stakeholder satisficing, incremental definition, iterative evolutionary growth, concurrent engineering, risk management) and the strengths of existing V, concurrent engineering, spiral, agile, and lean process models to provide a framework for concurrently engineering human factors into the systems engineering and systems development processes. It provides capabilities for evaluating the feasibility of proposed HSI solutions and for integrating HSI feasibility evaluations into decisions on whether and how to proceed further into systems development and operations. It presents several complementary views showing how the principles are applied to perform risk-driven process tailoring and evolutionary growth of a system's definition and realization; to synchronize and stabilize concurrent engineering; and to enable simultaneous high-assurance development and rapid adaptation to change. It analyzes the use of the critical success factor principles on the best-documented government software-intensive system acquisition success stories, the 2002-2005 CrossTalk Top-5 projects, and shows that well over half of them explicitly applied the principles. The next three chapters elaborate on how HSI practices fit into the ICM process and provide case studies of successful HSI projects that have used the principles and practices.
Unfortunately, the current path of least resistance for a government program manager is to follow a set of legacy regulations, specifications, and standards that select, contract with, and reward developers for doing almost the exact opposite. Most of these legacy instruments emphasize sequential vs. concurrent engineering; risk-insensitive vs. risk-driven processes; early definition of poorly understood requirements vs. better understanding of needs and opportunities; and slow, unscalable contractual mechanisms for adapting to rapid change.
This chapter has provided a mapping of the ICM milestones to the current DoD 5000.2 acquisition milestones that shows that they can be quite compatible. It also shows how projects could be organized into stabilized build-to-specification increments that fit current legacy acquisition instruments, along with concurrent agile change-adaptation and V&V functions that need to use alternative contracting methods. Addressing changes of this nature will be important if organizations are to realize the large potential value offered by investments in HSI processes, methods, and tools.
BOX 2-1
VALUE-BASED SYSTEMS AND SOFTWARE ENGINEERING
In order for a system's stakeholders to commit their personal, material, and financial resources to the next level of system elaboration, they must be convinced that the current level of system elaboration provides evidence that their value propositions will be satisfied by the system. This success condition is consistent with the Theory W (win-win) approach to value-based systems and software engineering, which states that a project will be successful if and only if it makes winners of its success-critical stakeholders. If the project does not create a satisfactory value proposition for some success-critical stakeholders (a win-lose situation), they will refuse to participate or will counterattack, generally creating a lose-lose situation for all the stakeholders.
The associated key value-based principles for creating a success-critical stakeholder win-win outcome are: (1) to identify the success-critical stakeholders and their value propositions; (2) to identify, confront, and resolve conflicts among these value propositions; (3) to enable the stakeholders to negotiate a mutually satisfactory or win-win solution region or opportunity space; and (4) to monitor the evolution of the opportunity space and apply corrective or adaptive actions to keep the opportunity space viable or increase its value (Boehm and Jain, 2005, 2006).
The associated key value-based practices address these principles and also involve using alternative
terminology to traditional project or system acquisition terminology: early-stage “goals, objectives, value
propositions, or win conditions” rather than “requirements;” “solution space” rather than “solution;” “desired and
acceptable levels of service” rather than “the required level of service;” “satisficing” rather than “optimizing;” and
“success-critical stakeholder or partner” rather than “vendor, supplier, or worker.”
Key value-based practices for identifying the success-critical stakeholders and their value propositions
include the kinds of ethnographic techniques discussed elsewhere in this chapter, plus a technique called Results
Chains (Thorp, 1998) for identifying success-critical stakeholders. Other useful techniques include scenarios,
prototypes, brainstorming, QFD, business case analysis, and participatory design, plus asking “why?” for each
“what” or “how” identified by a stakeholder.
Key value-based practices for identifying, confronting, and resolving conflicts among stakeholder value propositions include inventing options for mutual gain (Fisher and Ury, 1981), expectations management, business case analysis, and group-based techniques for prioritizing desired capabilities and for identifying desired and acceptable levels of service.
Key value-based practices for enabling stakeholders to negotiate a mutually satisfactory or win-win solution region or opportunity space include the conflict resolution techniques just described, plus negotiation techniques (Raiffa, 1982); risk-based techniques, such as real options theory (Black and Scholes, 1973; Amram and Kulatilaka, 1999), for determining how much of an activity, artifact, or level of service is enough; and groupware support systems for negotiating stakeholder win-win requirements.
Key value-based practices for monitoring and keeping the opportunity space viable or increasing its value
include market-watch and technology-watch techniques, incremental and evolutionary development, architecting to
accommodate future change, adaptive control techniques, and business-value-oriented earned value management
systems.
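Box 2-1 cites real options theory (Black and Scholes, 1973) as a risk-based technique for deciding how much of an activity or level of service is enough: keeping a commitment option open has quantifiable value. As a hedged illustration of the underlying machinery only (the ICM does not prescribe this formula), the classic Black-Scholes value of a European call option, read here as the value of deferring a commitment rather than exercising it now, can be computed as:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes (1973) value of a European call option.

    S: current value of the underlying opportunity
    K: cost of exercising (committing the resources)
    T: time to decide, in years
    r: risk-free discount rate
    sigma: volatility (uncertainty) of the opportunity's value
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, with a current opportunity value of 100, an exercise cost of 100, one year to decide, a 5% discount rate, and 20% volatility, the option to wait is worth roughly 10.45 units; that is the quantitative sense in which preserving options in an opportunity space has value, and greater uncertainty or more time to decide increases it.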
FIGURE 2-1 Overview of the Incremental Commitment Life Cycle Process.
FIGURE 2-2 Different Risks Create Different ICM Processes.
FIGURE 2-3 ICM Activity Categories and Level of Effort.
FIGURE 2-4 Elaboration of the ICM Life Cycle Process.
• Evidence provided by the developer and validated by independent experts that, if the system is built to the specified architecture, it will:
  – Satisfy the requirements: capability, interfaces, level of service, and evolution
  – Support the operational concept
  – Be buildable within the budgets and schedules in the plan
  – Generate a viable return on investment
  – Generate satisfactory outcomes for all of the success-critical stakeholders
• All major risks resolved or covered by risk management plans
• Serves as basis for stakeholders' commitment to proceed

FIGURE 2-5 ICM ACR and DCR Anchor Point Milestone Feasibility Rationale Content.
[Figure 2-6 depicts a spiral in which cumulative levels of understanding, cost, time, and product and process detail (risk-driven) grow through concurrent engineering of products and processes across the Exploration, Valuation, Architecting, Development, and Operation phases, with numbered stakeholder commitment review points offering opportunities to proceed, skip phases, backtrack, or terminate.]

FIGURE 2-6 Simplified Spiral View of the ICM.
The numbered commitment review points in Figure 2-6 are: (1) Exploration Commitment Review; (2) Valuation Commitment Review; (3) Architecture Commitment Review; (4) Development Commitment Review; (5) Operations1 and Development2 Commitment Review; and (6) Operations2 and Development3 Commitment Review.
[Figure 2-7 depicts three concurrent streams per increment: an agile team rebaselining future-increment baselines (producing the rebaselined Increment N+1 LCA package) to handle rapid, unforeseeable change (adapt) and deferrals; short, stabilized plan-driven development of Increment N to its baselined LCA, with foreseeable change planned into the architecture, leading to Increment N transition, operations, and maintenance; and continuous verification and validation (V&V) of Increment N for high assurance, feeding concerns back to the developers. Solid lines denote artifact flows; dotted lines denote cause-effect relationships.]

FIGURE 2-7 Risk-Driven ICM for Accommodating Rapid Change and High Assurance.
Process Model | Stakeholder Satisficing | Incremental Growth | Concurrency | Iteration | Risk Management
Sequential Waterfall, V | Assumed via initial requirements; no specifics | Sequential | No | No | Once, at the beginning
Iterative, Risk-Driven Waterfall, V | Assumed via initial requirements; no specifics | Risk-driven; missing specifics | Risky parts | Yes | Yes
Risk-Driven Evolutionary Development | Revisited for each iteration | Risk-driven; missing specifics | Risky parts | Yes | Yes
Concurrent Engineering | Implicit; no specifics | Yes; missing specifics | Yes | Yes | Implicit; no specifics
Agile | Fix shortfalls in next phase | Iterations | Yes | Yes | Some
Spiral Process 2001 | Driven by stakeholder commitment milestones | Risk-driven; missing specifics | Yes | Risk-driven | Yes
Incremental Commitment | Stakeholder-driven; stronger human factors | Risk-driven; more specifics | Yes | Yes | Yes

FIGURE 2-8 Principles-Based Comparison of Alternative Process Models.
TABLE 2-1 Primary Focus of HSI Activity Classes and Methods

1. Envisioning opportunities
   HSI methods described in this volume: Futures workshop; Field observations and ethnography
   Systems engineering: Modeling; Change monitoring (technology, competition, marketplace, environment); Investment analysis

2. System scoping
   HSI methods: Participatory workshops; Field observations and ethnography; Organizational and environmental context analysis; Work context analysis
   Systems engineering: System boundary definition; Resource allocation; External environment characterization; Success-critical stakeholder identification

3. Understanding needs
   HSI methods: Organizational and environmental context analysis; Field observations and ethnography; Value-based practices; Participatory workshop; Contextual inquiry; Work context analysis; Event data analysis; Task analysis; Cognitive task analysis; Usability benchmarking
   Systems engineering: Success-critical stakeholder requirements; Competitive analysis; Market research; Future needs analysis

4. Goals/objectives and requirements
   HSI methods: Prototyping and usability evaluation; Simulations; Scenarios; Personas; Storyboards; Checklists
   Systems engineering: Architecture frameworks; COTS/reuse evaluation; Legacy transformation analysis

5. Architecting solutions
   HSI methods: Function allocation; Simulations; Work domain analysis; Task analysis; Participatory design; Workload assessment; Human performance modeling; Mitigating fatigue; Prototyping; Physical ergonomics; User interface guidelines and standards; Prototyping and usability evaluation
   Systems engineering: Human-hardware-software allocation; Quality attribute analysis; Synthesis; Facility/vehicle architecting; Equipment design; Component evaluation and selection; Supplies/logistics planning; Construction/maintenance planning; Architectural style determinants; Physical/logical design; Evolvability design

6. Life cycle planning
   HSI methods: HSI program risk analysis; Guidelines: Common Industry Format
   Systems engineering: Phased objectives (increments, legacy transformations); Milestones and schedule; Roles and responsibility; Approach; Resources; Assumptions

7. Evaluation
   HSI methods: Risk analysis (process and product); Use models and simulation; Guidelines: Common Industry Format for usability reports; Performance measurement; Usability evaluation
   Systems engineering: Evidence of fitness to proceed; Feasibility (usability, functionality, safety); Other quality attributes; Cost/schedule risk; Business case mission analysis; Stakeholder commitment; Simulations, models, benchmarks, analysis

8. Negotiating commitments
   HSI methods: Program risk analysis; Value-based practices and principles; Guidelines: Common Industry Specification for Usability Requirements
   Systems engineering: Dependency/compatibility tradeoff analysis; Expectation management, prioritization; Option preservation; Increment sequencing

9. Development and evolution
   HSI methods: Risk analysis (process and product); User feedback on usability; Use models and simulation; Guidelines: Common Industry Format for usability reports; Performance measurement
   Systems engineering: Materiel/operational solution analysis; Make-or-buy analysis; Acquisition planning; Source selection; Contracting/incentivization; Human/hardware/software element development and integration; Legacy transformation preparation; Incremental installation

10. Monitoring and control
   HSI methods: Organizational and environmental context analysis; Risk analysis; User feedback; Work context analysis
   Systems engineering: Progress monitoring vs. plans; Corrective action; Adaptation of plans to change monitoring

11. Operations and retirement
   HSI methods: Work context analysis; Organizational and environmental context analysis
   Systems engineering: Planned operations and retirement; OODA (observe, orient, decide, act) operations and retirement; Adaptation of operations to change monitoring

12. Organizational capability improvement
   HSI methods: Organizational and environmental context analysis
   Systems engineering: Organizational goals and strategy definition; Resource allocation; Capability improvement activities

NOTE: HSI methods often span multiple activity classes.
TABLE 2-2 Number of Top-5 Projects Explicitly Using ICM Principles

Year | Concurrent Engineering | Risk-Driven | Evolutionary Growth
2002 | 4 | 3 | 3
2003 | 4 | 3 | 2
2004 | 2 | 2 | 4
2005 | 4 | 4 | 5
Total (of 20) | 14 | 12 | 14
TABLE 2-3 Critical Success Factor (CSF) Aspects of Top-5 Software Projects

Software Project | CSF Degree | Concurrent Requirements/Solution Development | Risk-Driven Activities | Evolutionary, Incremental Delivery
STARS Air Traffic Control | * | Yes | HCI, safety | For multiple sites
Minuteman III Messaging (HAC/RMPE) | * | Yes | Safety | Yes; block upgrades
FA-18 Upgrades | * | Not described | Yes | Yes; block upgrades
Census Digital Imaging (DCS2000) | ** | Yes | Yes | No; fixed delivery date
FBCB2 Army Tactical C3I | ** | Yes | Yes | Yes
Defense Civilian Pay (DCPS) |  | No; waterfall | Yes | For multiple organizations
Tactical Data Radio (EPLRS) | ** | Yes | Yes | Yes
Joint Helmet-Mounted Cueing (JHMCS) | * | Yes; IPT-based | Not described | For multiple aircraft
Kwajalein Radar (KMAR) | * | Yes; IPT-based | Not described | For multiple radars
One SAF Simulation Testbed (OTB) | ** | Yes | Yes | Yes
Advanced Field Artillery (AFATDS) |  | Initially waterfall | Not described | Yes; block upgrades
Defense Medical Logistics (DMLSS) |  | Initially waterfall | Not described | Yes; block upgrades
F-18 HOL (H1E SCS) |  | Legacy requirements-driven | Yes; COTS, display | No
One SAF Objectives System (OOS) | ** | Yes | Yes | Yes
Patriot Excalibur (PEX) | ** | Yes; agile | Not described | Yes
Lightweight Handheld Fire Control | ** | Yes | Yes | Yes
Marines Integrated Pay (MCTFS) |  | Initially waterfall | Not described | Yes; block upgrades
Near Imaging Field Towers (NIFTI) | ** | Yes; RUP-based | Yes | Yes
Smart Cam Virtual Cockpit (SC3DF) | ** | Yes | Yes | Yes
WARSIM Army Training | ** | Yes | Yes | Yes
REFERENCES
Amram, M., and Kulatilaka, N. (1999). Real Options: Managing Strategic Investment in an Uncertain World. Harvard Business School Press.

Beck, K. (1999). Extreme Programming Explained. Addison Wesley, Reading, MA.

Black, F., and Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81, pp. 637-659.

Boehm, B. (July 1996). Anchoring the software process. IEEE Software, pp. 73-82.

Boehm, B. (May 1988). A spiral model of software development and enhancement. Computer, pp. 61-72.

Boehm, B. (Spring 2006). Some future trends and implications for systems and software engineering processes. Systems Engineering, pp. 1-19.

Boehm, B., and Hansen, W. (May 2001). The spiral model as a tool for evolutionary acquisition. CrossTalk, pp. 4-11.

Boehm, B., and Jain, A. (2005). An initial theory of value-based software engineering. In Biffl, S., Aurum, A., Boehm, B., Erdogmus, H., and Gruenbacher, P. (eds.), Value-Based Software Engineering. Springer, pp. 15-37.

Boehm, B., and Jain, A. (2006). A value-based theory of systems engineering. Proceedings, INCOSE 2006.

Boehm, B., and Lane, J. (May 2006). 21st century processes for acquiring 21st century systems of systems. CrossTalk, pp. 4-9.

Boehm, B., Brown, A.W., Basili, V., and Turner, R. (May 2004). Spiral acquisition of software-intensive systems of systems. CrossTalk, pp. 4-9.

Boehm, B., Egyed, A., Kwan, J., Port, D., Shah, A., and Madachy, R. (July 1998). Using the WinWin spiral model: A case study. Computer, pp. 33-44.

Boehm, B.W. (1991). Software risk management: Principles and practices. IEEE Software, 8(1), pp. 32-41.

Checkland, P. (1999). Systems Thinking, Systems Practice, 2nd ed. Wiley.

Clements, P., et al. (2002). Documenting Software Architectures. Addison Wesley.

CrossTalk (2002-2005). Top Five Quality Software Projects. January 2002, July 2003, July 2004, and September 2005 issues.

Fisher, R., and Ury, W. (1981). Getting to Yes. Houghton Mifflin.

Highsmith, J. (2000). Adaptive Software Development. Dorset House, New York, NY.

ISO/IEC 15288 (2002). Systems engineering – System life cycle processes.

Kruchten, P. (1999). The Rational Unified Process. Addison Wesley.

Lane, J., and Valerdi, R. (October 2005). Synthesizing systems-of-systems concepts for use in cost estimation. Proceedings, IEEE SMC 2005.

Maier, M. (1998). Architecting principles for systems-of-systems. Systems Engineering, 1(4), pp. 267-284.

Marenzano, J., et al. (March/April 2005). Architecture reviews: Practice and experience. IEEE Software, pp. 34-43.

Merriam-Webster (2006). Webster's Collegiate Dictionary.

Patterson, F. (1999). System engineering life cycles: Life cycles for research, development, test, and evaluation; acquisition; and planning and marketing. In Sage, A., and Rouse, W. (eds.), Handbook of Systems Engineering and Management. Wiley, pp. 59-111.

Raiffa, H. (1982). The Art and Science of Negotiation. Harvard University Press.

Royce, W.E. (1998). Software Project Management. Addison Wesley.

Sage, A., and Cuppan, C. (2001). On the systems engineering and management of systems of systems and federations of systems. Information, Knowledge, and Systems Management, 2, pp. 325-345.

Sage, A., and Rouse, W. (eds.) (1999). Handbook of Systems Engineering and Management. Wiley, New York, NY.

Sage, A., and Rouse, W. (1999). An introduction to systems engineering and systems management. In Sage, A., and Rouse, W. (eds.), Handbook of Systems Engineering and Management. Wiley, pp. 1-58.

Standish Group (2006). Available at https://secure.standishgroup.com/reports/reports.php

Thorp, J. (1998). The Information Paradox. McGraw-Hill.

U.S. Department of Defense (2003). DoD Instruction 5000.2, Operation of the Defense Acquisition System.

V-Modell XT (2005). Available at http://www.v-modell-xt.de

Womack, J., and Jones, D. (1996). Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon & Schuster.