ICM Tutorial 14 Jul 2008 v9.ppt

University of Southern California
Center for Systems and Software Engineering
Incremental Commitment Model:
What It Is (and Isn’t)
How to Right Size It
How It Helps Do Competitive Prototyping
Tutorial: Charts with Notes – Barry Boehm and Jo Ann Lane
University of Southern California
Center for Systems and Software Engineering
http://csse.usc.edu
Presented at
ICM and CP Workshop
July 14, 2008
Goals of Tutorial
1. Provide an overview of the Incremental Commitment Model (ICM) and its capabilities
2. Provide guidance on using the ICM process
decision table through elaboration and
examples of special cases
3. Describe how ICM supports the new DoD
Competitive Prototyping (CP) policy
Tutorial Outline
• ICM nature and context
• ICM views and examples
• ICM process decision table: Guidance and
examples for using the ICM
• ICM and competitive prototyping
• Summary, conclusions, and references
ICM Nature and Origins
• Integrates hardware, software, and human factors
elements of systems engineering
– Concurrent exploration of needs and opportunities
– Concurrent engineering of hardware, software, human aspects
– Concurrency stabilized via anchor point milestones
• Developed in response to DoD-related issues
– Clarify “spiral development” usage in DoD Instruction 5000.2
• Initial phased version (2005)
– Explain Future Combat System of systems spiral usage to GAO
• Underlying process principles (2006)
– Provide framework for human-systems integration
• National Research Council report (2007)
• Integrates strengths of current process models
– But not their weaknesses
ICM Integrates Strengths of Current Process Models
But not their weaknesses
• V-Model: Emphasis on early verification and validation
– But not its ease of sequential, single-increment interpretation
• Spiral Model: Risk-driven activity prioritization
– But not its lack of well-defined in-process milestones
• RUP and MBASE: Concurrent engineering stabilized by anchor point milestones
– But not their software-only orientation
• Lean Development: Emphasis on value-adding activities
– But not its repeatable manufacturing orientation
• Agile Methods: Adaptability to unexpected change
– But not their software orientation and lack of scalability
The ICM: What It Is and Isn’t
• Risk-driven framework for tailoring system processes
– Not a one-size-fits-all process
– Focused on future process challenges
• Integrates the strengths of phased and risk-driven
spiral process models
• Synthesizes principles critical to successful system development
– Commitment and accountability of system sponsors
– Success-critical stakeholder satisficing
– Incremental growth of system definition and stakeholder
commitment
– Concurrent engineering
– Iterative development cycles
– Risk-based activity levels and evidence-based milestones
Principles trump diagrams…
Principles used by 60-80% of CrossTalk Top-5 projects, 2002-2005
Context: Current and Future DoD S&SE Challenges
• Multi-owner, multi-mission systems of systems
• Emergent requirements; rapid pace of change
• Always-on, never-fail systems
• Need to turn within adversaries' OODA loop
– Observe, orient, decide, act
• Within asymmetric adversary constraints
Asymmetric Conflict and OODA Loops
• Adversary
– Picks time and place
– Little to lose
– Lightweight, simple systems and processes
– Can reuse anything
• Defender
– Ready for anything
– Much to lose
– More heavy, complex systems and processes
– Reuse requires trust
[OODA loop diagram: Observe new/updated objectives, constraints, alternatives; Orient with respect to stakeholders' priorities, feasibility, risks; Decide on new capabilities, architecture upgrades, plans; Act on plans, specifications]
Average Change Processing Time:
Two Complex Systems of Systems
[Bar chart: average workdays to process changes, on a 0 to 160 scale, for changes within groups, across groups, and contract mods]
Incompatible with turning within adversary’s OODA loop
Lifecycle Evolution Motivators as Identified
by CSSE Sponsors and Affiliates
• Current lifecycle models and associated practices
– Based on outdated assumptions
– Don’t always address today’s and tomorrow’s system development
challenges, especially as systems become more complex
– Examples of outdated assumptions and shortfalls in guidance
documents included in backup charts
• Need better integration of
– Systems engineering
– Software engineering
– Human factors engineering
– Specialty engineering (e.g., information assurance, safety)
– Engineering-management interactions (feasibility of plans, budgets, schedules, risk management methods)
• Need to better identify and manage critical issues and risks up
front (Young memo)
System/Software Architecture Mismatches
- Maier, 2006
System Hierarchy | Software Hierarchy
– Part-of relationships; no shared parts | – Uses relationships; layered multi-access
– Function-centric; single data dictionary | – Data-centric; class-object data relations
– Interface dataflows | – Interface protocols; concurrency challenges
– Static functional-physical allocation | – Dynamic functional-physical migration
Examples of Architecture
Mismatches
• Fractionated, incompatible sensor data management
[Diagram: Sensor 1, Sensor 2, Sensor 3, … Sensor n, each feeding its own sensor data management system SDMS1, SDMS2, SDMS3, … SDMSn]
• "Touch Football" interface definition earned value
– Full earned value taken for defining interface dataflow
– No earned value left for defining interface dynamics
• Joining/leaving network, publish-subscribe, interrupt handling, security protocols, exception handling, mode transitions
Tutorial Outline
• ICM nature and context
• ICM views and examples
• ICM process decision table: Guidance and
examples for using the ICM
• ICM and competitive prototyping
• Summary, conclusions, and references
The Incremental Commitment Life Cycle Process: Overview
Stage I: Definition; Stage II: Development and Operations
[Diagram: anchor point milestones mark the stage boundaries; concurrency is synchronized and stabilized via FEDs; risk patterns determine the life cycle process]
ICM HSI Levels of Activity for Complex Systems
Anchor Point Feasibility Evidence Description
• Evidence provided by developer and validated by
independent experts that:
If the system is built to the specified architecture, it will
– Satisfy the requirements: capability, interfaces, level of service,
and evolution
– Support the operational concept
– Be buildable within the budgets and schedules in the plan
– Generate a viable return on investment
– Generate satisfactory outcomes for all of the success-critical
stakeholders
• All major risks resolved or covered by risk management
plans
• Serves as basis for stakeholders’ commitment to proceed
Can be used to strengthen current schedule- or event-based reviews
ICM Anchor Point Milestone Content (1)
(Risk-driven level of detail for each element)
Milestone Element: Definition of Operational Concept
– Foundations Commitment Review (FCR/MS-A) Package:
• System shared vision update
• Top-level system objectives and scope (system boundary; environment parameters and assumptions)
• Top-level operational concepts (production, deployment, operations and sustainment scenarios and parameters; organizational life-cycle responsibilities (stakeholders))
– Development Commitment Review (DCR/MS-B) Package:
• Elaboration of system objectives and scope by increment
• Elaboration of operational concept by increment (including all mission-critical operational scenarios; generally decreasing detail in later increments)

Milestone Element: System Prototype(s)
– FCR/MS-A Package:
• Exercise key usage scenarios
• Resolve critical risks (e.g., quality attribute levels, technology maturity levels)
– DCR/MS-B Package:
• Exercise range of usage scenarios
• Resolve major outstanding risks

Milestone Element: Definition of System Requirements
– FCR/MS-A Package:
• Top-level functions, interfaces, quality attribute levels, including growth vectors and priorities
• Project and product constraints
• Stakeholders' concurrence on essentials
– DCR/MS-B Package:
• Elaboration of functions, interfaces, quality attributes, and constraints by increment (including all mission-critical off-nominal requirements; generally decreasing detail in later increments)
• Stakeholders' concurrence on their priority concerns
ICM Anchor Point Milestone Content (2)
(Risk-driven level of detail for each element)
Milestone Element: Definition of System Architecture
– Foundations Commitment Review (FCR/MS-A) Package:
• Top-level definition of at least one feasible architecture (physical and logical elements and relationships; choices of Non-Developmental Items (NDI))
• Identification of infeasible architecture options
– Development Commitment Review (DCR/MS-B) Package:
• Choice of architecture and elaboration by increment and component (physical and logical components, connectors, configurations, constraints; NDI choices; domain-architecture and architectural style choices)
• Architecture evolution parameters

Milestone Element: Definition of Life-Cycle Plan
– FCR/MS-A Package:
• Identification of life-cycle stakeholders (users, customers, developers, testers, sustainers, interoperators, general public, others)
• Identification of life-cycle process model (top-level phases, increments)
• Top-level WWWWWHH* by phase, function (production, deployment, operations, sustainment)
– DCR/MS-B Package:
• Elaboration of WWWWWHH* for Initial Operational Capability (IOC) by phase, function
• Partial elaboration, identification of key TBDs for later increments
*WWWWWHH: Why, What, When, Who, Where, How, How Much
ICM Anchor Point Milestone Content (3)
(Risk-driven level of detail for each element)
Milestone Element: Feasibility Evidence Description (FED)
– Foundations Commitment Review (FCR/MS-A) Package:
• Evidence of consistency, feasibility among elements above (via physical and logical modeling, testbeds, prototyping, simulation, instrumentation, analysis, etc.; mission cost-effectiveness analysis for requirements, feasible architectures)
• Identification of evidence shortfalls; risks
• Stakeholders' concurrence on essentials
– Development Commitment Review (DCR/MS-B) Package:
• Evidence of consistency, feasibility among elements above
• Identification of evidence shortfalls; risks
• All major risks resolved or covered by risk management plan
• Stakeholders' concurrence on their priority concerns, commitment to development
Incremental Commitment in
Gambling
• Total Commitment: Roulette
– Put your chips on a number
• E.g., a value of a key performance parameter
– Wait and see if you win or lose
• Incremental Commitment: Poker, Blackjack
– Put some chips in
– See your cards, some of others’ cards
– Decide whether, how much to commit to
proceed
Scalable remotely controlled
operations
Total vs. Incremental Commitment – 4:1 RPV
• Total Commitment
– Agent technology demo and PR: Can do 4:1 for $1B
– Winning bidder: $800M; PDR in 120 days; 4:1 capability in 40 months
– PDR: many outstanding risks, undefined interfaces
– $800M, 40 months: "halfway" through integration and test
– 1:1 IOC after $3B, 80 months
• CP-based Incremental Commitment [number of competing teams]
– $25M, 6 mo. to VCR [4]: may beat 1:2 with agent technology, but not 4:1
– $75M, 8 mo. to ACR [3]: agent technology may do 1:1; some risks
– $225M, 10 mo. to DCR [2]: validated architecture, high-risk elements
– $675M, 18 mo. to IOC [1]: viable 1:1 capability
– 1:1 IOC after $1B, 42 months
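The arithmetic behind the comparison can be tallied directly; here is a minimal sketch in Python using only the cost and schedule figures quoted on this chart:

```python
# Tally the CP-based incremental commitments above and compare with the
# total-commitment outcome quoted on this chart ($3B and 80 months to a 1:1 IOC).
phases = [  # (milestone, cost in $M, duration in months, competing teams)
    ("VCR", 25, 6, 4),
    ("ACR", 75, 8, 3),
    ("DCR", 225, 10, 2),
    ("IOC", 675, 18, 1),
]

total_cost = sum(cost for _, cost, _, _ in phases)        # 1000, i.e., $1B
total_months = sum(months for _, _, months, _ in phases)  # 42
print(f"CP-based incremental path: ${total_cost}M over {total_months} months")
print("Total-commitment path: $3000M over 80 months")
```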
The Incremental Commitment Life Cycle Process: Overview
Stage I: Definition; Stage II: Development and Operations
[Diagram: anchor point milestones; in Stage I, concurrently engineer OpCon, requirements, architecture, plans, and prototypes; in Stage II, concurrently engineer Increment N (operations), N+1 (development), and N+2 (architecting)]
ICM Risk-Driven Scalable Development
[Diagram: rapid change is absorbed in short development increments; foreseeable change is planned into the Increment N baseline and stable development increments; high assurance comes from short, stabilized development of Increment N, leading to Increment N transition/O&M]
ICM Risk-Driven Scalable Development: Expanded View
[Expanded diagram: unforeseeable change (adapt) drives agile rebaselining for future increments, producing future increment baselines; rapid change feeds short development increments; foreseeable change (plan) feeds the Increment N baseline and stable development increments; high assurance comes from short, stabilized development of Increment N, whose artifacts undergo verification and validation (V&V) of Increment N ahead of Increment N transition and operations and maintenance; deferrals and concerns flow between the agile and stabilized teams; current V&V resources support continuous V&V, with future V&V resources for later increments]
Agile Change Processing and Rebaselining
[Swim-lane diagram with four roles: Change Proposers, Agile Future-Increment Rebaselining Team, Stabilized Increment-N Development Team, and Future Increment Managers.
Change Proposers propose changes; the rebaselining team assesses changes and proposes handling: handle in the current rebaseline, recommend handling in the current increment, recommend deferrals to future increments, or recommend no action with rationale; proposers discuss, revise, defer, or drop.
The development team negotiates change disposition, handles accepted Increment-N changes, and defers some Increment-N capabilities; Future Increment Managers discuss and resolve deferrals to future increments.
The rebaselining team formulates and analyzes options in the context of other changes, rebaselines future-increment Foundations packages, and prepares for rebaselined future-increment development.]
Spiral View of Incremental Commitment Model
[Spiral diagram: cumulative level of understanding, cost, time, product, and process detail (risk-driven); concurrent engineering of products and processes through Exploration, Valuation, Architecting, Development, Operation1, and Operation2; opportunities to proceed, skip phases, backtrack, or terminate at each review point]
Stakeholder commitment review points:
1. Exploration Commitment Review
2. Valuation Commitment Review
3. Architecture Commitment Review
4. Development Commitment Review
5. Operations1 and Development2 Commitment Review
6. Operations2 and Development3 Commitment Review
Use of Key Process Principles:
Annual CrossTalk Top-5 Projects
Year | Concurrent Engineering | Evolutionary Growth | Risk-Driven
2002 | 4 | 3 | 3
2003 | 5 | 4 | 3
2004 | 3 | 3 | 4
2005 | 4 | 4 | 5
Total (of 20) | 16 | 14 | 15
Example ICM Commercial Application:
Symbiq Medical Infusion Pump
Winner of 2006 HFES Best New Design Award
Described in NRC HSI Report, Chapter 5
Symbiq IV Pump ICM Process - I
• Exploration Phase
– Stakeholder needs interviews, field observations
– Initial user interface prototypes
– Competitive analysis, system scoping
– Commitment to proceed
• Valuation Phase
– Feature analysis and prioritization
– Display vendor option prototyping and analysis
– Top-level life cycle plan, business case analysis
– Safety and business risk assessment
– Commitment to proceed while addressing risks
Symbiq IV Pump ICM Process - II
• Foundations Phase
– Modularity of pumping channels
– Safety feature and alarms prototyping and iteration
– Programmable therapy types, touchscreen analysis
– Failure modes and effects analyses (FMEAs)
– Prototype usage in teaching hospital
– Commitment to proceed into development
• Development Phase
– Extensive usability criteria and testing
– Iterated FMEAs and safety analyses
– Patient-simulator testing; adaptation to concerns
– Commitment to production and business plans
ICM Summary
• Current processes not well matched to future challenges
– Emergent, rapidly changing requirements
– High assurance of scalable performance and qualities
• Incremental Commitment Model addresses challenges
– Assurance via evidence-based milestone commitment reviews,
stabilized incremental builds with concurrent V&V
• Evidence shortfalls treated as risks
– Adaptability via concurrent agile team handling change traffic and
providing evidence-based rebaselining of next-increment specifications
and plans
– Use of critical success factor principles: stakeholder satisficing, incremental growth, concurrent engineering, iterative development, risk-based activities and milestones
• Major implications for funding, contracting, career paths
Implications for funding, contracting, career paths
• Incremental vs. total funding
– Often with evidence-based competitive downselect
• No one-size-fits all contracting
– Separate instruments for build-to-spec, agile rebaselining, V&V
teams
• With funding and award fees for collaboration, risk management
• Compatible regulations, specifications, and standards
• Compatible acquisition corps education and training
– Generally, schedule/cost/quality as independent variable
• Prioritized feature set as dependent variable
• Multiple career paths
– For people good at build-to-spec, agile rebaselining, V&V
– For people good at all three
• Future program managers and chief engineers
Tutorial Outline
• ICM nature and context
• ICM views and examples
• ICM process decision table:
Guidance and examples for using
the ICM
• ICM and competitive prototyping
• Summary, conclusions, and references
The Incremental Commitment Life Cycle Process: Overview
Stage I: Definition; Stage II: Development and Operations
[Diagram: anchor point milestones; concurrency is synchronized and stabilized via FEDs; risk patterns determine the life cycle process]
Different Risk Patterns Yield Different Processes
The ICM as Risk-Driven Process
Generator
• Stage I of the ICM has 3 decision nodes with 4 options/node
– Culminating with incremental development in Stage II
– Some options involve go-backs
– Results in many possible process paths
• Can use ICM risk patterns to generate frequently-used
processes
– With confidence that they fit the situation
• Can generally determine this in the Exploration phase
– Develop as proposed plan with risk-based evidence at VCR
milestone
– Adjustable in later phases
The ICM Process Decision Table:
Key Decision Inputs
• Product and project size and complexity
• Requirements volatility
• Mission criticality
• Nature of Non-Developmental Item (NDI)* support
– Commercial, open-source, reused components
• Organizational and personnel capability
* NDI Definition [DFARS]: a) any product that is available in the commercial marketplace; b) any
previously developed product in use by a U.S. agency (federal, state, or local) or a foreign government that
has a mutual defense agreement with the U.S.; c) any product described in the first two points above that
requires only modifications to meet requirements; d) any product that is being produced, but not yet in the
commercial marketplace, that satisfies the above criteria.
The ICM Process Decision Table:
Key Decision Outputs
• Key Stage I activities: incremental definition
• Key Stage II activities: incremental
development and operations
• Suggested calendar time per build, per
deliverable increment
Common Risk-Driven Special Cases of the ICM
(Columns: Special Case | Example | Size, Complexity | Change Rate (%/month) | Criticality | NDI Support | Org, Personnel Capability | Key Stage I Activities: Incremental Definition | Key Stage II Activities: Incremental Development, Operations | Time per Build; per Increment)

1. Use NDI | Small accounting | – | – | – | Complete | – | Acquire NDI | Use NDI | –
2. Agile | E-services | Low | 1-30 | Low-Med | Good; in place | Agile-ready, med-high | Skip Valuation, Architecting phases | Scrum plus agile methods of choice | <=1 day; 2-6 weeks
3. Architected Agile | Business data processing | Med | 1-10 | Med-High | Good; most in place | Agile-ready, med-high | Combine Valuation, Architecting phases; complete NDI preparation | Architecture-based Scrum of Scrums | 2-4 weeks; 2-6 months
4. Formal Methods | Security kernel or safety-critical LSI chip | Low | 0.3-1 | Extra high | None | Strong formal methods experience | Precise formal specification | Formally-based programming language; formal verification | 1-5 days; 1-4 weeks
5. HW component with embedded SW | Multi-sensor control device | Low | 0.3-1 | Med-Very high | Good; in place | Experienced; med-high | Concurrent HW/SW engineering; CDR-level ICM DCR | IOC development, LRIP, FRP; concurrent Version N+1 engineering | SW: 1-5 days; market-driven
6. Indivisible IOC | Complete vehicle platform | Med-High | 0.3-1 | High-Very high | Some in place | Experienced; med-high | Determine minimum-IOC likely, conservative cost; add deferrable SW features as risk reserve | Drop deferrable features to meet conservative cost; strong award fee for features not dropped | SW: 2-6 weeks; platform: 6-18 months
7. NDI-intensive | Supply chain management | Med-High | 0.3-3 | Med-Very high | NDI-driven architecture | NDI-experienced; med-high | Thorough NDI-suite life cycle cost-benefit analysis, selection, concurrent requirements/architecture definition | Proactive NDI evolution influencing, NDI upgrade synchronization | SW: 1-4 weeks; system: 6-18 months
8. Hybrid agile/plan-driven system | C4ISR | Med-Very high | Mixed parts: 1-10 | Mixed parts; Med-Very high | Mixed parts | Mixed parts | Full ICM; encapsulated agile in high-change, low-medium criticality parts (often HMI, external interfaces) | Full ICM; three-team incremental development, concurrent V&V, next-increment rebaselining | 1-2 months; 9-18 months
9. Multi-owner system of systems | Net-centric military operations | Very high | Mixed parts: 1-10 | Very high | Many NDIs; some in place | Related experience, med-high | Full ICM; extensive multi-owner team building, negotiation | Full ICM; large ongoing system/software engineering effort | 2-4 months; 18-24 months
10. Family of systems | Medical device product line | Med-Very high | 1-3 | Med-Very high | Some in place | Related experience, med-high | Full ICM; full stakeholder participation in product line scoping; strong business case | Full ICM; extra resources for first system, version control, multi-stakeholder support | 1-2 months; 9-18 months

C4ISR: Command, Control, Computing, Communications, Intelligence, Surveillance, Reconnaissance. CDR: Critical Design Review. DCR: Development Commitment Review. FRP: Full-Rate Production. HMI: Human-Machine Interface. HW: Hardware. IOC: Initial Operational Capability. LRIP: Low-Rate Initial Production. NDI: Non-Developmental Item. SW: Software.
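As an illustration of how the table can be applied (this lookup is not part of the tutorial), a few of the rows above can be encoded and matched against a project profile; the row encodings and the matching rule below are deliberate simplifications:

```python
from dataclasses import dataclass

@dataclass
class SpecialCase:
    name: str
    size: set            # acceptable size/complexity ratings
    change_rate: tuple   # (min, max) anticipated change, % per month
    criticality: set
    stage1: str          # key Stage I activities
    stage2: str          # key Stage II activities

CASES = [
    SpecialCase("Agile", {"low"}, (1, 30), {"low", "med"},
                "Skip Valuation and Architecting phases",
                "Scrum plus agile methods of choice"),
    SpecialCase("Architected Agile", {"med"}, (1, 10), {"med", "high"},
                "Combine Valuation and Architecting phases; complete NDI preparation",
                "Architecture-based Scrum of Scrums"),
    SpecialCase("Formal Methods", {"low"}, (0.3, 1), {"extra high"},
                "Precise formal specification",
                "Formally-based programming language; formal verification"),
]

def suggest_case(size, change_rate, criticality):
    """Return the first encoded decision-table row matching the project profile."""
    for case in CASES:
        lo, hi = case.change_rate
        if size in case.size and lo <= change_rate <= hi and criticality in case.criticality:
            return case
    return None  # no special case encoded here; fall back to the full ICM

match = suggest_case(size="med", change_rate=5, criticality="high")
print(match.name if match else "No match; apply full ICM and engineering judgment")
```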
Case 1: Use NDI
• Exploration phase identifies NDI opportunities
• NDI risk/opportunity analysis indicates risks acceptable
– Product growth envelope fits within NDI capability
– Compatible NDI and product evolution paths
– Acceptable NDI volatility
• Some open-source components highly volatile
– Acceptable usability, dependability, interoperability
– NDI available or affordable
• Example: Small accounting system
• Size/complexity: Low
• Anticipated change rate (% per month): Low
• Criticality: Low
• NDI support: Complete
• Organization and personnel capability: NDI-experienced
• Key Stage I activities: Acquire NDI
• Key Stage II activities: Use NDI
• Time/build: Driven by time to initialize/tailor NDI
• Time/increment: Driven by NDI upgrades
Case 2: Pure Agile Methods
• Exploration phase determines
– Low product and project size and complexity
– Fixing increment defects in next increment acceptable
– Existing hardware and NDI support of growth envelope
– Sufficient agile-capable personnel
– Need to accommodate rapid change, emergent requirements, early user capability
• Example: E-services
• Size/complexity: Low
• Anticipated change rate (% per month): 1-30%
• Criticality: Low to medium
• NDI support: Good; in place
• Organization and personnel capability: Agile-ready, medium to high capability
• Key Stage I activities: Skip Valuation and Architecting phases
• Key Stage II activities: Scrum plus agile methods of choice
• Time/build: Daily
• Time/increment: 2-6 weeks
Case 3: Architected Agile
• Exploration phase determines
– Need to accommodate fairly rapid change, emergent requirements, early user capability
– Low risk of scalability up to 100 people
– NDI support of growth envelope
– Nucleus of highly agile-capable personnel
– Moderate to high loss due to increment defects
• Example: Business data processing
• Size/complexity: Medium
• Anticipated change rate (% per month): 1-10%
• Criticality: Medium to high
• NDI support: Good, most in place
• Organization and personnel capability: Agile-ready, med-high capability
• Key Stage I activities: Combined Valuation and Architecting phase, complete NDI preparation
• Key Stage II activities: Architecture-based scrum of scrums
• Time/build: 2-4 weeks
• Time/increment: 2-6 months
Case 4: Formal Methods
• Biggest risks: Software/hardware does not accurately implement required algorithm precision, security, safety mechanisms, or critical timing
• Example: Security kernel or safety-critical LSI chip
• Size/complexity: Low
• Anticipated change rate (% per month): 0.3%
• Criticality: Extra high
• NDI support: None
• Organization and personnel capability: Strong formal methods experience
• Key Stage I activities: Precise formal specification
• Key Stage II activities: Formally-based programming language; formal verification
• Time/build: 1-5 days
• Time/increment: 1-4 weeks
Case 5: Hardware Component with
Embedded Software
• Biggest risks: Device recall, lawsuits, production line rework,
hardware-software integration
– DCR carried to Critical Design Review level
– Concurrent hardware-software design
• Criticality makes Agile too risky
– Continuous hardware-software integration
• Initially with simulated hardware
• Low risk of overrun
– Low complexity, stable requirements and NDI
– Little need for risk reserve
– Likely single-supplier software
Case 5: Hardware Component with
Embedded Software (continued)
• Example: Multi-sensor control device
• Size/complexity: Low
• Anticipated change rate (% per month): 0.3-1%
• Criticality: Medium to very high
• NDI support: Good, in place
• Organization and personnel capability: Experienced; medium to high capability
• Key Stage I activities: Concurrent hardware and software engineering; CDR-level ICM DCR
• Key Stage II activities: IOC development, LRIP, FRP, concurrent version N+1 engineering
• Time/build: 1-5 days (software)
• Time/increment: Market-driven
Case 6: Indivisible IOC
• Biggest risk: Complexity, NDI uncertainties cause cost-schedule overrun
– Similar strategies to Case 5 for criticality (CDR, concurrent HW-SW design, continuous integration)
– Add deferrable software features as risk reserve
• Adopt conservative (90% sure) cost and schedule
• Drop software features to meet cost and schedule
• Strong award fee for features not dropped
– Likely multiple-supplier software makes longer (multi-weekly)
builds more necessary
Case 6: Indivisible IOC (continued)
• Example: Complete vehicle platform
• Size/complexity: Medium to high
• Anticipated change rate (% per month): 0.3-1%
• Criticality: High to very high
• NDI support: Some in place
• Organization and personnel capability: Experienced, medium to high capability
• Key Stage I activities: Determine minimum-IOC likely, conservative cost; add deferrable software features as risk reserve
• Key Stage II activities: Drop deferrable features to meet conservative cost; strong award fee for features not dropped
• Time/build: 2-6 weeks (software)
• Time/increment: 6-18 months (platform)
Case 7: NDI-Intensive
• Biggest risks: Incompatible NDI; rapid change; business/mission criticality; low NDI assessment and integration experience; supply chain stakeholder incompatibilities
• Example: Supply chain management
• Size/complexity: Medium to high
• Anticipated change rate (% per month): 0.3-3%
• Criticality: Medium to very high
• NDI support: NDI-driven architecture
• Organization and personnel capability: NDI-experienced; medium to high capability
• Key Stage I activities: Thorough NDI-suite life cycle cost-benefit analysis, selection, concurrent requirements and architecture definition
• Key Stage II activities: Proactive NDI evolution influencing, NDI upgrade synchronization
• Time/build: 1-4 weeks (software)
• Time/increment: 6-18 months (systems)
Case 8: Hybrid Agile/Plan-Driven System
• Biggest risks: Large scale, high complexity, rapid change, mixed high/low criticality, partial NDI support, mixed personnel capability
• Example: C4ISR system
• Size/complexity: Medium to very high
• Anticipated change rate (% per month): Mixed parts; 1-10%
• Criticality: Mixed parts; medium to very high
• NDI support: Mixed parts
• Organization and personnel capability: Mixed parts
• Key Stage I activities: Full ICM; encapsulated agile in high-change, low-medium criticality parts (often HMI, external interfaces)
• Key Stage II activities: Full ICM, three-team incremental development, concurrent V&V, next-increment rebaselining
• Time/build: 1-2 months
• Time/increment: 9-18 months
Case 9: Multi-Owner System of Systems
• Biggest risks: All those of Case 8, plus
– Need to synchronize, integrate separately-managed, independently-evolving systems
– Extremely large scale; deep supplier hierarchies
– Rapid adaptation to change extremely difficult
• Example: Net-centric military operations or global supply chain management
• Size/complexity: Very high
• Anticipated change rate (% per month): Mixed parts; 1-10%
• Criticality: Very high
• NDI support: Many NDIs; some in place
• Organization and personnel capability: Related experience, medium to high
• Key Stage I activities: Full ICM; extensive multi-owner teambuilding, negotiation
• Key Stage II activities: Full ICM; large ongoing system/software engineering effort
• Time/build: 2-4 months
• Time/increment: 18-24 months
ICM Approach to Systems of Systems
• SoSs provide examples of how systems and software engineering can be better integrated when evolving existing systems to meet new needs
• Net-centricity and collaboration-intensiveness of SoSs have created more emphasis on integrating hardware, software, and human factors engineering
• Focus is on
– Flexibility and adaptability
– Use of creative approaches, experimentation, and tradeoffs
– Consideration of non-optimal approaches that are satisfactory to key stakeholders
• Key focus for SoS engineering is to guide the evolution of constituent systems within the SoS to
– Perform cross-cutting capabilities
– While supporting existing single-system stakeholders
• SoS process adaptations have much in common with the ICM
– Supports evolution of constituent systems using the appropriate special case for each system
– Guides the evolution of the SoS through long-range increments that fit with the incremental evolution of the constituent systems
– ICM fits well with the OSD SoS SE Guidebook description of SoSE core elements
The ICM Life Cycle Process and the SoS SE Core Elements
[Diagram: the ICM life cycle (Stage I: Definition; Stage II: Development and Operations) mapped to the SoS SE core elements: translating capability objectives; understanding systems and relationships; monitoring and assessing changes; developing and evolving the SoS architecture; addressing requirements and solution options; orchestrating upgrades to the SoS; assessing performance to capability objectives]
Example: SoSE Synchronization Points
[Timeline diagram: the SoS level proceeds through Exploration, Valuation, and Architecting and on through FCR1, DCR1, OCR1, OCR2, and later milestones; candidate suppliers/strategic partners 1 through n submit LCO-type proposal and feasibility information for source selection, followed by rebaselining/adjustment at FCR1; constituent Systems A, B, C, and x run their own Exploration, Valuation, Architecting, Develop, and Operation phases through their FCR, DCR, and OCR milestones (e.g., OCRx1 through OCRx5, OCRA1, OCRB1, OCRB2, OCRC1, OCRC2), synchronized with the SoS-level increments]
Case 10: Family of Systems
• Biggest risks: All those of Case 8, plus
– Need to synchronize, integrate separately-managed, independently-evolving systems
– Extremely large scale; deep supplier hierarchies
– Rapid adaptation to change extremely difficult
• Example: Medical device product line
• Size/complexity: Medium to very high
• Anticipated change rate (% per month): 1-3%
• Criticality: Medium to very high
• NDI support: Some in place
• Organization and personnel capability: Related experience, medium to high capability
• Key Stage I activities: Full ICM; full stakeholder participation in product line scoping; strong business case
• Key Stage II activities: Full ICM; extra resources for first system, version control, multi-stakeholder support
• Time/build: 1-2 months
• Time/increment: 9-18 months
Frequently Asked Question
• Q: Having all that ICM generality and then using the decision table to come back to a simple model seems like overkill.
– If my risk patterns are stable, can’t I just use the special case
indicated by the decision table?
• A: Yes, you can and should – as long as your risk
patterns stay stable. But as you encounter change,
the ICM helps you adapt to it.
– And it helps you collaborate with other organizations that
may use different special cases.
Tutorial Outline
• ICM nature and context
• ICM views and examples
• ICM process decision table: Guidance and
examples for using the ICM
• ICM and competitive prototyping
– Motivation and context
– Applying ICM principles to CP
• Summary, conclusions, and references
Motivation and Context
• DoD is emphasizing CP for system acquisition
– Young memo, September 2007
• CP can produce significant benefits, but also
has risks
– Benefits discussed in RPV CP example
– Examples of risks from experiences, workshops
• The risk-driven ICM can help address the risks
– Primarily through its underlying principles
Young Memo: Prototyping and Competition
• Discover issues before costly SDD phase
– Producing detailed designs in SDD
– Not solving myriad technical issues
• Services and Agencies to produce competitive
prototypes through Milestone B
– Reduce technical risk, validate designs and cost estimates,
evaluate manufacturing processes, refine requirements
• Will reduce time to fielding
– And enhance govt.-industry teambuilding, SysE skills,
attractiveness to next generation of technologists
• Applies to all programs requiring USD(AT&L) approval
– Should be extended to appropriate programs below ACAT I
Example Risks Involved in CP
Based on TRW, DARPA, SAIC experiences; workshop
• Seductiveness of sunny-day demos
– Lack of coverage of rainy-day off-nominal scenarios
– Lack of off-ramps for infeasible outcomes
• Underemphasis on quality factor tradeoffs
– Scalability, performance, safety, security, adaptability
• Discontinuous support of developers, evaluators
– Loss of key team members
– Inadequate evaluation of competitors
• Underestimation of productization costs
– Brooks factor of 9 for software
– May be higher for hardware
• Underemphasis on non-prototype factors
Milestone B Focus on Technology Maturity
Misses Many OSD/AT&L Systemic Root Causes
1. Technical process (35 instances)
– V&V, integration, modeling & simulation
2. Management process (31)
3. Acquisition practices (26)
4. Requirements process (25)
5. Competing priorities (23)
6. Lack of appropriate staff (23)
7. Ineffective organization (22)
8. Ineffective communication (21)
9. Program realism (21)
10. Contract structure (20)
• Some of these are root causes of technology immaturity
• Can address these via evidence-based Milestone B exit criteria
– Technology Development Strategy
– Capability Development Document
– Evidence of affordability, KPP satisfaction, program achievability
Applying ICM Principles and Practices to CP
• When, what, and how much to prototype?
– Risk management principle: buying information to
reduce risk
• Whom to involve in CP?
– Satisficing principle: all success-critical stakeholders
• How to sequence CP?
– Incremental growth, iteration principles
• How to plan for CP?
– Concurrent engineering principle: more parallel effort
• What is needed at Milestone B besides
prototypes?
– Risk management principle: systemic analysis insights
When, What, and How Much to Prototype?
→ Buying information to reduce risk
• When and what: Expected value of perfect
information
• How much is enough: Simple statistical
decision theory
When and What to Prototype: Early RPV Example
• Bold approach
0.5 probability of success: Value VBS = $100M
0.5 probability of failure: Value VBF = -$20M
• Conservative approach
Value VC = $20M
• Expected value with no information
EVNI = max(EVB, EVC) = max(0.5($100M) + 0.5(-$20M), $20M) = max($50M - $10M, $20M) = $40M
• Expected value with perfect information
EVWPI = 0.5[max(VBS, VC)] + 0.5[max(VBF, VC)] = 0.5 * max($100M, $20M) + 0.5 * max(-$20M, $20M) = 0.5 * $100M + 0.5 * $20M = $60M
• Expected value of perfect information
EVPI = EVWPI - EVNI = $60M - $40M = $20M
• Can spend up to $20M buying information to reduce risk
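The same arithmetic can be checked directly; a small sketch using the values on this chart:

```python
# Expected-value check for the early RPV example on this chart.
p_success = 0.5
v_bold_success, v_bold_failure, v_conservative = 100e6, -20e6, 20e6

# No information: commit to the better approach up front.
ev_bold = p_success * v_bold_success + (1 - p_success) * v_bold_failure   # $40M
ev_no_info = max(ev_bold, v_conservative)                                 # $40M

# Perfect information: choose per outcome before committing.
ev_with_perfect_info = (p_success * max(v_bold_success, v_conservative)
                        + (1 - p_success) * max(v_bold_failure, v_conservative))  # $60M

evpi = ev_with_perfect_info - ev_no_info
print(f"EVNI = ${ev_no_info/1e6:.0f}M, EVWPI = ${ev_with_perfect_info/1e6:.0f}M, EVPI = ${evpi/1e6:.0f}M")
```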
If Risk Exposure is Low, CP Has Less Value
• Risk Exposure RE = Prob(Loss) * Size(Loss)
• Value of CP (EVPI) would be very small if the
Bold approach is less risky
– Prob(Loss) = Prob (VBF) is near zero rather than 0.5
– Size(Loss) = VBF is near $20M rather than -$20M
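Rerunning the same formula with an assumed low-risk Bold approach shows how quickly the value of information collapses; the 0.05 failure probability and $15M failure value below are hypothetical, chosen only for illustration:

```python
# Hypothetical low-risk variant of the RPV example: failure is unlikely and mild.
p_fail, v_success, v_fail, v_conservative = 0.05, 100e6, 15e6, 20e6

ev_no_info = max((1 - p_fail) * v_success + p_fail * v_fail, v_conservative)  # $95.75M
ev_perfect = ((1 - p_fail) * max(v_success, v_conservative)
              + p_fail * max(v_fail, v_conservative))                         # $96M
print(f"EVPI = ${(ev_perfect - ev_no_info)/1e6:.2f}M")  # ~$0.25M, versus $20M in the original case
```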
How Much Prototyping is Enough?
→ Value of imperfect information
• Larger CP investments reduce the probability of
– False Negatives (FN): prototype fails, but approach would succeed
– False Positives (FP): prototype succeeds, but approach would fail
• Can calculate EV(Prototype) from previous data plus P(FN), P(FP):

CP Cost | P(FN) | P(FP) | EV(CP) | EV(Info) | Net EV(CP)
$0 | – | – | $40M | $0 | $0
$2M | 0.3 | 0.2 | $46M | $6M | $4M
$5M | 0.2 | 0.1 | $52M | $12M | $7M
$10M | 0.15 | 0.075 | $54M | $14M | $4M
$15M | 0.1 | 0.05 | $56M | $16M | $1M
$22M | 0.0 | 0.0 | $60M | $20M | -$2M

[Chart: Net EV(CP) in $M versus Cost(CP) in $M, rising from $0 with no prototyping to a peak around the $5M investment and falling below zero by $22M]
• Added CP decision criterion
– The prototype can cost-effectively reduce the uncertainty
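The chart's bottom line can be reproduced from its own columns, since Net EV(CP) = EV(Info) minus the CP cost; a quick sketch with the values copied from the table above:

```python
# (CP cost $M, EV(Info) $M) pairs taken from the chart above.
rows = [(0, 0), (2, 6), (5, 12), (10, 14), (15, 16), (22, 20)]

for cost, ev_info in rows:
    print(f"CP cost ${cost}M -> Net EV(CP) = ${ev_info - cost}M")
# The net value peaks at the $5M prototyping investment ($7M) and is negative by $22M.
```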
Summary: CP Pays Off When
• The basic CP value propositions are satisfied
1. There is significant risk exposure in making the wrong decision
2. The prototype can cost-effectively reduce the risk exposure
• There are net positive side effects
3. The CP process does not consume too much calendar time
4. The prototypes have added value for teambuilding or training
5. The prototypes can be used as part of the product
Applying ICM Principles and Practices to CP
• When, what, and how much to prototype?
– Risk management principle: buying information to
reduce risk
• Whom to involve in CP?
– Satisficing principle: all success-critical stakeholders
• How to sequence CP?
– Incremental growth, iteration principles
• How to plan for CP?
– Concurrent engineering principle: more parallel effort
• What is needed at Milestone B besides
prototypes?
– Risk management principle: systemic analysis insights
Whom to Involve in CP?
→ Satisficing principle: All success-critical stakeholders
• Success-critical: high risk of neglecting their interests
– Acquirers
– Developers
– Users
– Testers
– Operators
– Maintainers
– Interoperators
– Others
• Risk-driven level of involvement
– Interoperators: initially high-level; increasing detail
• Need to have CRACK stakeholder participants
– Committed, Representative, Authorized, Collaborative,
Knowledgeable
How to Sequence CP?
→ Iterative cycles; incremental commitment principles
[Chart (Blanchard-Fabrycky, 1998): traditional degree of commitment versus traditional degree of understanding over the life cycle (both approaching 100%), with incremental CP commitments shown as a stepped alternative]
Actual CP Situation: Need to Conserve Momentum
• Need time to evaluate and rebaseline
• Eliminated competitors’ experience lost
[Chart: degree of commitment and degree of understanding (approaching 100%) across alternating prototyping and evaluation rounds Proto-1, Eval-1, Proto-2, Eval-2, Proto-3, Eval-3; annotations: need to keep competitors productive and compensated during evaluations, and need to capitalize on lost experience]
Keeping Competitors Productive and Supported
During Evaluations
→ Concurrent engineering principle
• Provide support for a core group within each
competitor organization
– Focused on supporting evaluation activities
– Avoiding loss of tacit knowledge and momentum
• Key evaluation support activities might include
– Supporting prototype exercises
– Answering questions about critical success factors
• Important to keep evaluation and selection period
as short as possible
– Through extensive preparation activities (see next chart)
Keeping Acquirers Productive and
Supported During Prototyping
• Adjusting plans based on new information
• Preparing evaluation tools and testbeds
– Criteria, scenarios, experts, stakeholders, detailed
procedures
• Possibly assimilating downselected competitors
– IV&V contracts as consolation prizes
• Identifying, involving success-critical stakeholders
• Reviewing interim progress
• Pursuing complementary acquisition initiatives
– Operational concept definition, life cycle planning, external
interface negotiation, mission cost-effectiveness analysis
Applying ICM Principles and Practices to CP
• When, what, and how much to prototype?
– Risk management principle: buying information to
reduce risk
• Whom to involve in CP?
– Satisficing principle: all success-critical stakeholders
• How to sequence CP?
– Incremental growth, iteration principles
• How to plan for CP?
– Concurrent engineering principle: more parallel effort
• What is needed at Milestone B besides
prototypes?
– Risk management principle: systemic analysis insights
Later CP Rounds Need Increasing
Focus on Complementary Practices
→ By all success-critical stakeholders
• Stakeholder roles, responsibilities, authority,
accountability
• Capability priorities and sequencing of
development increments
• Concurrent engineering of requirements,
architecture, feasibility evidence
• Early preparation of development infrastructure
(i.e., key parts of the architecture)
• Acquisition planning, contracting, management,
staffing, test and evaluation
When to Stop CP
→ Commitment and accountability principle: Off-ramps
• Inadequate technology base
– Lack of evidence of scalability, security, accuracy, robustness,
airworthiness, useful lifetime, …
– Better to pursue as research, exploratory development
• Better alternative solutions emerge
– Commercial, other government
• Key success-critical stakeholders decommit
– Infrastructure providers, strategic partners, changed
leadership
Important to emphasize possibility of off-ramps….
Acquiring Organization’s ICM-Based CP Plan
• Addresses issues discussed above
– Risk-driven prototyping rounds, concurrent definition and
development, continuity of support, stakeholder involvement,
off-ramps
• Organized around key management questions
– Objectives (why?): concept feasibility, best system solution
– Milestones and Schedules (what? when?): Number and timing of
competitive rounds; entry and exit criteria, including off-ramps
– Responsibilities (who? where?): Success-critical stakeholder
roles and responsibilities for activities and artifacts
– Approach (how?): Management approach or evaluation
guidelines, technical approach or evaluation methods, facilities,
tools, and concurrent engineering
– Resources (how much?): Necessary resources for acquirers,
competitors, evaluators, other stakeholders across full range of
prototyping and evaluation rounds
– Assumptions (whereas?): Conditions for exercise of off-ramps,
rebaselining of priorities and criteria
• Provides a stable framework for pursuing CP
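One way to keep such a plan reviewable is to hold it as structured data keyed by the management questions above; the sketch below is only an illustration, and the field names and example round count are assumptions rather than a prescribed format:

```python
# Illustrative skeleton of an ICM-based CP plan, organized by the WWWWWHH-style
# questions on this chart; values here are placeholders, not prescribed content.
cp_plan = {
    "objectives (why?)": ["concept feasibility", "best system solution"],
    "milestones and schedules (what? when?)": {
        "competitive rounds": 3,  # assumed example value
        "entry/exit criteria": "defined per round, including off-ramps",
    },
    "responsibilities (who? where?)": "success-critical stakeholder roles for activities and artifacts",
    "approach (how?)": ["management and evaluation guidelines",
                        "technical approach and evaluation methods",
                        "facilities, tools, concurrent engineering"],
    "resources (how much?)": "acquirers, competitors, evaluators, other stakeholders, all rounds",
    "assumptions (whereas?)": "conditions for exercising off-ramps and rebaselining priorities/criteria",
}

for section, content in cp_plan.items():
    print(section, "->", content)
```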
Tutorial Outline
• ICM nature and context
• ICM views and examples
• ICM process decision table: Guidance and
examples for using the ICM
• ICM and competitive prototyping
• Conclusions, References, Acronyms
– Copy of Young Memo
ICM and CP Conclusions
• CP most effective in reducing technical risk
– If project is low-risk, may not need CP
• May be worth it for teambuilding
• Other significant risks need resolution by Milestone B
– Systemic Analysis DataBase (SADB) sources: management,
acquisition, requirements, staffing, organizing, contracting
• CP requires significant, continuing preparation
– Prototypes are just tip of iceberg
– Need evaluation criteria, tools, testbeds, scenarios, staffing,
procedures
• Need to sustain CP momentum across evaluation breaks
– Useful competitor tasks to do; need funding support
• ICM provides effective framework for CP plan, execution
– CP value propositions, milestone criteria, guiding principles
• CP will involve changes in cultures and institutions
– Need continuous corporate assessment and improvement of
CP-related principles, processes, and practices
ICM Transition Paths
• Existing programs may benefit from some ICM principles and
practices, but not others
• Problem programs may find some ICM practices helpful in
recovering viability
• Primary opportunities for incremental adoption of ICM
principles and practices
– Supplementing traditional requirements and design reviews with
development and review of feasibility evidence
– Stabilized incremental development and concurrent architecture
rebaselining
– Using schedule as independent variable and prioritizing features
to be delivered
– Continuous verification and validation
– Using the process decision table
– Tailoring the Competitive Prototyping Plan
References - I
Beck, K., Extreme Programming Explained, Addison Wesley, 1999.
Blanchard, B., and Fabrycky, W., Systems Engineering and Analysis, Prentice Hall, 1998 (3rd ed.)
Boehm, B., “Some Future Trends and Implications for Systems and Software Engineering Processes”,
Systems Engineering 9(1), pp. 1-19, 2006.
Boehm, B., Brown, W., Basili, V., and Turner, R., “Spiral Acquisition of Software-Intensive Systems of Systems,” CrossTalk, Vol. 17, No. 5, pp. 4-9, 2004.
Boehm, B. and Lane J., "21st Century Processes for Acquiring 21st Century Software-Intensive
Systems of Systems." CrossTalk: Vol. 19, No. 5, pp.4-9, 2006.
Boehm, B., and Lane, J., “Using the ICM to Integrate System Acquisition, Systems Engineering, and
Software Engineering,” CrossTalk, October 2007, pp. 4-9.
Boehm, B., and Lane, J., “A Process Decision Table for Integrated Systems and Software
Engineering,” Proceedings, CSER 2008, April 2008.
Boehm, B., “Future Challenges and Rewards for Software Engineers,” DoD Software Tech News,
October 2007, pp. 6-12.
Boehm, B. et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
Boehm, B., Software Engineering Economics, Prentice Hall, 1981.
Brooks, F. The Mythical Man-Month, Addison Wesley, 1995 (2nd ed.).
Carlock, P. and Fenton, R., "System of Systems (SoS) Enterprise Systems for Information-Intensive
Organizations," Systems Engineering, Vol. 4, No. 4, pp. 242-26, 2001.
Carlock, P., and J. Lane, “System of Systems Enterprise Systems Engineering, the Enterprise
Architecture Management Framework, and System of Systems Cost Estimation”, 21st International
Forum on COCOMO and Systems/Software Cost Modeling, 2006.
Checkland, P., Systems Thinking, Systems Practice, Wiley, 1980 (2nd ed., 1999).
Department of Defense (DoD), Defense Acquisition Guidebook, version 1.6, http://akss.dau.mil/dag/,
2006.
Department of Defense (DoD), Instruction 5000.2, Operation of the Defense Acquisition System, May
2003.
Department of Defense (DoD), Systems Engineering Plan Preparation Guide, USD(AT&L), 2004.
Electronic Industries Alliance (1999); EIA Standard 632: Processes for Engineering a System
Galorath, D., and Evans, M., Software Sizing, Estimation, and Risk Management, Auerbach, 2006.
References -II
Hall, E.T., Beyond Culture, Anchor Books/Doubleday, 1976.
Highsmith, J., Adaptive Software Development, Dorset House, 2000.
International Standards Organization, Information Technology Software Life Cycle Processes, ISO/IEC
12207, 1995
ISO, Systems Engineering – System Life Cycle Processes, ISO/IEC 15288, 2002.
Jensen, R. “An Improved Macrolevel Software Development Resource Estimation Model,”
Proceedings, ISPA 5, April 1983, pp. 88-92.
Krygiel, A., Behind the Wizard’s Curtain; CCRP Publication Series, July, 1999, p. 33
Lane, J. and Boehm, B., "System of Systems Cost Estimation: Analysis of Lead System Integrator
Engineering Activities", Information Resources Management Journal, Vol. 20, No. 2, pp. 23-32, 2007.
Lane, J. and Valerdi, R., “Synthesizing SoS Concepts for Use in Cost Estimation”, Proceedings of
IEEE Systems, Man, and Cybernetics Conference, 2005.
Lientz, B., and Swanson, E.B., Software Maintenance Management, Addison Wesley, 1980.
Madachy, R., Boehm, B., Lane, J., "Assessing Hybrid Incremental Processes for SISOS Development",
USC CSSE Technical Report USC-CSSE-2006-623, 2006.
Maier, M., “Architecting Principles for Systems-of-Systems”; Systems Engineering, Vol. 1, No. 4 (pp
267-284).
Maier, M., “System and Software Architecture Reconciliation,” Systems Engineering 9 (2), 2006, pp.
146-159.
Northrop, L., et al., Ultra-Large-Scale Systems: The Software Challenge of the Future, Software
Engineering Institute, 2006.
Pew, R. W., and Mavor, A. S., Human-System Integration in the System Development Process: A New
Look, National Academy Press, 2007.
Putnam, L., “A General Empirical Solution to the Macro Software Sizing and Estimating Problem,”
IEEE Trans SW Engr., July 1978, pp. 345-361.
Rechtin, E. Systems Architecting, Prentice Hall, 1991.
Schroeder, T., “Integrating Systems and Software Engineering: Observations in Practice,” OSD/USC
Integrating Systems and Software Engineering Workshop,
http://csse.usc.edu/events/2007/CIIForum/pages/program.html, October 2007.
List of Acronyms
B/L: Baselined
C4ISR: Command, Control, Computing, Communications, Intelligence, Surveillance, Reconnaissance
CD: Concept Development
CDR: Critical Design Review
COTS: Commercial Off-the-Shelf
CP: Competitive Prototyping
DCR: Development Commitment Review
DI: Development Increment
DoD: Department of Defense
ECR: Exploration Commitment Review
EVMS: Earned Value Management System
FCR: Foundations Commitment Review
FED: Feasibility Evidence Description
FMEA: Failure Modes and Effects Analysis
FRP: Full-Rate Production
GAO: Government Accountability Office
GUI: Graphical User Interface
List of Acronyms
(continued)
HMI: Human-Machine Interface
HSI: Human-System Integration
HW: Hardware
ICM: Incremental Commitment Model
IOC: Initial Operational Capability
IRR: Inception Readiness Review
IS&SE: Integrating Systems and Software Engineering
LCO: Life Cycle Objectives
LRIP: Low-Rate Initial Production
MBASE: Model-Based Architecting and Software Engineering
NDI: Non-Developmental Item
NRC: National Research Council
OC: Operational Capability
OCR: Operations Commitment Review
OO&D: Observe, Orient and Decide
OODA: Observe, Orient, Decide, Act
O&M: Operations and Maintenance
List of Acronyms
(continued)
PDR: Preliminary Design Review
PM: Program Manager
PR: Public Relations
PRR: Product Release Review
RUP: Rational Unified Process
SoS: System of Systems
SoSE: System of Systems Engineering
SSE: Systems and Software Engineering
SW: Software
SwE: Software Engineering
SysE: Systems Engineering
Sys Engr: Systems Engineer
S&SE: Systems and Software Engineering
USD (AT&L): Under Secretary of Defense for Acquisition, Technology, and Logistics
VCR: Valuation Commitment Review
V&V: Verification and Validation
WBS: Work Breakdown Structure
WMI: Warfighter-Machine Interface
Competitive Prototyping Policy: John Young Memo