University of Southern California
Center for Systems and Software Engineering
Estimation and Management Measurement
for Next-Generation Processes
Barry Boehm, USC-CSSE
COCOMO/SSCM Forum and ICM Workshop 3
October 27, 2008
Recent CSSE Developments
• Prof. Neno Medvidovic becomes next CSSE Director in 2009
– Australian DMO becoming Affiliate
• Stevens-USC team selected as DoD SysE UARC
– Minimum $2M/year for 5 years; renewable
• Competitive Prototyping survey with OSD, NDIA
• DoD Guidebooks: SysOfSys Engr; DoD use of ICM
– Incremental Commitment Model
• Developing chapters for AFCAA Cost Estimation Guide
• COCOMO/SSCM Forum and Workshops, Oct 27-30
– Workshops on DoD ICM Guidebook, COSYSMO 2.0
Thanks for CSSE Affiliate Support
• Commercial Industry (10)
– Bosch, Cisco, Cost Xpert Group, Galorath, IBM, Intelligent
Systems, Microsoft, Price Systems, Softstar Systems, Sun
• Aerospace Industry (8)
– BAE Systems, Boeing, General Dynamics, Lockheed Martin,
Northrop Grumman(2), Raytheon, SAIC
• Government (7)
– DACS, NASA-Ames, NSF, OSD (AT&L/SSE), US Army Research
Labs, US Army TACOM, USAF Cost Center
• FFRDC’s and Consortia (6)
– Aerospace, FC-MD, IDA, JPL, SEI, SPC
• International (2)
– Institute of Software, Chinese Academy of Sciences; Samsung
SER-UARC Lead Organizations and Members
• Auburn University
• Air Force Institute of Technology
• Carnegie Mellon University
• Fraunhofer Center at UMD
• Massachusetts Institute of Technology
• Missouri University of Science and Technology (S&T)
• Pennsylvania State University
• Southern Methodist University
• Texas A&M University
• Texas Tech University
• University of Alabama in Huntsville
• University of California at San Diego
• University of Maryland
• University of Massachusetts
• University of Virginia
• Wayne State University
The DoD Systems Engineering Research-University Affiliated Research Center will be
responsible for systems engineering research that supports the development, integration,
testing and sustainability of complex defense systems, enterprises and services. Stevens has
MOUs to develop enhanced SE courseware for competency development within DoD with AFIT,
NPS and DAU. Further, SER-UARC members are located in 11 states, near many DoD facilities
and all DAU campuses.
• Dr. Dinesh Verma, Executive Director
• Dr. Art Pyster, Deputy Executive Director
• Doris H. Schultz, Director of Operations
• Dr. Barry Boehm, Director of Research
Pool of more than 140 Senior Researchers and hundreds of
research faculty and graduate students from across members
Stevens’ School of Systems and Enterprises will host the SER-UARC at Stevens’ Hoboken, NJ, campus. Stevens’
faculty engagement will be complemented by a critical mass of systems engineering faculty at USC. A fundamental
tenet of SER-UARC is its virtual nature – each of its 18 members will be a nexus of research activities. All research
projects will be staffed by the best available researchers and graduate students from across the members.
FIRST TWO TASK ORDERS
1. “Assessing Systems Engineering Effectiveness in Major Defense Acquisition Programs (MDAPs)”
– Barry Boehm (USC) task lead, with support from Stevens, Fraunhofer Center-MD, MIT, University of Alabama in Huntsville
2. “Evaluation of Systems Engineering Methods, Processes, and Tools (MPT) on Department of Defense and Intelligence Community Programs”
– Mike Pennotti (Stevens) task lead, with support from USC, University of Alabama in Huntsville, and Fraunhofer Center-MD
Estimation and Measurement Challenges
for Next-Generation Processes
• Emergent requirements
– Example: Virtual global collaboration support systems
– Need to manage early concurrent engineering
• Rapid change
– In competitive threats, technology, organizations,
environment
• Net-centric systems of systems
– Incomplete visibility and control of elements
• Model-driven, service-oriented, Brownfield systems
– New phenomenology, counting rules
• Always-on, never-fail systems
– Need to balance agility and discipline
Emergent Requirements
– Example: Virtual global collaboration support systems
• View sharing, navigation, modification; agenda control; access control
• Mix of synchronous and
asynchronous participation
• No way to specify
collaboration support
requirements in advance
• Need greater investments in
concurrent engineering
– of needs, opportunities,
requirements, solutions, plans,
resources
The Broadening Early Cone of Uncertainty (CU)
[Figure: cone-of-uncertainty curves from ConOps through Specs/Plans to IOC for Batch, Greenfield (1965); Local Interactive, Some Legacy (1985); and Global Interactive, Brownfield (2005 =>) systems]
• Need greater investments in narrowing the CU
– Mission, investment, legacy analysis
– Competitive prototyping
– Concurrent engineering
– Associated estimation methods and management metrics
• Larger systems will often have subsystems with narrower CUs
ICM HSI Levels of Activity for Complex Systems
The Incremental Commitment Life Cycle Process: Overview
[Figure: ICM life cycle. Stage I: Definition; Stage II: Development and Operations; anchor point milestones synchronize and stabilize concurrency via FEDs; risk patterns determine the life cycle process]
Nature of FEDs and Anchor Point Milestones
• Evidence provided by developer and validated by independent experts that, if the system is built to the specified architecture, it will
– Satisfy the specified operational concept and requirements
• Capability, interfaces, level of service, and evolution
– Be buildable within the budgets and schedules in the plan
– Generate a viable return on investment
– Generate satisfactory outcomes for all of the success-critical stakeholders
• Shortfalls in evidence are uncertainties and risks
– Should be resolved or covered by risk management plans
• Assessed in increasing detail at major anchor point milestones
– Serves as basis for stakeholders’ commitment to proceed
– Serves to synchronize and stabilize concurrently engineered elements
– Can be used to strengthen current schedule- or event-based reviews
Key Point: Need to Show Evidence
• Not just traceability matrices and PowerPoint charts
• Evidence can include results of
– Prototypes: networks, robots, user interfaces, COTS interoperability
– Benchmarks: performance, scalability, accuracy
– Exercises: mission performance, interoperability, security
– Models: cost, schedule, performance, reliability; tradeoffs
– Simulations: mission scalability, performance, reliability
– Early working versions: infrastructure, data fusion, legacy compatibility
– Combinations of the above
• Validated by independent experts
– Realism of assumptions
– Representativeness of scenarios
– Thoroughness of analysis
– Coverage of key off-nominal conditions
• Effort estimation via COSYSMO and expert judgment
– Analogy, unit cost, activity-based estimation
FED Development Management Metrics
• FED is a first-class deliverable
– Not an optional appendix
• Needs planning, progress tracking, earned value
• As with other ICM artifacts, FED process and
content are risk-driven
• Generic set of steps provided, but need to be
tailored to situation
– Can apply at increasing levels of detail in Exploration,
Validation, and Foundations phases
– Can be satisfied by pointers to existing evidence
– Also applies to Stage II Foundations rebaselining process
Steps for Developing Feasibility Evidence
A. Develop phase work-products/artifacts
– For examples, see ICM Anchor Point Milestone Content charts
B. Determine most critical feasibility assurance issues
– Issues for which lack of feasibility evidence is program-critical
C. Evaluate feasibility assessment options
– Cost-effectiveness; risk reduction leverage/ROI; rework avoidance
– Tool, data, scenario availability
D. Select options, develop feasibility assessment plans
E. Prepare FED assessment plans and earned value milestones
– Try to relate earned value to risk-exposure avoided rather than budgeted cost
– Up to 3:1 risk-avoidance leverage for large systems
“Steps” denoted by letters rather than numbers to indicate that many are done concurrently
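As a rough illustration of step E, the sketch below credits earned value for each completed feasibility-evidence task in proportion to the risk exposure (probability of loss times size of loss) it retires, rather than its budgeted cost. The task names, probabilities, loss sizes, and budgets are hypothetical examples, not data from any program.

```python
# Illustrative sketch: risk-exposure-based earned value for FED tasks.
# Task names, probabilities, loss sizes, and budgets are hypothetical.

fed_tasks = [
    # (task, P(loss) if evidence skipped, size of loss ($K), budgeted cost ($K), done?)
    ("COTS interoperability prototype", 0.4, 2000, 120, True),
    ("Performance benchmark",           0.3, 1500,  80, True),
    ("Fault-tolerance simulation",      0.2, 3000, 150, False),
]

def risk_exposure(p_loss, loss_size):
    """Risk exposure = probability of loss x size of loss."""
    return p_loss * loss_size

total_re = sum(risk_exposure(p, s) for _, p, s, _, _ in fed_tasks)
earned_re = sum(risk_exposure(p, s) for _, p, s, _, done in fed_tasks if done)
print(f"Risk-exposure-based earned value: {earned_re / total_re:.0%}")

# Compare with conventional, budget-based earned value:
total_cost = sum(c for *_, c, _ in fed_tasks)
earned_cost = sum(c for *_, c, done in fed_tasks if done)
print(f"Budget-based earned value:        {earned_cost / total_cost:.0%}")
```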
Steps for Developing Feasibility Evidence (cont.)
F. Begin monitoring progress with respect to plans
– Also monitor project/technology/objectives changes and adapt
plans
G. Prepare evidence-generation enablers
– Assessment criteria
– Parametric models, parameter values, bases of estimate
– COTS assessment criteria and plans
– Benchmarking candidates, test cases
– Prototypes/simulations, evaluation plans, subjects, and
scenarios
– Instrumentation, data analysis capabilities
H. Perform pilot assessments; evaluate and iterate plans and enablers
I. Assess readiness for Commitment Review
– Shortfalls identified as risks and covered by risk mitigation
plans
– Proceed to Commitment Review if ready
J. Hold Commitment Review when ready; adjust plans based on
review outcomes
COSYSMO Feasibility Evidence Effort Estimation
[Figure: COSYSMO operational concept]
• Size drivers: # Requirements, # Interfaces, # Scenarios, # Algorithms, plus a Volatility Factor
• Effort multipliers: application factors (8), team factors (6), schedule driver
• COSYSMO calibration; WBS guided by ISO/IEC 15288
• Output: systems engineering effort
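A minimal sketch of a COSYSMO-form estimate follows. The size-driver weights, calibration constant, exponent, and multiplier values are placeholders, not the published COSYSMO calibration; only the structure (weighted size drivers, a diseconomy-of-scale exponent, multiplicative cost drivers) follows the model's general form.

```python
# Illustrative COSYSMO-style effort sketch (placeholder constants, NOT the
# published calibration). Size = weighted sum of size-driver counts; effort
# grows with a diseconomy-of-scale exponent and multiplicative cost drivers.

WEIGHTS = {"requirements": 1.0, "interfaces": 3.0, "algorithms": 4.0, "scenarios": 10.0}

def cosysmo_style_effort(counts, effort_multipliers, A=0.25, E=1.06):
    """Return person-months for a COSYSMO-form model.

    counts: dict of size-driver counts, e.g. {"requirements": 120, ...}
    effort_multipliers: list of cost-driver multipliers (1.0 = nominal)
    A, E: placeholder calibration constant and scale exponent
    """
    size = sum(WEIGHTS[k] * n for k, n in counts.items())
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return A * size ** E * em_product

pm = cosysmo_style_effort(
    {"requirements": 120, "interfaces": 20, "algorithms": 15, "scenarios": 10},
    effort_multipliers=[1.10, 0.91, 1.25],  # e.g. application, team, schedule drivers
)
print(f"Estimated systems engineering effort: {pm:.0f} PM")
```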
4. Rate Cost Drivers Application
Estimation and Measurement Challenges
for Next-Generation Processes
• Emergent requirements
– Example: Virtual global collaboration support systems
– Need to manage early concurrent engineering
• Rapid change
– In competitive threats, technology, organizations,
environment
• Net-centric systems of systems
– Incomplete visibility and control of elements
• Model-driven, service-oriented, Brownfield systems
– New phenomenology, counting rules
• Always-on, never-fail systems
– Need to balance agility and discipline
Rapid Change Creates a Late Cone of Uncertainty
– Need incremental vs. one-shot development
[Figure: cone of uncertainty over phases and milestones. Relative cost range narrows from 4x/0.25x at the Concept of Operation, through 2x/0.5x at Plans and Rqts. (Rqts. Spec.), 1.5x/0.67x at Product Design (Product Design Spec.), and 1.25x/0.8x at Detail Design (Detail Design Spec.), to x at Accepted Software after Development and Test; uncertainties in competition, technology, organizations, and mission priorities re-open the cone late in the cycle]
The Incremental Commitment Life Cycle Process: Overview
[Figure: ICM life cycle with anchor point milestones. Stage I: Definition, concurrently engineering OpCon, requirements, architecture, plans, and prototypes; Stage II: Development and Operations, concurrently engineering Increment N (operations), N+1 (development), and N+2 (architecting)]
ICM Risk-Driven Scalable Development
[Figure: from the Increment N baseline, rapid change is handled by short development increments, while foreseeable change (plan) and high assurance are handled by stable development increments, feeding short, stabilized development of Increment N and its transition/O&M]
Change Processing Requires Complementary Processes
[Figure: agile rebaselining for future increments absorbs unforeseeable change (adapt) and produces future increment baselines; foreseeable change (plan) shapes the Increment N baseline for short, stabilized development of Increment N; rapid change drives short development increments while high assurance drives stable development increments; deferrals flow to future increments; continuous verification and validation (V&V) of Increment N artifacts uses current V&V resources, with future V&V resources for later increments; outputs feed Increment N transition and operations and maintenance, whose concerns feed back into the process]
Change Processing Requires Complementary Processes
• Need different progress measurements and contract incentives
[Figure: the same dual-process view, annotated with where prioritized change resolution, earned value, and problem identification delay can be measured across agile rebaselining, stabilized increment development, and continuous V&V]
Agile Change Processing and Rebaselining
Key metrics: prioritized change speed and feasibility evidence
[Figure: change-processing workflow across four roles – Change Proposers, Agile Future-Increment Rebaselining Team, Stabilized Increment-N Development Team, and Future Increment Managers]
• Change proposers propose changes; the agile rebaselining team assesses the changes and proposes handling
• Possible outcomes: handle in the current rebaseline; recommend handling in the current increment; recommend deferral to future increments; or recommend no action, with rationale; proposers then discuss, revise, defer, or drop their proposals
• The stabilized Increment-N development team negotiates change disposition, accepts changes, handles accepted Increment-N changes, and may defer some Increment-N capabilities
• Future increment managers formulate and analyze options in the context of other changes and discuss and resolve deferrals to future increments
• The rebaselining team rebaselines future-increment Foundations packages and prepares for rebaselined future-increment development
Rapid Change: Asymmetric Competition
Need to adapt within adversary’s OODA loop
• Adversary
– Picks time and place
– Little to lose
– Lightweight, simple systems and processes
– Can reuse anything
• Defender
– Ready for anything
– Much to lose
– Heavier, more complex systems and processes
– Reuse requires trust
[Figure: OODA loop. Observe new/updated objectives, constraints, alternatives; Orient with respect to stakeholders’ priorities, feasibility, risks; Decide on new capabilities, architecture upgrades, plans; Act on plans, specifications]
Need Rapid, Prioritized Change Execution and Monitoring
[Figure: bar chart of average workdays to process changes on two large systems, for changes handled within groups, across groups, and via contract modifications (scale 0-160 workdays); processing time grows sharply from within-group changes to contract modifications]
Incompatible with turning within adversary’s OODA loop
Incremental Development Productivity Decline (IDPD)
• Example: Site Defense BMD Software
– 5 builds, 7 years, $100M
– Build 1 productivity over 300 SLOC/person month
– Build 5 productivity under 150 SLOC/PM
• Including Build 1-4 breakage, integration, rework
• 318% change in requirements across all builds
• IDPD factor = 20% productivity decrease per build
– Similar trends in later unprecedented systems
– Not unique to DoD: key source of Windows Vista delays
• Maintenance of full non-COTS SLOC, not ESLOC
– Build 1: 200 KSLOC new; 200K reused @ 20% = 240K ESLOC
– Build 2: 400 KSLOC of Build 1 software to maintain, integrate
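A small sketch of the arithmetic behind the build-over-build effort growth: equivalent SLOC (ESLOC) counts reused or maintained code at a fraction of new code, while an IDPD factor reduces productivity on each successive build. The numbers mirror the Build 1/Build 2 example above; the Build 2 new-code figure and the 20% reuse weight and IDPD value are assumptions taken from this slide's figures, not from project data.

```python
# Sketch of build-over-build effort with ESLOC weighting and an IDPD factor.
# Mirrors the slide's example: Build 1 = 200 KSLOC new + 200 KSLOC reused @ 20%.

def esloc(new_ksloc, reused_ksloc, reuse_weight=0.20):
    """Equivalent KSLOC: reused/maintained code counted at a reduced weight."""
    return new_ksloc + reuse_weight * reused_ksloc

def build_effort(build_esloc, base_productivity, build_number, idpd=0.20):
    """Person-months for a build, with productivity declining by idpd per build."""
    productivity = base_productivity * (1.0 - idpd) ** (build_number - 1)
    return build_esloc * 1000.0 / productivity      # KSLOC -> SLOC

base_prod = 300.0                                   # SLOC/PM on Build 1
b1 = build_effort(esloc(200, 200), base_prod, build_number=1)
# Build 2: assumed 200 KSLOC new, plus the full 400 KSLOC Build 1 base to
# maintain and integrate (counted at the reuse weight).
b2 = build_effort(esloc(200, 400), base_prod, build_number=2)
print(f"Build 1: {b1:.0f} PM, Build 2: {b2:.0f} PM")
```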
IDPD Cost Drivers: Conservative 4-Increment Example
• Some savings: more experienced personnel (5-20%)
– Depending on personnel turnover rates
• Some increases: code base growth, diseconomies of scale, requirements volatility, user requests
– Breakage, maintenance of full code base (20-40%)
– Diseconomies of scale in development, integration (10-25%)
– Requirements volatility; user requests (10-25%)
• Best case: 20% more effort (IDPD = 6%)
• Worst case: 85% more effort (IDPD = 23%)
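One way to read the best-case/worst-case figures (an interpretation, not stated on the slide): if productivity declines by the IDPD factor on each successive increment, the fourth increment of a 4-increment program costs about (1 + IDPD)^3 times the first, which gives roughly 20% more effort at IDPD = 6% and roughly 85% more at IDPD = 23%. The sketch below just checks that arithmetic.

```python
# Interpretation check (assumption): effort of increment k relative to
# increment 1 grows as (1 + IDPD)^(k - 1) for equal-size increments.

def final_increment_penalty(idpd, n_increments=4):
    """Relative extra effort of the last increment vs. the first."""
    return (1.0 + idpd) ** (n_increments - 1) - 1.0

for idpd in (0.06, 0.23):
    print(f"IDPD = {idpd:.0%}: final increment needs "
          f"{final_increment_penalty(idpd):.0%} more effort than the first")
# IDPD = 6%:  ~19% more (the slide's "best case: 20%")
# IDPD = 23%: ~86% more (the slide's "worst case: 85%")
```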
Effects of IDPD on Number of Increments
• Model relating productivity decline to number of builds needed to reach 8M SLOC Full Operational Capability
• Assumes Build 1 production of 2M SLOC @ 100 SLOC/PM
– 20000 PM / 24 mo. = 833 developers
– Constant staff size for all builds
• Analysis varies the productivity decline per build
– Extremely important to determine the incremental development productivity decline (IDPD) factor per build
[Figure: cumulative KSLOC vs. build number (1-8) for 0%, 10%, 15%, and 20% productivity decline per build; higher decline rates require more builds to reach the 8M SLOC goal]
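The chart's underlying model can be sketched as below, under the slide's stated assumptions (2M SLOC in Build 1 at 100 SLOC/PM, constant staff, fixed-length builds) plus the assumption that each build's productivity is the previous build's reduced by the IDPD factor.

```python
# Sketch of the "builds needed to reach 8M SLOC" model under the slide's
# assumptions: Build 1 delivers 2M SLOC at 100 SLOC/PM with constant staff
# (20,000 PM per 24-month build = 833 developers); productivity then declines
# by a fixed IDPD percentage on every subsequent build.

def builds_to_reach(goal_sloc=8_000_000, build1_sloc=2_000_000,
                    idpd=0.20, max_builds=30):
    """Return (number of builds, cumulative SLOC) needed to reach the goal."""
    pm_per_build = 20_000                         # constant staff, fixed duration
    productivity = build1_sloc / pm_per_build     # 100 SLOC/PM on Build 1
    cumulative, builds = 0.0, 0
    while cumulative < goal_sloc and builds < max_builds:
        builds += 1
        cumulative += productivity * pm_per_build
        productivity *= (1.0 - idpd)              # IDPD decline for next build
    return builds, cumulative

for decline in (0.0, 0.10, 0.15, 0.20):
    n, total = builds_to_reach(idpd=decline)
    print(f"{decline:.0%} decline per build: {n} builds, {total / 1e6:.1f}M SLOC")
```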
Estimation and Measurement Challenges
for Next-Generation Processes
• Emergent requirements
– Example: Virtual global collaboration support systems
– Need to manage early concurrent engineering
• Rapid change
– In competitive threats, technology, organizations,
environment
• Net-centric systems of systems
– Incomplete visibility and control of elements
• Model-driven, service-oriented, Brownfield systems
– New phenomenology, counting rules
• Always-on, never-fail systems
– Need to balance agility and discipline
Net-Centric Systems of Systems Challenges
• Need for rapid adaptation to change
– See first, understand first, act first, finish decisively
• Built-in authority-responsibility mismatches
– Increasing as authority decreases through Directed,
Acknowledged, Collaborative, and Virtual SoS classes
• Incompatible element management chains, legacy
constraints, architectures, service priorities, data,
operational controls, standards, change priorities...
• High priority on leadership skills, collaboration
incentives, negotiation support such as cost models
– SoS variety and complexity make compositional cost models more helpful than one-size-fits-all models
Example: SoSE Synchronization Points
[Figure: SoS-level Exploration, Valuation, and Architecting phases pass through FCR1 and DCR1 milestones, followed by Develop and Operation increments (OCR1, OCR2, ...); candidate suppliers/strategic partners 1..n provide LCO-type proposal and feasibility information for source selection, then rebaseline/adjust at FCR1; constituent systems A, B, C, and x run their own Exploration, Valuation, Architecting, Develop, and Operation cycles (FCRA/B/C, DCRA/B/C, OCRA1, OCRB1-2, OCRC1-2, OCRx1-x5), synchronized with the SoS-level milestones]
Recognizing Authority/Responsibility Mismatch
[Figure: responsibility vs. authority. Directed, Acknowledged, Collaborative, and Ad-hoc SoS classes are positioned relative to the region where traditional methods apply and the region where the authority/responsibility combination is infeasible]
Compositional approaches: Directed systems of systems
[Figure: estimation touch points across a directed-SoS acquisition flow, from Inception/LCO (proposal feasibility; source selection via RFP, SOW, evaluations, contracting; COSYSMO-like effort for reworking LCO-to-LCA packages at all levels; schedule = effort/staff, trying to model ideal staff size) through Elaboration/LCA (COSOSIMO-like SoS architecting; assess degree of completeness, compatibility, shortfalls; risk-manage slow performers) to Increment 1 development and integration to IOC1 (LSI, LSI IPTs, and suppliers working agile; develop to spec with V&V, CORADMO-like; integrate, COSOSIMO-like; risks and rework flow between levels) and Increments 2..n (similar, with added rebaselining and change traffic from customers and users; assess sources of change and negotiate rebaselined LCA2 packages and effort/staff at all levels). Process and estimate content will vary with system or SoS risk]
COSOSIMO Operational Concept
[Figure: COSOSIMO operational concept]
• Size drivers: interface-related eKSLOC; number of logical interfaces at the SoS level; number of operational scenarios; number of components
• Exponential scale factors: integration simplicity, integration risk resolution, integration stability, component readiness, integration capability, integration processes
• Output (after calibration): SoS definition and integration effort
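A hedged sketch of the COSOSIMO form: SoS definition and integration effort driven by weighted size drivers, with an exponent adjusted by the integration scale factors. The constants, weights, and ratings below are placeholders, not the calibrated model.

```python
# Illustrative COSOSIMO-style sketch (placeholder constants, NOT a calibration):
# effort = A * size^E, where E is a base exponent adjusted by the six
# exponential scale-factor ratings (integration simplicity, risk resolution,
# stability, component readiness, integration capability, integration processes).

def cososimo_style_effort(size_drivers, scale_factor_ratings,
                          A=0.5, base_exponent=1.0, sf_step=0.02):
    """size_drivers: dict of driver -> (count, weight); ratings: one value per
    scale factor, 0 (very low integration difficulty) to 4 (very high)."""
    size = sum(count * weight for count, weight in size_drivers.values())
    exponent = base_exponent + sf_step * sum(scale_factor_ratings)
    return A * size ** exponent

effort_pm = cososimo_style_effort(
    {"interface_eksloc":      (40, 1.0),   # interface-related eKSLOC
     "logical_interfaces":    (25, 2.0),   # at SoS level
     "operational_scenarios": (12, 6.0),
     "components":            (18, 3.0)},
    scale_factor_ratings=[2, 3, 2, 3, 2, 2],   # six exponential scale factors
)
print(f"SoS definition and integration effort: {effort_pm:.0f} PM")
```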
Different Risk Patterns Yield Different Processes
SoSs May Contain Several Common Risk-Driven ICM Special Cases
(For each special case: example; size/complexity; change rate in %/month; criticality; NDI support; organization and personnel capability; key Stage I activities: incremental definition; key Stage II activities: incremental development and operations; time per build / time per increment.)
1. Use NDI. Example: small accounting system. NDI support: complete. Stage I: acquire NDI. Stage II: use NDI.
2. Agile. Example: e-services. Size/complexity: low; change rate: 1-30; criticality: low-med; NDI support: good, in place; capability: agile-ready, med-high. Stage I: skip Valuation and Architecting phases. Stage II: Scrum plus agile methods of choice. Time: <= 1 day; 2-6 weeks.
3. Architected Agile. Example: business data processing. Size/complexity: med; change rate: 1-10; criticality: med-high; NDI support: good, most in place; capability: agile-ready, med-high. Stage I: combine Valuation and Architecting phases; complete NDI preparation. Stage II: architecture-based Scrum of Scrums. Time: 2-4 weeks; 2-6 months.
4. Formal Methods. Example: security kernel or safety-critical LSI chip. Size/complexity: low; change rate: 0.3-1; criticality: extra high; NDI support: none; capability: strong formal methods experience. Stage I: precise formal specification. Stage II: formally-based programming language; formal verification. Time: 1-5 days; 1-4 weeks.
5. HW component with embedded SW. Example: multi-sensor control device. Size/complexity: low; change rate: 0.3-1; criticality: med-very high; NDI support: good, in place; capability: experienced, med-high. Stage I: concurrent HW/SW engineering; CDR-level ICM DCR. Stage II: IOC development, LRIP, FRP; concurrent Version N+1 engineering. Time: SW 1-5 days; market-driven.
6. Indivisible IOC. Example: complete vehicle platform. Size/complexity: med-high; change rate: 0.3-1; criticality: high-very high; NDI support: some in place; capability: experienced, med-high. Stage I: determine likely minimum IOC and conservative cost; add deferrable SW features as risk reserve. Stage II: drop deferrable features to meet conservative cost; strong award fee for features not dropped. Time: SW 2-6 weeks; platform 6-18 months.
7. NDI-Intensive. Example: supply chain management. Size/complexity: med-high; change rate: 0.3-3; criticality: med-very high; NDI support: NDI-driven architecture; capability: NDI-experienced, med-high. Stage I: thorough NDI-suite life cycle cost-benefit analysis, selection, and concurrent requirements/architecture definition. Stage II: proactive NDI evolution influencing; NDI upgrade synchronization. Time: SW 1-4 weeks; system 6-18 months.
8. Hybrid agile/plan-driven system. Example: C4ISR. Size/complexity: med-very high; change rate: mixed parts, 1-10; criticality: mixed parts, med-very high; NDI support: mixed parts; capability: mixed parts. Stage I: full ICM; encapsulated agile in high-change, low-to-medium criticality parts (often HMI, external interfaces). Stage II: full ICM; three-team incremental development, concurrent V&V, next-increment rebaselining. Time: 1-2 months; 9-18 months.
9. Multi-owner system of systems. Example: net-centric military operations. Size/complexity: very high; change rate: mixed parts, 1-10; criticality: very high; NDI support: many NDIs, some in place; capability: related experience, med-high. Stage I: full ICM; extensive multi-owner team building and negotiation. Stage II: full ICM; large ongoing system/software engineering effort. Time: 2-4 months; 18-24 months.
10. Family of systems. Example: medical device product line. Size/complexity: med-very high; change rate: 1-3; criticality: med-very high; NDI support: some in place; capability: related experience, med-high. Stage I: full ICM; full stakeholder participation in product line scoping; strong business case. Stage II: full ICM; extra resources for first system, version control, multi-stakeholder support. Time: 1-2 months; 9-18 months.
C4ISR: Command, Control, Computing, Communications, Intelligence, Surveillance, Reconnaissance. CDR: Critical Design Review. DCR: Development Commitment Review. FRP: Full-Rate Production. HMI: Human-Machine Interface. HW: Hardware. IOC: Initial Operational Capability. LRIP: Low-Rate Initial Production. NDI: Non-Developmental Item. SW: Software.
Estimation and Measurement Challenges
for Next-Generation Processes
• Emergent requirements
– Example: Virtual global collaboration support systems
– Need to manage early concurrent engineering
• Rapid change
– In competitive threats, technology, organizations,
environment
• Net-centric systems of systems
– Incomplete visibility and control of elements
• Model-driven, service-oriented, Brownfield systems
– New phenomenology, counting rules
• Always-on, never-fail systems
– Need to balance agility and discipline
Model-Driven, Service-Oriented, Brownfield Systems
New phenomenology, counting rules
• Product generation from model directives
– Treat as very high level language: count directives
• Model reuse feasibility, multi-model incompatibilities
– Use Feasibility Evidence progress tracking measures
• Functional vs. service-oriented architecture mismatches
– Part-of (one-many) vs. served-by (many-many)
• Brownfield legacy constraints, reverse engineering
– Elaborate COSYSMO Migration Complexity cost driver
– Elaborate COCOMO II reuse model for reverse engineering
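For the reverse-engineering and reuse elaboration, the existing COCOMO II reuse model is the natural starting point: adapted code is converted to equivalent new SLOC via the adaptation adjustment factor (AAF = 0.4 DM + 0.3 CM + 0.3 IM) plus software understanding and assessment terms. The sketch below uses the standard COCOMO II formulation; the parameter values in the example are hypothetical.

```python
# COCOMO II reuse model: equivalent SLOC for adapted (e.g. Brownfield legacy)
# code.  DM/CM/IM = % design/code/integration modified; SU = software
# understanding penalty (10-50); AA = assessment & assimilation (0-8);
# UNFM = programmer unfamiliarity (0.0-1.0).  Example values are hypothetical.

def equivalent_sloc(adapted_sloc, dm, cm, im, su=30, aa=4, unfm=0.4):
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im          # adaptation adjustment factor
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100.0
    else:
        aam = (aa + aaf + su * unfm) / 100.0
    return adapted_sloc * aam

# E.g. 300 KSLOC of legacy code with 10% design / 20% code / 30% integration rework:
print(f"{equivalent_sloc(300_000, dm=10, cm=20, im=30):,.0f} equivalent SLOC")
```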
Always-on, never-fail systems
• Consider using “weighted SLOC” as a productivity metric
• Some SLOC are “heavier to move into place” than others
– And largely management uncontrollables
– Examples: high values of COCOMO II cost drivers
• RELY: Required Software Reliability
• DATA: Database Size
• CPLX: Software Complexity
• DOCU: Required Documentation
• RUSE: Required Development for Future Reuse
• TIME: Execution Time Constraint
• STOR: Main Storage Constraint
• SCED: Required Schedule Compression
• Provides way to compare productivities across projects
– And to develop profiles of project classes
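A minimal sketch of the weighted-SLOC idea: multiply each project's SLOC by the product of its largely uncontrollable COCOMO II effort multipliers (RELY, DATA, CPLX, DOCU, RUSE, TIME, STOR, SCED at their rated values) before computing productivity, so that a hard, high-assurance project is not penalized relative to an easy one. The multiplier values in the example are hypothetical ratings, not a claim about any particular project.

```python
# Sketch: weighted-SLOC productivity.  Each project's SLOC is weighted by the
# product of its "management-uncontrollable" COCOMO II effort multipliers
# (RELY, DATA, CPLX, DOCU, RUSE, TIME, STOR, SCED).  Values are hypothetical.

projects = {
    #  name:           (SLOC,   PM,  multiplier ratings)
    "web_portal":      (60_000,  300, [0.92, 1.00, 0.87, 0.91, 1.00, 1.00, 1.00, 1.00]),
    "flight_software": (60_000, 1200, [1.26, 1.14, 1.34, 1.11, 1.07, 1.29, 1.17, 1.00]),
}

for name, (sloc, pm, multipliers) in projects.items():
    weight = 1.0
    for m in multipliers:
        weight *= m
    raw = sloc / pm                       # unweighted SLOC/PM
    weighted = sloc * weight / pm         # weighted SLOC/PM
    print(f"{name:16s} raw {raw:6.1f} SLOC/PM   weighted {weighted:6.1f} SLOC/PM")
```

With these hypothetical ratings the raw productivities differ by 4x, while the weighted productivities are comparable, which is the point of the metric.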
Initial UARC Task: Evaluate SysE Effectiveness Measures
• Develop measures to monitor and predict system
engineering effectiveness for DoD programs
– Define SysE effectiveness
– Develop measurement methods for contractors, DoD
program managers, program executive officers
– Recommend continuous process improvement approach
– Identify DoD SysE outreach strategy
• Consider full range of data sources
– Journals, tech reports, org’s (INCOSE, NDIA), DoD studies
• Deliverables
– Report and presentation: approach, results
– Tool set: metrics, processes, procedures, software
Candidate Measurement Methods
• NDIA/SEI capability/challenge criteria
• INCOSE/LMCO/MIT quantitative measurements
• IBM/Stevens Complexity Points
• USC/MIT COSYSMO size, cost driver parameters
• SISAIG Early Warning Indicators
• USC complex systems award fee criteria
• OSD/FC-MD Best Practices Repository
• USC Anchor Point Feasibility Evidence progress
• UAH teaming theories
• Systemic analysis root causes
Macro Risk Model Interface
[Screenshot: USC Macro Risk Model spreadsheet excerpt (Copyright USC-CSSE). Each question is rated for evidence (Unavailable, Meager, Fragmentary, Competent but Incomplete, Strong, Externally Validated; abbreviated U, M, F, C/I, S, EV) and for impact (1-5), with columns for the evidence itself and its rationale; ratings roll up into a CSF Risk score (1-25) per critical success factor. Note: evidence ratings should be done independently, and should address the degree to which the evidence that has been provided supports a "yes" answer to each question.]
Goal 1: System and software objectives and constraints have been adequately defined and validated.
• Critical Success Factor 1: System and software functionality and performance objectives have been defined and prioritized.
– 1(a) Are the user needs clearly defined and tied to the mission?
– 1(b) Is the impact of the system on the user understood?
– 1(c) Have all the risks that the software-intensive acquisition will not meet the user expectations been addressed?
• Critical Success Factor 2: The system boundary, operational environment, and system and software interface objectives have been defined.
– 2(a) Are all types of interface and dependency covered?
– 2(b) For each type, are all aspects covered?
– 2(c) Are interfaces and dependencies well monitored and controlled?
• Critical Success Factor 3: System and software flexibility and resolvability objectives have been defined and prioritized.
– 3(a) Has the system been conceived and described as "evolutionary"?
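One possible reading of the scoring scheme (an assumption; the tool's actual formula is not shown on the slide): each question's risk is its impact (1-5) times a likelihood-of-shortfall score derived from the evidence rating, and the critical-success-factor risk (1-25) is a roll-up, such as the maximum, over its questions.

```python
# Hypothetical sketch of a Macro-Risk-style roll-up (the actual USC tool's
# formula is not given on the slide).  Risk = impact (1-5) x evidence-shortfall
# likelihood (1-5), rolled up per critical success factor.

EVIDENCE_SHORTFALL = {          # assumed mapping from evidence rating to 1-5
    "Unavailable": 5, "Meager": 4, "Fragmentary": 3,
    "Competent but Incomplete": 2, "Strong": 1, "Externally Validated": 1,
}

def question_risk(impact, evidence_rating):
    return impact * EVIDENCE_SHORTFALL[evidence_rating]

def csf_risk(questions):
    """questions: list of (impact, evidence_rating); CSF risk on a 1-25 scale."""
    return max(question_risk(i, e) for i, e in questions)

csf1 = [(3, "Fragmentary"), (3, "Fragmentary"), (4, "Meager")]
print(f"CSF 1 risk: {csf_risk(csf1)} / 25")
```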
Conclusions
• DoD software estimation, data collection, and
management metrics initiatives reaching critical mass
– SRDR, WBS, 5000.2, Guidebooks, ICM evidence-based reviews
• Software Cost Metrics Manual initiative provides
further opportunity to consolidate gains
– Provide framework for continuous improvement
– Develop strong community of practice
• Future trends imply need to concurrently address new
DoD estimation and management metrics challenges
– Emergent requirements, rapid change, net-centric systems of
systems, Brownfield systems, ultrahigh assurance
References
Boehm, B., “Some Future Trends and Implications for Systems and Software Engineering Processes”,
Systems Engineering 9(1), pp. 1-19, 2006.
Boehm, B. and Lane J., "21st Century Processes for Acquiring 21st Century Software-Intensive Systems of
Systems." CrossTalk: Vol. 19, No. 5, pp.4-9, 2006.
Boehm, B., and Lane, J., “Using the ICM to Integrate System Acquisition, Systems Engineering, and
Software Engineering,” CrossTalk, October 2007, pp. 4-9.
Boehm, B., Brown, A.W., Clark, B., Madachy, R., Reifer, D., et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
Department of Defense (DoD), Defense Acquisition Guidebook, version 1.6, http://akss.dau.mil/dag/, 2006.
Department of Defense (DoD), Instruction 5000.2, Operation of the Defense Acquisition System, May 2003.
Department of Defense (DoD), Systems Engineering Plan Preparation Guide, USD(AT&L), 2004.
Galorath, D., and Evans, M., Software Sizing, Estimation, and Risk Management, Auerbach, 2006.
Hopkins, R., and Jenkins, K., Eating the IT Elephant: Moving from Greenfield Development to Brownfield,
IBM Press, 2008.
Lane, J. and Boehm, B., “Modern Tools to Support DoD Software-Intensive System of Systems Cost
Estimation,” DACS State of the Art Report, also Tech Report USC-CSSE-2007-716
Lane, J. and Valerdi, R., “Synthesizing SoS Concepts for Use in Cost Estimation”, Proceedings of IEEE
Systems, Man, and Cybernetics Conference, 2005.
Madachy, R., “Cost Model Comparison,” Proceedings 21st, COCOMO/SCM Forum, November, 2006,
http://csse.usc.edu/events/2006/CIIForum/pages/program.html
Maier, M., “Architecting Principles for Systems-of-Systems,” Systems Engineering, Vol. 1, No. 4, pp. 267-284, 1998.
Northrop, L., et al., Ultra-Large-Scale Systems: The Software Challenge of the Future, Software
Engineering Institute, 2006.
Reifer, D., “Let the Numbers Do the Talking,” CrossTalk, March 2002, pp. 4-8.
Valerdi, R., Systems Engineering Cost Estimation with COSYSMO, Wiley, 2009 (to appear).
List of Acronyms
AA: Assessment and Assimilation
AAF: Adaptation Adjustment Factor
AAM: Adaptation Adjustment Modifier
COCOMO: Constructive Cost Model
COSOSIMO: Constructive System of Systems Integration Cost Model
COSYSMO: Constructive Systems Engineering Cost Model
COTS: Commercial Off-The-Shelf
CU: Cone of Uncertainty
DCR: Development Commitment Review
DoD: Department of Defense
ECR: Exploration Commitment Review
ESLOC: Equivalent Source Lines of Code
EVMS: Earned Value Management System
FCR: Foundations Commitment Review
FDN: Foundations, as in FDN Package
FED: Feasibility Evidence Description
GD: General Dynamics
GOTS: Government Off-The-Shelf
List of Acronyms (continued)
ICM: Incremental Commitment Model
IDPD: Incremental Development Productivity Decline
IOC: Initial Operational Capability
LCA: Life Cycle Architecture
LCO: Life Cycle Objectives
LMCO: Lockheed Martin Corporation
LSI: Lead System Integrator
MDA: Model-Driven Architecture
NDA: Non-Disclosure Agreement
NDI: Non-Developmental Item
NGC: Northrop Grumman Corporation
OC: Operational Capability
OCR: Operations Commitment Review
OO: Object-Oriented
OODA: Observe, Orient, Decide, Act
O&M: Operations and Maintenance
PDR: Preliminary Design Review
PM: Program Manager
List of Acronyms (continued)
RFP: Request for Proposal
SAIC: Science Applications International Corporation
SLOC: Source Lines of Code
SoS: System of Systems
SoSE: System of Systems Engineering
SRDR: Software Resources Data Report
SSCM: Systems and Software Cost Modeling
SU: Software Understanding
SW: Software
SwE: Software Engineering
SysE: Systems Engineering
Sys Engr: Systems Engineer
S&SE: Systems and Software Engineering
ToC: Table of Contents
USD (AT&L): Under Secretary of Defense for Acquisition, Technology, and Logistics
VCR: Validation Commitment Review
V&V: Verification and Validation
WBS: Work Breakdown Structure
Backup Charts
How Much Architecting is Enough?
– Larger projects need more
[Figure: percent of time added to the overall schedule vs. percent of time added for architecture and risk resolution (0-60%), for 10, 100, and 10000 KSLOC projects. Total added schedule = percent of project schedule devoted to initial architecture and risk resolution + added schedule devoted to rework (COCOMO II RESL factor). The sweet spot minimizing total added schedule moves rightward as project size grows; rapid change moves it leftward, high assurance rightward]
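The sweet-spot curve can be sketched with the COCOMO II RESL relationship: added rework schedule falls as architecting and risk-resolution investment rises, and the under-architecting penalty grows with size. The functional form and constants below are illustrative placeholders chosen only to reproduce the qualitative shape, not the calibrated RESL factor.

```python
# Illustrative sweet-spot sketch (placeholder constants, not the calibrated
# COCOMO II RESL factor): total added schedule = architecting investment +
# rework penalty, where the rework penalty shrinks with architecting effort
# and grows with project size.
import math

def total_added_schedule(arch_percent, ksloc, max_rework=18.0, decay=0.08):
    """Percent schedule added, as a function of architecting investment (%)."""
    size_factor = (ksloc / 10.0) ** 0.5            # bigger projects, bigger penalty
    rework = max_rework * size_factor * math.exp(-decay * arch_percent)
    return arch_percent + rework

for ksloc in (10, 100, 10_000):
    best = min(range(0, 61), key=lambda a: total_added_schedule(a, ksloc))
    print(f"{ksloc:>6} KSLOC: sweet spot near {best}% architecting investment")
```

With these placeholder constants the sweet spot lands near 5%, 20%, and 45-50% for 10, 100, and 10000 KSLOC, matching the qualitative story of the chart.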
Large-Scale Simulation and Testbed FED Preparation Example
TRW/COCOMO II Experience Factory: IV
[Figure: experience-factory feedback loop]
• System objectives (functionality, performance, quality), milestone plans and resources, and corporate parameters (tools, processes, reuse) feed COCOMO II, which produces cost, schedule, and risk estimates plus cost, schedule, and quality drivers
• Evaluate the estimates: if not OK, rescope; if OK, execute the project to the next milestone
• Compare milestone results with milestone expectations: if not OK, revise milestones, plans, and resources (revised expectations); if the project is done, end
• Accumulate COCOMO II calibration data and recalibrate COCOMO II
• Evaluate corporate software improvement strategies and feed improved corporate parameters back into the loop