Critical Success Factors for Schedule Estimation and Improvement

University of Southern California
Center for Systems and Software Engineering
Critical Success Factors for Schedule
Estimation and Improvement
Barry Boehm, USC-CSSE
http://csse.usc.edu
26th COCOMO/Systems and Software Cost Forum
November 2, 2011
Schedule Estimation and Improvement CSFs
• Motivation for good schedule estimation & improvement
• Validated data on current project state and end state
• Relevant estimation methods able to use the data
• Framework for improving on the estimated schedule
• Guidelines for avoiding future schedule overruns
• Conclusions
2 November 2011
©USC-CSSE
Motivation for Good Schedule
Estimation and Improvement
• Market Capture/ Cost of Delay
• Overrun avoidance
• Realistic time-to-complete for lagging projects
– August 2011 DoD Workshop
– “No small slips”
Magnitude of Overrun Problem: DoD
Magnitude of Overrun Problem:
Standish Surveys of Commercial Projects
Year                         2000   2002   2004   2006   2008
Within budget and schedule    28%    34%    29%    35%    32%
Prematurely cancelled         23%    15%    18%    19%    24%
Budget or schedule overrun    49%    51%    53%    46%    44%
How Much Testing is Enough?
- Early Startup: Risk due to low dependability
- Commercial: Risk due to low dependability
- High Finance: Risk due to low dependability
- Risk due to market share erosion

[Figure: Combined Risk Exposure RE = P(L) * S(L) vs. RELY rating (VL-VH). The Early Startup, Commercial, and High Finance curves each show a "sweet spot" where combined dependability risk and Market Share Erosion risk is minimized]

RELY rating:                   VL     L      N      H      VH
COCOMO II added % test time:    0     12     22     34     54
COQUALMO P(L):                1.0   .475    .24   .125    .06
Early Startup S(L):           .33    .19    .11    .06    .03
Commercial S(L):              1.0    .56    .32    .18    .10
High Finance S(L):            3.0   1.68    .96    .54    .30
Market Risk REm:             .008   .027    .09    .30    1.0
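The sweet spot for each market sector can be located numerically from the tabulated values; a minimal sketch (the plotted curves interpolate between ratings, so exact minima depend on the interpolation used):

```python
# Combined risk exposure RE = P(L) * S(L) + market-share-erosion risk (REm),
# using the COCOMO II / COQUALMO values tabulated above for each RELY rating.
RELY = ["VL", "L", "N", "H", "VH"]
P_L = [1.0, 0.475, 0.24, 0.125, 0.06]   # probability of loss (COQUALMO)
REm = [0.008, 0.027, 0.09, 0.30, 1.0]   # market share erosion risk
S_L = {                                  # size of loss, by market sector
    "Early Startup": [0.33, 0.19, 0.11, 0.06, 0.03],
    "Commercial":    [1.0, 0.56, 0.32, 0.18, 0.10],
    "High Finance":  [3.0, 1.68, 0.96, 0.54, 0.30],
}

def sweet_spot(sector):
    """Return the RELY rating minimizing combined risk exposure, and its RE."""
    re = [p * s + m for p, s, m in zip(P_L, S_L[sector], REm)]
    return RELY[re.index(min(re))], min(re)

for sector in S_L:
    rating, re = sweet_spot(sector)
    print(f"{sector}: sweet spot at RELY = {rating} (RE = {re:.3f})")
```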
Schedule Estimation and Improvement CSFs
• Motivation for good schedule estimation & improvement
• Validated data on current project state and end state
• Relevant estimation methods able to use the data
• Framework for improving on the estimated schedule
• Guidelines for avoiding future schedule overruns
• Conclusions
Sources of Invalid Schedule Data
• The Cone of Uncertainty
– If you don’t know what you’re building, it’s hard to
estimate its schedule
• Invalid Assumptions
– Plans often make optimistic assumptions
• Lack of Evidence
– Assertions don’t make it true
• Unclear Data Reporting
– What does “90% complete” really mean?
• The Second Cone of Uncertainty
– And you thought you were out of the woods
• Establishing a Solid Baseline: SCRAM
The Cone of Uncertainty
Schedule highly correlated with size and cost
Invalid Planning Assumptions
• No requirements changes
• Changes processed quickly
• Parts delivered on time
• No cost-schedule driver changes
• Stable external interfaces
• Infrastructure providers have all they need
• Constant incremental development productivity
Average Change Processing Time:
Two Complex Systems of Systems
[Figure: bar chart of average workdays to process changes (0-160 scale) for changes within groups, changes across groups, and contract mods]

Incompatible with turning within adversary’s OODA loop
Incremental Development Productivity
Decline (IDPD)
• Some savings: more experienced personnel (5-20%)
  – Depending on personnel turnover rates
• Some increases: code base growth, diseconomies of scale, requirements volatility, user requests
  – Breakage, maintenance of full code base (20-40%)
  – Diseconomies of scale in development, integration (10-25%)
  – Requirements volatility; user requests (10-25%)
• Best case: 20% more effort (IDPD = 6%)
• Worst case: 85% (IDPD = 23%)
Effects of IDPD on Number of Increments
• Model relating productivity decline to number of builds needed to reach 8M SLOC Full Operational Capability
• Assumes Build 1 production of 2M SLOC @ 100 SLOC/PM
  – 20000 PM / 24 mo. = 833 developers
  – Constant staff size for all builds
• Analysis varies the productivity decline per build
  – Extremely important to determine the incremental development productivity decline (IDPD) factor per build

[Figure: cumulative KSLOC (0-20000) vs. build number (1-8) for 0%, 10%, 15%, and 20% productivity decline per build; higher decline rates require more builds to reach the 8M SLOC target]
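The build-count model above can be sketched as follows — a minimal reconstruction under the slide's stated assumptions (2M SLOC in Build 1, constant staff, each build's output reduced by the IDPD factor):

```python
def builds_to_foc(idpd, build1_sloc=2.0, target_sloc=8.0, max_builds=50):
    """Number of builds needed to reach the target size (in MSLOC)
    when constant staffing makes each build's output shrink by the
    IDPD factor relative to the previous build."""
    total, output = 0.0, build1_sloc
    for build in range(1, max_builds + 1):
        total += output
        if total >= target_sloc:
            return build
        output *= (1.0 - idpd)  # productivity decline per build
    return None  # target unreachable within max_builds

for decline in (0.0, 0.10, 0.15, 0.20):
    print(f"{decline:.0%} decline per build -> {builds_to_foc(decline)} builds")
```

With a 0% decline, four 2M-SLOC builds suffice; at a 20% decline the same target takes twice as many builds, which is the slide's point about determining the IDPD factor early.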
University of Southern California
Center for Systems and Software Engineering
Common Examples of Inadequate Evidence
1. We have three algorithms that met the KPPs on small-scale nominal cases. At least one will scale up and handle the off-nominal cases.
2. We’ll build it and then tune it to satisfy the KPPs.
3. The COTS vendor assures us that they will have a security-certified version by the time we need to deliver.
4. We have demonstrated solutions for each piece from our NASA, Navy, and Air Force programs. It’s a simple matter of integration to put them together.
5. Our subcontractors are Level-5 organizations.
6. The task is 90% complete.
7. Our last project met a 1-second response time requirement.
Problems Encountered without Evidence:
15-Month Architecture Rework Delay
[Figure: development cost vs. response time (1-5 sec). Meeting the original 1-second spec required a custom architecture with many cache processors (~$100M); the original modified client-server architecture met the original cost (~$50M) at the response time found acceptable after prototyping]
Sources of Invalid Schedule Data
• The Cone of Uncertainty
– If you don’t know what you’re building, it’s hard to
estimate its schedule
• Invalid Assumptions
– Plans often make optimistic assumptions
• Lack of Evidence
– Assertions don’t make it true
• Unclear Data Reporting
– What does “90% complete” really mean?
• The Second Cone of Uncertainty
– And you thought you were out of the woods
• Establishing a Solid Baseline: SCRAM
Unclear Data Reporting
• All of the requirements are defined
– Sunny day scenarios? Rainy day scenarios?
• All of the units are tested
– Nominal data? Off-nominal data? Singularities and endpoints?
• Sea of Green risk mitigation progress
– Risk mitigation planning, staffing, organizing, preparing done
– But the risk is still red, and the preparations may be inadequate
• 90% of the problem reports are closed
– We needed the numbers down, so we did the easy ones first
• All of the interfaces are defined, with full earned value
– Maybe not for net-centric systems
“Touch Football” Interface Definition Earned Value
• Full earned value taken for defining interface dataflow
• No earned value left for defining interface dynamics
– Joining/leaving network
– Publish-subscribe
– Interrupt handling
– Security protocols
– Exception handling
– Mode transitions
• Result: all green EVMS turns red in integration
The Second Cone of Uncertainty
– Need evolutionary/incremental vs. one-shot development
[Figure: relative cost range vs. phases and milestones. The first cone narrows from 4x/0.25x at Feasibility (Concept of Operation) through 2x/0.5x at Plans and Rqts. (Rqts. Spec.), 1.5x/0.67x at Product Design (Product Design Spec.), and 1.25x/0.8x at Detail Design (Detail Design Spec.) to x at Devel. and Test (Accepted Software); a second cone then opens as uncertainties in competition, technology, organizations, and mission priorities accumulate]
Schedule Compliance Risk Analysis Method (SCRAM)
Root Cause Analysis of Schedule Slippage (RCASS)
RCASS root-cause areas: Stakeholders; Subcontractors; Functional Assets; Management & Infrastructure; Requirements; Workload; Rework; Staffing & Effort; Schedule & Duration; Schedule Execution

Australian MoD (Adrian Pitman, Angela Tuffley); Software Metrics (Brad & Betsy Clark)
Stakeholders
• “Our stakeholders are like a 100-headed hydra – everyone can say ‘no’ and no one can say ‘yes’.”
  – Identification
  – Management
  – Communication
• Experiences
  – Critical stakeholder (customer) added one condition for acceptance that removed months from the development schedule
  – Failed organizational relationship: key stakeholders were not talking to each other (even though they were in the same facility)
Requirements
• What was that thing you wanted?
  – Sources
  – Definitions
  – Analysis and Validation
  – Management
• Experiences
  – Misinterpretation of a communication standard led to an additional 3,000 requirements to implement the standard.
  – A large ERP project had two system specifications – one with the sponsor/customer and a different specification under contract with the developer – would this be a problem?
Workload
• Number of work units
  – Requirements, configuration items, SLOC, test cases, PTRs…
  – Contract data deliverables (CDRLs) workload often underestimated by both contractor and customer
• Experiences
  – Identical estimates in four different areas of software development (cut & paste estimation)
  – Re-plan based on twice the historic productivity with no basis for improvement
  – Five delivery iterations before CDRL approval
Schedule Estimation and Improvement CSFs
• Motivation for good schedule estimation & improvement
• Validated data on current project state and end state
• Relevant estimation methods able to use the data
• Framework for improving on the estimated schedule
• Guidelines for avoiding future schedule overruns
• Conclusions
Estimation Methods Able to Use the Data?
• Parametric Schedule Estimation Models
– All of the parameter values available?
• Critical Path Duration Analysis
– Activity network kept up to date? Covers subcontractors?
• Mathematical Optimization Techniques
– Constraints, weighting factors mathematically expressible?
• Monte Carlo based on parameter, task duration ranges
– Guessing at the inputs vs. guessing at the outputs?
• Expert judgment
– Anybody done a system of systems with clouds before?
• Use of Earned Value Management System Data
– Were the net-centric interface protocols defined or not?
• Root Cause Analysis of Overruns to Date
– Can we get data like this from the subcontractors?
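The Monte Carlo bullet above can be illustrated with triangular task-duration ranges; a toy sketch with a serial chain of tasks (task names and month figures are hypothetical — real analyses sample the project's full activity network):

```python
import random

# Monte Carlo schedule sketch: sample each task's duration from a
# triangular (low, likely, high) range and estimate the probability of
# finishing a serial chain of tasks by a given deadline.
# Task names and durations below are illustrative only.
tasks = {"rqts": (2, 3, 5), "design": (3, 4, 7), "code": (4, 6, 10), "test": (3, 4, 8)}

def p_on_schedule(tasks, deadline_months, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # random.triangular takes (low, high, mode)
        total = sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
        hits += total <= deadline_months
    return hits / trials

print(f"P(on schedule by 20 mo) = {p_on_schedule(tasks, 20):.2f}")
```

This is the "guessing at the inputs" flavor the slide questions: the output probability is only as good as the duration ranges fed in.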
Schedule Estimation and Improvement CSFs
• Motivation for good schedule estimation & improvement
• Validated data on current project state and end state
• Relevant estimation methods able to use the data
• Framework for improving on the estimated schedule
• Guidelines for avoiding future schedule overruns
• Conclusions
RAD Opportunity Tree
• Eliminating Critical Path Tasks
  – Business process reengineering - BPRS
  – Reusing assets - RVHL
  – Applications generation - RVHL
  – Schedule as independent variable - O
• Reducing Time Per Task
  – Tools and automation - O
  – Work streamlining (80-20) - O
• Increasing Parallelism - RESL
  – Reducing Risks of Single-Point Failures
    • Reducing failures - RESL
    • Reducing their effects - RESL
  – Reducing Backtracking
    • Early error elimination - RESL
    • Process anchor points - RESL
    • Improving process maturity - O
    • Collaboration technology - CLAB
  – Activity Network Streamlining
    • Minimizing task dependencies - BPRS
    • Avoiding high fan-in, fan-out - BPRS
    • Reducing task variance - BPRS
    • Removing tasks from critical path - BPRS
• Increasing Effective Workweek
  – 24x7 development - PPOS
  – Nightly builds, testing - PPOS
  – Weekend warriors - PPOS
• Better People and Incentives
  – Personnel capability and experience - PERS
• Transition to Learning Organization - O

O: covered by classic cube root model
Reuse at HP’s Queensferry
Telecommunication Division
[Figure: time to market (months, 0-70) vs. year (1986-1992), comparing non-reuse projects with reuse projects; reuse projects show markedly shorter time to market]
The SAIV* Process Model
1. Shared vision and expectations management
2. Feature prioritization
3. Schedule range estimation and core-capability
determination
- Top-priority features achievable within fixed schedule with 90% confidence
4. Architecting for ease of adding or dropping borderline-priority features
- And for accommodating post-IOC directions of growth
5. Incremental development
- Core capability as increment 1
6. Change and progress monitoring and control
- Add or drop borderline-priority features to meet schedule
*Schedule As Independent Variable; Feature set as dependent variable
– Also works for cost, schedule/cost/quality as independent variable
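Step 6 above can be illustrated with a toy sketch (feature names, priorities, and person-month figures are hypothetical; the real method uses calibrated schedule estimates and 90%-confidence core-capability sizing):

```python
# SAIV step 6 (sketch): drop borderline-priority features until the
# estimated effort fits what the fixed schedule can deliver.
# All names and numbers below are illustrative only.
features = [  # (name, priority: 1 = core ... 3 = borderline, effort in PM)
    ("login", 1, 4), ("reports", 2, 6), ("dashboards", 3, 5), ("themes", 3, 3),
]

def fit_to_schedule(features, capacity_pm):
    """Keep the highest-priority feature set whose effort fits the capacity."""
    kept = sorted(features, key=lambda f: f[1])  # core features first
    while sum(f[2] for f in kept) > capacity_pm:
        kept.pop()  # drop the lowest-priority feature still in the set
    return [f[0] for f in kept]

print(fit_to_schedule(features, 12))
```

The feature set is the dependent variable: the schedule never moves, only the borderline features do.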
Effect of Size on Software Schedule Sweet Spots
Schedule Estimation and Improvement CSFs
• Motivation for good schedule estimation & improvement
• Validated data on current project state and end state
• Relevant estimation methods able to use the data
• Framework for improving on the estimated schedule
• Guidelines for avoiding future schedule overruns
• Conclusions
Some Frequent Overrun Causes
• Conspiracy of Optimism
• Effects of First Budget Shortfall
– System Engineering
• Decoupling of Technical and Cost/Schedule Analysis
– Overfocus on Performance, Security, Functionality
• Overfocus on Acquisition Cost
– Frequent brittle, point-solution architectures
• Assumption of Stability
• Total vs. Incremental Commitment
– Watch out for the Second Cone of Uncertainty
– Stabilize increments; work change traffic in parallel
Conspiracy of Optimism
We can do 1500 man-months of software in a year
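The claim can be checked against the classic cube-root schedule rule of thumb noted on the RAD Opportunity Tree slide (calendar months ≈ 3 × PM^(1/3), a rough COCOMO-style approximation):

```python
# Cube-root schedule rule of thumb applied to the optimistic claim above:
# "1500 man-months of software in a year".
pm = 1500
tdev_months = 3 * pm ** (1 / 3)  # nominal calendar schedule, in months
print(f"{pm} PM -> about {tdev_months:.0f} calendar months, not 12")
```

The nominal schedule comes out near three years, which is why compressing it to twelve months is a conspiracy of optimism rather than a plan.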
Achieving Agility and High Assurance -I
Using timeboxed or time-certain development
Precise costing unnecessary; feature set as dependent variable
[Figure: rapid change is addressed by planning for foreseeable change and by short development increments; high assurance comes from stable development increments. The Increment N baseline feeds short, stabilized development of Increment N, leading to Increment N transition/O&M]
Evolutionary Concurrent Engineering:
Incremental Commitment Spiral Model
[Figure: unforeseeable, rapid change (adapt) drives agile rebaselining for future increments; foreseeable change (plan) feeds the Increment N baseline into short, stabilized development of Increment N, with concurrent continuous verification and validation (V&V) of Increment N for high assurance; deferrals and artifacts flow into future increment baselines, and Increment N transitions to operations and maintenance as current V&V resources hand off to future V&V resources]
University of Southern California
Center for Systems and Software Engineering
Top-Priority Recommendation:
Evidence-Based Milestone Decision Reviews
• Not schedule-based
– The Contract says PDR on April 1, whether there’s an IMP/IMS or not
• Not event-based
– The Contract needs an IMS, so we backed into one that fits the milestones
• But evidence-based
– The Monte Carlo runs of both the parametric model and the IMS
probabilistic critical path analysis show an 80% on-schedule probability
– We have prioritized the features and architected the system to enable
dropping some low-priority features to meet schedule
– The evidence has been evaluated and approved by independent experts
– The evidence is a first-class deliverable, needing plans and an EVMS
• Added recommendation: better evidence-generation models
– Covering systems of systems, deep subcontractor hierarchies, change
impact analysis, evolutionary acquisition; with calibration data
Backup Charts
SEPRT Seeks Performance Evidence
That can be independently validated
Conclusions: Needs For
• Validated data on current project state and end state
• Relevant estimation methods able to use the data
• Framework for improving on the estimated schedule
• Guidelines for avoiding future schedule overruns
• There are serious and increasing challenges, but the
next presentations provide some ways to address them
Effect of Volatility and Criticality on Sweet Spots