Determine How Much Dependability Is Enough: A Value-Based Approach

LiGuo Huang, Barry Boehm
University of Southern California
Software Dependability Business Case
• Software dependability in a competitive world
  – Software dependability requirements often conflict with schedule/cost requirements
• How much dependability is enough?
  – When to stop testing and release the product
  – Determining a risk-balanced "sweet spot" operating point
Competing on Schedule and Dependability
– A risk analysis approach
• Risk Exposure: RE = Prob(Loss) * Size(Loss)
  – "Loss": financial; reputation; future prospects, …
• For multiple sources of loss:
  RE = Σ_sources [Prob(Loss) * Size(Loss)]_source
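A minimal sketch of that multi-source sum in Python (the loss sources and numbers here are illustrative assumptions, loosely echoing the tables later in this deck):

```python
# Minimal sketch: risk exposure summed over loss sources,
# RE = sum over sources of Prob(Loss) * Size(Loss).
# Source names and numbers are illustrative assumptions.
loss_sources = {
    "unacceptable dependability": (0.24, 0.32),  # (P(L), S(L))
    "market share erosion":       (0.09, 1.00),
}

total_re = sum(p * s for p, s in loss_sources.values())
print(f"RE = {total_re:.3f}")  # 0.24*0.32 + 0.09*1.00 ≈ 0.167
```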
Example RE Profile: Time to Ship
– Loss due to unacceptable dependability
[Figure: RE = P(L) * S(L) versus time to ship (amount of testing). Shipping early with many defects means high P(L), and with critical defects high S(L); with more testing, few defects mean low P(L) and minor defects low S(L), so this RE curve falls as testing continues.]
Example RE Profile: Time to Ship
- Loss due to unacceptable dependability
- Loss due to market share erosion
[Figure: two RE = P(L) * S(L) curves versus time to ship (amount of testing). The dependability-loss curve falls as before (many/critical defects mean high P(L) and S(L); few/minor defects mean low P(L) and S(L)). The market-share-erosion curve rises: many, strong rivals mean high P(L) and S(L); few, weak rivals mean low P(L) and S(L).]
Example RE Profile: Time to Ship
- Sum of Risk Exposures
[Figure: the falling dependability-loss RE curve and the rising market-erosion RE curve from the previous charts are summed; the minimum of the summed RE = P(L) * S(L) curve marks the "Sweet Spot" time to ship (amount of testing).]
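A toy sketch of locating that sweet spot numerically, assuming simple exponential shapes for the two curves (the shapes and constants are illustrative assumptions, not calibrated data):

```python
import math

# Assumed shapes: dependability RE decays with test time t,
# market-erosion RE grows with t (arbitrary units).
def re_dependability(t):
    return 1.0 * math.exp(-0.08 * t)

def re_market_erosion(t):
    return 0.05 * math.exp(0.05 * t)

# Grid-search the minimum of the summed risk exposure.
ts = [0.5 * i for i in range(201)]
sweet_spot = min(ts, key=lambda t: re_dependability(t) + re_market_erosion(t))
print(f"sweet spot at t = {sweet_spot:.1f} units of testing")
```

Any falling curve plus any rising curve gives the same qualitative picture: the sum has an interior minimum, and that minimum moves as either curve steepens.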
Software Development Cost/Reliability Tradeoff
- COCOMO II calibration to 161 projects
RELY Rating   Defect Risk                    Rough MTBF*   Relative Cost/SI**   Typical Application
Very High     Loss of human life             300K hours    1.26                 Safety-critical
High          High financial loss            10K hours     1.10                 Commercial quality leader
Nominal       Moderate, recoverable loss     300 hours     1.00                 In-house support software
Low           Low, easily recoverable loss   10 hours      0.92                 Commercial cost leader
Very Low      Slight inconvenience           1 hour        0.82                 Startup demo

*MTBF: mean time between failures.  **Relative cost per source instruction.
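A small sketch of how the RELY column acts inside COCOMO II's multiplicative effort model (the RELY multipliers are from the table above; the constants A and E and the 100-KSLOC size are illustrative placeholders, not a calibrated estimate):

```python
# COCOMO II form: Effort (person-months) = A * Size^E * (effort multipliers).
# RELY is one such multiplier; A, E, and size are assumed for illustration.
RELY = {"VL": 0.82, "L": 0.92, "N": 1.00, "H": 1.10, "VH": 1.26}

A, E, size_ksloc = 2.94, 1.10, 100.0  # illustrative constants

for rating, multiplier in RELY.items():
    effort = A * size_ksloc**E * multiplier
    print(f"RELY={rating}: {effort:7.1f} person-months")
```

Moving from the cost-leader Low rating to the safety-critical Very High rating raises effort by a factor of 1.26/0.92 ≈ 1.37, which is the schedule/cost pressure the business case has to absorb.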
Current COQUALMO System
[Diagram: COQUALMO extends COCOMO II. Inputs: the software size estimate plus software platform, project, product, and personnel attributes. From these, COCOMO II produces the software development effort, cost, and schedule estimate, and COQUALMO's Defect Introduction Model estimates defects introduced. Defect removal profile levels (automation, reviews, testing) drive the Defect Removal Model; the combined output is the number of residual defects and the defect density per unit of size.]
Defect Removal Rating Scales
(COCOMO II, p. 263)

Automated Analysis
– Very Low: simple compiler syntax checking
– Low: basic compiler capabilities
– Nominal: compiler extensions; basic requirements and design consistency checking
– High: intermediate-level module checking; simple requirements/design consistency checking
– Very High: more elaborate requirements/design consistency checking; basic distributed-processing analysis
– Extra High: formalized specification and verification; advanced distributed-processing analysis

Peer Reviews
– Very Low: no peer review
– Low: ad-hoc informal walk-throughs
– Nominal: well-defined preparation and review with minimal follow-up
– High: formal review roles, well-trained people, and a basic checklist
– Very High: root cause analysis; formal follow-up using historical data
– Extra High: extensive review checklists; statistical process control

Execution Testing and Tools
– Very Low: no testing
– Low: ad-hoc test and debug
– Nominal: basic testing; test criteria based on a checklist
– High: well-defined test sequences and a basic test coverage tool system
– Very High: more advanced test tools and test preparation; distributed monitoring
– Extra High: highly advanced tools; model-based testing
Defect Removal Estimates
- Nominal Defect Introduction Rates (60 defects/KSLOC)
Composite Defect Removal Rating   Delivered Defects/KSLOC   Prob. of Loss P(L)
Very Low                          60.0                      1.0
Low                               28.5                      .475
Nominal                           14.3                      .24
High                              7.5                       .125
Very High                         3.5                       .06
Extra High                        1.6                       .03
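The P(L) column tracks the delivered density divided by the 60 defects/KSLOC nominal introduction rate; a one-loop check of that reading (an interpretation of the chart, not something the slide states):

```python
# Interpretation: P(L) ≈ delivered defect density / nominal
# introduction rate (60 defects/KSLOC). Reproduces the P(L) column.
NOMINAL_RATE = 60.0  # defects/KSLOC introduced at nominal

delivered = {"VL": 60.0, "L": 28.5, "N": 14.3, "H": 7.5, "VH": 3.5, "XH": 1.6}

for rating, density in delivered.items():
    print(f"{rating}: P(L) ≈ {density / NOMINAL_RATE:.3f}")
```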
Relations Between COCOMO II and COQUALMO
COQUALMO rating scales for levels of investment in defect removal via automated analysis, peer reviews, and execution testing and tools have been aligned with the COCOMO II RELY rating levels.
Typical Marketplace Competition
Value Estimating Relationships
[Figure: three panels plotting market share loss VL(Td) against system delivery time Td, one per market sector: "Internet Services, Wireless Infrastructure: Value Loss vs. System Delivery Time"; "Fixed-schedule Event Support: Value of On-time System Delivery"; "Off-line Data Processing: Value Loss vs. System Delivery".]
How Much Dependability Is Enough?
– Early Startup: risk due to low dependability
– Commercial: risk due to low dependability
– High Finance: risk due to low dependability
– Risk due to market share erosion

[Figure: combined risk exposure RE = P(L) * S(L), on a 0 to 1 scale, versus RELY rating (VL through VH). The market share erosion curve rises with RELY while the Early Startup, Commercial, and High Finance curves fall; the marked sweet spot sits where a combined curve bottoms out.]

RELY rating:                   VL     L      N      H      VH
COCOMO II: added % test time   0      12     22     34     54
COQUALMO: P(L)                 1.0    .475   .24    .125   .06
Early Startup: S(L)            .33    .19    .11    .06    .03
Commercial: S(L)               1.0    .56    .32    .18    .10
High Finance: S(L)             3.0    1.68   .96    .54    .30
Market risk: REm               .008   .027   .09    .30    1.0
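A short sketch that recomputes the combined curves from this table and picks each business case's sweet spot (the values are transcribed from the slide; only the arithmetic is added):

```python
# Combined risk exposure per rating: RE = P(L) * S(L) + REm;
# the sweet spot is the RELY rating minimizing the combination.
RATINGS = ["VL", "L", "N", "H", "VH"]
P_L = [1.0, 0.475, 0.24, 0.125, 0.06]        # COQUALMO P(L)
RE_MARKET = [0.008, 0.027, 0.09, 0.30, 1.0]  # market erosion REm

S_L = {  # size of loss per business case
    "Early Startup": [0.33, 0.19, 0.11, 0.06, 0.03],
    "Commercial":    [1.0, 0.56, 0.32, 0.18, 0.10],
    "High Finance":  [3.0, 1.68, 0.96, 0.54, 0.30],
}

for case, s in S_L.items():
    combined = [p * sl + rem for p, sl, rem in zip(P_L, s, RE_MARKET)]
    best = min(range(len(RATINGS)), key=combined.__getitem__)
    print(f"{case}: sweet spot RELY = {RATINGS[best]} (RE = {combined[best]:.3f})")
```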
20% of Features Provide 80% of Value:
Focus Testing on These (Bullock, 2000)
[Figure: cumulative % of value for correct customer billing versus customer type (1 through 15), rising steeply toward 100%: a small fraction of customer types carries most of the value. A straight diagonal labeled "Automated test generation tool – all tests have equal value" shows the value-neutral alternative.]
Value-Based vs. Value-Neutral Testing
– High Finance
[Figure: combined risk exposure RE = P(L) * S(L), on a 0 to 1 scale, versus RELY rating (VL through VH); the market share erosion curve rises while the value-based and value-neutral testing curves fall, and the sweet spot is marked at the minimum of a combined curve.]

RELY rating:                      VL     L      N      H      VH
COCOMO II: added % test time      0      12     22     34     54
COQUALMO: P(L)                    1.0    .475   .24    .125   .06
Value-based: S(L) (exponential)   3.0    1.68   .96    .54    .30
Value-neutral: S(L) (linear)      3.0    2.33   1.65   .975   .30
Market risk: REm                  .008   .027   .09    .30    1.0
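The same arithmetic applied to this table (values transcribed from the slide) makes the contrast concrete:

```python
# Combined RE = P(L) * S(L) + REm per RELY rating for the High
# Finance case under the two S(L) profiles shown above.
RATINGS = ["VL", "L", "N", "H", "VH"]
P_L = [1.0, 0.475, 0.24, 0.125, 0.06]
RE_MARKET = [0.008, 0.027, 0.09, 0.30, 1.0]

S_L = {
    "value-based (exponential)": [3.0, 1.68, 0.96, 0.54, 0.30],
    "value-neutral (linear)":    [3.0, 2.33, 1.65, 0.975, 0.30],
}

for strategy, s in S_L.items():
    combined = [p * sl + rem for p, sl, rem in zip(P_L, s, RE_MARKET)]
    best = min(range(len(RATINGS)), key=combined.__getitem__)
    print(f"{strategy}: sweet spot RELY = {RATINGS[best]}, RE = {combined[best]:.3f}")
```

On these numbers the value-based profile reaches its minimum one RELY level earlier (Nominal rather than High) and at a lower combined risk exposure, which is the payoff being argued for.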
Reasoning about the Value of Dependability – iDAVE
• iDAVE: Information Dependability Attribute Value Estimator
• Extend the iDAVE model to enable risk analyses
  – Determine how much dependability is enough
iDAVE Model Framework
Inputs: time-phased information processing (IP) capabilities, time-phased dependability investments, and project attributes. Outputs: time-phased cost, value components Vj, and return on investment. Three families of relationships connect them:
• Cost estimating relationships (CERs): Cost = f(IP capabilities (size), dependability attribute levels Di, project attributes)
• Dependability attribute estimating relationships (DERs): Di = gi(dependability investments, project attributes)
• Value estimating relationships (VERs): Vj = hj(IP capabilities, dependability levels Di)
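A schematic of the three relationships as code, assuming simple signatures and toy bodies (names and forms are invented for illustration; iDAVE's actual CERs/DERs/VERs are calibrated models, not these stubs):

```python
# Hypothetical stubs for iDAVE's three estimating relationships.
def cer(size_ksloc: float, rely: float, attrs: dict) -> float:
    """CER: Cost = f(IP capabilities (size), dependability levels, attributes)."""
    return attrs.get("rate", 3.0) * size_ksloc * rely  # toy linear form

def der(investment: float, attrs: dict) -> float:
    """DER: Di = gi(dependability investments, project attributes)."""
    return min(1.26, 0.82 + 0.1 * investment)  # toy form, saturating at VH

def ver(size_ksloc: float, dependability: float) -> float:
    """VER: Vj = hj(IP capabilities, dependability levels Di)."""
    return 10.0 * size_ksloc * dependability  # toy proportional value

rely = der(investment=3.0, attrs={})
cost = cer(100.0, rely, {})
roi = (ver(100.0, rely) - cost) / cost
print(f"dependability level {rely:.2f}, ROI = {roi:.2f}")
```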
Usage Scenario of iDAVE Combined Risk Analyses
– How Much Dependability Is Enough?
1. Estimate software size in terms of value-adding capabilities.
2. Enter the project size and cost drivers into iDAVE to obtain the project's delivered defect density (= (defects introduced – defects removed)/KSLOC) for the range of dependability driver (RELY) ratings from Very Low to Very High.
3. Involve stakeholders in determining the sizes of loss, based on the value estimating relationships for software dependability attributes.
4. Involve stakeholders in determining the risk exposures of market erosion, based on the delivery time of the product.
5. Apply iDAVE to assess the probability of loss for the range of dependability driver (RELY) ratings from Very Low to Very High.
6. Apply iDAVE to combine the dependability risk exposures and market erosion risk exposures to find the sweet spot (sketched in code below).
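A skeleton of those six steps as one pipeline (every function is a hypothetical placeholder standing in for an iDAVE interaction; the numbers echo the Commercial-case tables earlier in the deck):

```python
# Hypothetical six-step pipeline; placeholder data from earlier slides.
RATINGS = ["VL", "L", "N", "H", "VH"]

def estimate_size():                        # step 1
    return 100.0                            # KSLOC of value-adding capabilities

def delivered_defect_density(size_ksloc):   # step 2
    return dict(zip(RATINGS, [60.0, 28.5, 14.3, 7.5, 3.5]))

def stakeholder_loss_sizes():               # step 3 (Commercial-case S(L))
    return dict(zip(RATINGS, [1.0, 0.56, 0.32, 0.18, 0.10]))

def market_erosion_re():                    # step 4
    return dict(zip(RATINGS, [0.008, 0.027, 0.09, 0.30, 1.0]))

def prob_of_loss(density):                  # step 5: normalize by nominal rate
    return {r: d / 60.0 for r, d in density.items()}

p = prob_of_loss(delivered_defect_density(estimate_size()))
s = stakeholder_loss_sizes()
re_m = market_erosion_re()
combined = {r: p[r] * s[r] + re_m[r] for r in RATINGS}  # step 6
print("sweet spot:", min(combined, key=combined.get))
```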
Conclusions
• Increasing need for value-based approaches to software dependability achievement and evaluation
  – Methods and models are emerging to address these needs
• Value-based dependability cost/quality models such as COCOMO II and COQUALMO can be combined with value estimating relationships (VERs) to:
  – Perform risk analyses to determine "how much dependability is enough"
  – Perform sensitivity analyses to find the most appropriate dependability investment level for different cost-of-delay situations
  – Determine the relative payoff of value-based vs. value-neutral testing