University of Southern California
Center for Software Engineering
CeBASE
High Dependability Computing in
a Competitive World
Barry Boehm, USC
NASA/CMU HDC Workshop
January 10, 2001
(boehm@sunset.usc.edu; http://sunset.usc.edu)
HDC in a Competitive World
• The economics of IT competition and
dependability
• Software Dependability Opportunity Tree
– Decreasing defects
– Decreasing defect impact
– Continuous improvement
– Attacking the future
• Conclusions and References
Competing on Cost and Quality
- adapted from Michael Porter, Harvard
[Figure: ROI vs. price or cost point. The cost leader (acceptable quality) at the low end and the quality leader (acceptable cost) at the high end earn high ROI; the underpriced and overpriced extremes and firms stuck in the middle do not.]
Competing on Schedule and Quality
- A risk analysis approach
• Risk Exposure RE = Prob (Loss) * Size (Loss)
– “Loss” – financial; reputation; future prospects, …
• For multiple sources of loss (see the sketch below):
RE = Σ_sources [Prob (Loss) * Size (Loss)]_source
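A minimal sketch of the formula above, summing risk exposure over several loss sources; the sources, probabilities, and loss sizes are hypothetical, for illustration only.

```python
# Risk exposure: RE = sum over sources of P(Loss) * S(Loss).
# The loss sources and numbers below are hypothetical.

loss_sources = {
    # source: (probability of loss, size of loss in $)
    "critical defect escapes to field": (0.02, 2_000_000),
    "reputation damage from outage":    (0.05,   500_000),
    "late delivery loses key customer": (0.10,   300_000),
}

def risk_exposure(sources):
    """Total RE = sum of P(L) * S(L) across loss sources."""
    return sum(p * s for p, s in sources.values())

if __name__ == "__main__":
    for name, (p, s) in loss_sources.items():
        print(f"{name}: RE = {p * s:,.0f}")
    print(f"Total risk exposure: {risk_exposure(loss_sources):,.0f}")
```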
Example RE Profile: Time to Ship
- Loss due to unacceptable dependability
[Figure: RE = P(L) * S(L) vs. time to ship (amount of testing). Shipping early means many, critical defects (high P(L), high S(L)); more testing leaves few, minor defects (low P(L), low S(L)), so dependability-loss RE falls as testing increases.]
Example RE Profile: Time to Ship
- Loss due to unacceptable dependability
- Loss due to market share erosion
[Figure: Same axes, RE = P(L) * S(L) vs. time to ship, with a second curve for market share erosion: few, weak rivals mean low P(L) and S(L) early, while many, strong rivals mean high P(L) and S(L) later, so market-erosion RE rises with time to ship even as dependability-loss RE falls.]
Example RE Profile: Time to Ship
- Sum of Risk Exposures
[Figure: Summing the dependability-loss RE curve (high for many, critical defects; low for few, minor defects) and the market-erosion RE curve (low for few, weak rivals; high for many, strong rivals) gives a U-shaped total RE = P(L) * S(L) with a "sweet spot": the amount of testing that minimizes total risk exposure.]
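A minimal sketch of how the sweet spot in the figure arises: dependability-loss RE falls with added testing while market-erosion RE rises, and the sum has an interior minimum. The curve shapes and constants are illustrative assumptions, not calibrated data.

```python
import numpy as np

# Time to ship, expressed as amount of testing (arbitrary units).
t = np.linspace(0, 10, 201)

# Illustrative assumed curves (not calibrated data):
# - RE from unacceptable dependability falls as testing increases,
# - RE from market share erosion rises the longer we wait to ship.
re_dependability = 5.0 * np.exp(-0.6 * t)   # many/critical defects early on
re_market_erosion = 0.08 * t ** 2           # rivals capture the market over time

re_total = re_dependability + re_market_erosion

sweet_spot = t[np.argmin(re_total)]
print(f"Sweet spot at t ~ {sweet_spot:.1f} units of testing, "
      f"total RE ~ {re_total.min():.2f}")
```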
Comparative RE Profile:
Safety-Critical System
[Figure: RE = P(L) * S(L) vs. time to ship. For a safety-critical system, S(L) for defects is higher, so the dependability-loss curve dominates longer and the high-Q sweet spot sits at more testing than the mainstream sweet spot.]
Comparative RE Profile:
Internet Startup
[Figure: RE = P(L) * S(L) vs. time to ship. For an Internet startup, S(L) for delays is higher, so the market-erosion curve dominates and the low-TTM (time to market) sweet spot sits at less testing than the mainstream sweet spot.]
Conclusions So Far
• Unwise to try to compete on both cost/schedule
and quality
– Some exceptions: major technology or marketplace
edge
• There are no one-size-fits-all cost/schedule/quality
strategies
• Risk analysis helps determine how much testing
(prototyping, formal verification, etc.) is enough
– Buying information to reduce risk
• Often difficult to determine parameter values
– Some COCOMO II values discussed next
Software Development Cost/Quality Tradeoff
- COCOMO II calibration to 161 projects
RELY Rating | Defect Risk | Rough MTBF (mean time between failures)
Very High | Loss of human life | 100 years
High | High financial loss | 2 years
Nominal | Moderate recoverable loss | 1 month
Low | Low, easily recoverable loss | 1 day
Very Low | Slight inconvenience | 1 hour

[Chart: Relative cost/source instruction (x-axis 0.8 to 1.3) by RELY rating; in-house support software sits at 1.0 (Nominal).]
Software Development Cost/Quality Tradeoff
- COCOMO II calibration to 161 projects
(Same RELY rating / defect risk / rough MTBF table as above.)

[Chart: Relative cost/source instruction by market sector: commercial cost leader 0.92, in-house support software 1.0, commercial quality leader 1.10.]
Software Development Cost/Quality Tradeoff
- COCOMO II calibration to 161 projects
(Same RELY rating / defect risk / rough MTBF table as above.)

[Chart: Relative cost/source instruction by market sector: startup demo 0.82, commercial cost leader 0.92, in-house support software 1.0, commercial quality leader 1.10, safety-critical 1.26.]
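The relative cost/source instruction points in these charts are the COCOMO II RELY effort multipliers. A minimal sketch of applying them as a lookup; the 100 person-month baseline is a made-up example, and the Python rendering is an illustration, not part of COCOMO II itself.

```python
# COCOMO II RELY (required reliability) effort multipliers from the chart:
# relative cost per source instruction by rating.
RELY_MULTIPLIER = {
    "Very Low":  0.82,  # slight inconvenience      (~1 hour MTBF)
    "Low":       0.92,  # low, easily recoverable   (~1 day)
    "Nominal":   1.00,  # moderate recoverable loss (~1 month)
    "High":      1.10,  # high financial loss       (~2 years)
    "Very High": 1.26,  # loss of human life        (~100 years)
}

def relative_dev_cost(nominal_effort_pm: float, rely_rating: str) -> float:
    """Scale a Nominal-RELY development effort by the RELY multiplier."""
    return nominal_effort_pm * RELY_MULTIPLIER[rely_rating]

# Hypothetical example: a product estimated at 100 person-months at Nominal RELY.
for rating in RELY_MULTIPLIER:
    print(f"{rating:>9}: {relative_dev_cost(100.0, rating):.0f} person-months")
```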
13
COCOMO II RELY Factor Dispersion
[Figure: Dispersion of the COCOMO II RELY factor across ratings Very Low to Very High (y-axis 0 to 100); the factor is statistically significant: t = 2.6, where t > 1.9 is significant.]
COCOMO II RELY Factor Phenomenology
Rqts. and Product Design
– RELY = Very Low: Little detail; many TBDs; little verification; minimal QA, CM, standards, draft user manual, test plans; minimal PDR
– RELY = Very High: Detailed verification, QA, CM, standards, PDR, documentation; IV&V interface; very detailed test plans, procedures

Detailed Design
– RELY = Very Low: Basic design information; minimal QA, CM, standards, draft user manual, test plans; informal design inspections
– RELY = Very High: Detailed verification, QA, CM, standards, CDR, documentation; very thorough design inspections; very detailed test plans, procedures; IV&V interface; less rqts. rework

Code and Unit Test
– RELY = Very Low: No test procedures; minimal path test, standards check; minimal QA, CM; minimal I/O and off-nominal tests; minimal user manual
– RELY = Very High: Detailed test procedures, QA, CM, documentation; very thorough code inspections; very extensive off-nominal tests; IV&V interface; less rqts., design rework

Integration and Test
– RELY = Very Low: No test procedures; many requirements untested; minimal QA, CM; minimal stress and off-nominal tests; minimal as-built documentation
– RELY = Very High: Very detailed test procedures, QA, CM, documentation; very extensive stress and off-nominal tests; IV&V interface; less rqts., design, code rework
“Quality is Free”
• Did Philip Crosby’s book get it all wrong?
• Investments in dependable systems
– Cost extra for simple, short-life systems
– Pay off for high-value, long-life systems
Software Life-Cycle Cost vs. Dependability
[Figure: Relative cost to develop vs. COCOMO II RELY rating: 0.82 (Very Low), 0.92 (Low), 1.0 (Nominal), 1.10 (High), 1.26 (Very High).]
Software Life-Cycle Cost vs. Dependability
[Figure: Relative cost to develop and maintain vs. COCOMO II RELY rating. The development curve repeats 0.82, 0.92, 1.0, 1.10, 1.26; the maintenance curve (values shown: 0.99, 1.07, 1.10, 1.23) runs highest at the low-RELY end.]
Software Life-Cycle Cost vs. Dependability
• Low-dependability inadvisable for evolving systems

[Figure: Relative cost to develop and maintain vs. COCOMO II RELY rating, adding a 70%-maintenance life-cycle mix (additional values shown: 1.05, 1.07, 1.11, 1.20). With maintenance dominating, the Very Low and Low ratings lose their development-cost advantage over the life cycle.]
Software Ownership Cost vs. Dependability
[Figure: Relative cost to develop, maintain, own, and operate vs. COCOMO II RELY rating. With operational-defect cost at Nominal dependability equal to the software life-cycle cost, ownership cost climbs to 1.52 at Low and 2.55 at Very Low; with operational-defect cost = 0, the curve stays near the development/maintenance values (additional points shown: 0.69, 0.76).]
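A minimal sketch of the ownership-cost reasoning behind the last three charts. The development multipliers are the COCOMO II RELY values shown; the maintenance multipliers, maintenance share, and operational-defect losses are illustrative assumptions, not the calibrated values and not tuned to reproduce the chart, included only to show how low-RELY systems can become the most expensive to own.

```python
# Development multipliers are the COCOMO II RELY values from the charts above;
# MAINT and DEFECT_LOSS are ILLUSTRATIVE ASSUMPTIONS, not calibrated data.
DEV = {"Very Low": 0.82, "Low": 0.92, "Nominal": 1.00, "High": 1.10, "Very High": 1.26}
MAINT = {"Very Low": 1.25, "Low": 1.10, "Nominal": 1.00, "High": 1.05, "Very High": 1.10}
DEFECT_LOSS = {"Very Low": 3.0, "Low": 1.8, "Nominal": 1.0, "High": 0.4, "Very High": 0.1}

def ownership_cost(rating, maint_share=0.70, nominal_defect_cost_ratio=1.0):
    """Relative cost to develop, maintain, own, and operate.

    maint_share: fraction of life-cycle cost spent in maintenance (the 70% scenario).
    nominal_defect_cost_ratio: operational-defect cost at Nominal dependability as a
    multiple of the Nominal life-cycle cost (1.0 mirrors the chart's scenario).
    """
    life_cycle = (1 - maint_share) * DEV[rating] + maint_share * MAINT[rating]
    nominal_life_cycle = (1 - maint_share) * DEV["Nominal"] + maint_share * MAINT["Nominal"]
    defect_cost = nominal_defect_cost_ratio * nominal_life_cycle * DEFECT_LOSS[rating]
    return life_cycle + defect_cost

for rating in DEV:
    print(f"{rating:>9}: ownership cost {ownership_cost(rating):.2f}")
```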
Conclusions So Far - 2
• Quality is better than free for high-value,
long-life systems
• There is no universal dependability sweet spot
– Yours will be determined by your value model
– And the relative contributions of dependability
techniques
– Let’s look at these next
HDC in a Competitive World
• The economics of IT competition and
dependability
• Software Dependability Opportunity Tree
– Decreasing defects
– Decreasing defect impact
– Continuous improvement
– Attacking the future
• Conclusions
Software Dependability Opportunity Tree
Decrease Defect Risk Exposure
– Decrease Defect Prob (Loss): defect prevention; defect detection and removal
– Decrease Defect Impact, Size (Loss): value/risk-based defect reduction; graceful degradation
– Continuous Improvement: CI methods and metrics; process, product, people, technology
Software Defect Prevention Opportunity Tree
Defect Prevention
– People practices: IPT, JAD, WinWin,…; PSP, Cleanroom, dual development,…; staffing for dependability
– Standards: rqts., design, code,…
– Languages
– Prototyping
– Modeling & simulation: manual execution, scenarios,…
– Reuse: interfaces, traceability,…
– Root cause analysis: checklists
People Practices: Some Empirical Data
• Cleanroom: Software Engineering Lab
– 25-75% reduction in failure rates
– 5% vs 60% of fix efforts over 1 hour
• Personal Software Process/Team Software Process
– 50-75% defect reduction in CMM Level 5 organization
– Even higher reductions for less mature organizations
• Staffing
– Many experiments find factor-of-10 differences in people’s
defect rates
Root Cause Analysis
• Each defect found can trigger five analyses:
– Debugging: eliminating the defect
– Regression: ensuring that the fix doesn’t create new
defects
– Similarity: looking for similar defects elsewhere
– Insertion: catching future similar defects earlier
– Prevention: finding ways to avoid such defects
• How many does your organization do?
Pareto 80-20 Phenomena
• 80% of the rework comes from 20% of
the defects
• 80% of the defects come from 20% of the
modules
– About half the modules are defect-free
• 90% of the downtime comes from < 10% of
the defects
Pareto Analysis of Rework Costs
[Figure: Cumulative % of cost to fix SPRs vs. cumulative % of software problem reports (SPRs) for TRW Project A (373 SPRs) and TRW Project B (1005 SPRs): roughly 20% of the SPRs account for about 80% of the fix cost. Major rework sources were off-nominal architecture-breakers: (A) network failover, (B) extra-long messages.]
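A minimal sketch of the Pareto analysis behind the chart: sort software problem reports (SPRs) by fix cost and accumulate the share of rework cost against the share of SPRs. The heavy-tailed cost data here are randomly generated stand-ins, not the TRW project data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in fix costs for 1005 SPRs; a heavy-tailed draw mimics the situation
# where a few "architecture-breaker" defects dominate rework cost.
fix_cost_hours = rng.lognormal(mean=2.0, sigma=1.5, size=1005)

costs = np.sort(fix_cost_hours)[::-1]                  # costliest SPRs first
cum_cost = np.cumsum(costs) / costs.sum()              # cumulative share of rework cost
cum_sprs = np.arange(1, len(costs) + 1) / len(costs)   # cumulative share of SPRs

# How much of the rework cost do the costliest 20% of SPRs account for?
share_at_20pct = cum_cost[int(0.20 * len(costs)) - 1]
print(f"Top 20% of SPRs account for {share_at_20pct:.0%} of the rework cost")
```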
Software Defect Detection Opportunity Tree
Defect Detection and Removal (rqts., design, code)
– Automated analysis: completeness checking; consistency checking (views, interfaces, behavior, pre/post conditions); traceability checking; compliance checking (models, assertions, standards)
– Reviewing: peer reviews, inspections; architecture review boards; pair programming
– Testing: requirements & design; structural; operational profile; usage (alpha, beta); regression; value/risk-based; test automation
Orthogonal Defect Classification
- Chillarege, 1996
[Figure: Percent of defects within each activity (design, code review, function test, system test) broken down by ODC defect type (function, assignment, interface, timing).]
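A minimal sketch of tallying defects ODC-style, using the activities and defect types named in the chart; the defect records themselves are made up.

```python
from collections import Counter, defaultdict

# Each defect record carries orthogonal attributes; these examples are hypothetical.
defects = [
    {"activity": "design",        "type": "function"},
    {"activity": "design",        "type": "interface"},
    {"activity": "code review",   "type": "assignment"},
    {"activity": "function test", "type": "function"},
    {"activity": "function test", "type": "assignment"},
    {"activity": "system test",   "type": "timing"},
    {"activity": "system test",   "type": "interface"},
]

# Percent within activity, by defect type (the chart's y-axis).
by_activity = defaultdict(Counter)
for d in defects:
    by_activity[d["activity"]][d["type"]] += 1

for activity, counts in by_activity.items():
    total = sum(counts.values())
    breakdown = ", ".join(f"{t}: {n / total:.0%}" for t, n in counts.items())
    print(f"{activity}: {breakdown}")
```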
UMD-USC CeBASE Experience Comparisons
- http://www.cebase.org
“Under specified conditions, …”

Technique Selection Guidance (UMD, USC)
• Peer reviews are more effective than functional testing for faults of omission and incorrect specification
– Peer reviews catch 60% of the defects
• Functional testing is more effective than reviews for faults concerning numerical approximations and control flow

Technique Definition Guidance (UMD)
• For a reviewer with an average experience level, a procedural approach to defect detection is more effective than a less procedural one
• Readers of a software artifact are more effective in uncovering defects when each uses a different and specific focus
– Perspective-based reviews catch 35% more defects
Factor-of-100 Growth in Software Cost-to-Fix
[Figure: Relative cost to fix a defect (log scale, 1 to 1000) vs. phase in which the defect was fixed (requirements, design, code, development test, acceptance test, operation). For larger software projects (IBM-SSD, GTE, SAFEGUARD, TRW survey median with 20%-80% bands), the escalation approaches a factor of 100 from requirements to operation; smaller software projects show a flatter growth, closer to a factor of 5.]
Reducing Software Cost-to-Fix: CCPDS-R
- Royce, 1998
Architecture first
– Integration during the design phase
– Demonstration-based evaluation
Risk management
Configuration baseline change metrics:

[Figure: Hours per change vs. project development schedule (months 15 to 40), showing design changes, implementation changes, and maintenance changes and ECPs; the average cost to fix stays modest (tens of hours) throughout.]
COQUALMO: Constructive Quality Model
[Diagram: The software size estimate and software product, process, computer, and personnel attributes feed both COCOMO II, which produces the software development effort, cost, and schedule estimate, and COQUALMO's Defect Introduction Model; defect removal capability levels feed the Defect Removal Model, which outputs the number of residual defects and the defect density per unit of size.]
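A minimal sketch of the defect-flow idea in the diagram: a defect introduction model estimates defects introduced per unit of size, and a defect removal model thins them according to removal capability levels. The introduction rates and removal fractions below are illustrative placeholders, not the calibrated COQUALMO values.

```python
# Illustrative defect introduction rates (defects per KSLOC) by artifact type and
# removal fractions by removal-capability profile; placeholders, not calibrated values.
INTRODUCTION_PER_KSLOC = {"requirements": 10, "design": 20, "code": 30}
REMOVAL_FRACTION = {
    "automated analysis": 0.30,
    "peer reviews":       0.55,
    "execution testing":  0.60,
}

def residual_defects(ksloc: float) -> float:
    """Defects introduced, then thinned by each removal activity in turn."""
    introduced = sum(rate * ksloc for rate in INTRODUCTION_PER_KSLOC.values())
    remaining = introduced
    for fraction in REMOVAL_FRACTION.values():
        remaining *= (1.0 - fraction)
    return remaining

size_ksloc = 50.0  # hypothetical product size
print(f"Residual defects: {residual_defects(size_ksloc):.0f} "
      f"({residual_defects(size_ksloc) / size_ksloc:.2f} per KSLOC)")
```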
Defect Impact Reduction Opportunity Tree
Decrease Defect Impact, Size (Loss)
– Value/risk-based defect reduction: business case analysis; Pareto (80-20) analysis; V/R-based reviews; V/R-based testing; cost/schedule/quality as independent variable
– Graceful degradation: fault tolerance; self-stabilizing SW; reduced-capability modes; manual backup; rapid recovery
Software Dependability Opportunity Tree
Decrease Defect Risk Exposure
– Decrease Defect Prob (Loss): defect prevention; defect detection and removal
– Decrease Defect Impact, Size (Loss): value/risk-based defect reduction; graceful degradation
– Continuous Improvement: CI methods and metrics; process, product, people, technology
The Experience Factory Organization
- Exemplar: NASA Software Engineering Lab
[Diagram: The Project Organization characterizes the environment, sets goals, and chooses processes (producing execution plans), then executes them, sending environment characteristics, products, data, lessons learned, and models to the Experience Factory; there, project support and analysis (5. Analyze) feed packaging (6. Package) into an Experience Base that is generalized, tailored, formalized, and disseminated back to projects as tailorable knowledge, consulting, project analysis, and process modification.]
Goal-Model-Question Metric Approach to Testing
Goal: Accept product; cost-effective defect detection
– Models: product requirements; defect source models (Orthogonal Defect Classification (ODC), defect-prone modules)
– Questions: What inputs and outputs verify requirements compliance? How best to mix and sequence automated analysis, reviewing, and testing?
– Metrics: percent of requirements tested; defect detection yield by defect class; defect detection costs, ROI; expected vs. actual defect detection by class

Goal: No known delivered defects
– Models: defect closure rate models
– Questions: How many outstanding defects? What is their closure status?
– Metrics: percent of defect reports closed (expected; actual)

Goal: Meeting target reliability, delivered defect density (DDD)
– Models: operational profiles; reliability estimation models; DDD estimation models
– Questions: How well are we progressing toward targets? How long will it take to reach targets?
– Metrics: estimated vs. target reliability, DDD; estimated time to achieve targets

Goal: Minimize adverse effects of defects
– Models: business-case or mission models of value by feature, quality attribute, delivery time; tradeoff relationships
– Questions: What HDC techniques best address high-value elements? How well are project techniques minimizing adverse effects?
– Metrics: risk exposure reduction vs. time; business-case based "earned value"; value-weighted defect detection yields for each technique

Note: No one-size-fits-all solution
GMQM Paradigm: Value/Risk-Driven Testing
• Goal: Minimize adverse effects of defects
• Models: Business-case or mission models of value
– By feature, quality attribute, or delivery time
– Attribute tradeoff relationships
• Questions: What HDC techniques best address high-value elements?
– How well are project techniques minimizing adverse effects?
• Metrics: Risk exposure reduction vs. time
– Business-case based “earned value”
– Value-weighted defect detection yields by technique (sketched below)
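A minimal sketch of the value-weighted defect detection yield metric above: weight each detected defect by the business value it protects rather than counting all defects equally. The defect list and values are hypothetical.

```python
# Hypothetical defects: business value at risk and whether the technique caught them.
defects = [
    {"value_at_risk": 500, "detected": True},
    {"value_at_risk": 300, "detected": True},
    {"value_at_risk": 100, "detected": False},
    {"value_at_risk":  50, "detected": True},
    {"value_at_risk":  20, "detected": False},
]

def detection_yield(defects):
    """Plain yield: fraction of defects detected."""
    return sum(d["detected"] for d in defects) / len(defects)

def value_weighted_yield(defects):
    """Value-weighted yield: fraction of value-at-risk covered by detected defects."""
    total = sum(d["value_at_risk"] for d in defects)
    covered = sum(d["value_at_risk"] for d in defects if d["detected"])
    return covered / total

print(f"Defect detection yield:         {detection_yield(defects):.0%}")
print(f"Value-weighted detection yield: {value_weighted_yield(defects):.0%}")
```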
“Dependability” Attributes and Tradeoffs
• Robustness: reliability, availability, survivability
• Protection: security, safety
• Quality of Service: accuracy, fidelity, performance
assurance
• Integrity: correctness, verifiability
• Attributes mostly compatible and synergetic
• Some conflicts and tradeoffs
– Spreading information: survivability vs. security
– Fail-safe: safety vs. quality of service
– Graceful degradation: survivability vs. quality of service
Resulting Reduction in Risk Exposure
[Figure: Risk exposure from defects, RE = P(L) * S(L), vs. time to ship (amount of testing): value/risk-driven testing (exploiting the 80-20 value distribution) drives RE down much faster than acceptance or defect-density testing that treats all defects as equal.]
20% of Features Provide 80% of Value:
Focus Testing on These (Bullock, 2000)
[Figure: Cumulative % of value for correct customer billing vs. customer type (1 through 15): a few customer types account for nearly all of the value, approaching 100% well before all 15 types are covered (Bullock, 2000).]
Cost, Schedule, Quality: Pick any Two?
[Diagram: the three competing attributes C (cost), S (schedule), and Q (quality).]
Cost, Schedule, Quality: Pick any Two?
• Consider C, S, Q as Independent Variable
– Feature Set as Dependent Variable
[Diagram: two C-S-Q triads, illustrating cost, schedule, and quality held fixed while the feature set varies.]
C, S, Q as Independent Variable
• Determine Desired Delivered Defect Density (D4)
– Or a value-based equivalent
• Prioritize desired features
– Via QFD, IPT, stakeholder win-win
• Determine Core Capability
– 90% confidence of D4 within cost and schedule
– Balance parametric models and expert judgment
• Architect for ease of adding next-priority features
– Hide sources of change within modules (Parnas)
• Develop core capability to D4 quality level
– Usually in less than available cost and schedule
• Add next-priority features as resources permit (prioritize-and-fit step sketched below)
• Versions used successfully on 17 of 19 USC digital library
projects
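A minimal sketch of the prioritize-and-fit step above: derate the budget as a crude stand-in for the 90% confidence requirement, then fill the core capability in priority order and defer the rest. The feature list, costs, and budget are made up.

```python
# Hypothetical prioritized features: (name, priority, estimated cost in person-months).
features = [
    ("search catalog",        1, 12),
    ("check out item",        2, 10),
    ("user accounts",         3,  8),
    ("usage reports",         4,  9),
    ("recommendation engine", 5, 15),
]

def plan_core_capability(features, budget_pm, confidence_margin=0.90):
    """Fill the core capability in priority order within a derated budget.

    Derating the budget (e.g. to 90%) is a crude stand-in for planning to a 90%
    confidence level of hitting cost/schedule at the target defect density (D4).
    """
    usable = budget_pm * confidence_margin
    core, deferred, spent = [], [], 0.0
    for name, _priority, cost in sorted(features, key=lambda f: f[1]):
        if spent + cost <= usable:
            core.append(name)
            spent += cost
        else:
            deferred.append(name)  # next-priority features added later as resources permit
    return core, deferred, spent

core, deferred, spent = plan_core_capability(features, budget_pm=45)
print("Core capability:", core, f"({spent:.0f} PM)")
print("Deferred features:", deferred)
```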
Attacking the Future: Software Trends
“Skate to where the puck is going” --- Wayne Gretzky
• Increased complexity
– Everything connected
– Opportunities for chaos (agents)
– Systems of systems
• Decreased control of content
– Infrastructure
– COTS components
• Faster change
– Time-to-market pressures
– Marry in haste; no leisure to repent
– Adapt or die (e-commerce)
• Fantastic opportunities
– Personal, corporate, national, global
Opportunities for Chaos: User Programming
- From NSF SE Workshop: http://sunset.usc.edu/Activities/aug24-25
• Mass of user-programmers a new challenge
– 40-50% of user programs have nontrivial defects
– By 2005, up to 55M “sorcerer’s apprentice” user-programmers
loosing agents into cyberspace
• Need research on “seat belts, air bags, safe driving
guidance” for user-programmers
– Frameworks (generic, domain-specific)
– Rules of composition, adaptation, extension
– “Crash barriers;” outcome visualization
Dependable COTS-Based Systems
• Establish dependability priorities
• Assess COTS dependability risks
– Prototype, benchmark, testbed, operational profiles
• Wrap COTS components (see the sketch below)
– Just use the parts you need
– Interface standards and API’s help
• Prepare for inevitable COTS changes
– Technology watch, testbeds, synchronized refresh
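A minimal sketch of the wrapping advice above: put a thin wrapper between the system and the COTS package so only the needed calls are exposed and vendor changes are absorbed in one place. The charting "COTS" component here is a stand-in class for illustration, not a real product or API.

```python
class _FakeCotsChartEngine:
    """Stand-in for a hypothetical COTS charting package (illustration only)."""

    def plot(self, kind, data):
        return f"<{kind} chart of {len(data)} points>"


class ChartingWrapper:
    """Narrow wrapper: expose only the COTS calls the system actually needs."""

    def __init__(self, engine=None):
        # One place to absorb vendor upgrades or a swapped-in component.
        self._engine = engine or _FakeCotsChartEngine()

    def render_line_chart(self, points):
        if not points:
            raise ValueError("points must be non-empty")  # enforce our dependability rules
        # Translate to the single COTS call we depend on; nothing else leaks through.
        return self._engine.plot(kind="line", data=list(points))


print(ChartingWrapper().render_line_chart([(0, 1), (1, 3)]))
```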
HDC Technology Prospects
• High-dependability change
– Architecture-based analysis and composition
– Lightweight formal methods
• Model-based analysis and assertion checking
– Domain models, stochastic models
• High-dependability test data generation
• Dependability attribute tradeoff analysis
– Dependable systems from less-dependable components
– Exploiting cheap, powerful hardware
• Self-stabilizing software
• Complementary empirical methods
– Experiments, testbeds, models, experience factory
– Accelerated technology transition, education, training
Conclusions
• Future trends intensify competitive HDC challenges
– Complexity, criticality, decreased control, faster change
• Organizations need tailored, mixed HDC strategies
– No universal HDC sweet spot
– Goal/value/risk analysis useful
– Quantitative data and models becoming available
• HDC Opportunity Tree helps sort out mixed strategies
• Quality is better than free for high-value, long-life systems
• Attractive new HDC technology prospects emerging
– Architecture- and model-based methods
– Lightweight formal methods
– Self-stabilizing software
– Complementary theory and empirical methods
References
V. Basili et al., “SEL’s Software Process Improvement Program,” IEEE Software, November 1995, pp. 83-87.
B. Boehm and V. Basili, “Software Defect Reduction Top 10 List,” IEEE Computer, January 2001.
B. Boehm et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.
J. Bullock, “Calculating the Value of Testing,” Software Testing and Quality Engineering, May/June 2000, pp. 56-62.
CeBASE (Center for Empirically-Based Software Engineering), http://www.cebase.org.
R. Chillarege, “Orthogonal Defect Classification,” in M. Lyu (ed.), Handbook of Software Reliability Engineering, IEEE-CS Press, 1996, pp. 359-400.
P. Crosby, Quality is Free, Mentor, 1980.
R. Grady, Practical Software Metrics, Prentice Hall, 1992.
N. Leveson, Safeware: System Safety and Computers, Addison Wesley, 1995.
B. Littlewood et al., “Modeling the Effects of Combining Diverse Fault Detection Techniques,” IEEE Trans. SW Engr., December 2000, pp. 1157-1167.
M. Lyu (ed.), Handbook of Software Reliability Engineering, IEEE-CS Press, 1996.
J. Musa, Software Reliability Engineered Testing, McGraw-Hill, 1998.
M. Porter, Competitive Strategy, Free Press, 1980.
P. Rook (ed.), Software Reliability Handbook, Elsevier Applied Science, 1990.
W. E. Royce, Software Project Management, Addison Wesley, 1998.
CeBASE Software Defect Reduction Top-10 List
- http://www.cebase.org
1. Finding and fixing a software problem after delivery is often 100 times more expensive than finding and fixing it during the requirements and design phase.
2. About 40-50% of the effort on current software projects is spent on avoidable rework.
3. About 80% of the avoidable rework comes from 20% of the defects.
4. About 80% of the defects come from 20% of the modules, and about half the modules are defect free.
5. About 90% of the downtime comes from at most 10% of the defects.
6. Peer reviews catch 60% of the defects.
7. Perspective-based reviews catch 35% more defects than non-directed reviews.
8. Disciplined personal practices can reduce defect introduction rates by up to 75%.
9. All other things being equal, it costs 50% more per source instruction to develop high-dependability software products than to develop low-dependability software products. However, the investment is more than worth it if significant operations and maintenance costs are involved.
10. About 40-50% of user programs have nontrivial defects.