COQUALMO: Quality Model Extension to COCOMO II

Quality Management – Lessons of COQUALMO
(COnstructive QUALity MOdel)
A Software Defect Density Prediction Model
A.W. Brown and Sunita Chulani, Ph.D.
Presented by Marilee Wheaton
Copyright 1999-2010 USC-CSSE
Outline
Behavioral Underpinnings
• Hidden Factory
• Defect Types
COQUALMO Framework
• The Defect Introduction Sub-Model
– Expert-Judgment Model + Some Initial Data Results
• The Defect Removal Sub-Model
– Expert-Judgment Model
(Result of COQUALMO Workshop)
• COQUALMO Integrated with COCOMO II
USC Modeling Methodology
Delphi Assessment
• Ask each expert for a range for each driver
– Apply personal experience
– Look at completed projects
– Guess (WAG)
• Collect and share in a meeting: discuss why/how different people made their estimates
• Repeat until no changes
• Final values (for each parameter):
Max = (Hmax + 4*AVEmax + Lmax) / 6
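A minimal sketch of this aggregation step, assuming each expert supplies a (low, high) range per driver and that the minimum is combined with the same weighting as the maximum (only the Max formula appears on the slide); names and values are illustrative:

```python
# Hedged sketch: combine expert ranges for one DI driver using the slide's
# formula  Max = (Hmax + 4*AVEmax + Lmax) / 6, applied symmetrically to the
# minimum (an assumption). Names and numbers are illustrative.

def delphi_combine(ranges):
    """ranges: list of (low, high) tuples, one per expert."""
    lows = [lo for lo, _ in ranges]
    highs = [hi for _, hi in ranges]

    def combine(values):
        h, l = max(values), min(values)
        ave = sum(values) / len(values)
        return (h + 4 * ave + l) / 6   # PERT-style weighted mean

    return combine(lows), combine(highs)

# Example: three experts' ranges for a single defect-introduction driver
print(delphi_combine([(1.1, 1.8), (1.0, 2.2), (1.2, 2.0)]))
```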
Adding Project Data
• Effort Adjustment Multipliers (typical)
– Linear Regression
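As a rough illustration of that step (not the actual USC calibration data or procedure), multiplicative COCOMO-style factors are typically fit by ordinary least squares after a log transform; a minimal sketch with made-up project data:

```python
# Hedged sketch: log-linear regression for a multiplicative cost model.
# effort = A * size**B  becomes linear in log space:
# ln(effort) = ln(A) + B * ln(size)
# The data below are illustrative, not project data from the slides.
import numpy as np

size = np.array([10.0, 32.0, 55.0, 120.0])      # kSLOC
effort = np.array([40.0, 150.0, 280.0, 700.0])  # person-months

X = np.column_stack([np.ones_like(size), np.log(size)])
coef, *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)
A, B = np.exp(coef[0]), coef[1]
print(f"A ~= {A:.2f}, B ~= {B:.2f}")
```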
A Posteriori Bayesian Update
(Fig. 11, p. 170, Software Cost Estimation with COCOMO II)
• Used to combine expert judgment with sampled data; the spread of each dataset weights the combination
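A heavily simplified sketch of that combination, assuming a single coefficient estimated both by expert judgment and from data, weighted by inverse variance so that the estimate with less spread dominates; all numbers are illustrative:

```python
# Hedged sketch: a-posteriori combination of an expert-judgment estimate
# with a data-derived estimate, weighted by their (inverse) variances.
def bayes_combine(mu_expert, var_expert, mu_data, var_data):
    w_e, w_d = 1.0 / var_expert, 1.0 / var_data          # precisions
    mu_post = (w_e * mu_expert + w_d * mu_data) / (w_e + w_d)
    var_post = 1.0 / (w_e + w_d)
    return mu_post, var_post

# Noisy (widely spread) data pulls the posterior only slightly off the prior
print(bayes_combine(1.10, 0.01, 1.40, 0.20))
```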
Outline
• Model Framework
The Defect Introduction Sub-Model
– Expert-Judgment Model + Some Initial Data Results
• The Defect Removal Sub-Model
– Expert-Judgment Model (Result of COQUALMO
Workshop)
• COQUALMO Integrated with COCOMO II
COQUALMO Model Framework
Software Devel. Process(es) (cont.): The Hidden Factory
Outline
• Model Framework
The Defect Introduction Sub-Model
– Expert-Judgment Model + Some Initial Data Results
• The Defect Removal Sub-Model
– Expert-Judgment Model (Result of COQUALMO
Workshop)
• COQUALMO Integrated with COCOMO II
The Defect Introduction (DI) Sub-Model
Inputs: software size estimate; software product, process, computer, and personnel attributes (a subset of the COCOMO II factors)
→ Defect Introduction Sub-Model →
Output: number of non-trivial requirements, design, and code defects introduced
A-Priori Expert-Judgment Based Code DI Ranges
[Bar chart: expert-judged code defect-introduction ranges (scale roughly 1.00 to 3.00) for the DI drivers FLEX, RUSE, STOR, TIME, DATA, ACAP, AEXP, PEXP, DOCU, SITE, TEAM, SCED, LTEX, PVOL, PREC, TOOL, PCON, PCAP, CPLX, RESL, RELY, and PMAT]
DI Model Equations
• Estimated Number of Defects Introduced = Σ (j = 1..3) A_j × (Size)_j^B × Π (i = 1..21) (DI-driver)_ij
– j identifies the 3 artifact types (requirements, design, and code)
– A_j is the multiplicative calibration constant for artifact j
– B is initially set to 1
– (DI-driver)_ij is the Defect Introduction driver for the jth artifact and the ith factor
• For each artifact j, the Quality Adjustment Factor (QAF) is
QAF_j = Π (i = 1..21) (DI-driver)_ij
• so the Estimated Number of Defects Introduced = Σ (j = 1..3) A_j × (Size)_j^B × QAF_j
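A minimal sketch of the DI sub-model equation above, with B fixed at 1; the A_j values are taken from the calibrated constants in the Initial Data Analysis table that follows, while the driver multipliers are placeholders:

```python
# Hedged sketch of the Defect Introduction sub-model:
# defects_j = A_j * size**B * QAF_j, where QAF_j is the product of DI-driver multipliers.
from math import prod

def defects_introduced(size_ksloc, A, di_drivers, B=1.0):
    """A: dict artifact -> calibration constant;
       di_drivers: dict artifact -> list of DI-driver multipliers."""
    out = {}
    for artifact, constant in A.items():
        qaf = prod(di_drivers[artifact])          # Quality Adjustment Factor
        out[artifact] = constant * size_ksloc ** B * qaf
    return out

# Illustrative call: A values from the calibration table below; the driver
# multipliers here are placeholders (a few of the 21 drivers), not calibrations.
A = {"requirements": 1.8, "design": 0.77, "code": 2.21}
drivers = {k: [1.05, 0.92, 1.10] for k in A}
print(defects_introduced(100.0, A, drivers))
```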
Initial Data Analysis on the DI Model
Type of        1970's     Quality      Predicted   Actual   Calibrated     1990's
Artifact       Baseline   Adjustment   DIR         DIR      Constant (A)   Baseline
               DIRs       Factor                                           DIRs
Requirements   5          0.5          2.5         4.5      1.8            9
Design         25         0.44         11          8.4      0.77           19
Code           15         0.5          7.5         16.6     2.21           33

DIR = Defect Introduction Rate
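The numbers are consistent with Predicted DIR = 1970's baseline × quality adjustment factor, A = Actual DIR / Predicted DIR, and 1990's baseline ≈ A × 1970's baseline; for example, in the Code row, 15 × 0.5 = 7.5, 16.6 / 7.5 ≈ 2.21, and 2.21 × 15 ≈ 33.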
Outline
• Model Framework
• The Defect Introduction Sub-Model
– Expert-Judgment Model + Some Initial Data
Results
The Defect Removal Sub-Model
– Expert-Judgment Model (Result of
COQUALMO Workshop)
• COQUALMO Integrated with COCOMO II
The Defect Removal (DR) Sub-Model
Inputs: number of non-trivial requirements, design, and coding defects introduced (from the DI sub-model); defect removal activity levels; software size estimate
→ Defect Removal Sub-Model →
Output: number of residual defects per unit of size
Defect Removal Profiles
• 3 relatively orthogonal profiles
– Automated Analysis
– People Reviews
– Execution Testing and Tools
• Each profile has 6 levels
– Very Low, Low, Nominal,
High, Very High, Extra High
• Very Low removes the fewest defects
• Extra High removes the most defects
Automated Analysis
Very Low – Simple compiler syntax checking.
Low – Basic compiler or additional tools capabilities for static module-level code analysis, and syntax- and type-checking.
Nominal – All of the above, plus some compiler extensions for static module- and inter-module-level code analysis, and syntax- and type-checking. Basic requirements and design consistency and traceability checking.
High – All of the above, plus intermediate-level module and inter-module code syntax and semantic analysis. Simple requirements/design consistency checking across views.
Very High – All of the above, plus more elaborate requirements/design view consistency checking. Basic distributed-processing and temporal analysis, model checking, symbolic execution.
Extra High – All of the above, plus formalized* specification and verification. Advanced distributed-processing and temporal analysis, model checking, symbolic execution.

* Consistency-checkable pre-conditions and post-conditions, but not mathematical theorems.
Static [Module-Level Code] Analysis
"Static code analysis is the analysis of computer software
that is performed without actually executing programs
built from that software (analysis performed on executing
programs is known as dynamic analysis). In most cases
the analysis is performed on some version of the source
code and in the other cases some form of the object
code. "*
* http://en.wikipedia.org/wiki/Static_code_analysis
Static [Module-Level Code] Analysis
SWEBOK [sans references]
"4.2. Quality Analysis and Evaluation Techniques Various tools and
techniques can help ensure a software design’s quality.
• "Software design reviews: informal or semiformal, often group-based,
techniques to verify and ensure the quality of design artifacts (for
example, architecture reviews, design reviews and inspections,
scenario-based techniques, requirements tracing)
• "Static analysis: formal or semiformal static (nonexecutable) analysis
that can be used to evaluate a design (for example, fault-tree analysis or
automated cross-checking)
• "Simulation and prototyping: dynamic techniques to evaluate a design
(for example, performance simulation or feasibility prototype)"
* Software Engineering Body of Knowledge. Alain Abran, et al., Swebok_Ironman_June_23_2004.pdf, p. 54
Peer Reviews
Very Low – No people reviews.
Low – Ad-hoc informal walkthroughs. Minimal preparation, no follow-up.
Nominal – Well-defined sequence of preparation, review, minimal follow-up. Informal review roles and procedures.
High – Formal review roles and procedures applied to all products using basic checklists*; follow-up.
Very High – Formal review roles and procedures applied to all product artifacts and changes; formal change control boards. Basic review checklists, root cause analysis. Use of historical data on inspection rate, preparation rate, fault density.
Extra High – Formal review roles and procedures for fixes, change control. Extensive review checklists, root cause analysis. Continuous review process improvement. User/customer involvement, statistical process control.

* Checklists are lists of things to look for or check against (e.g., exit criteria)
Syntactic Versus Semantic Checking
Both sentences below are syntactically correct; only one is semantically correct.
• A panda enters the bar, eats shoots and leaves.
• A panda enters the bar, eats, shoots and leaves.
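The same distinction applies to code; an illustrative Python fragment in which both functions parse and run, but only one computes the intended quantity:

```python
# Both lines parse (syntactically correct); only one is semantically correct
# if the intent is a mean with an (n - 1) denominator.
def mean_n_minus_1_right(xs):
    return sum(xs) / (len(xs) - 1)

def mean_n_minus_1_wrong(xs):
    return sum(xs) / len(xs) - 1   # missing parentheses, like the missing comma

print(mean_n_minus_1_right([2, 4, 6]), mean_n_minus_1_wrong([2, 4, 6]))  # 6.0 vs 3.0
```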
Execution Testing and Tools
Very Low – No testing.
Low – Ad-hoc testing and debugging. Basic text-based debugger.
Nominal – Basic unit test, integration test, system test process. Basic test data management, problem tracking support. Test criteria based on checklists.
High – Well-defined test sequence tailored to organization (acceptance, alpha, beta, flight, etc. test). Basic test coverage tools, test support system. Basic test process management.
Very High – More advanced test tools, test data preparation, basic test oracle support, distributed monitoring and analysis, active assertion checking. Metrics-based test process management.
Extra High – Highly advanced tools for test oracles, distributed monitoring and analysis, assertion checking. Integration of automated analysis and test tools. Model-based test process management.
Technique Selection Guidance
“Under specified conditions, …”
• Peer reviews are more effective than functional
testing for faults of omission and incorrect
specification (UMD, USC)
• Functional testing is more effective than reviews
for faults concerning numerical approximations
and control flow (UMD, USC)
Residual Defects Equation
• Estimated Number of Residual Defects:
DRes_Est,j = C_j × DI_Est,j × Π (i = 1..3) (1 − DRF_ij)
– DRes_Est,j = estimated number of residual defects for the jth artifact
– C_j = calibration constant for the jth artifact
– DI_Est,j = estimated number of defects introduced for the jth artifact (output of the DI sub-model)
– i = defect removal profile
– DRF_ij = defect removal fraction for profile i and artifact j
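A minimal sketch of the DR equation, taking C_j = 1 (an assumption) and using the Nominal-rating DRFs for the code artifact from the expert-calibrated defect-density table that follows:

```python
# Hedged sketch of the Defect Removal sub-model:
# DRes_j = C_j * DI_j * product over the three removal profiles of (1 - DRF_ij)
def residual_defects(di, drf, c=1.0):
    """di: defects introduced for one artifact;
       drf: dict profile -> Defect Removal Fraction for that artifact."""
    remaining = c * di
    for fraction in drf.values():
        remaining *= (1.0 - fraction)
    return remaining

# Nominal-rated code artifact, DRFs from the table below
print(residual_defects(30, {"automated_analysis": 0.20,
                            "peer_reviews": 0.48,
                            "execution_testing": 0.58}))   # ~5.2; the table's rounded product (0.17) gives 5.1
```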
Defect Densities from Expert-Judgment Calibrated COQUALMO

Rating       Artifact   Automated   People    Execution     Product      DI/     DRes/
                        Analysis    Reviews   Testing and   (1−DRF_ij)   kSLOC   kSLOC
                        DRF         DRF       Tools DRF
Very Low     Reqts      0.00        0.00      0.00          1.00         10      10
             Design     0.00        0.00      0.00          1.00         20      20
             Code       0.00        0.00      0.00          1.00         30      30
                                                                         Total:  60
Low          Reqts      0.00        0.25      0.23          0.58         10      5.8
             Design     0.00        0.28      0.23          0.55         20      11
             Code       0.10        0.30      0.38          0.39         30      11.7
                                                                         Total:  28.5
Nominal      Reqts      0.10        0.40      0.40          0.32         10      3.2
             Design     0.13        0.40      0.43          0.30         20      6
             Code       0.20        0.48      0.58          0.17         30      5.1
                                                                         Total:  14.3
High         Reqts      0.27        0.50      0.50          0.18         10      1.8
             Design     0.28        0.54      0.54          0.15         20      3
             Code       0.30        0.60      0.69          0.09         30      2.7
                                                                         Total:  7.5
Very High    Reqts      0.34        0.58      0.57          0.14         10      1.4
             Design     0.44        0.70      0.65          0.06         20      1.2
             Code       0.48        0.73      0.78          0.03         30      0.9
                                                                         Total:  3.5
Extra High   Reqts      0.40        0.70      0.60          0.07         10      0.7
             Design     0.50        0.78      0.70          0.03         20      0.6
             Code       0.55        0.83      0.88          0.009        30      0.27
                                                                         Total:  1.57
Validation of Defect Densities
• Average defect density using Jones' data, weighted by the CMM maturity level distribution of 542 organizations, is 13.9 defects/kSLOC
• Average defect density using COQUALMO is 14.3 defects/kSLOC

Residual defect density, 542 organizations from Jones' data (defects/kSLOC):
Leading 1.4    Average 7.5    Lagging 18.3

[Bar chart: percentage of the 542 organizations by CMM maturity level: Initial 66.9%, Repeatable 19.6%, Defined 11.8%, Managed 1.3%, Optimizing 0.4%]

(0.135 × 1.4 + 0.196 × 7.5 + 0.669 × 18.3) = 13.9 defects/kSLOC
An Independent Validation Study
• Aim: to validate that the expert-determined COQUALMO shows the correct trends in defect rates
• Sample Project: Size = 110.5 kSLOC
Defect Introduction results:

Type of    DI     DIR    Quality Adjustment   Baseline   1970's
Artifact                 Factor (QAFj)        DIR        Baseline DIR
Reqts      209    2      0.74                 3          5
Design     1043   9.5    0.74                 13         25
Code       767    7      0.84                 8          15

Defect Removal results:

Type of    Automated       Peer Reviews   Execution Testing    Product    DI/     DRes/
Artifact   Analysis (VL)   (L - VL)       and Tools (H - VH)   (1−DRF)    kSLOC   kSLOC
Reqts      0               0.13           0.54                 0.4        2       0.8
Design     0               0.14           0.61                 0.34       9.5     3.23
Code       0               0.15           0.74                 0.22       7       1.54
                                                                          Total:  5.57
Actual Defect Density = 6 defects/kSLOC
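As a quick check on the defect removal table above: DRes/kSLOC = DI/kSLOC × Π(1 − DRF), i.e. 2 × 0.4 = 0.8 for requirements, 9.5 × 0.34 = 3.23 for design, and 7 × 0.22 = 1.54 for code, giving 5.57 predicted residual defects/kSLOC against the 6 defects/kSLOC actually observed.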
Outline
• Model Framework
• The Defect Introduction Sub-Model
– Expert-Judgment Model + Some Initial Data
Results
• The Defect Removal Sub-Model
– Expert-Judgment Model (Result of
COQUALMO Workshop)
COQUALMO Integrated with COCOMO II
Integrated COQUALMO