R&D SDM 1
Quality and Metrics
How to measure and predict in software engineering?
2010
Theo Schouten
Contents
•Software Quality
•Dimensions and factors
•What are “software metrics”
•Function oriented metrics, LOC
•Estimation
•COCOMO 2
Book chapters 26, 15, 22, 23
(version 7e: 14, 23, 25)
Views on Quality
• transcendental view: immediately recognizable, but not
explicitly definable
• user view: meeting user goals
• manufacturer’s view: conformance to the specification
• product view: tied to inherent characteristics (e.g., functions
and features) of the product
• value-based view: how much a customer is willing to pay
Software:
• Quality of design: encompasses requirements, specifications,
and the design of the system.
• Quality of conformance: focused primarily on implementation.
• User satisfaction = compliant product + good quality +
delivery within budget and schedule
Dissatisfaction
• prof. dr. Marko van Eekelen:
Leven Lang Computeren, Leven Lang Foeteren? (“A lifetime of computing, a lifetime of grumbling?”)
• Finger-pointing:
– customers: you deliver buggy software
– developers: you change your requirements, you
want it too quickly
• use of software in environments it was not designed
for; security
The Software Quality Dilemma
• If you produce a software system that has terrible quality, you
lose because no one will want to buy it.
• If you spend infinite time, extremely large effort, and huge
sums of money to build the absolutely perfect piece of
software, then it's going to take so long to complete and it will
be so expensive that you'll be out of business.
• Either you missed the market window, or you simply
exhausted all your resources.
• So people in industry try to get to that magical middle ground
where the product is good enough not to be rejected right
away, such as during evaluation, but also not the object of so
much perfectionism and so much work that it would take too
long or cost too much to complete. [Ven03]
Cost to find and repair an error
• the cost of finding and repairing an error rises steeply the
later it is discovered
• reduce the number of errors introduced in each phase
• try to find and correct as many of them as possible in the
next phase
Quality definitions
(degree of) conformance to:
•explicitly stated functional and performance requirements
•explicitly documented development standards
•implicit characteristics that are expected of all professionally
developed software
Also:
•an effective software process applied in a manner that
creates a useful product that provides measurable value for
those who produce it and those who use it
Quality Dimensions (Garvin)
• Performance quality. Delivers all content, functions, and
features that are specified.
• Feature quality. Provides features that surprise and delight
first-time end users.
• Reliability. Delivers all features and capabilities without failure.
• Conformance. Conforms to local and external software
standards that are relevant to the application (such as design
and coding conventions, user interface expectations).
• Durability. Can the software be maintained (changed) or
corrected (debugged) without unintended side effects?
• Serviceability. Can it be maintained or corrected in an
acceptably short time period?
• Aesthetics. Elegance, a unique flow, and an obvious “presence”
that are hard to quantify but evident nonetheless.
FURPS, Quality Factors
•Developed at Hewlett-Packard (Grady, Caswell, 1987)
•Functionality:
–Feature set and capability of the system
–Generality of the functions
–Security of the overall system
•Usability:
–Human factors (aesthetics, consistency and documentation)
•Reliability:
–Frequency and severity of failure
–Accuracy of output
–MTTF (mean time to failure)
–Failure recovery and predictability
•Performance:
–Speed, response time, resource consumption, throughput and efficiency
•Supportability:
–Extensibility, Maintainability, Configurability, Etc.
ISO 9126 Quality Factors
6 key quality attributes, each with several sub-attributes
•Functionality
•Reliability
•Usability
•Efficiency
•Maintainability
•Portability
These are often not directly measurable, but they give ideas for
indirect measures and checklists.
defect: the nonfulfilment of intended usage requirements
nonconformity: the nonfulfilment of specified requirements
superseded by the SQuaRE project, ISO/IEC 25000:2005
Software Quality Factors
(McCall et al., 1977):
•Product Operation: Correctness, Reliability, Usability,
Integrity, Efficiency
•Product Revision: Maintainability, Flexibility, Testability
•Product Transition: Portability, Reusability, Interoperability
and subfactors, e.g.
Usability: understandability, learnability and operability
Metrics
•What is a metric?
–“A quantitative measure of the degree to which a system,
component or process possesses a given attribute” (IEEE
Software Engineering Standards, 1993), here applied to
software quality
•Different from:
–Measure (e.g., the size of a system or component): a single
data point
–Indicator: a metric or combination of metrics that provides
insight into a process, project or product
–Measurement: the act of determining a measure
Why important, difficult
Why is measurement important?
•to characterize
•to evaluate
•to predict
•to improve
Why is measurement of a metric difficult?
–No “exact” measure (‘measure the unmeasurable’,
subjective factors)
–Dependent on technical environment
–Dependent on organizational environment
–Dependent on application and ‘fitness for use’
McCall metrics
Metrics that affect or influence the software quality factors:
–Software quality factors are the dependent variables,
metrics the independent variables
–Metrics:
auditability, accuracy, communication commonality, completeness,
consistency, data commonality, error tolerance, execution efficiency,
expandability, generality, hardware independence, instrumentation,
modularity, operability, security, self-documentation, simplicity,
software system independence, traceability, training.
–Software quality factor: Fq = c1·m1 + c2·m2 + … + cn·mn
–the ci are regression coefficients based on empirical data
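A minimal sketch of how such a regression-based factor score could be computed; the metric scores and coefficients below are illustrative placeholders, not McCall's calibrated data:

```python
def quality_factor(metric_scores, coefficients):
    """Compute Fq = c1*m1 + c2*m2 + ... + cn*mn."""
    return sum(c * m for c, m in zip(coefficients, metric_scores))

scores = [7, 5, 6]        # hypothetical metric scores on a 0..10 scale
coeffs = [0.4, 0.3, 0.3]  # assumed regression coefficients
print(round(quality_factor(scores, coeffs), 2))  # 0.4*7 + 0.3*5 + 0.3*6 = 6.1
```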
McCall Matrix
ISO 9126 also provides a basis for indirect measurements and a
checklist for assessing the quality of a system.
Quantitative Metrics
Desired attributes of Metrics (Ejiogu, 1991)
–Simple and computable
–Empirical and intuitively persuasive
–Consistent and objective
–Consistent in the use of units and dimensions
–Independent of programming language, so directed at models (analysis,
design, test, etc.) or structure of program
–Effective mechanism for quality feedback
Type of Metrics:
–Size oriented
•Focused on the size of the software (Lines of Code, errors, defects, size
of documentation, etc.)
•independent of programming language?
–Function oriented
•Focused on the realization of a function of a system
Function Oriented Metrics
Function Point Analysis (FPA)
a method for measuring the functions an information system delivers,
from the perspective of an end user
–on the basis of a functional specification
–the method is independent of programming language and operational environment
–its empirical parameters are not
–Usable for
•estimate cost or effort to design, code and test the software
•predict number of components or Lines of Code
•predict the number of errors encountered during testing
•determining a ‘productivity measure’ after delivery
FPA: Count System Attributes
• Count the number of each ‘system attribute’:
1. User (Human or other system) External Inputs (EI)
2. User External Outputs (EO)
3. User External Inquiries (EQ)
4. Internal Logical Master Files (MF)
5. Interfaces to other systems (IF)
[Diagram: external users exchange EI, EO and EQ transactions with the
system's internal logical master files (MFs); interface (IF) transactions
cross the system environment boundary to other systems.]
FPA: Weighting System Attributes
•Determine for each system attribute how complex it is:
–Low
–Medium
–High
•Use the following matrix to determine the weighting factor:
System Attribute            Low   Medium   High
User External Input           3        4      6
User External Output          4        5      7
User External Inquiry         3        4      6
User Logical Master File      7       10     15
Interfaces                    5        7     10
•Calculate the weighted sum of the system attributes:
the ‘Unadjusted Function Points’ (UFP)
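A small sketch of the UFP calculation using the weight matrix above; the attribute counts in the example are hypothetical:

```python
# Weight matrix from the slide: (low, medium, high) per system attribute.
WEIGHTS = {
    "EI": (3, 4, 6),    # User External Input
    "EO": (4, 5, 7),    # User External Output
    "EQ": (3, 4, 6),    # User External Inquiry
    "MF": (7, 10, 15),  # User Logical Master File
    "IF": (5, 7, 10),   # Interfaces
}
LEVEL = {"low": 0, "medium": 1, "high": 2}

def unadjusted_fp(counts):
    """counts: (attribute, complexity, number) tuples; returns the UFP."""
    return sum(WEIGHTS[attr][LEVEL[cplx]] * n for attr, cplx, n in counts)

# Hypothetical system: 10 simple inputs, 5 medium outputs, etc.
counts = [("EI", "low", 10), ("EO", "medium", 5),
          ("EQ", "low", 4), ("MF", "high", 2), ("IF", "medium", 1)]
print(unadjusted_fp(counts))  # 3*10 + 5*5 + 3*4 + 15*2 + 7*1 = 104
```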
FPA: Value Adjustment
•The UFP needs to be adjusted to the environment in which the system has to
operate. The ‘degree of influence’, a value between 0 and 5, is determined for
each of the 14 ‘value adjustment’ factors:
–Data Communications
–Distributed Processing
–Performance Objectives
–Tight Configuration
–Transaction Volume
–On-line Data Entry
–End User Efficiency
–Logical File Updates
–Complex Processing
–Design for Re-usability
–Conversion and Installation Ease
–Multiple Site Implementation
–Ease of Change and Use
FPA: final function points
1. Sum the ‘degrees of influence’ (DI) over the factors (0-70)
2. Determine the Value Adjustment: VA = 0.65 + 0.01 × DI
(range 0.65-1.35)
3. Function Point Index: FPI = VA × UFP
Historical data can then be used, e.g.
• 1 FP -> 60 lines of code in an object-oriented language
• 12 FPs are produced in 1 person-month
• 3 errors per FP are found during analysis, etc.
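Continuing the UFP sketch above: the adjustment plus the historical rules of thumb from this slide, with hypothetical influence ratings:

```python
def function_point_index(ufp, influence_ratings):
    """influence_ratings: one 0..5 'degree of influence' per factor."""
    di = sum(influence_ratings)  # 0..70
    va = 0.65 + 0.01 * di        # 0.65..1.35
    return va * ufp

ratings = [3, 2, 4, 1, 3, 5, 2, 3, 4, 1, 2, 3, 2, 3]  # assumed DI scores
fpi = function_point_index(104, ratings)  # DI = 38 -> VA = 1.03
print(f"FPI: {fpi:.1f}")                         # 107.1
print(f"~{fpi * 60:.0f} LOC (OO language)")      # 1 FP -> 60 LOC
print(f"~{fpi / 12:.1f} person-months")          # 12 FPs per person-month
print(f"~{fpi * 3:.0f} errors during analysis")  # 3 errors per FP
```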
Other metrics
• chapter 15 (7e: 23): many product metrics
• chapter 22 (7e: 25): many process and project metrics
• chapter 23 (7e: 26) how to use them in estimation
– effort is some function of the metric
Lines Of Code
• What's a line of code?
– The measure was first proposed when programs were typed
on cards, with one line per card.
– How does this correspond to statements in a language such
as Java, where one statement can span several lines or
several statements can share one line?
• What programs should be counted as part of the system?
• The count depends on the programming language and on the
way LOCs are counted.
Estimation based on LOC’s
• Determine the functional parts of the system
• Estimate the LOC per part, using experience and
historical data
• Multiply by the average productivity for this kind of
system (and/or part), e.g. 620 LOC/month; a worked
example follows below
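A minimal worked example of this procedure; the part sizes are entirely hypothetical:

```python
# Hypothetical LOC estimates per functional part of the system.
estimated_loc = {"user interface": 4200, "database layer": 3100,
                 "business logic": 6800, "reporting": 2300}
total_loc = sum(estimated_loc.values())  # 16400 LOC

productivity = 620  # LOC per person-month (figure from the slide)
print(f"{total_loc} LOC -> {total_loc / productivity:.1f} person-months")  # ~26.5
```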
Same for FP or Object Points
• Object points (alternatively named application points)
are an alternative function-related measure to
function points when 4GLs or similar languages are
used for development.
• Object points are NOT the same as object classes.
• The number of object points in a program is a
weighted estimate of
– The number of separate screens that are displayed;
– The number of reports that are produced by the
system;
– The number of program modules that must be
developed to supplement the database code;
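A sketch of the weighted object-point count. The complexity weights below follow the commonly published COCOMO II application-composition values (screens 1-3, reports 2-8, 3GL modules 10), but treat them as assumptions here:

```python
# Assumed complexity weights (COCOMO II application composition).
WEIGHTS = {"screen": {"simple": 1, "medium": 2, "difficult": 3},
           "report": {"simple": 2, "medium": 5, "difficult": 8},
           "3gl_module": {"any": 10}}

def object_points(items):
    """items: (kind, complexity, count) tuples; returns the weighted sum."""
    return sum(WEIGHTS[kind][cplx] * n for kind, cplx, n in items)

# Hypothetical application: 8 simple screens, 3 difficult reports, 2 modules.
print(object_points([("screen", "simple", 8),
                     ("report", "difficult", 3),
                     ("3gl_module", "any", 2)]))  # 8 + 24 + 20 = 52
```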
Algorithmic cost modelling
• Cost is estimated as a mathematical function of product,
project and process attributes estimated by project managers:
– Effort = A × Size^B × M
– A is an organisation-dependent constant, B reflects the
disproportionate effort for large projects, and M is a
multiplier reflecting product, process and people attributes.
• The most commonly used product attribute is code size. When
using FP, product attributes are (partly) contained in the FP
• Most models are similar but they use different values for A, B
and M.
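As a quick illustration with assumed values A = 2.94, B = 1.1 and M = 1.0, a 10 KLOC product gives Effort = 2.94 × 10^1.1 × 1.0 ≈ 2.94 × 12.6 ≈ 37 person-months.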
The Software Equation
• A dynamic multivariable model, developed in 1992,
based on productivity data from 4000 projects
• E = [LOC × B^0.333 / P]^3 × (1/t^4)
• E = effort in person-months or person-years (use units
consistent with t)
• t = project duration in months or years
• B = “special skills factor”: increases with the need for
integration, testing, etc., and with LOC
• P = “productivity parameter”
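A sketch of this model in code. The values of B, P and the schedule are illustrative assumptions; note that with t in years the effort comes out in person-years:

```python
def software_equation_effort(loc, b, p, t):
    """Putnam's software equation: E = (LOC * B**0.333 / P)**3 / t**4.

    Use consistent units: with t in years, E is in person-years.
    """
    return (loc * b ** 0.333 / p) ** 3 / t ** 4

# Hypothetical project: 33,200 LOC, B = 0.28, P = 12,000, ~1.05-year schedule.
e = software_equation_effort(33_200, 0.28, 12_000, 1.05)
print(f"{e:.1f} person-years (~{e * 12:.0f} person-months)")
```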
The COCOMO model
• COnstructive COst Model
• An empirical model based on project experience
• Supported by software tools
• Well-documented, ‘independent’ model which is not
tied to a specific software vendor
• Long history from initial version published in 1981
(algorithmic cost model) through various
instantiations to COCOMO 2.
• COCOMO 2 takes into account different approaches
to software development, reuse, etc.
COCOMO 2 models
• COCOMO 2 incorporates a range of sub-models that
produce increasingly detailed software estimates:
• Application composition model. Used when software
is composed from existing parts.
• Early design model. Used when requirements are
available but design has not yet started.
• Reuse model. Used to compute the effort of
integrating reusable components.
• Post-architecture model. Used once the system
architecture has been designed and more information
about the system is available.
Early design model
• Estimates can be made after the requirements have
been agreed.
• Based on a standard formula for algorithmic models
– PM = A × Size^B × M, where
– A = 2.94 in the initial calibration; Size in KLOC; B
varies from 1.1 to 1.24 depending on the novelty of
the project, development flexibility, risk
management approaches and the process maturity
– M = PERS × RCPX × RUSE × PDIF × PREX ×
FCIL × SCED
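A sketch of this computation in code. The size, exponent and multiplier ratings below are invented for illustration (a nominal rating corresponds to a multiplier of 1.0):

```python
import math

def early_design_effort(kloc, b, multipliers, a=2.94):
    """PM = A * Size**B * M, with M the product of the seven multipliers."""
    m = math.prod(multipliers.values())
    return a * kloc ** b * m

# Hypothetical 40 KLOC project, B = 1.17, mostly nominal ratings:
mult = {"PERS": 0.85, "RCPX": 1.10, "RUSE": 1.0, "PDIF": 1.0,
        "PREX": 1.15, "FCIL": 1.0, "SCED": 1.0}
print(round(early_design_effort(40, 1.17, mult), 1))  # ~237 person-months
```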
Multipliers
• Multipliers reflect the capability of the developers,
the non-functional requirements, the familiarity with
the development platform, etc.
– RCPX - product reliability and complexity;
– RUSE - the reuse required;
– PDIF - platform difficulty;
– PREX - personnel experience;
– PERS - personnel capability;
– SCED - required schedule;
– FCIL - the team support facilities.
Post-architecture level
• Uses the same formula as the early design model but with 17
rather than 7 associated multipliers.
– Project attributes
– Product attributes
– Computer attributes, constraints imposed by the platform
– Personnel attributes
• The code size is estimated as the sum of:
– LOC of new code to be developed;
– equivalent number of lines of new code computed using the
reuse model;
– LOC that have to be modified according to requirements
changes.
The exponent term
The exponent B depends on 5 scale factors; their sum divided by 100
is added to 1.01 (a computation sketch follows below).

Precedentedness: reflects the previous experience of the organisation
with this type of project. Very low means no previous experience;
Extra high means that the organisation is completely familiar with
this application domain.

Development flexibility: reflects the degree of flexibility in the
development process. Very low means a prescribed process is used;
Extra high means that the client only sets general goals.

Architecture/risk resolution: reflects the extent of risk analysis
carried out. Very low means little analysis; Extra high means a
complete and thorough risk analysis.

Team cohesion: reflects how well the development team members know
each other and work together. Very low means very difficult
interactions; Extra high means an integrated and effective team with
no communication problems.

Process maturity: reflects the process maturity of the organisation.
The computation of this value depends on the CMM Maturity
Questionnaire, but an estimate can be achieved by subtracting the
CMM process maturity level from 5.
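A sketch of deriving the exponent from the five ratings. The 0-5 score per rating level is an assumption consistent with the CMM hint above (real COCOMO II calibrations assign slightly different increments per factor):

```python
# Assumed score per rating level (Extra high = 0 ... Very low = 5).
RATING_SCORE = {"very_low": 5, "low": 4, "nominal": 3,
                "high": 2, "very_high": 1, "extra_high": 0}

def exponent(ratings):
    """B = 1.01 + (sum of the five scale-factor scores) / 100."""
    return 1.01 + sum(RATING_SCORE[r] for r in ratings.values()) / 100

b = exponent({"precedentedness": "low",
              "development_flexibility": "nominal",
              "architecture_risk_resolution": "high",
              "team_cohesion": "very_high",
              "process_maturity": "nominal"})  # CMM level 2 -> 5 - 2 = 3
print(round(b, 2))  # 1.01 + (4+3+2+1+3)/100 = 1.14
```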
Remarks
• Metrics are used to get a view on quality
– also to predict cost, effort and time
– also to improve maturity level of company
• historical data is needed
– reuse of components
• experience of software engineers and managers very
important
• research trend: model based development
• rework is needed for new system possibilities: the
internet (security), distributed systems, multi-core and
parallel computing, the cloud, etc.
Final
• not that much theoretical guidance for managers
• often an “externally” determined end date, cost,
manpower
• but:
– work fills up the available time
– “nice to have” features can be left out
– less documentation can be produced