Transparency Masters for Software Engineering: A Practitioner's Approach


Software Engineering: A Practitioner’s Approach

Chapter 25

Process and Project Metrics

copyright © 1996, 2001, 2005

R.S. Pressman & Associates, Inc.

For University Use Only

May be reproduced ONLY for student use at the university level when used in conjunction with Software Engineering: A Practitioner's Approach.

Any other reproduction or use is expressly prohibited.

1

Until you can measure something and express it in numbers, you have only the beginning of understanding.

- Lord Kelvin

2

Until you can measure something and express it in numbers, you have only the beginning of understanding.

- Lord Kelvin

Non-software metrics you use everyday:

- Gas tank scale

- Speedometer

- Thermostat in your house

- Battery monitor in your laptop

What would happen if instead these were not numeric?

What other examples do you have?

Gas Tank

☐ A Lot

☐ Some

☐ A little

3

Until you can measure something and express it in numbers, you have only the beginning of understanding.

- Lord Kelvin

The problem is that non-numeric measurements are subjective… they mean different things to different people. Numbers are objective… they mean the same thing to everyone!

When your friend says "oh yeah, John/Jane Doe is super attractive, you should go out with him/her"… that is a subjective measurement… so you ask, "Send me a photo." Why? Because the subjective term "super attractive" has vastly different meanings for different people!

4

Coming up: Metrics for software

Metrics for software

- When asked to measure something, always try to determine an objective measurement. If not possible, try to get as close as you can!

Coming up: A Good Manager Measures

5

A Good Manager Measures

(Diagram - measurement spans the process and the product: process metrics, project metrics, and product metrics.)

What do we use as a basis?

• size?

• function?

Coming up: We need a basis to say 20 defects per X lines of code. Why is this important?

6

We need a basis to say 20 defects per X lines of code. Why is this important?

A Because lines of code equals cost

B We want our metrics to be valid across projects of many sizes

C Because you just caused me to die in God of War III… stop asking these questions!

D Because this helps us understand how big our program is

7

Coming up: Why Do We Measure?

Why Do We Measure?

- assess the status of an ongoing project
- track potential risks
- uncover problem areas before they go "critical"
- adjust work flow or tasks
- evaluate the project team's ability to control quality of software work products

Coming up: Process versus Project Metrics

8

Process versus Project Metrics

- Process Metrics - Measure the process to help update and change the process as needed across many projects

- Project Metrics - Measure specific aspects of a single project to improve the decisions made on that project

Frequently the same measurements can be used for both purposes

Coming up: Process Measurement

9

Process Measurement

We measure the efficacy of a software process indirectly.

That is, we derive a set of metrics based on the outcomes of the process

Outcomes include

- measures of errors uncovered before release of the software
- defects delivered to and reported by end-users
- work products delivered (productivity)
- human effort expended
- calendar time expended
- schedule conformance
- many others…

We also derive process metrics by measuring the characteristics of specific software engineering tasks.

10

Coming up: Process Metrics Guidelines

Process Metrics Guidelines

Use common sense and organizational sensitivity when interpreting metrics data.

Provide regular feedback to the individuals and teams who collect measures and metrics.

Don’t use metrics to appraise individuals.

Work with practitioners and teams to set clear goals and metrics that will be used to achieve them.

Never use metrics to threaten individuals or teams.

Metrics data that indicate a problem area should not be considered "negative." These data are merely an indicator for process improvement.

Don’t obsess on a single metric to the exclusion of other important metrics.

11

Coming up: Suppose I calculate the number of defects per developer, rank the developers, and then assign salary raises based on that rank.

Suppose I calculate the number of defects per developer, rank the developers, and then assign salary raises based on that rank.

A. This is good

B. This is bad

12

Coming up: Software Process Improvement

Software Process Improvement

(Diagram - Software Process Improvement: process model, improvement goals, process metrics, SPI, process improvement recommendations.)

Coming up: Typical Process Metrics

Make your metrics actionable!

13

Typical Process Metrics

Quality-related
- focus on quality of work products and deliverables

Productivity-related
- production of work products related to effort expended

Statistical SQA data
- error categorization & analysis

Defect removal efficiency
- propagation of errors from process activity to activity

Reuse data

Related quality measures: maintainability, integrity, severity of errors (1-5), MTTF (mean time to failure), MTTR (mean time to repair)

Within a single project this can also be a “project metric”. Across projects this is a “process metric”.

14

Coming up: Can you calculate a metric that records the number of 'e's that appear in a program? A. Yes B. No

Can you calculate a metric that records the number of 'e's that appear in a program?

A. Yes

B. No

Should you calculate the number of 'e's in a program?

A. Yes

B. No

Coming up: Effective Metrics (ch 16)

15

Effective Metrics (ch 16)

Simple and computable

Empirically and intuitively persuasive

Consistent and objective

Consistent in use of units and dimensions

Programming language independent

Should be actionable

Coming up: Effective Metrics- Baseball On Base Percentage

16

Effective Metrics - Baseball On-Base Percentage

Simple and computable

Empirically and intuitively persuasive

Consistent and objective

Consistent in use of units and dimensions

Programming language independent

Should be actionable

Coming up: Effective Metrics- Baseball Runs Batted In

17

Effective Metrics- Baseball Runs Batted In

Count of times when the outcome of a player's at-bat results in a run being scored

Simple and computable

Empirically and intuitively persuasive

Consistent and objective

Consistent in use of units and dimensions

Programming language independent

Should be actionable

Coming up: Actionable Metrics

18

Actionable Metrics

Actionable metrics (or information in general) are metrics that guide change or decisions about something

- Actionable: Measure the amount of human effort versus use cases completed.
  - Too high: more training, more design, etc…
  - Very low: maybe we can shorten the schedule

- Not actionable: Measure the number of times the letter "e" appears in code

Coming up: Project Metrics

Think before you measure. Don’t waste people’s time!

19

Project Metrics

- used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks
- used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality

every project should measure:

Inputs —measures of the resources (e.g., people, tools) required to do the work.

Outputs —measures of the deliverables or work products created during the software engineering process.

Results —measures that indicate the effectiveness of the deliverables.

20

Coming up: Typical Project Metrics

Typical Project Metrics

Effort/time per software engineering task

Errors uncovered per review hour

Scheduled vs. actual milestone dates

Changes (number) and their characteristics

Distribution of effort on software engineering tasks

Actionable: What do you do if it’s too high/low?

Coming up: Metrics Guidelines

21

Metrics Guidelines

Use common sense and organizational sensitivity when interpreting metrics data.

Provide regular feedback to the individuals and teams who have worked to collect measures and metrics.

Don’t use metrics to appraise individuals.

Work with practitioners and teams to set clear goals and metrics that will be used to achieve them.

Never use metrics to threaten individuals or teams.

Metrics data that indicate a problem area should not be considered "negative." These data are merely an indicator for process improvement.

Don’t obsess on a single metric to the exclusion of other important metrics.

Same as process metrics guidelines

Coming up: Typical Size-Oriented Metrics

22

Typical Size-Oriented Metrics

- errors per KLOC (thousand lines of code)
- defects per KLOC
- $ per LOC
- pages of documentation per KLOC
- errors per person-month
- errors per review hour
- LOC per person-month
- $ per page of documentation
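A minimal sketch of how these size-normalized numbers might be computed from raw project totals; the function and the sample figures below are illustrative, not from the slides:

    # Illustrative sketch: size-oriented metrics from raw project totals.
    # All names and sample values here are hypothetical.
    def size_oriented_metrics(loc, errors, defects, cost_dollars, doc_pages, person_months):
        kloc = loc / 1000.0
        return {
            "errors per KLOC": errors / kloc,
            "defects per KLOC": defects / kloc,
            "$ per LOC": cost_dollars / loc,
            "pages of documentation per KLOC": doc_pages / kloc,
            "errors per person-month": errors / person_months,
            "LOC per person-month": loc / person_months,
        }

    # Example: a 12,000-LOC project with 130 errors, 30 delivered defects,
    # $168,000 of cost, 360 pages of documentation, and 24 person-months.
    print(size_oriented_metrics(12000, 130, 30, 168000, 360, 24))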

Coming up: Typical Function-Oriented Metrics

23

Typical Function-Oriented Metrics

- errors per Function Point (FP)
- defects per FP
- $ per FP
- pages of documentation per FP
- FP per person-month

Coming up: But.. What is a Function Point?

24

But.. What is a Function Point?

See Book section 23.2.1 for more detail

Function points (FP) are a unit measure for software size developed at IBM by Allan Albrecht in 1979

To determine your number of FPs, you classify a system’s features into five classes:

- Transactions: External Inputs, External Outputs, External Inquiries
- Data storage: Internal Logical Files and External Interface Files

Each class is then weighted by complexity as low/average/high

The weighted total is multiplied by a value adjustment factor (determined by asking questions based on 14 system characteristics)

Coming up: But.. What is a Function Point?

25

But.. What is a Function Point?

                            Count   Low   Average   High   Total
External Inputs                     x3    x4        x6
External Outputs                    x4    x5        x7
External Inquiries                  x3    x4        x6
Internal Logical Files              x7    x10       x15
External Interface Files            x5    x7        x10

Unadjusted Total:

Value Adjustment Factor:

Total Adjusted Value:
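A minimal sketch of the arithmetic behind this table, assuming the weights shown above and the commonly used adjustment formula VAF = 0.65 + 0.01 x (sum of the 14 characteristic ratings); the example counts and ratings are made up:

    # Sketch of a function point count using the weights from the table above.
    # The feature counts and characteristic ratings below are hypothetical.
    WEIGHTS = {
        "External Inputs":          {"low": 3, "average": 4,  "high": 6},
        "External Outputs":         {"low": 4, "average": 5,  "high": 7},
        "External Inquiries":       {"low": 3, "average": 4,  "high": 6},
        "Internal Logical Files":   {"low": 7, "average": 10, "high": 15},
        "External Interface Files": {"low": 5, "average": 7,  "high": 10},
    }

    def unadjusted_fp(counts):
        """counts: {class name: {complexity: number of items}}"""
        return sum(WEIGHTS[cls][cplx] * n
                   for cls, by_cplx in counts.items()
                   for cplx, n in by_cplx.items())

    def adjusted_fp(ufp, characteristics):
        """characteristics: the 14 system characteristic ratings, each 0-5."""
        vaf = 0.65 + 0.01 * sum(characteristics)   # value adjustment factor
        return ufp * vaf

    counts = {
        "External Inputs":          {"average": 6},
        "External Outputs":         {"low": 4},
        "External Inquiries":       {"average": 3},
        "Internal Logical Files":   {"high": 2},
        "External Interface Files": {"low": 1},
    }
    ufp = unadjusted_fp(counts)              # 24 + 16 + 12 + 30 + 5 = 87
    print(ufp, adjusted_fp(ufp, [3] * 14))   # VAF = 1.07 -> about 93 adjusted FP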

Coming up: Function Point Categories

26

Function Point Categories

External Inputs (EI) - Info coming into the system

External Outputs (EO) - provides derived info out of a system (use ILF and EIF to create information)

External Inquiries - provides non-derived information out of the system (echoes back EI)

Internal Logical Files (ILF) - Think structures in RAM, datafiles ONLY updated based on EI

External Interface Files (EIF) - Think databases or datafiles that are updated not only by this code but also by other systems

27

Coming up: Function Point Example

Function Point Example

http://www.his.sunderland.ac.uk/~cs0mel/Alb_Example.doc

28

Coming up: Comparing LOC and FP

Comparing LOC and FP

Programming Language      LOC per Function Point
                          avg    median   low    high
Ada                       154      -      104    205
Assembler                 337     315      91    694
C                         162     109      33    704
C++                        66      53      29    178
COBOL                      77      77      14    400
Java                       63      53      77      -
JavaScript                 58      63      42     75
Perl                       60       -       -      -
PL/1                       78      67      22    263
Powerbuilder               32      31      11    105
SAS                        40      41      33     49
Smalltalk                  26      19      10     55
SQL                        40      37       7    110
Visual Basic               47      42      16    158

Representative values developed by QSM
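As a rough illustration of how these conversion factors get used, here is a sketch that turns a function point estimate into an expected LOC figure per language using the average column above; the 350-FP system is hypothetical:

    # Sketch: average LOC-per-FP factors (from the table above) applied to a
    # hypothetical function point estimate.
    LOC_PER_FP_AVG = {"C": 162, "C++": 66, "Java": 63, "Smalltalk": 26, "Visual Basic": 47}

    def estimated_loc(function_points, language):
        return function_points * LOC_PER_FP_AVG[language]

    for lang in ("C++", "Java", "Smalltalk"):
        print(lang, estimated_loc(350, lang))   # 23100, 22050, 9100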

Coming up: At IBM in the 70s or 80s (I don’t remember) they paid people per line-of-code they wrote

29

At IBM in the 70s or 80s (I don't remember) they paid people per line of code they wrote

What happened?

A. The best programmers got paid the most

B. The worst programmers got paid the most

C. The sneakiest programmers got paid the most

D. The lawyers got paid the most

30

Coming up: Why opt against LOC?

Why opt against LOC?

LOC is not programming language independent

LOC cannot be estimated from readily countable characteristics available early in the software process

LOC "penalizes" inventive (short) implementations that use fewer LOC than other, clumsier versions

Other metrics make it easier to measure the impact of reusable components (screens, widgets, etc…)

- Other options: COCOMO, Planning Poker, SLIM, Story Points (remember Scrum?), many others…

Coming up: Object-Oriented Metrics

31

Object-Oriented Metrics

Number of scenario scripts (use-cases)

Number of support classes (required to implement the system but are not immediately related to the problem domain)

Average number of support classes per key class (analysis class)

Number of subsystems (an aggregation of classes that support a function that is visible to the end-user of a system)

Coming up: Web Engineering Project Metrics

32

Web Engineering Project Metrics

Number of static Web pages (the end-user has no control over the content displayed on the page)

Number of dynamic Web pages (end-user actions result in customized content displayed on the page)

Number of internal page links (internal page links are pointers that provide a hyperlink to some other Web page within the WebApp)

Number of persistent data objects

Number of external systems interfaced

Number of static content objects

Number of dynamic content objects

Number of executable functions

33

Coming up: Measuring Quality

Measuring Quality

Correctness - the degree to which a program operates according to specification

Maintainability - the degree to which a program is amenable to change
- measured via MTTC (mean time to change): the time to analyze, design, implement, and deploy a change

Integrity - the degree to which a program is impervious to outside attack
- Integrity = 1 - (threat * (1 - security))
- E.g. threat = 0.25, security = 0.95 --> integrity ≈ 0.99

Usability - the degree to which a program is easy to use

Many options. See Ch. 12
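A one-line version of that integrity calculation, just to make the arithmetic explicit (the threat and security values are the ones from the example above):

    # Integrity = 1 - (threat * (1 - security)), per the formula above.
    def integrity(threat, security):
        return 1 - threat * (1 - security)

    print(integrity(0.25, 0.95))   # 0.9875, i.e. roughly 0.99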

34

Coming up: Defect Removal Efficiency

Defect Removal Efficiency

DRE = E /( E + D )

E is the number of errors found before delivery of the software to the end-user

D is the number of defects found after delivery.
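A minimal sketch of the calculation, plus the per-activity variant used in the worked example on the next slide (each activity's DRE is computed against the errors that slip through to the next activity); the phase counts are the ones from that example, the 410/10 overall figures are illustrative:

    # Sketch: overall DRE and a per-activity DRE chain.
    def dre(errors_before_delivery, defects_after_delivery):
        """DRE = E / (E + D)."""
        return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

    print(f"{dre(410, 10):.0%}")   # e.g. 410 errors found in-house, 10 field defects -> 98%

    # Per-activity variant, using the counts from the worked example:
    found = [("Implementation", 5), ("Unit Testing", 50), ("Integration Testing", 100),
             ("System Testing", 250), ("Acceptance Testing", 5), ("By Customer", 10)]
    for (phase, e_i), (_, e_next) in zip(found, found[1:]):
        print(f"{phase}: DRE = {e_i / (e_i + e_next):.0%}")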

Coming up: Defect Removal Efficiency

35

Defect Removal Efficiency

DRE = E /( E + D )

Defects found during each phase:

Requirements (10)
Design (20)
Construction
- Implementation (5):        DRE = 5 / (5 + 50) ≈ 9%
- Unit Testing (50):         DRE = 50 / (50 + 100) ≈ 33%
Testing
- Integration Testing (100): DRE = 100 / (100 + 250) ≈ 29%
- System Testing (250):      DRE = 250 / (250 + 5) ≈ 98%
- Acceptance Testing (5):    DRE = 5 / (5 + 10) ≈ 33%
By Customer (10)

Coming up: Metrics for Small Organizations

36

Metrics for Small Organizations

- time (hours or days) elapsed from the time a request is made until evaluation is complete, t_queue
- effort (person-hours) to perform the evaluation, W_eval
- time (hours or days) elapsed from completion of evaluation to assignment of change order to personnel, t_eval
- effort (person-hours) required to make the change, W_change
- time required (hours or days) to make the change, t_change
- errors uncovered during work to make the change, E_change
- defects uncovered after the change is released to the customer base, D_change
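A minimal sketch of the derived indicators a small organization might compute from these raw measures; the indicator names and sample values are illustrative, not from the slides:

    # Sketch: derived indicators from the raw change-request measures above.
    # Times are in days, effort in person-hours; the sample values are made up.
    def change_indicators(t_queue, w_eval, t_eval, w_change, t_change, e_change, d_change):
        return {
            "elapsed time, request to change complete (days)": t_queue + t_eval + t_change,
            "total effort (person-hours)": w_eval + w_change,
            "DRE for the change": e_change / (e_change + d_change),
        }

    print(change_indicators(t_queue=3, w_eval=6, t_eval=2,
                            w_change=20, t_change=4, e_change=4, d_change=1))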

37

Coming up: Establishing a Metrics Program

Establishing a Metrics Program

Set Goals
- Identify your business goals.
- Identify what you want to know or learn.
- Identify your subgoals.
- Identify the entities and attributes related to your subgoals.
- Formalize your measurement goals.

Determine indicators for goals
- Identify quantifiable questions and the related indicators that you will use to help you achieve your measurement goals.
- Identify the data elements that you will collect to construct the indicators that help answer your questions.

Define Measurements
- Define the measures to be used, and make these definitions operational.
- Identify the actions that you will take to implement the measures.
- Prepare a plan for implementing the measures.

38

Coming up: Metrics give you information!

Metrics give you information!

Metrics about your process help you determine if you need to make changes or if your process is working

Metrics about your project do the same thing

Metrics about your software can help you understand it better, and see where possible problems may lurk. Let’s see the complexity measurement (after a few questions…)

39

Coming up: Questions

Questions

What are some reasons NOT to use lines of code to measure size?

What do you expect the DRE rate will be for the implementation (or construction) phase of the software lifecycle?

What about for testing?

Give an example of a usability metric.

According to the chart, Smalltalk is much more efficient than Java and C++. Why don’t we use it for everything?

Coming up: References

40

Code Metrics

- Previously we have been discussing product and process metrics.
- However, code metrics are used to make your code better.
- Complexity - see the Cyclomatic Complexity slides
- Dynamic measurements - Profile Jkaboom
  - Memory - garbage collector, object creations
  - CPU performance
  - Threading

Coming up: Questions

41

References

- http://www.lansing.lib.il.us/images/baseball_player.gif

End of presentation

42
