SE combined slides

Disclaimer-Software Engineering
Slide set to accompany
Software Engineering: A Practitioner's Approach, 7/e, by Roger S. Pressman,
and Software Engineering (3rd ed.), by K.K. Aggarwal & Yogesh Singh.
Slides copyright © 1996, 2001, 2005, 2009 by Roger S. Pressman; Software Engineering (3rd ed.) copyright © New Age International Publishers, 2007.
For non-profit educational use only.
May be reproduced ONLY for student use at the university level when used in conjunction with Software Engineering: A Practitioner's Approach, 7/e, and Software Engineering (3rd ed.), by K.K. Aggarwal & Yogesh Singh. Any other reproduction or use is prohibited without the express written permission of the author.
All copyright information MUST appear if these slides are posted on a website for student use.

Chapter 2
Software Life Cycle Models

"The period of time that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase."

A Generic Process Model

[Figure: a generic process model, showing framework activities organized by a process flow.]

Process Flow

[Figure: process flow variants: linear, iterative, evolutionary, and parallel.]

Identifying a Task Set

A task set defines the actual work to be done to accomplish the objectives of a software engineering action:
• a list of the tasks to be accomplished
• a list of the work products to be produced
• a list of the quality assurance filters to be applied

Prescriptive Models

Prescriptive process models advocate an orderly approach to software engineering. That leads to a few questions:
• If prescriptive process models strive for structure and order, are they inappropriate for a software world that thrives on change?
• Yet, if we reject traditional process models (and the order they imply) and replace them with something less structured, do we make it impossible to achieve coordination and coherence in software work?

The Waterfall Model

[Figure: the waterfall model: Communication (project initiation, requirements gathering), Planning (estimating, scheduling, tracking), Modeling (analysis, design), Construction (code, test), Deployment (delivery, support, feedback).]

Waterfall Model

Problems of the waterfall model:
i. It is difficult to define all requirements at the beginning of a project.
ii. The model is not suitable for accommodating change.
iii. A working version of the system is not seen until late in the project's life.
iv. It does not scale up well to large projects.
v. Real projects are rarely sequential.

The Incremental Model

[Figure: the incremental model: each increment runs Communication, Planning, Modeling (analysis, design), Construction (code, test), and Deployment (delivery, feedback); increments #1, #2, ... #n are delivered in sequence over project calendar time.]

Incremental Process Models

• They are effective in situations where requirements are defined precisely and there is no confusion about the functionality of the final product.
• After every cycle a usable product is given to the customer.
• They are particularly popular when a limited-functionality system must be delivered quickly.

Evolutionary Models: Prototyping

[Figure: the prototyping cycle: communication, quick plan, modeling (quick design), construction of prototype, deployment, delivery & feedback.]

Evolutionary Process Model

The evolutionary process model resembles the iterative enhancement model. The same phases as defined for the waterfall model occur here in a cyclical fashion. This model differs from the iterative enhancement model in that it does not require a usable product at the end of each cycle. In evolutionary development, requirements are implemented by category rather than by priority.

Evolutionary Process Model

This model is useful for projects using new technology that is not well understood. It is also used for complex projects where all functionality must be delivered at one time, but the requirements are unstable or not well understood at the beginning.

Prototype Model

• The prototype may be a usable program but is not suitable as the final software product.
• The code for the prototype is thrown away; however, the experience gathered helps in developing the actual system.
• The development of a prototype might involve extra cost, but the overall cost might turn out to be lower than that of an equivalent system developed using the waterfall model.

Spiral Model

• Earlier models do not deal with uncertainty, which is inherent to software projects.
• Important software projects have failed because project risks were neglected and nobody was prepared when something unforeseen happened.
• Barry Boehm recognized this and tried to incorporate the "project risk" factor into a life cycle model.
• The result is the spiral model, which was presented in 1986.

Spiral Model

• An important feature of the spiral model is that each phase is completed with a review by the people concerned with the project (designers and programmers).
• The advantage of this model is the wide range of options to accommodate the good features of other life cycle models. It becomes equivalent to another life cycle model in appropriate situations.
• The spiral model has some difficulties that need to be resolved before it can be a universally applied life cycle model. These difficulties include the lack of explicit process guidance in determining objectives, constraints, and alternatives; the reliance on risk-assessment expertise; and the fact that it provides more flexibility than many applications require.

Spiral Model

[Figure: the spiral model.]

Evolutionary Models: The Spiral

[Figure: the spiral starts at the center and cycles through communication; planning (estimation, scheduling, risk analysis); modeling (analysis, design); construction (code, test); and deployment (delivery, feedback).]

Selection of a Life Cycle Model

Selection of a model is based on:
a) Requirements
b) Development team
c) Users
d) Project type and associated risk

Chapter 18
Testing Conventional Applications

Testability

• Operability—it operates cleanly
• Observability—the results of each test case are readily observed
• Controllability—the degree to which testing can be automated and optimized
• Decomposability—testing can be targeted
• Simplicity—reduce complex architecture and logic to simplify tests
• Stability—few changes are requested during testing
• Understandability—of the design

What is a "Good" Test?

• A good test has a high probability of finding an error.
• A good test is not redundant.
• A good test should be "best of breed."
• A good test should be neither too simple nor too complex.

Internal and External Views

Any engineered product (and most other things) can be tested in one of two ways:
• Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function.
• Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is, internal operations are performed according to specifications and all internal components have been adequately exercised.

Test Case Design

"Bugs lurk in corners and congregate at boundaries ..." - Boris Beizer

OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time

Exhaustive Testing

[Figure: a flow graph containing a loop executed up to 20 times.]

There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!

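A quick sanity check of that arithmetic, as a minimal Python sketch:

paths = 10**14                        # possible paths through the flow graph
seconds = paths / 1000                # one test per millisecond
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")          # ~3,171 years, matching the slide's figure
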
Selective Testing

[Figure: the same flow graph with one selected path highlighted.]

Software Testing

[Figure: software testing spans white-box methods and black-box methods, applied within testing strategies.]

White-Box Testing

... our goal is to ensure that all statements and conditions have been executed at least once ...

Why Cover?

• Logic errors and incorrect assumptions are inversely proportional to a path's execution probability.
• We often believe that a path is not likely to be executed; in fact, reality is often counterintuitive.
• Typographical errors are random; it's likely that untested paths will contain some.

Basis Path Testing

First, we compute the cyclomatic complexity:

V(G) = number of simple decisions + 1
or
V(G) = number of enclosed areas + 1

In this case, V(G) = 4.

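Cyclomatic complexity can also be computed from the flow graph's edges and nodes as V(G) = E - N + 2. A minimal sketch; the graph below is a hypothetical example with three decision nodes, not the one pictured on the slide:

# V(G) = E - N + 2 for a connected flow graph.
# Hypothetical flow graph: node -> list of successor nodes.
flow_graph = {
    1: [2],
    2: [3, 4],        # decision
    3: [5, 6],        # decision
    4: [7],
    5: [7],
    6: [7],
    7: [2, 8],        # loop-back decision
    8: [],
}
nodes = len(flow_graph)
edges = sum(len(succs) for succs in flow_graph.values())
v_of_g = edges - nodes + 2
print(v_of_g)         # 10 - 8 + 2 = 4, i.e., three decisions + 1
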
Cyclomatic Complexity

A number of industry studies have indicated that the higher V(G), the higher the probability of errors.

[Figure: distribution of modules vs. V(G); modules in the high-V(G) range are more error prone.]

Basis Path Testing

Next, we derive the independent paths. Since V(G) = 4, there are four paths:

Path 1: 1, 2, 3, 6, 7, 8
Path 2: 1, 2, 3, 5, 7, 8
Path 3: 1, 2, 4, 7, 8
Path 4: 1, 2, 4, 7, 2, 4, ..., 7, 8

Finally, we derive test cases to exercise these paths.

[Figure: the flow graph with nodes 1 through 8.]

Basis Path Testing Notes

• You don't need a flow chart, but the picture will help when you trace program paths.
• Count each simple logical test; compound tests count as 2 or more.
• Basis path testing should be applied to critical modules.

Deriving Test Cases

Summarizing:
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.

Graph Matrices

• A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on a flow graph.
• Each row and column corresponds to an identified node, and matrix entries correspond to connections (edges) between nodes.
• By adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating program control structure during testing.

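A minimal sketch of a graph matrix, reusing the hypothetical flow graph from the earlier sketch with link weight 1 meaning "an edge exists":

# Connection matrix for the hypothetical flow graph above.
flow_graph = {1: [2], 2: [3, 4], 3: [5, 6], 4: [7],
              5: [7], 6: [7], 7: [2, 8], 8: []}
N = 8
matrix = [[0] * N for _ in range(N)]
for src, succs in flow_graph.items():
    for dst in succs:
        matrix[src - 1][dst - 1] = 1   # link weight 1: an edge exists

# Classical use: rows whose entries sum to 2 or more mark decision nodes;
# summing (row_sum - 1) over non-empty rows, plus 1, recovers V(G).
v_of_g = sum(sum(row) - 1 for row in matrix if sum(row) > 0) + 1
print(v_of_g)                          # 4, matching the basis path slide
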
Control Structure Testing

• Condition testing—a test case design method that exercises the logical conditions contained in a program module.
• Data flow testing—selects test paths of a program according to the locations of definitions and uses of variables in the program.

Data Flow Testing

• The data flow testing method [Fra93] selects test paths of a program according to the locations of definitions and uses of variables in the program.
• Assume that each statement in a program is assigned a unique statement number and that each function does not modify its parameters or global variables. For a statement with S as its statement number:
  DEF(S) = {X | statement S contains a definition of X}
  USE(S) = {X | statement S contains a use of X}
• A definition-use (DU) chain of variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'.

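To make the definitions concrete, a minimal sketch over a hypothetical three-statement fragment, with the DEF/USE sets derived by hand:

# Hypothetical numbered fragment:
#   S1: x = a + b        DEF(S1) = {x},  USE(S1) = {a, b}
#   S2: y = x * 2        DEF(S2) = {y},  USE(S2) = {x}
#   S3: print(x, y)      DEF(S3) = {},   USE(S3) = {x, y}
DEF = {1: {"x"}, 2: {"y"}, 3: set()}
USE = {1: {"a", "b"}, 2: {"x"}, 3: {"x", "y"}}

# DU chains [X, S, S']: pair each definition with every later use of the
# same variable (each definition stays live here; nothing redefines it).
du_chains = [(x, s, s2)
             for s, defs in DEF.items() for x in defs
             for s2, uses in USE.items() if s2 > s and x in uses]
print(du_chains)   # [('x', 1, 2), ('x', 1, 3), ('y', 2, 3)]
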
Loop Testing

[Figure: four loop structures: simple loops, nested loops, concatenated loops, and unstructured loops.]

Loop Testing: Simple Loops

Minimum conditions for simple loops:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop,
where n is the maximum number of allowable passes.

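A minimal sketch of these pass counts as a parametrized test, assuming a hypothetical unit process(items) whose loop runs once per item with an upper bound of n items:

# Hypothetical unit under test: loops once per item, bounded at n items.
def process(items, n=20):
    if len(items) > n:
        raise ValueError("too many items")
    return [x * 2 for x in items]

n = 20
m = 7   # some m < n
for passes in (0, 1, 2, m, n - 1, n, n + 1):   # the slide's minimum conditions
    try:
        result = process(list(range(passes)), n)
        print(passes, "passes -> ok,", len(result), "items")
    except ValueError as err:
        print(passes, "passes -> rejected:", err)   # expected for n + 1
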
Loop Testing: Nested Loops

Nested loops:
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.

Concatenated loops:
If the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops (for example, when the final loop counter value of loop 1 is used to initialize loop 2).

Black-Box Testing

[Figure: black-box testing exercises requirements by applying inputs and events and examining outputs.]

Black-Box Testing

• How is functional validity tested?
• How are system behavior and performance tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system operation?

Graph-Based Methods

To understand the objects that are modeled in software and the relationships that connect these objects. In this context, we consider the term "objects" in the broadest possible context: it encompasses data objects, traditional components (modules), and object-oriented elements of computer software.

[Figure (a): graph notation: nodes with node weights (values), directed links with link weights, undirected links, and parallel links between objects.]
[Figure (b): example: a "new file" menu select generates a document window (generation time under 1.0 sec); the window allows editing of the document text it contains; attributes: background color white, text color default or per preferences.]

Equivalence Partitioning

[Figure: the input domain, consisting of user queries, mouse picks, function-key (FK) input, output formats, prompts, and data, partitioned into equivalence classes.]

Sample Equivalence Classes

Valid data:
• user-supplied commands
• responses to system prompts
• file names
• computational data
  - physical parameters
  - bounding values
  - initiation values
• output data formatting
• responses to error messages
• graphical data (e.g., mouse picks)

Invalid data:
• data outside bounds of the program
• physically impossible data
• proper value supplied in wrong place

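A minimal sketch of equivalence partitioning for a hypothetical field that accepts an integer count in the range 1 to 100: one representative value per class rather than every value:

# Hypothetical input spec: an integer count, valid range 1..100.
def accept_count(value):
    return isinstance(value, int) and 1 <= value <= 100

# One representative test value per equivalence class.
classes = {
    "valid in-range integer":  (50, True),
    "below the valid range":   (0, False),
    "above the valid range":   (101, False),
    "wrong type entirely":     ("fifty", False),
}
for name, (value, expected) in classes.items():
    assert accept_count(value) is expected, name
print("all equivalence-class representatives behave as specified")
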
Boundary Value Analysis

[Figure: the same input domain (user queries, mouse picks, FK input, output formats, prompts, data) and the output domain, with test cases chosen at the edges of each equivalence class.]

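Continuing the hypothetical 1-to-100 count field from the previous sketch, boundary value analysis adds tests at and just beyond each edge of the class:

# Boundary values for the hypothetical 1..100 range: each edge,
# plus the first value on either side of it.
lo, hi = 1, 100
def accept_count(value):
    return isinstance(value, int) and lo <= value <= hi

for value in (lo - 1, lo, lo + 1, hi - 1, hi, hi + 1):
    print(value, "->", "accepted" if accept_count(value) else "rejected")
# 0 rejected; 1, 2, 99, 100 accepted; 101 rejected
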
Comparison Testing

Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems):
• Separate software engineering teams develop independent versions of an application using the same specification.
• Each version can be tested with the same test data to ensure that all provide identical output.
• Then all versions are executed in parallel with real-time comparison of results to ensure consistency.

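A minimal sketch, assuming two hypothetical independently developed implementations of the same specification (here, integer square root), driven with the same test data:

import math

# Two hypothetical independent implementations of the same spec.
def isqrt_v1(n):
    return math.isqrt(n)

def isqrt_v2(n):                       # naive counting implementation
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# Drive both versions with identical test data and compare outputs.
for n in range(0, 1000):
    a, b = isqrt_v1(n), isqrt_v2(n)
    assert a == b, f"versions disagree at n={n}: {a} vs {b}"
print("both versions agree on all test data")
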
Orthogonal Array Testing

Used when the number of input parameters is small and the values that each of the parameters may take are clearly bounded.

[Figure: the input space of three parameters X, Y, Z: testing one input item at a time versus covering the space with an L9 orthogonal array.]

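A minimal sketch using the standard L9 orthogonal array to cover three 3-level parameters in nine tests instead of 27; the parameter names and values here are hypothetical:

# Standard L9 orthogonal array: 9 rows cover all pairwise combinations
# of up to four 3-level factors; we use three columns here.
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

# Hypothetical bounded parameter values.
X = ["low", "mid", "high"]
Y = ["red", "green", "blue"]
Z = [10, 20, 30]

for x_lvl, y_lvl, z_lvl in L9:
    test_case = (X[x_lvl - 1], Y[y_lvl - 1], Z[z_lvl - 1])
    print(test_case)    # feed each 3-tuple to the system under test
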
Model-Based Testing

1. Analyze an existing behavioral model for the software or create one. Recall that a behavioral model indicates how software will respond to external events or stimuli.
2. Traverse the behavioral model and specify the inputs that will force the software to make the transition from state to state. The inputs will trigger events that will cause the transition to occur.
3. Review the behavioral model and note the expected outputs as the software makes the transition from state to state.
4. Execute the test cases.
5. Compare actual and expected results and take corrective action as required.

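A minimal sketch of those steps, assuming a hypothetical two-state behavioral model (a door that opens and closes) checked against an equally hypothetical implementation:

# Hypothetical behavioral model: (state, event) -> expected next state.
model = {
    ("closed", "open"):  "opened",
    ("opened", "close"): "closed",
}

# Hypothetical implementation under test.
class Door:
    def __init__(self):
        self.state = "closed"
    def handle(self, event):
        if event == "open" and self.state == "closed":
            self.state = "opened"
        elif event == "close" and self.state == "opened":
            self.state = "closed"

# Traverse the model, drive the implementation, compare actual vs. expected.
door = Door()
for event in ["open", "close", "open"]:
    expected = model[(door.state, event)]
    door.handle(event)
    assert door.state == expected, (event, door.state, expected)
print("implementation matches the behavioral model")
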
Software Testing Patterns

Testing patterns are described in much the same way as design patterns (Chapter 12). Example:

• Pattern name: ScenarioTesting
• Abstract: Once unit and integration tests have been conducted, there is a need to determine whether the software will perform in a manner that satisfies users. The ScenarioTesting pattern describes a technique for exercising the software from the user's point of view. A failure at this level indicates that the software has failed to meet a user-visible requirement. [Kan01]

Chapter 25
Process and Project Metrics

A Good Manager Measures

[Figure: measurement applies process metrics and project metrics to the process, and product metrics to the product.]

What do we use as a basis?
• size?
• function?

Why Do We Measure?

• to assess the status of an ongoing project
• to track potential risks
• to uncover problem areas before they go "critical"
• to adjust work flow or tasks
• to evaluate the project team's ability to control quality of software work products

Process Measurement

• We measure the efficacy of a software process indirectly. That is, we derive a set of metrics based on the outcomes that can be derived from the process.
• Outcomes include:
  - measures of errors uncovered before release of the software
  - defects delivered to and reported by end users
  - work products delivered (productivity)
  - human effort expended
  - calendar time expended
  - schedule conformance
  - other measures
• We also derive process metrics by measuring the characteristics of specific software engineering tasks.

Process Metrics Guidelines

• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who collect measures and metrics.
• Don't use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered "negative." These data are merely an indicator for process improvement.
• Don't obsess on a single metric to the exclusion of other important metrics.

Software Process Improvement

[Figure: SPI takes a process model and process metrics as inputs and produces improvement goals and process improvement recommendations.]

Process Metrics

• Quality-related: focus on quality of work products and deliverables
• Productivity-related: production of work products related to effort expended
• Statistical SQA data: error categorization & analysis
• Defect removal efficiency: propagation of errors from process activity to activity
• Reuse data: the number of components produced and their degree of reusability

Project Metrics

• used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks
• used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality
• every project should measure:
  - inputs—measures of the resources (e.g., people, tools) required to do the work
  - outputs—measures of the deliverables or work products created during the software engineering process
  - results—measures that indicate the effectiveness of the deliverables

Typical Project Metrics

• Effort/time per software engineering task
• Errors uncovered per review hour
• Scheduled vs. actual milestone dates
• Changes (number) and their characteristics
• Distribution of effort on software engineering tasks

Typical Size-Oriented Metrics

• errors per KLOC (thousand lines of code)
• defects per KLOC
• $ per LOC
• pages of documentation per KLOC
• errors per person-month
• errors per review hour
• LOC per person-month
• $ per page of documentation

Typical Function-Oriented Metrics

• errors per FP (function point)
• defects per FP
• $ per FP
• pages of documentation per FP
• FP per person-month

Comparing LOC and FP

[Table: LOC per function point (average, median, low, and high values) for Ada, Assembler, C, C++, COBOL, Java, JavaScript, Perl, PL/1, Powerbuilder, SAS, Smalltalk, SQL, and Visual Basic. Representative values developed by QSM.]

Why Opt for FP?

• Programming language independent
• Uses readily countable characteristics that are determined early in the software process
• Does not "penalize" inventive (short) implementations that use fewer LOC than other, more clumsy versions
• Makes it easier to measure the impact of reusable components

Object-Oriented Metrics

• Number of scenario scripts (use cases)
• Number of support classes (required to implement the system but not immediately related to the problem domain)
• Average number of support classes per key class (analysis class)
• Number of subsystems (an aggregation of classes that support a function that is visible to the end user of a system)

WebApp Project Metrics

• Number of static Web pages (the end user has no control over the content displayed on the page)
• Number of dynamic Web pages (end-user actions result in customized content displayed on the page)
• Number of internal page links (pointers that provide a hyperlink to some other Web page within the WebApp)
• Number of persistent data objects
• Number of external systems interfaced
• Number of static content objects
• Number of dynamic content objects
• Number of executable functions

Measuring Quality

• Correctness—the degree to which a program operates according to specification
• Maintainability—the degree to which a program is amenable to change
• Integrity—the degree to which a program is impervious to outside attack
• Usability—the degree to which a program is easy to use

Defect Removal Efficiency

DRE = E / (E + D)

where:
E is the number of errors found before delivery of the software to the end user
D is the number of defects found after delivery

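A minimal worked example; the counts are hypothetical:

# Hypothetical counts for one release.
errors_before_delivery = 90    # E: found by reviews and testing
defects_after_delivery = 10    # D: reported by end users

dre = errors_before_delivery / (errors_before_delivery + defects_after_delivery)
print(f"DRE = {dre:.2f}")      # 0.90: 90% of problems were caught before release
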
Metrics for Small Organizations

• time (hours or days) elapsed from the time a request is made until evaluation is complete, t_queue
• effort (person-hours) to perform the evaluation, W_eval
• time (hours or days) elapsed from completion of evaluation to assignment of change order to personnel, t_eval
• effort (person-hours) required to make the change, W_change
• time required (hours or days) to make the change, t_change
• errors uncovered during work to make the change, E_change
• defects uncovered after the change is released to the customer base, D_change

Establishing a Metrics Program

1. Identify your business goals.
2. Identify what you want to know or learn.
3. Identify your subgoals.
4. Identify the entities and attributes related to your subgoals.
5. Formalize your measurement goals.
6. Identify quantifiable questions and the related indicators that you will use to help you achieve your measurement goals.
7. Identify the data elements that you will collect to construct the indicators that help answer your questions.
8. Define the measures to be used, and make these definitions operational.
9. Identify the actions that you will take to implement the measures.
10. Prepare a plan for implementing the measures.

Chapter 30
Software Process Improvement

What is SPI?

SPI implies that:
• elements of an effective software process can be defined in an effective manner,
• an existing organizational approach to software development can be assessed against those elements, and
• a meaningful strategy for improvement can be defined.

The SPI strategy transforms the existing approach to software development into something that is more focused, more repeatable, and more reliable (in terms of the quality of the product produced and the timeliness of delivery).

SPI Framework

• a set of characteristics that must be present if an effective software process is to be achieved
• a method for assessing whether those characteristics are present
• a mechanism for summarizing the results of any assessment, and
• a strategy for assisting a software organization in implementing those process characteristics that have been found to be weak or missing

An SPI framework assesses the "maturity" of an organization's software process and provides a qualitative indication of a maturity level.

Elements of an SPI Framework

[Figure: the elements of an SPI framework and their relationships.]

Constituencies

• Quality certifiers: Quality(Process) --> Quality(Product)
• Formalists
• Tool advocates
• Practitioners
• Reformers
• Ideologists

Maturity Models

• A maturity model is applied within the context of an SPI framework.
• The intent of the maturity model is to provide an overall indication of the "process maturity" exhibited by a software organization:
  - an indication of the quality of the software process,
  - the degree to which practitioners understand and apply the process, and
  - the general state of software engineering practice.

Is SPI for Everyone?

• Can a small company initiate SPI activities and do it successfully?
• Answer: a qualified "yes"
• It should come as no surprise that small organizations are more informal, apply fewer standard practices, and tend to be self-organizing.
• SPI will be approved and implemented only after its proponents demonstrate financial leverage.

The SPI Process—I

Assessment and Gap Analysis

Assessment examines a wide range of actions and tasks that will lead to a high-quality process:
• Consistency. Are important activities, actions, and tasks applied consistently across all software projects and by all software teams?
• Sophistication. Are management and technical actions performed with a level of sophistication that implies a thorough understanding of best practice?
• Acceptance. Are the software process and software engineering practice widely accepted by management and technical staff?
• Commitment. Has management committed the resources required to achieve consistency, sophistication, and acceptance?

Gap analysis: the difference between local application and best practice represents a "gap" that offers opportunities for improvement.

The SPI Process—II

Education and Training

Three types of education and training should be conducted:
• Generic concepts and methods. Directed toward both managers and practitioners, this category stresses both process and practice. The intent is to provide professionals with the intellectual tools they need to apply the software process effectively and to make rational decisions about improvements to the process.
• Specific technology and tools. Directed primarily toward practitioners, this category stresses technologies and tools that have been adopted for local use. For example, if UML has been chosen for analysis and design modeling, a training curriculum for software engineering using UML would be established.
• Business communication and quality-related topics. Directed toward all stakeholders, this category focuses on "soft" topics that help enable better communication among stakeholders and foster a greater quality focus.

The SPI Process—III

Selection and Justification

• choose the process model (Chapters 2 and 3) that best fits your organization, its stakeholders, and the software that you build
• decide on the set of framework activities that will be applied, the major work products that will be produced, and the quality assurance checkpoints that will enable your team to assess progress
• develop a work breakdown for each framework activity (e.g., modeling), defining the task set that would be applied for a typical project

Once a choice is made, time and money must be expended to install it within an organization, and these resource expenditures should be justified.

The SPI Process—IV

Installation/Migration

• actually software process redesign (SPR) activities. Scacchi [Sca00] states that "SPR is concerned with identification, application, and refinement of new ways to dramatically improve and transform software processes."
• three different process models are considered:
  - the existing ("as-is") process,
  - a transitional ("here-to-there") process, and
  - the target ("to-be") process.

The SPI Process—V

Evaluation

• assesses the degree to which changes have been instantiated and adopted,
• the degree to which such changes result in better software quality or other tangible process benefits, and
• the overall status of the process and the organizational culture as SPI activities proceed.

From a qualitative point of view, past management and practitioner attitudes about the software process can be compared to attitudes polled after installation of process changes.

Risk Management for SPI

Manage risk at three key points in the SPI process [Sta97b]:
• prior to the initiation of the SPI roadmap,
• during the execution of SPI activities (assessment, education, selection, installation), and
• during the evaluation activity that follows the instantiation of some process characteristic.

In general, the following categories [Sta97b] can be identified for SPI risk factors:
• budget and cost
• content and deliverables
• culture
• maintenance of SPI deliverables
• mission and goals
• organizational management and organizational stability
• process stakeholders
• schedule for SPI development
• SPI development environment and process
• SPI project management and SPI staff

Critical Success Factors

The top five CSFs are [Ste99]:
• Management commitment and support
• Staff involvement
• Process integration and understanding
• A customized SPI strategy
• Solid management of the SPI project

The CMMI

• a comprehensive process meta-model that is predicated on a set of system and software engineering capabilities that should be present as organizations reach different levels of process capability and maturity
• a process meta-model in two different ways: (1) as a "continuous" model and (2) as a "staged" model
• defines each process area in terms of "specific goals" and the "specific practices" required to achieve these goals. Specific goals establish the characteristics that must exist if the activities implied by a process area are to be effective. Specific practices refine a goal into a set of process-related activities.

The People CMM

• "a roadmap for implementing workforce practices that continuously improve the capability of an organization's workforce" [Cur02]
• defines a set of five organizational maturity levels that provide an indication of the relative sophistication of workforce practices and processes

P-CMM Process Areas

[Figure: the P-CMM process areas organized by maturity level.]

Other SPI Frameworks

• SPICE—an international initiative to support the International Standard ISO/IEC 15504 for (Software) Process Assessment [ISO08]
• Bootstrap—an SPI framework for small and medium-sized organizations that conforms to SPICE [Boo06]
• PSP and TSP—individual and team-specific SPI frameworks ([Hum97], [Hum00]) that focus on process in the small: a more rigorous approach to software development coupled with measurement
• TickIT—an auditing method [Tic05] that assesses an organization's compliance to ISO Standard 9001:2000

SPI Return on Investment

"How do I know that we'll achieve a reasonable return for the money we're spending?"

ROI = [Σ(benefits) − Σ(costs)] / Σ(costs) × 100%

where:
• benefits include the cost savings associated with higher product quality (fewer defects), less rework, reduced effort associated with changes, and the income that accrues from shorter time-to-market.
• costs include both direct SPI costs (e.g., training, measurement) and indirect costs associated with greater emphasis on quality control and change management activities and more rigorous application of software engineering methods (e.g., the creation of a design model).

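A minimal worked example of the ROI formula, with hypothetical figures:

# Hypothetical annual figures for an SPI initiative (in dollars).
benefits = [120_000,   # rework avoided
             60_000,   # fewer field defects
             40_000]   # earlier time-to-market
costs = [80_000,       # training and assessment
         30_000]       # measurement and added QA effort

roi = (sum(benefits) - sum(costs)) / sum(costs) * 100
print(f"ROI = {roi:.0f}%")   # (220k - 110k) / 110k = 100%
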
SPI Trends

• Future SPI frameworks must become significantly more agile.
• Rather than an organizational focus (which can take years to complete successfully), contemporary SPI efforts should focus on the project level.
• To achieve meaningful results (even at the project level) in a short time frame, complex framework models may give way to simpler models.
• Rather than dozens of key practices and hundreds of supplementary practices, an agile SPI framework should emphasize only a few pivotal practices.

Chapter 19
Testing Object-Oriented Applications

OO Testing

To adequately test OO systems, three things must be done:
• the definition of testing must be broadened to include error discovery techniques applied to object-oriented analysis and design models,
• the strategy for unit and integration testing must change significantly, and
• the design of test cases must account for the unique characteristics of OO software.

'Testing' OO Models

• The review of OO analysis and design models is especially useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code levels.
• Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).

Correctness of OO Models

• During analysis and design, semantic correctness can be assessed based on the model's conformance to the real-world problem domain.
• If the model accurately reflects the real world (to a level of detail that is appropriate to the stage of development at which the model is reviewed), then it is semantically correct.
• To determine whether the model does, in fact, reflect real-world requirements, it should be presented to problem domain experts, who will examine the class definitions and hierarchy for omissions and ambiguity.
• Class relationships (instance connections) are evaluated to determine whether they accurately reflect real-world object connections.

Class Model Consistency

• Revisit the CRC model and the object-relationship model.
• Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator's definition.
• Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.
• Using the inverted connections examined in the preceding step, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.
• Determine whether widely requested responsibilities might be combined into a single responsibility.

OO Testing Strategies

Unit testing:
• the concept of the unit changes
• the smallest testable unit is the encapsulated class
• a single operation can no longer be tested in isolation (the conventional view of unit testing) but rather as part of a class

Integration testing:
• Thread-based testing integrates the set of classes required to respond to one input or event for the system.
• Use-based testing begins construction of the system by testing those classes (called independent classes) that use very few (if any) server classes. After the independent classes are tested, the next layer of classes, called dependent classes, that use the independent classes is tested.
• Cluster testing [McG94] defines a cluster of collaborating classes (determined by examining the CRC and object-relationship models), which is exercised by designing test cases that attempt to uncover errors in the collaborations.

OO Testing Strategies

Validation testing:
• details of class connections disappear
• draw upon use cases (Chapters 5 and 6) that are part of the requirements model
• conventional black-box testing methods (Chapter 18) can be used to drive validation tests

OOT Methods

Berard [Ber93] proposes the following approach:
1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested.
2. The purpose of the test should be stated.
3. A list of testing steps should be developed for each test and should contain [Ber94]:
   a. a list of specified states for the object that is to be tested
   b. a list of messages and operations that will be exercised as a consequence of the test
   c. a list of exceptions that may occur as the object is tested
   d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)
   e. supplementary information that will aid in understanding or implementing the test

Testing Methods

• Fault-based testing: The tester looks for plausible faults (i.e., aspects of the implementation of the system that may result in defects). To determine whether these faults exist, test cases are designed to exercise the design or code.
• Class testing and the class hierarchy: Inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process.
• Scenario-based test design: Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use cases) that the user has to perform, then applying them and their variants as tests.

OOT Methods: Random Testing

Random testing:
• identify operations applicable to a class
• define constraints on their use
• identify a minimum test sequence: an operation sequence that defines the minimum life history of the class (object)
• generate a variety of random (but valid) test sequences: exercise other (more complex) class instance life histories

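A minimal sketch, assuming a hypothetical Account class whose minimum life history is open, deposit, withdraw, close; random valid sequences add extra operations in between:

import random

# Hypothetical Account class under test.
class Account:
    def __init__(self):                  # "open"
        self.balance = 0
        self.closed = False
    def deposit(self, amt):
        assert not self.closed and amt > 0
        self.balance += amt
    def withdraw(self, amt):
        assert not self.closed and 0 < amt <= self.balance
        self.balance -= amt
    def close(self):
        self.closed = True

# Minimum life history: open -> deposit -> withdraw -> close, with
# random (but valid) operations exercised in between.
random.seed(1)
for _ in range(100):
    acct = Account()
    acct.deposit(100)
    for _ in range(random.randint(0, 10)):
        if random.random() < 0.5:
            acct.deposit(random.randint(1, 50))
        elif acct.balance > 0:
            acct.withdraw(random.randint(1, acct.balance))
    if acct.balance > 0:
        acct.withdraw(acct.balance)
    acct.close()
print("100 random valid life histories ran without assertion failures")
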
OOT Methods: Partition Testing

Partition testing reduces the number of test cases required to test a class in much the same way as equivalence partitioning for conventional software:
• state-based partitioning: categorize and test operations based on their ability to change the state of a class
• attribute-based partitioning: categorize and test operations based on the attributes that they use
• category-based partitioning: categorize and test operations based on the generic function each performs

OOT Methods: Inter-Class Testing

Inter-class testing:
1. For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other server classes.
2. For each message that is generated, determine the collaborator class and the corresponding operator in the server object.
3. For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.
4. For each of the messages, determine the next level of operators that are invoked and incorporate these into the test sequence.

OOT Methods: Behavior Testing

The tests to be designed should achieve all-state coverage [Kir94]. That is, the operation sequences should cause the Account class to make transitions through all allowable states.

[Figure: state diagram for the Account class (adapted from [Kir94]): open leads to an empty acct; setup acct leads to a set-up acct; deposit (initial) leads to a working acct, which accepts deposit, withdraw, balance, credit, and accntInfo operations; withdrawal (final) leads to a nonworking acct; close leads to a dead acct.]

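A minimal sketch of checking all-state coverage against a transition table modeled loosely on that Account diagram; the states and events are simplified and hypothetical:

# Simplified transition table loosely based on the Account state diagram.
transitions = {
    ("empty",      "setup"):            "setup",
    ("setup",      "deposit_initial"):  "working",
    ("working",    "deposit"):          "working",
    ("working",    "withdraw"):         "working",
    ("working",    "withdrawal_final"): "nonworking",
    ("nonworking", "close"):            "dead",
}

# One operation sequence intended to visit every allowable state.
sequence = ["setup", "deposit_initial", "deposit",
            "withdraw", "withdrawal_final", "close"]
state, visited = "empty", {"empty"}
for event in sequence:
    state = transitions[(state, event)]
    visited.add(state)

all_states = {"empty", "setup", "working", "nonworking", "dead"}
assert visited == all_states, f"uncovered states: {all_states - visited}"
print("operation sequence achieves all-state coverage")
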
Chapter 28
Risk Analysis

Project Risks

What can go wrong?
What is the likelihood?
What will the damage be?
What can we do about it?

Reactive Risk Management
• project team reacts to risks when they occur
• mitigation—plan for additional resources in anticipation of fire fighting
• fix on failure—resources are found and applied when the risk strikes
• crisis management—failure does not respond to applied resources and the project is in jeopardy
Proactive Risk Management
• formal risk analysis is performed
• organization corrects the root causes of risk
  • TQM concepts and statistical SQA
  • examining risk sources that lie beyond the bounds of the software
  • developing the skill to manage change
Seven Principles
• Maintain a global perspective—view software risks within the context of the system and the business problem.
• Take a forward-looking view—think about the risks that may arise in the future; establish contingency plans.
• Encourage open communication—if someone states a potential risk, don't discount it.
• Integrate—a consideration of risk must be integrated into the software process.
• Emphasize a continuous process—the team must be vigilant throughout the software process, modifying identified risks as more information is known and adding new ones as better insight is achieved.
• Develop a shared product vision—if all stakeholders share the same vision of the software, it is likely that better risk identification and assessment will occur.
• Encourage teamwork—the talents, skills, and knowledge of all stakeholders should be pooled.
Risk Management Paradigm
[Figure: Risk management paradigm: a continuous loop in which risks are identified, analyzed, planned for, tracked, and controlled.]
Risk Identification
• Product size—risks associated with the overall size of the software to be built or modified.
• Business impact—risks associated with constraints imposed by management or the marketplace.
• Customer characteristics—risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner.
• Process definition—risks associated with the degree to which the software process has been defined and is followed by the development organization.
• Development environment—risks associated with the availability and quality of the tools to be used to build the product.
• Technology to be built—risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system.
• Staff size and experience—risks associated with the overall technical and project experience of the software engineers who will do the work.
Assessing Project Risk-I
• Have top software and customer managers formally committed to support the project?
• Are end-users enthusiastically committed to the project and the system/product to be built?
• Are requirements fully understood by the software engineering team and their customers?
• Have customers been involved fully in the definition of requirements?
• Do end-users have realistic expectations?
Assessing Project Risk-II
• Is project scope stable?
• Does the software engineering team have the right mix of skills?
• Are project requirements stable?
• Does the project team have experience with the technology to be implemented?
• Is the number of people on the project team adequate to do the job?
• Do all customer/user constituencies agree on the importance of the project and on the requirements for the system/product to be built?
Risk Components
• performance risk—the degree of uncertainty that the product will meet its requirements and be fit for its intended use.
• cost risk—the degree of uncertainty that the project budget will be maintained.
• support risk—the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance.
• schedule risk—the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.
Risk Projection
• Risk projection, also called risk estimation, attempts to rate each risk in two ways:
  • the likelihood or probability that the risk is real
  • the consequences of the problems associated with the risk, should it occur
• There are four risk projection steps:
  • establish a scale that reflects the perceived likelihood of a risk
  • delineate the consequences of the risk
  • estimate the impact of the risk on the project and the product
  • note the overall accuracy of the risk projection so that there will be no misunderstandings
Building a Risk Table
[Figure: Risk table layout with columns for Risk, Probability, Impact, and RMMM (Risk Mitigation, Monitoring & Management).]
Building the Risk Table
• Estimate the probability of occurrence
• Estimate the impact on the project on a scale of 1 to 5, where
  • 1 = low impact on project success
  • 5 = catastrophic impact on project success
• Sort the table by probability and impact
Risk Exposure (Impact)
The overall risk exposure, RE, is determined using the following relationship [Hal98]:

RE = P × C

where P is the probability of occurrence for a risk, and C is the cost to the project should the risk occur.
Risk Exposure Example
• Risk identification. Only 70 percent of the software components scheduled for reuse will, in fact, be integrated into the application. The remaining functionality will have to be custom developed.
• Risk probability. 80% (likely).
• Risk impact. 60 reusable software components were planned. If only 70 percent can be used, 18 components would have to be developed from scratch (in addition to other custom software that has been scheduled for development). Since the average component is 100 LOC and local data indicate that the software engineering cost for each LOC is $14.00, the overall cost (impact) to develop the components would be 18 × 100 × 14 = $25,200.
• Risk exposure. RE = 0.80 × $25,200 ≈ $20,200.
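The arithmetic above can be checked directly (the exact product is $20,160, which the slide rounds to roughly $20,200); a quick sketch:

```python
# Verify the risk exposure example: 30% of 60 planned reusable components
# must instead be built from scratch at 100 LOC each, $14 per LOC.
planned, reuse_fraction = 60, 0.70
shortfall = round(planned * (1 - reuse_fraction))   # components to build: 18
cost = shortfall * 100 * 14.00                      # C = $25,200
probability = 0.80                                  # P (likely)
re = probability * cost                             # RE = P x C
print(f"shortfall={shortfall}, C=${cost:,.0f}, RE=${re:,.0f}")
# -> shortfall=18, C=$25,200, RE=$20,160  (the slide rounds to ~$20,200)
```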
Risk Mitigation, Monitoring, and Management
• mitigation—how can we avoid the risk?
• monitoring—what factors can we track that will enable us to determine if the risk is becoming more or less likely?
• management—what contingency plans do we have if the risk becomes a reality?
Risk Due to Product Size
Attributes that affect risk:
• estimated size of the product in LOC or FP?
• estimated size of product in number of programs, files, transactions?
• percentage deviation in size of product from average for previous products?
• size of database created or used by the product?
• number of users of the product?
• number of projected changes to the requirements for the product? before delivery? after delivery?
• amount of reused software?
Risk Due to Business Impact
Attributes that affect risk:
• effect of this product on company revenue?
• visibility of this product to senior management?
• reasonableness of the delivery deadline?
• number of customers who will use this product?
• interoperability constraints?
• sophistication of end users?
• amount and quality of product documentation that must be produced and delivered to the customer?
• governmental constraints?
• costs associated with late delivery?
• costs associated with a defective product?
Risks Due to the Customer
Questions that must be answered:
• Have you worked with the customer in the past?
• Does the customer have a solid idea of requirements?
• Has the customer agreed to spend time with you?
• Is the customer willing to participate in reviews?
• Is the customer technically sophisticated?
• Is the customer willing to let your people do their job—that is, will the customer resist looking over your shoulder during technically detailed work?
• Does the customer understand the software engineering process?
Risks Due to Process Maturity
Questions that must be answered:
• Have you established a common process framework?
• Is it followed by project teams?
• Do you have management support for software engineering?
• Do you have a proactive approach to SQA?
• Do you conduct formal technical reviews?
• Are CASE tools used for analysis, design, and testing?
• Are the tools integrated with one another?
• Have document formats been established?
Technology Risks
Questions that must be answered:
• Is the technology new to your organization?
• Are new algorithms or I/O technology required?
• Is new or unproven hardware involved?
• Does the application interface with new software?
• Is a specialized user interface required?
• Is the application radically different?
• Are you using new software engineering methods?
• Are you using unconventional software development methods, such as formal methods, AI-based approaches, artificial neural networks?
• Are there significant performance constraints?
• Is there doubt the functionality requested is "do-able"?
Staff/People Risks
Questions that must be answered:
• Are the best people available?
• Does staff have the right skills?
• Are enough people available?
• Are staff committed for the entire duration?
• Will some people work part time?
• Do staff have the right expectations?
• Have staff received necessary training?
• Will turnover among staff be low?
Recording Risk Information
Project: Embedded software for XYZ system
Risk type: schedule risk
Priority (1 low ... 5 critical): 4
Risk factor: Project completion will depend on tests that require a hardware component under development. Delivery of the hardware component may be delayed.
Probability: 60%
Impact: Project completion will be delayed for each day that the hardware is unavailable for use in software testing.
Monitoring approach:
Scheduled milestone reviews with hardware group
Contingency plan:
Modification of testing strategy to accommodate delay using
software simulation
Estimated resources: 6 additional person months beginning in July
Chapter 23
Product Metrics
McCall’s Triangle of Quality
[Figure: McCall's triangle of quality.
PRODUCT REVISION: maintainability, flexibility, testability.
PRODUCT TRANSITION: portability, reusability, interoperability.
PRODUCT OPERATION: correctness, usability, efficiency, integrity, reliability.]
A Comment
McCall's quality factors were proposed in the early 1970s. They are as valid today as they were then. It's likely that software built to conform to these factors will exhibit high quality well into the 21st century, even if there are dramatic changes in technology.
Measures, Metrics and Indicators
• A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
• The IEEE glossary defines a metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute."
• An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
Measurement Principles
• The objectives of measurement should be established before data collection begins.
• Each technical metric should be defined in an unambiguous manner.
• Metrics should be derived based on a theory that is valid for the domain of application (e.g., metrics for design should draw upon basic design concepts and principles and attempt to provide an indication of the presence of an attribute that is deemed desirable).
• Metrics should be tailored to best accommodate specific products and processes. [Bas84]
Measurement Process
• Formulation. The derivation of software measures and metrics appropriate for the representation of the software that is being considered.
• Collection. The mechanism used to accumulate data required to derive the formulated metrics.
• Analysis. The computation of metrics and the application of mathematical tools.
• Interpretation. The evaluation of metrics results in an effort to gain insight into the quality of the representation.
• Feedback. Recommendations derived from the interpretation of product metrics transmitted to the software team.
Goal-Oriented Software Measurement
The Goal/Question/Metric paradigm:
• (1) establish an explicit measurement goal that is specific to the process activity or product characteristic that is to be assessed
• (2) define a set of questions that must be answered in order to achieve the goal, and
• (3) identify well-formulated metrics that help to answer these questions.

Goal definition template:
• Analyze {the name of the activity or attribute to be measured}
• for the purpose of {the overall objective of the analysis}
• with respect to {the aspect of the activity or attribute that is considered}
• from the viewpoint of {the people who have an interest in the measurement}
• in the context of {the environment in which the measurement takes place}.
Metrics Attributes
• Simple and computable. It should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time.
• Empirically and intuitively persuasive. The metric should satisfy the engineer's intuitive notions about the product attribute under consideration.
• Consistent and objective. The metric should always yield results that are unambiguous.
• Consistent in its use of units and dimensions. The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units.
• Programming language independent. Metrics should be based on the analysis model, the design model, or the structure of the program itself.
• An effective mechanism for quality feedback. That is, the metric should provide a software engineer with information that can lead to a higher-quality end product.
Collection and Analysis Principles
• Whenever possible, data collection and analysis should be automated.
• Valid statistical techniques should be applied to establish relationships between internal product attributes and external quality characteristics.
• Interpretative guidelines and recommendations should be established for each metric.
Metrics for the Requirements Model
• Function-based metrics: use the function point as a normalizing factor or as a measure of the "size" of the specification
• Specification metrics: used as an indication of quality by measuring the number of requirements by type
Function-Based Metrics
• The function point metric (FP), first proposed by Albrecht [ALB79], can be used effectively as a means for measuring the functionality delivered by a system.
• Function points are derived using an empirical relationship based on countable (direct) measures of the software's information domain and assessments of software complexity.
• Information domain values are defined in the following manner:
  • number of external inputs (EIs)
  • number of external outputs (EOs)
  • number of external inquiries (EQs)
  • number of internal logical files (ILFs)
  • number of external interface files (EIFs)
Function Points
Information Domain Value             Count     Weighting factor
                                               simple  average  complex
External Inputs (EIs)                ___ ×       3       4        6     = ___
External Outputs (EOs)               ___ ×       4       5        7     = ___
External Inquiries (EQs)             ___ ×       3       4        6     = ___
Internal Logical Files (ILFs)        ___ ×       7       10       15    = ___
External Interface Files (EIFs)      ___ ×       5       7        10    = ___
Count total                                                             = ___
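A sketch of the count-total arithmetic from the table, extended with the standard adjustment FP = count total × [0.65 + 0.01 × Σ(Fi)], where the fourteen Fi are value adjustment factors rated 0 to 5; all counts below are made up.

```python
# Weights from the table above: (simple, average, complex).
WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}
COMPLEXITY = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, value_adjustment_factors):
    """counts: {domain value: (count, complexity)};
    FP = count_total * [0.65 + 0.01 * sum(Fi)]."""
    count_total = sum(n * WEIGHTS[k][COMPLEXITY[c]]
                      for k, (n, c) in counts.items())
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# Hypothetical system, all domain values rated 'average';
# the fourteen Fi each rated 3 (moderate influence).
counts = {"EI": (12, "average"), "EO": (8, "average"), "EQ": (5, "average"),
          "ILF": (4, "average"), "EIF": (2, "average")}
print(f"FP = {function_points(counts, [3] * 14):.1f}")
```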
Architectural Design Metrics
• Architectural design metrics
  • Structural complexity = g(fan-out)
  • Data complexity = f(input & output variables, fan-out)
  • System complexity = h(structural & data complexity)
• HK metric: architectural complexity as a function of fan-in and fan-out
• Morphology metrics: a function of the number of modules and the number of interfaces between modules
Metrics for OO Design-I

Whitmire [Whi97] describes nine distinct and measurable
characteristics of an OO design:

Size
• Size is defined in terms of four views: population, volume, length, and
functionality

Complexity
• How classes of an OO design are interrelated to one another

Coupling
• The physical connections between elements of the OO design

Sufficiency
• “the degree to which an abstraction possesses the features required of it,
or the degree to which a design component possesses features in its
abstraction, from the point of view of the current application.”

Completeness
• An indirect implication about the degree to which the abstraction or design
component can be reused
Metrics for OO Design-II
• Cohesion: the degree to which all operations work together to achieve a single, well-defined purpose
• Primitiveness: applied to both operations and classes, the degree to which an operation is atomic
• Similarity: the degree to which two or more classes are similar in terms of their structure, function, behavior, or purpose
• Volatility: measures the likelihood that a change will occur
Distinguishing Characteristics
Berard [Ber95] argues that the following characteristics require that special OO metrics be developed:
• Localization—the way in which information is concentrated in a program
• Encapsulation—the packaging of data and processing
• Information hiding—the way in which information about operational details is hidden by a secure interface
• Inheritance—the manner in which the responsibilities of one class are propagated to another
• Abstraction—the mechanism that allows a design to focus on essential details
Class-Oriented Metrics
Proposed by Chidamber and Kemerer [Chi94]:
• weighted methods per class
• depth of the inheritance tree
• number of children
• coupling between object classes
• response for a class
• lack of cohesion in methods
Class-Oriented Metrics
Proposed by Lorenz and Kidd [Lor94]:
• class size
• number of operations overridden by a subclass
• number of operations added by a subclass
• specialization index
Class-Oriented Metrics
The MOOD metrics suite [Har98b]:
• method inheritance factor
• coupling factor
• polymorphism factor
Operation-Oriented Metrics
Proposed by Lorenz and Kidd [Lor94]:
• average operation size
• operation complexity
• average number of parameters per operation
Component-Level Design Metrics
• Cohesion metrics: a function of data objects and the locus of their definition
• Coupling metrics: a function of input and output parameters, global variables, and modules called
• Complexity metrics: hundreds have been proposed (e.g., cyclomatic complexity; see the sketch below)
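As one concrete example, cyclomatic complexity can be computed from a control-flow graph as V(G) = E − N + 2; a sketch with a made-up graph:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected control-flow graph."""
    nodes = {n for e in edges for n in e}
    return len(edges) - len(nodes) + 2

# Made-up flow graph of a module with one if statement and one loop.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
print(f"V(G) = {cyclomatic_complexity(edges)}")   # 7 - 6 + 2 = 3
```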
Interface Design Metrics
• Layout appropriateness: a function of layout entities, their geographic position, and the "cost" of making transitions among entities
Design Metrics for WebApps
• Does the user interface promote usability?
• Are the aesthetics of the WebApp appropriate for the application domain and pleasing to the user?
• Is the content designed in a manner that imparts the most information with the least effort?
• Is navigation efficient and straightforward?
• Has the WebApp architecture been designed to accommodate the special goals and objectives of WebApp users, the structure of content and functionality, and the flow of navigation required to use the system effectively?
• Are components designed in a manner that reduces procedural complexity and enhances correctness, reliability, and performance?
Code Metrics
• Halstead's Software Science: a comprehensive collection of metrics all predicated on the number (count and occurrence) of operators and operands within a component or program
  • It should be noted that Halstead's "laws" have generated substantial controversy, and many believe that the underlying theory has flaws. However, experimental verification for selected programming languages has been performed (e.g., [FEL89]).
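A sketch of the core Halstead measures, using made-up operator/operand counts: vocabulary n = n1 + n2, length N = N1 + N2, and volume V = N · log2(n).

```python
import math

def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    return vocabulary, length, volume

# Made-up counts for a small component.
n, N, V = halstead(n1=10, n2=16, N1=40, N2=55)
print(f"vocabulary={n}, length={N}, volume={V:.1f}")
```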
Metrics for Testing
• Testing effort can also be estimated using metrics derived from Halstead measures.
• Binder [Bin94] suggests a broad array of design metrics that have a direct influence on the "testability" of an OO system:
  • Lack of cohesion in methods (LCOM)
  • Percent public and protected (PAP)
  • Public access to data members (PAD)
  • Number of root classes (NOR)
  • Fan-in (FIN)
  • Number of children (NOC) and depth of the inheritance tree (DIT)
Maintenance Metrics
IEEE Std. 982.1-1988 [IEE94] suggests a software maturity index (SMI) that provides an indication of the stability of a software product (based on changes that occur for each release of the product). The following information is determined:
• MT = the number of modules in the current release
• Fc = the number of modules in the current release that have been changed
• Fa = the number of modules in the current release that have been added
• Fd = the number of modules from the preceding release that were deleted in the current release

The software maturity index is computed in the following manner:

SMI = [MT − (Fa + Fc + Fd)] / MT

As SMI approaches 1.0, the product begins to stabilize.
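A one-function sketch of the SMI computation; the release figures used are made up.

```python
def smi(mt, fa, fc, fd):
    """Software maturity index per IEEE Std. 982.1-1988:
    SMI = [MT - (Fa + Fc + Fd)] / MT."""
    return (mt - (fa + fc + fd)) / mt

# Hypothetical release: 120 modules, 4 added, 6 changed, 2 deleted.
print(f"SMI = {smi(120, fa=4, fc=6, fd=2):.3f}")   # 0.900 -> stabilizing
```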
Chapter 16
Software Quality Assurance
Comment on Quality
Phil Crosby once said:

"The problem of quality management is not what people don't know about it. The problem is what they think they do know... In this regard, quality has much in common with sex.
• Everybody is for it. (Under certain conditions, of course.)
• Everyone feels they understand it. (Even though they wouldn't want to explain it.)
• Everyone thinks execution is only a matter of following natural inclinations. (After all, we do get along somehow.)
• And, of course, most people feel that problems in these areas are caused by other people. (If only they would take the time to do things right.)"
Elements of SQA
• Standards
• Reviews and audits
• Testing
• Error/defect collection and analysis
• Change management
• Education
• Vendor management
• Security management
• Safety
• Risk management
Elements of SQA
• Standards: standards such as those of the IEEE and ISO may be adopted voluntarily by a software engineering organization or imposed by the customer or other stakeholders. SQA has to ensure that adopted standards are followed and that work products conform to them.
• Reviews and audits: technical reviews are conducted to uncover errors; audits by SQA personnel ensure that quality guidelines are being followed.
Elements of SQA
• Testing: software testing is a quality control function whose primary goal is to uncover errors. SQA has to ensure that testing is conducted efficiently so that it uncovers as many errors as possible.
• Error/defect collection and analysis: SQA collects and analyzes error and defect data to understand how errors are introduced and how best they can be removed.
Elements of SQA
• Change management: change must be managed carefully; uncontrolled change creates confusion and poor quality. SQA ensures that adequate change management practices are in place.
• Ass. Q – How can change create havoc?
• Education: education is the key contributor to improving the software engineering practices of software engineers. SQA leads this process and sponsors educational programs.
Elements of SQA
• Vendor management: SQA helps ensure high-quality software from vendors by requiring that the vendor follow specific quality practices and by incorporating those requirements in the contract.
• Ass. Q – What three types of vendors are there? Give real-world examples.
• Security management: SQA ensures that a proper process has been followed to build in software security, protecting it against cybercrime and complying with government regulations.
Elements of SQA
• Safety: SQA is responsible for assessing the effect of software failure and for initiating the steps needed to reduce the risk of catastrophe due to defects.
• Risk management: SQA ensures that RMMM activities have been conducted and that contingency plans have been established.
• Ass. Q – Explain any five SQA activities.
SQA Goals (see Figure 16.1 of Pressman)
• Requirements quality. The correctness, completeness, and consistency of the requirements model will have a strong influence on the quality of all work products that follow.
• Design quality. Every element of the design model should be assessed by the software team to ensure that it exhibits high quality and that the design itself conforms to requirements.
• Code quality. Source code and related work products (e.g., other descriptive information) must conform to local coding standards and exhibit characteristics that will facilitate maintainability.
• Quality control effectiveness. A software team should apply limited resources in a way that has the highest likelihood of achieving a high-quality result.
Statistical SQA
[Figure: Statistical SQA. Product and process measurement drives a cycle: collect information on all defects, find the causes of the defects, and move to provide fixes for the process, yielding an understanding of how to improve quality.]
Software Reliability
• A simple measure of reliability is mean-time-between-failures (MTBF), where

MTBF = MTTF + MTTR

The acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair, respectively.
• Software availability is the probability that a program is operating according to requirements at a given point in time and is defined as

Availability = [MTTF / (MTTF + MTTR)] × 100%
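A two-line sketch of the arithmetic, with made-up failure data:

```python
def availability(mttf, mttr):
    """Availability = [MTTF / (MTTF + MTTR)] x 100%."""
    return 100.0 * mttf / (mttf + mttr)

# Hypothetical data: mean 400 hours to failure, 8 hours to repair.
mttf, mttr = 400.0, 8.0
print(f"MTBF = {mttf + mttr} h, availability = {availability(mttf, mttr):.2f}%")
```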
ISO 9001:2000 Standard
• ISO 9001:2000 is the quality assurance standard that applies to software engineering.
• The standard contains 20 requirements that must be present for an effective quality assurance system.
• The requirements delineated by ISO 9001:2000 address topics such as: management responsibility, quality system, contract review, design control, document and data control, product identification and traceability, process control, inspection and testing, corrective and preventive action, control of quality records, internal quality audits, training, servicing, and statistical techniques.
Chapter 17
Software Testing Strategies
Software Testing
Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
What Testing Shows
Testing shows: errors, requirements conformance, performance, and an indication of quality.
Strategic Approach
• To perform effective testing, you should conduct effective technical reviews. By doing this, many errors will be eliminated before testing commences.
• Testing begins at the component level and works "outward" toward the integration of the entire computer-based system.
• Different testing techniques are appropriate for different software engineering approaches and at different points in time.
• Testing is conducted by the developer of the software and (for large projects) an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
• Ass. Q – Why is testing different from debugging?
V&V
• Verification refers to the set of tasks that ensure that software correctly implements a specific function.
• Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.
• Boehm [Boe81] states this another way:
  • Verification: "Are we building the product right?"
  • Validation: "Are we building the right product?"
Who Tests the Software?
• developer: understands the system, but will test "gently," and is driven by "delivery"
• independent tester: must learn about the system, but will attempt to break it, and is driven by quality
Testing Strategy
[Figure: Testing strategy spiral. System engineering pairs with system test, analysis modeling with validation test, design modeling with integration test, and code generation with unit test.]
Testing Strategy
• We begin by "testing-in-the-small" and move toward "testing-in-the-large."
• For conventional software:
  • the module (component) is our initial focus
  • integration of modules follows
• For OO software:
  • our focus when "testing in the small" changes from an individual module (the conventional view) to an OO class that encompasses attributes and operations and implies communication and collaboration
Strategic Issues
• Specify product requirements in a quantifiable manner long before testing commences.
• State testing objectives explicitly.
• Understand the users of the software and develop a profile for each user category.
• Develop a testing plan that emphasizes "rapid cycle testing."
• Build "robust" software that is designed to test itself.
• Use effective technical reviews as a filter prior to testing.
• Conduct technical reviews to assess the test strategy and test cases themselves.
• Develop a continuous improvement approach for the testing process.
Unit Testing
[Figure: Unit testing. The software engineer develops test cases for the module to be tested and evaluates the results.]
Unit Testing
Unit test cases exercise the module to be tested along five dimensions: its interface, local data structures, boundary conditions, independent paths, and error-handling paths.
Unit Test Environment
[Figure: Unit test environment. A driver applies test cases to the module under test through its interface; stubs stand in for subordinate modules; local data structures, boundary conditions, independent paths, and error-handling paths are exercised, and the results are evaluated.]
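A minimal sketch of a driver and a stub for a hypothetical module: the stub stands in for a subordinate component the module normally calls, and the driver feeds test cases through the interface, including the error-handling path.

```python
# Module under test: computes a shipping quote, normally by calling a
# subordinate rate-lookup component.
def quote(weight_kg, rate_lookup):
    if weight_kg <= 0:                      # error-handling path
        raise ValueError("weight must be positive")
    return round(weight_kg * rate_lookup("standard"), 2)

# Stub: stands in for the real rate service (not yet integrated).
def rate_stub(category):
    return {"standard": 4.50}[category]

# Driver: applies test cases through the module's interface and
# evaluates the results, including the error-handling path.
def driver():
    assert quote(2, rate_stub) == 9.00
    assert quote(0.1, rate_stub) == 0.45    # boundary condition
    try:
        quote(0, rate_stub)
    except ValueError:
        pass
    else:
        raise AssertionError("expected the error path to be taken")
    print("unit tests pass")

driver()
```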
Integration Testing Strategies
Options:
• the “big bang” approach
• an incremental construction strategy
Top Down Integration
[Figure: Top-down integration of modules A through G.] The top module is tested with stubs; stubs are replaced one at a time, "depth first"; as new modules are integrated, some subset of tests is re-run.
Bottom-Up Integration
[Figure: Bottom-up integration of modules A through G.] Worker modules are grouped into builds (clusters) and integrated; drivers are replaced one at a time, "depth first."
Sandwich Testing
[Figure: Sandwich testing of modules A through G.] Top modules are tested with stubs while worker modules are grouped into builds (clusters) and integrated.
Regression Testing
• Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
• Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed.
• Regression testing helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
• Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools.
Smoke Testing
• A common approach for creating "daily builds" for product software.
• Smoke testing steps:
  • Software components that have been translated into code are integrated into a "build." A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
  • A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule.
  • The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.
Smoke Testing
• Smoke testing is performed daily, end to end. It need not be exhaustive, but it should aim at exposing problems, so only the target problem areas are tested.
• Benefits of smoke testing:
  • Integration risk is minimized, as problem areas are tested daily.
  • Quality of the end product is improved, as the probability of uncovering functional, architectural, and component-level design errors increases.
  • Error diagnosis and correction are simplified, as errors are likely associated only with the newest software increment, thanks to daily testing.
  • Progress is easier to assess.
Strategic Options
• Bottom-up approach
  • Advantages: no need for stubs; easier test case design.
  • Disadvantage: the program as an entity does not exist until the last module is added, so the wait for testing is long.
• Top-down approach
  • Disadvantage: the need for stubs and the difficulties associated with them.
  • Advantage: major control functions are tested early.
• Ass. Q – What is sandwich testing?
• Ass. Q – What is a critical module?
Integration Test Work Products
• The plan for the integration of the software and the specific tests are documented in a Test Specification, much like an SRS.
• This incorporates the test plan and test procedures and is part of the software configuration.
• E.g., SafeHome system test phases:
  • User interaction: commands, input, output, etc.
  • Sensor processing: sensor output, sensor conditions, etc.
Integration Test Work Products
  • Communication functions: for the central monitoring system
  • Alarm processing: software actions that cause the alarm
• Test categories for all test phases:
  • Interface integrity tests: internal and external interfaces are tested
  • Functional validity tests: tests to uncover functional errors
  • Information content: tests for local and global data structures
Integration Test Work Products
  • Performance: tests to verify performance are conducted
• Ass. Q – Give a real-world example of integration test phases and details of a schedule for integration testing. You can use your lab project case as well.
High Order Testing
• Validation testing: user-visible actions and user-recognizable output
  • Focus is on software requirements: test criteria, configuration review
• System testing
  • Focus is on system integration
• Alpha/Beta testing
  • Focus is on customer usage
• Acceptance testing
• Recovery testing
  • forces the software to fail in a variety of ways and verifies that recovery (fault tolerance, MTTR) is properly performed
• Security testing
  • verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
• Stress testing
  • executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
  • sensitivity testing probes for instability in the system
• Performance testing
  • tests the run-time performance of software within the context of an integrated system
• Deployment testing: platforms, environments
Final Thoughts
• Think before you act to correct.
• Use tools to gain additional insight.
• If you're at an impasse, get help from someone else.
• Once you correct the bug, use regression testing to uncover any side effects.
Chapter 27
Project Scheduling
Why Are Projects Late?
• an unrealistic deadline established by someone outside the software development group;
• changing customer requirements that are not reflected in schedule changes;
• an honest underestimate of the amount of effort and/or the number of resources that will be required to do the job;
• predictable and/or unpredictable risks that were not considered when the project commenced;
• technical difficulties that could not have been foreseen in advance;
• human difficulties that could not have been foreseen in advance;
• miscommunication among project staff that results in delays;
• a failure by project management to recognize that the project is falling behind schedule and a lack of action to correct the problem.
Scheduling Principles
• compartmentalization—define distinct tasks
• interdependency—indicate task interrelationships
• effort validation—be sure resources are available
• defined responsibilities—people must be assigned
• defined outcomes—each task must have an output
• defined milestones—review for quality
Effort and Delivery Time
[Figure: Effort and cost as a function of delivery time. Compressing the schedule drives effort up steeply; below Tmin = 0.75·td lies an impossible region.]

Ea = m (td⁴ / ta⁴)

where
Ea = effort in person-months
td = nominal delivery time for the schedule
to = optimal development time (in terms of cost)
ta = actual delivery time desired
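A sketch of what this relationship implies, with made-up numbers: compressing a nominal 12-month schedule shows how steeply effort grows as ta shrinks toward the impossible region near Tmin = 0.75·td.

```python
def effort(e_nominal, t_d, t_a):
    """Ea = m(td^4 / ta^4), with m calibrated so Ea = e_nominal at ta = td."""
    return e_nominal * (t_d / t_a) ** 4

t_d, e_nominal = 12.0, 50.0           # months, person-months (made up)
for t_a in (12.0, 11.0, 10.0, 9.0):   # Tmin = 0.75 * td = 9 months
    print(f"ta={t_a:4.1f} mo -> Ea={effort(e_nominal, t_d, t_a):6.1f} pm")
```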
Effort Allocation
• "front end" activities: 40-50%
  • customer communication
  • analysis
  • design
  • review and modification
• construction activities: 15-20%
  • coding or code generation
• testing and installation: 30-40%
  • unit, integration
  • white-box, black-box
  • regression
Defining Task Sets
• determine the type of project
• assess the degree of rigor required
• identify adaptation criteria
• select appropriate software engineering tasks
Task Set Refinement
1.1 Concept scoping determines the overall scope of the project.

is refined to

Task definition: Task 1.1 Concept Scoping
1.1.1 Identify need, benefits and potential customers;
1.1.2 Define desired output/control and input events that drive the application;
Begin Task 1.1.2
1.1.2.1 FTR: Review written description of need
        (FTR indicates that a formal technical review (Chapter 26) is to be conducted.)
1.1.2.2 Derive a list of customer-visible outputs/inputs;
1.1.2.3 FTR: Review outputs/inputs with customer and revise as required;
endtask Task 1.1.2
1.1.3 Define the functionality/behavior for each major function;
Begin Task 1.1.3
1.1.3.1 FTR: Review output and input data objects derived in task 1.1.2;
1.1.3.2 Derive a model of functions/behaviors;
1.1.3.3 FTR: Review functions/behaviors with customer and revise as required;
endtask Task 1.1.3
1.1.4 Isolate those elements of the technology to be implemented in software;
1.1.5 Research availability of existing software;
1.1.6 Define technical feasibility;
1.1.7 Make a quick estimate of size;
1.1.8 Create a Scope Definition;
endTask definition: Task 1.1
Define a Task Network
[Figure: Task network for concept development. I.1 Concept scoping → I.2 Concept planning → I.3a/I.3b/I.3c Tech. Risk Assessment (three I.3 tasks applied in parallel to three different concept functions) → I.4 Proof of Concept → I.5a/I.5b/I.5c Concept Implementation → Integrate a, b, c → I.6 Customer Reaction.]
Timeline Charts
[Figure: Timeline (Gantt) chart. Tasks 1 through 12 are plotted as horizontal bars against weeks 1 through n.]
Use Automated Tools to
Derive a Timeline Chart
Work tasks
I.1.1
I.1.2
I.1.3
I.1.4
I.1.5
I.1.6
I.1.7
I.1.8
week 1
week 2
week 3
week 4
week 5
Identify need and benefits
Meet with customers
Identify needs and project constraints
Establish product statement
Milestone: product statement defined
Define desired output/control/input (OCI)
Scope keyboard functions
Scope voice input functions
Scope modes of interaction
Scope document diagnostics
Scope other WP functions
Document OCI
FTR: Review OCI with customer
Revise OCI as required;
Milestone; OCI defined
Define the functionality/behavior
Define keyboard functions
Define voice input functions
Decribe modes of interaction
Decribe spell/grammar check
Decribe other WP functions
FTR: Review OCI definition with customer
Revise as required
Milestone: OCI defintition complete
Isolate software elements
Milestone: Software elements defined
Research availability of existing software
Reseach text editiong components
Research voice input components
Research file management components
Research Spell/Grammar check components
Milestone: Reusable components identified
Define technical feasibility
Evaluate voice input
Evaluate grammar checking
Milestone: Technical feasibility assessed
Make quick estimate of size
Create a Scope Definition
Review scope document with customer
Revise document as required
Milestone: Scope document complete
Schedule Tracking
• conduct periodic project status meetings in which each team member reports progress and problems.
• evaluate the results of all reviews conducted throughout the software engineering process.
• determine whether formal project milestones (the diamonds shown in Figure 27.3) have been accomplished by the scheduled date.
• compare the actual start date to the planned start date for each project task listed in the resource table (Figure 27.4).
• meet informally with practitioners to obtain their subjective assessment of progress to date and problems on the horizon.
• use earned value analysis (Section 27.6) to assess progress quantitatively.
Progress on an OO Project-I
• Technical milestone: OO analysis completed
  • All classes and the class hierarchy have been defined and reviewed.
  • Class attributes and operations associated with a class have been defined and reviewed.
  • Class relationships (Chapter 8) have been established and reviewed.
  • A behavioral model (Chapter 8) has been created and reviewed.
  • Reusable classes have been noted.
• Technical milestone: OO design completed
  • The set of subsystems (Chapter 9) has been defined and reviewed.
  • Classes are allocated to subsystems and reviewed.
  • Task allocation has been established and reviewed.
  • Responsibilities and collaborations (Chapter 9) have been identified.
  • Attributes and operations have been designed and reviewed.
  • The communication model has been created and reviewed.
Progress on an OO Project-II
• Technical milestone: OO programming completed
  • Each new class has been implemented in code from the design model.
  • Extracted classes (from a reuse library) have been implemented.
  • A prototype or increment has been built.
• Technical milestone: OO testing
  • The correctness and completeness of the OO analysis and design models have been reviewed.
  • A class-responsibility-collaboration network (Chapter 6) has been developed and reviewed.
  • Test cases are designed, and class-level tests (Chapter 19) have been conducted for each class.
  • Test cases are designed, cluster testing (Chapter 19) is completed, and the classes are integrated.
  • System-level tests have been completed.
Earned Value Analysis (EVA)
Earned value:
• is a measure of progress
• enables us to assess the "percent of completeness" of a project using quantitative analysis rather than relying on a gut feeling
• "provides accurate and reliable readings of performance from as early as 15 percent into the project." [Fle98]
Computing Earned Value-I
• The budgeted cost of work scheduled (BCWS) is determined for each work task represented in the schedule.
  • BCWSi is the effort planned for work task i.
  • To determine progress at a given point along the project schedule, the value of BCWS is the sum of the BCWSi values for all work tasks that should have been completed by that point in time on the project schedule.
• The BCWS values for all work tasks are summed to derive the budget at completion, BAC. Hence,

BAC = ∑ (BCWSk) for all tasks k
Computing Earned Value-II
• Next, the value for budgeted cost of work performed (BCWP) is computed.
  • The value for BCWP is the sum of the BCWS values for all work tasks that have actually been completed by a point in time on the project schedule.
  • "The distinction between the BCWS and the BCWP is that the former represents the budget of the activities that were planned to be completed and the latter represents the budget of the activities that actually were completed." [Wil99]
• Given values for BCWS, BAC, and BCWP, important progress indicators can be computed:
  • Schedule performance index, SPI = BCWP/BCWS
  • Schedule variance, SV = BCWP − BCWS
  • SPI is an indication of the efficiency with which the project is utilizing scheduled resources.
Computing Earned Value-III
• Percent scheduled for completion = BCWS/BAC
  • provides an indication of the percentage of work that should have been completed by time t.
• Percent complete = BCWP/BAC
  • provides a quantitative indication of the percent of completeness of the project at a given point in time, t.
• Actual cost of work performed, ACWP, is the sum of the effort actually expended on work tasks that have been completed by a point in time on the project schedule. It is then possible to compute:
  • Cost performance index, CPI = BCWP/ACWP
  • Cost variance, CV = BCWP − ACWP
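All of these indicators can be computed directly from a task list; a sketch with made-up tasks (effort in person-days):

```python
# Each task: planned effort (BCWSi), whether it should be done by now,
# whether it actually is done, and the actual effort spent if done.
tasks = [  # (bcws_i, scheduled_done, actually_done, acwp_i)
    (10, True,  True,  12),
    (15, True,  True,  14),
    (20, True,  False,  0),
    (25, False, False,  0),
]

bac  = sum(t[0] for t in tasks)            # budget at completion
bcws = sum(t[0] for t in tasks if t[1])    # budget of work scheduled by now
bcwp = sum(t[0] for t in tasks if t[2])    # earned value (work performed)
acwp = sum(t[3] for t in tasks if t[2])    # actual cost of work performed

print(f"SPI = {bcwp / bcws:.2f}  SV = {bcwp - bcws:+}")
print(f"CPI = {bcwp / acwp:.2f}  CV = {bcwp - acwp:+}")
print(f"% scheduled = {bcws / bac:.0%}  % complete = {bcwp / bac:.0%}")
```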