Software Maintenance
Changing software
Kristian Sandahl
krs@ida.liu.se
Goal:
• Types of maintenance
• Maintenance process
• Experimentation in software engineering
• Design for maintainability
• Research in Linköping
• Code understanding
Definition
“The modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment”
IEEE-STD 1219
• About 70% of all software costs!
Classification
• Corrective maintenance
  – Preventive maintenance
• Adaptive maintenance
• Perfective maintenance

[Figure: bar chart (0–60%) of maintenance cost shares for perfective, adaptive, and corrective maintenance]
Process
• On signal, turn your paper
• Start working on your change
• When you are ready, write down the latest number on the blackboard in slot ”Time”
• Stop working
• On signal, give/receive the paper to/from someone else
• Check if the solution is correct; if so, write an “R” in slot ”Correct”, otherwise write an “F”
Experimentation
• Needed for adaptation of general methods
• Body of scientific knowledge => replications => material and process must be well defined
• Design the experiment: a combination of independent variables
• Randomise subjects
• Collect data on the dependent variables
• Careful analysis
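As a minimal sketch of the randomisation step in Python (the subject and treatment names are invented for illustration, not from the lecture):

    import random

    def randomise(subjects, treatments, seed=None):
        # Shuffle the subjects, then deal them round-robin into treatment groups
        rng = random.Random(seed)
        shuffled = list(subjects)
        rng.shuffle(shuffled)
        groups = {t: [] for t in treatments}
        for i, s in enumerate(shuffled):
            groups[treatments[i % len(treatments)]].append(s)
        return groups

    groups = randomise(["s1", "s2", "s3", "s4", "s5", "s6"], ["method A", "method B"])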
IEEE maintenance model
• Driven by a Modification Request (MR)
• Output: a new baseline (the approved product)
• 7 steps:
  – Problem identification
  – Analysis
  – Design
  – Implementation
  – System test
  – Acceptance test
  – Delivery
Problem identification
• Unique identification
• Classification
• Prioritisation
• Decision (accept, reject, further evaluation)
• Scheduling
• Metrics count: number, date of arrival, etc.
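Purely as an illustration, a modification request record covering these bookkeeping items might look like the sketch below; the field names are my assumptions, not prescribed by IEEE-STD 1219:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ModificationRequest:
        # One record per MR: identification, classification, prioritisation,
        # decision, scheduling, and the raw material for metrics
        mr_id: str                  # unique identification
        category: str               # corrective / adaptive / perfective / preventive
        priority: int               # e.g. 1 = highest
        decision: str = "pending"   # accept / reject / further evaluation
        scheduled_for: str = ""     # planned release or iteration
        arrived: date = field(default_factory=date.today)  # metrics: date of arrival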
Analysis
• Feasibility
  – Impact analysis
  – Cost estimation
  – Benefits
• Requirements formulation
• Safety and security impact analysis
• Test strategy
• Plan
Design
• Design
• Verification of requirements
• Test cases
• Update design model
Implementation
• Code
• Unit test
• Update implementation model
• Follow up the impact analysis
Testing
• System testing
• Acceptance testing
Maintainability
Boehm et al. predict maintenance size:

Size = ASLOC * 0.01 * (Assessment and Assimilation (0–8)
                       + Software Understanding (10–50)
                       + 0.4 * percentage of changed design
                       + 0.3 * percentage of changed code
                       + 0.3 * percentage of integrated external code)
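A minimal Python sketch of the size formula as stated above (the values in the example call are invented):

    def maintenance_size(asloc, aa, su, dm, cm, im):
        # Boehm et al.: Size = ASLOC * 0.01 * (AA + SU + 0.4*DM + 0.3*CM + 0.3*IM)
        # aa: Assessment and Assimilation (0-8), su: Software Understanding (10-50),
        # dm/cm/im: percentages of changed design / changed code / integrated external code
        return asloc * 0.01 * (aa + su + 0.4 * dm + 0.3 * cm + 0.3 * im)

    # Example: 100 000 adapted SLOC, mid-range ratings, 20% design and 30% code changed
    size = maintenance_size(100_000, aa=4, su=30, dm=20, cm=30, im=10)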
Effort
• Effort = C1 * EAF * Size^P1
  – Effort = number of staff-months
  – C1 = scaling constant
  – EAF = Effort Adjustment Factor
  – Size = number of delivered, human-produced source code instructions (KDSI)
  – P1 = exponent describing the scaling inherent in the process (0.91–1.23)
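Continuing the sketch, the effort formula; the default constants below are placeholders, not calibrated values:

    def effort_staff_months(size_kdsi, c1=3.0, eaf=1.0, p1=1.1):
        # Effort = C1 * EAF * Size^P1, with Size in KDSI and P1 in 0.91-1.23
        return c1 * eaf * size_kdsi ** p1

    # Example: 32 KDSI with a neutral effort adjustment factor
    effort = effort_staff_months(32)   # about 135 staff-months with these placeholders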
Metrics
Complexity:
• Cyclomatic number (McCabe, 1976): V(G) = e – n + 2p
Modularity:
• avg methods per class / avg lines of code per class
Instrumentation:
• avg number of probe points
Self-descriptiveness:
• Readability: 0.295 * avg variable length + 0.499 * statement lines + 0.13 * V(G)
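A small sketch of McCabe's formula; the example control-flow graph (a single if-else) is invented:

    def cyclomatic_number(edges, nodes, components=1):
        # V(G) = e - n + 2p for a control-flow graph with e edges,
        # n nodes and p connected components
        return edges - nodes + 2 * components

    # Example: one if-else gives 4 nodes, 4 edges, one component => V(G) = 2
    assert cyclomatic_number(edges=4, nodes=4) == 2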
Traceability
[Figure: traceability links between analysis, design, and implementation artefacts]
Design for maintenance
• Configuration management
• Change control
• Low McCabe complexity
• Identify change-prone properties
  – Factor out parameters
  – Explicitly handle rules and equations
• Low coupling
• High cohesion
Change-prone properties
Instead of:
  plot(145, 150)
  plot(163, 300)
Write:
  y := 150
  plot(145, y)
  plot(163, y*2)

Equation: working week = 38.75 h
Rule: if permanently employed and more than three years before retirement, then offer a home PC
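A minimal Python sketch of the same idea; the names WORKING_WEEK_H and offer_home_pc are mine, chosen to mirror the slide's equation and rule:

    WORKING_WEEK_H = 38.75   # the equation, kept in one explicit place

    def offer_home_pc(permanently_employed: bool, years_to_retirement: float) -> bool:
        # The rule, stated explicitly rather than buried in scattered conditions
        return permanently_employed and years_to_retirement > 3

    base_y = 150                                  # factored-out parameter
    points = [(145, base_y), (163, base_y * 2)]   # was plot(145,150); plot(163,300)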
Coupling and cohesion
[Figure: classes and their methods, connected by many links (high coupling) vs. few links (low coupling)]
Research: Impact analysis
[Figure: predicted vs. actual change sets; overlap = correct predictions, predicted only = unnecessary, actual only = wrong (missed)]
Lindvall & Sandahl, case study at PMR: underprediction factor 1.5–6
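A sketch of how the categories and the underprediction factor could be computed from the sets of predicted and actually changed entities (the example sets are invented):

    def impact_outcome(predicted: set, actual: set):
        # Classify an impact-analysis prediction against the actual change set
        correct = predicted & actual       # predicted and really changed
        unnecessary = predicted - actual   # predicted but never touched
        missed = actual - predicted        # changed but not predicted ("wrong")
        factor = len(actual) / len(predicted) if predicted else float("inf")
        return correct, unnecessary, missed, factor

    correct, unnecessary, missed, factor = impact_outcome(
        predicted={"A", "B"}, actual={"A", "C", "D"})
    # factor == 1.5: three actual changes for every two predicted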
Research: Tracing system dynamics
[Figure: CORBA objects exchanging calls (numbered 1–6), recorded in a trace log]
Method for improvement by understanding the dynamic behaviour in distributed systems, by Johan Moe
[Figure: cycle of Collect data -> Summarise statistics -> Present results (visualisation, statistics); a user activates observation and/or simulation during prototyping, test, or operation, and can change the observed system or the test suites]
Toolbox
[Figure: tool pipeline: Observation -> Log server -> Raw traces (txt) -> Parser -> Trace sequences (correct/incorrect) -> Statistics]
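A toy sketch of the parsing step, assuming an invented "caller -> callee" line format for the raw text traces:

    def parse_raw_trace(path):
        # Turn a raw text trace from the log server into a sequence of calls
        sequence = []
        with open(path) as f:
            for line in f:
                caller, sep, callee = line.strip().partition(" -> ")
                if sep:                      # skip lines that do not match
                    sequence.append((caller, callee))
        return sequence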
Hypothesis-Driven Understanding Process During Corrective Maintenance of Large Scale Software, by Anneliese von Mayrhauser and A. Marie Vans, International Conference on Software Maintenance, Bari, Italy, October 1–3, 1997.
Theory
• A Goal or Question drives the process of understanding
• Goals and questions can be explicit or implicit
• According to a certain Strategy, Hypotheses are formed
• A Strategy can be:
  – Systematic
  – Opportunistic
  – Cross-referencing
Design of study
• Four professional programmers
• Software of at least 40 KLOC
• 2-hour video/audio recording
• Think-aloud protocol
• Corrective maintenance
• Transcription
• Coding:
  – Actions: stating a goal, stating a hypothesis, supporting actions
  – Model level: program, top-down (domain), situation (algorithm)
  – Hypothesis type: what, why, how
  – Resolution: confirmed, abandoned, rejected
Type of hypotheses
• Domain level: most in localising faults, procedure/function concepts, environment and tools
• Program model: statement execution order, code correctness, variables
• Situation model: functionality at procedure level (abstracted from the program)
• Types: few Why (attitude: “I’ll do my part”?)
Hypothesis resolution
• Surprisingly many abandoned
• Explanation:
  – Flexible approach to comprehension
  – ”Arsenal” approach
• An earlier study of porting programs generated fewer hypotheses (38/51); more things are unknown in corrective maintenance
Hypothesis switching
• More than half of the hypotheses caused a switch
• Confirms the belief that the situation model is a bridge between the top-down model and the program model
Dynamic resolution process
• Within area of expertise:
– Many abandoned hypotheses
– Large steps
– ”Arsenal” behaviour
• Outside expertise:
– Hard to abandon hypotheses
– Small steps
Conclusion
• Corrective maintenance requires understanding the software at all levels
• There is a lot of switching between levels
• Goal completion is sequential
• Support cross-referencing
• Support flexible starting points
Applications
[Figure: layered view: activities (testing, usage evaluation, system evaluation, tuning) use the simulation tool, coverage measurement, and operational profiles, which are built on the active probing mechanism, observation, and visualisation]