Evaluation of software designs

Abstract
This paper presents a study of the different phases and stages of
software design evaluation.
In software design evaluation there are two types of approaches:
- Tool-based approach
- Process-based approach
Here we study both approaches but concentrate mainly on the
tool-based approach.
The tool-based approach uses subjective evaluations as input to
tool analysis. These approaches, or a combination of them, are
expected to improve software design and promote organizational
learning about software design.
In the process-based approach, developers study and improve their
system's structure at fixed intervals.
We discuss both approaches in this paper and present details of
their functionality and application.
Introduction
The past decade has seen a dramatic change in the breakdown of
projects involving hardware and software, both with respect to
functionality and economic considerations.
For some large applications, the software can exceed 75% of the
total system cost. As the price of computer hardware continues to fall,
larger and more complex computer systems become economically
feasible; hence the ability to design large complex software systems of
high quality at minimum cost is essential. The increasing demand for
low-cost, high-quality software can be satisfied by identifying possible
problem areas in the early part of system development. This in turn
means measuring the quality of the software product in the infancy of its
lifecycle. It is generally accepted that measuring the quality of software is
becoming increasingly important. Unfortunately, most of the work in this
area to date has centred around the source program. This has the
disadvantage that it emphasizes only one aspect of the entire lifecycle
(the lifecycle being described as a sequence of steps, beginning with
requirement specification, design, coding and testing phases, through to
the maintenance phase).
Measurement of quality should consider each of these phases and,
in particular, emphasis should be placed on the early phases such as
requirement and design. Monitoring the quality of these two phases has
been shown to provide significant improvements in software quality and
a significant decrease in development costs [2]. Design measurement is
desirable because it allows important aspects of the product to be
captured early in the lifecycle. Studies have concluded that many of the
problems associated with software can be detected before testing begins.
From studies of design methodologies [3], we can find that 90% of the
problems found in the testing phase could have been found in earlier
stages. Much work in software quality is centred around quality metrics.
It was therefore decided that a set of metrics should be investigated to
evaluate quality at the design stage.
Software design metrics
Design metrics fall into two categories:
Product metrics:
Derived from design representations, these can be used to predict
the extent of future activity in a software project, as well as to assess
the quality of the design in its own right. Product metrics can be
further divided into network, stability and information flow metrics.
Process metrics:
Metrics derived from the various activities that make up the design
phase, including effort and timescale metrics, and fault and change
metrics. These are normally used for error detection, measuring the time
spent at each phase of development, measuring cost, and so on. When they
are recorded on a unit basis, they can also be used for unit quality control.
Of the two types, product metrics are the more suitable for
evaluating software design quality, and so these are discussed further.
Network metrics
These metrics, sometimes referred to as call graph metrics, are
based on the shape of the calling hierarchy within the software system.
The associated complexity metric measures how far a design deviates
from a tree structure with neither common calls to modules nor common
access to a database. The theory on which this metric is based is that
both common calls and common database access increase the coupling
between the modules.
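To make this concrete, the following minimal sketch counts how far a design deviates from a pure tree by tallying common calls and shared database accesses. The adjacency-list representation, the function name and the equal weighting of the two violation kinds are illustrative assumptions, not taken from the cited work.

```python
# Sketch of a call-graph (network) metric: deviation from a pure tree.
# Hypothetical representation: calls maps each module to the modules it
# calls; db_access maps each module to the databases it reads or writes.

def tree_deviation(calls: dict[str, list[str]],
                   db_access: dict[str, list[str]]) -> int:
    """Count common calls and common database accesses.

    In a pure tree every module is called by at most one parent and no
    database is shared by two modules; each violation adds 1 to the score.
    """
    # Common calls: a module invoked by more than one caller.
    callers: dict[str, int] = {}
    for caller, callees in calls.items():
        for callee in set(callees):
            callers[callee] = callers.get(callee, 0) + 1
    common_calls = sum(n - 1 for n in callers.values() if n > 1)

    # Common database access: a database touched by more than one module.
    users: dict[str, int] = {}
    for module, dbs in db_access.items():
        for db in set(dbs):
            users[db] = users.get(db, 0) + 1
    shared_db = sum(n - 1 for n in users.values() if n > 1)

    return common_calls + shared_db
```

A score of 0 means the call hierarchy is a pure tree; larger values indicate heavier coupling between modules.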
Stability metrics
Stability metrics are based on the resistance to change that occurs
in a software system during maintenance. The principle behind this type
of metric is that a poor system is one where a change to one module has
a high probability of giving rise to changes in other modules. This, in
turn, has a high probability of giving rise to further changes in other
modules. This work expands an earlier metric that relied on the
subjective estimation of the effect that a change to one module had on
another.
This early work has since been refined. Design stability measures
can now be obtained at any point in the design process, allowing
examination of the program early in its life-cycle for possible
maintenance problems. Design stability measurement requires a more
in-depth analysis of module interfaces and an account of the
'ripple effect' that follows from program modifications (the stability of
the program). The potential 'ripple effect' of a module is defined as the
total number of assumptions made about it by the other modules that
invoke it, share global data or files with it, or are invoked by it.
During program maintenance, if changes are made that affect
these assumptions, a ‘ripple effect’ may occur through the program,
requiring additional costly changes. It is possible to calculate the ‘ripple
effect’ consequent on modifying the module.
The design stability of a piece of software is calculated from the
total potential 'ripple effect' of all its modules. This approach
allows the calculation of design stability measures at any point in the
design process. Areas of the program with poor stability can then be
redesigned to improve the situation.
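A minimal sketch of such a calculation follows, assuming the per-module assumption counts have already been gathered; the representation and the 1/(1 + n) stability score are illustrative assumptions, not the original metric's formulas.

```python
# Sketch of a design stability measure based on potential ripple effect.
# Hypothetical input: assumptions[m] is the number of assumptions that
# other modules (callers, callees, sharers of global data or files) make
# about module m.

def potential_ripple_effect(assumptions: dict[str, int]) -> int:
    """Total potential ripple effect of the whole design."""
    return sum(assumptions.values())

def design_stability(assumptions: dict[str, int]) -> dict[str, float]:
    """Per-module stability score in (0, 1]; higher means more resistant
    to change. Modules with many external assumptions score lowest and
    are candidates for redesign."""
    return {m: 1.0 / (1.0 + n) for m, n in assumptions.items()}
```

Modules whose score falls well below the rest of the design would be the first candidates for the redesign mentioned above.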
Evaluation of Software Design Quality
Here we discuss the development of an approach for software design
quality improvement. This approach will utilize software
developers’ subjective opinions on software design. The solution should
bring forward the software developers’ tacit knowledge of good or poor
software design. Different developers' ideas of what constitutes good or
poor design will likely differ. However, being aware of these
differences should allow the organization to define good software design.
Simultaneously it should help the less skilled programmers to produce
software with better design.
Currently there are two viable solutions to this issue, namely
- The process approach and
- The tool-based approach.
The process approach assumes that an iterative and incremental
process is used to develop the software. In this approach, every time the
developers see code that is in need of refactoring they make a note
in a technical debt list. With a technical debt list the organization can
track the parts of the software that need refactoring. After each iteration
the developers go through these items to see what the actual problem is,
how it came about, and how they plan to fix it to improve the design.
Studying the poorly structured code and the design improvement ideas
allows the less skilled developers to learn about good software design.
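As an illustration only (the approach prescribes no particular format), a technical debt list entry might record fields like these; the field names are assumptions.

```python
# Illustrative record for one technical debt list entry.
from dataclasses import dataclass

@dataclass
class DebtItem:
    location: str       # file or module that needs refactoring
    problem: str        # what the actual problem is
    cause: str = ""     # how it came about (filled in at the review)
    fix_plan: str = ""  # how the design will be improved
    resolved: bool = False

# The team appends items as they notice code in need of refactoring.
debt_list: list[DebtItem] = []
debt_list.append(DebtItem(
    location="billing/invoice.py",
    problem="duplicated tax calculation in three functions",
))
```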
The tool-based approach would utilize the developers' opinions on
software design as input to a tool. Developers would first subjectively
evaluate software elements as good or bad design. The tool would then
analyze each of these software elements. After a sufficient number of
subjectively evaluated software elements had been analyzed with the
tool, it should be possible to create heuristic rules.
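As a sketch of how such a heuristic rule might be derived, the following code picks the metric threshold that best separates elements the developers labelled as bad design from those labelled as good; the single-metric setup and all names are assumptions, not part of the cited tool.

```python
# Sketch of deriving a heuristic rule from subjectively labelled elements.
# Hypothetical setup: each element carries one measured metric value
# (e.g. coupling) and a developer verdict (True = bad design).

def best_threshold(samples: list[tuple[float, bool]]) -> float:
    """Return the cutoff that best separates bad from good elements;
    the resulting rule flags any element whose metric exceeds it."""
    candidates = sorted({value for value, _ in samples})

    def accuracy(t: float) -> int:
        # Count elements the rule "metric > t means bad" classifies right.
        return sum((value > t) == is_bad for value, is_bad in samples)

    return max(candidates, key=accuracy)

# Example: coupling measurements with developer verdicts.
labelled = [(2.0, False), (3.5, False), (7.0, True), (9.1, True)]
rule_cutoff = best_threshold(labelled)  # flag elements with coupling > 3.5
```

With enough labelled elements, a set of such rules would encode the developers' tacit design knowledge in a form the tool can apply automatically.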
With the tool-based approach, an automated aid helps even expert
personnel raise their performance during development. The tool we
discuss here is called SELECTOR [1], which works as a decision support
system. It selects among alternative decisions, hence the name SELECTOR.
Overview of SELECTOR [1]
1. By prompting the user as to the effect each attribute has on the choice
of the final product, the system evaluates the importance of each overall
solution, generates a figure of merit and orders the potential solutions
from most favorable to least favorable.
2. Prototyping is used to provide the additional information that is often
needed to make a decision. SELECTOR guides the manager in developing
appropriate prototypes. Using techniques from decision theory:
(a) the risk associated with each standard solution is evaluated;
(b) the attributes that should be tested by a prototyping experiment to
provide the most information are indicated;
(c) the potential payoff from using the prototype can be estimated;
(d) the maximal amount to spend on the prototype can be computed.
3. The system can be used to let the manager try a series of "what
if" scenarios. The manager can repeatedly enter a series of assumptions
to determine their effect on alternative design strategies. This can
provide additional data before a complex, expensive implementation
or prototype is undertaken.
Given a specification, how does one choose an appropriate design
that meets that specification? The study of formal methods and
program verification only partially addresses this issue. We certainly want
to produce correct programs. However, correct functionality is only one
attribute our system must have. We also need to schedule the development
so that the product is built within our budget and our available time
frame, without using more computing resources than we wish to allocate
to the task. How, then, do we make such decisions?
We consider two cases for this problem. In the first, the manager
knows the relevant information about the trade-offs among, and relative
importance of, the various attributes of the solutions. We have developed
an evaluation measure, called the performance level, that allows a manager
to choose from among several solutions when the relative desirabilities of
the attribute values are known. We call this the certainty case. We then
extend the model to the more realistic case where the effects of each
decision are not exactly known, but we can give a probabilistic estimate
of the various possibilities. We call this the uncertainty case. The
following subsections briefly describe each model.
A. Decisions under Certainty:
Let X be the functionality of a program x. The program x is correct
with respect to specification B if and only if X is a subset of B. We
extend this model to include the other attributes as well, since these
other attributes are often concerned with non-functional characteristics
such as resource usage, schedules and performance. Now assume that
our specifications are vectors of attributes, including the functionality as
one of the elements of the vector. For example, X and Y are vectors of
attributes that specify alternative solutions to specification B. Let S be a
vector of objective functions whose domain is the set of specification
attributes and whose range is [0..1]. We call Si a scaling function; it gives
the degree to which a given attribute meets its goal. We say that X solves
Y if Si(Xi) ≥ Si(Yi) for all i.
We extend our previous definition of correctness to the following:
design x is viable with respect to specification B and scaling function
vector S if and only if X solves B.
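A minimal sketch of the certainty case follows; the scaling functions, attribute choices and the mean-based figure of merit are illustrative assumptions, not SELECTOR's actual formulas.

```python
# Sketch of the certainty case: scaling functions map each attribute
# into [0, 1], and design X "solves" specification B when every scaled
# attribute of X is at least as good as the corresponding goal in B.
from typing import Callable, Sequence

ScalingFn = Callable[[float], float]

def solves(x: Sequence[float], b: Sequence[float],
           s: Sequence[ScalingFn]) -> bool:
    """True if Si(Xi) >= Si(Bi) for every attribute i."""
    return all(si(xi) >= si(bi) for si, xi, bi in zip(s, x, b))

def performance_level(x: Sequence[float], s: Sequence[ScalingFn]) -> float:
    """One simple figure of merit: the mean of the scaled attributes,
    used here only to rank viable designs."""
    return sum(si(xi) for si, xi in zip(s, x)) / len(x)

# Illustrative attributes: (functionality, budget fit, schedule fit),
# each already expressed so that larger is better.
scale = [lambda v: min(v, 1.0)] * 3
spec_b = (0.9, 0.5, 0.5)
design_x = (0.95, 0.6, 0.55)
assert solves(design_x, spec_b, scale)  # design_x is viable w.r.t. spec_b
merit = performance_level(design_x, scale)
```

Ranking every viable design by its figure of merit gives the ordering from most favorable to least favorable described in the SELECTOR overview.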
B. Decisions under Uncertainty:
We have assumed that the relative importance of each attribute is
known a priori. However, we rarely know this with certainty. We therefore
consider the following model, based upon aspects of economic
decision theory.
We can represent the performance level as a matrix PL, where entry
PLi,j of the performance level matrix is the payoff for solution i under
state j. For example, assume that we have two potential solutions x1 and
x2, and three potential states of nature st1, st2 and st3, giving six
possible payoffs in the matrix.
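To make the payoff matrix concrete, here is a small sketch; the payoffs and state probabilities are invented for illustration, and expected value is only one of the criteria decision theory offers for choosing under uncertainty.

```python
# Sketch of the uncertainty case: rows of PL are candidate solutions,
# columns are states of nature, and PL[i][j] is the payoff for solution
# i under state j. All numbers here are invented for illustration.

PL = [
    [0.8, 0.4, 0.1],   # solution x1 under states st1, st2, st3
    [0.5, 0.6, 0.5],   # solution x2 under states st1, st2, st3
]
state_probs = [0.5, 0.3, 0.2]  # assumed probability of each state

def expected_payoff(payoffs: list[float], probs: list[float]) -> float:
    """Expected payoff of one solution given the state probabilities."""
    return sum(v * p for v, p in zip(payoffs, probs))

# Choose the solution with the highest expected payoff.
best = max(range(len(PL)), key=lambda i: expected_payoff(PL[i], state_probs))
# Here x1 wins narrowly: expected payoff 0.54 versus 0.53 for x2.
```

Re-running such a calculation with different assumed probabilities is exactly the kind of "what if" exploration the SELECTOR overview describes.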
Conclusion
More research should be undertaken in the measurement of
software design, adopting different design methodologies using industrial
software data. In this study, data complexity and control flow were used
to measure the quality of the program. In addition to these two
metrics, software has other aspects of quality, such as
maintainability and reliability, which can be affected by the
quality of design. Therefore, further research in these two aspects of
quality is recommended. This study deals with perhaps one of the more
sensitive areas of software quality and has shed some light on the
problems faced in this type of research.
References:
[1] M. V. Mäntylä, "Developing New Approaches for Software Design
Quality Improvement Based on Subjective Evaluations".
[2] W. Chou and J. L. Anderson Jr., "Test Software Evaluation Using
Data Logging".
[3] S. Cardenas-Garcia and M. V. Zelkowitz, "A Management Tool for
Evaluation of Software Designs".