A MULTI-DIMENSIONAL APPROACH
TO PERFORMANCE EVALUATION
FOR I/S DEVELOPMENT

Jay G. Cooprider
John C. Henderson

December 1989

CISR WP No. 197
Sloan WP No. 3103-89

(c) 1989 Massachusetts Institute of Technology

Center for Information Systems Research
Sloan School of Management
Massachusetts Institute of Technology
A Multi-Dimensional Approach to Performance Evaluation for I/S Development

Jay G. Cooprider
John C. Henderson
ABSTRACT

Information systems are a major force in today's organization. Unfortunately, the development of information systems remains very labor-intensive and quite difficult to manage. It is not surprising that I/S management is searching for improved methods for evaluating the performance of I/S developers. This paper makes an initial step in the evaluation of the I/S function by presenting an approach for evaluating the performance of I/S development units and describing its use in a large international technology manufacturing firm.

We first present a diagnostic, behavior-based model for measuring software development from a management perspective. This model proposes measures of development processes and products from the task, social, and organizational levels of analysis. We then describe the application of this model in a large international technology manufacturing firm. Data Envelopment Analysis (DEA) is used as a technique for applying the model to the firm's performance data. The results of the DEA analysis are then used to investigate the performance impacts of various management policies. This evaluation approach provides a systematic method for evaluating development performance. It highlights the importance of using a range of behavior-based measures for evaluating performance, and it illustrates a methodology for examining performance based on such measures.
A Multi-Dimensional Approach to Performance Evaluation for I/S Development
1.0 Introduction

Information systems are a major force in today's organization. They impact how business is conducted and directly affect organizational success. They enable problem-solving capabilities that could not have been imagined even a few years ago. It is not surprising that management is searching for improved methods for evaluating the processes used in information system (I/S) development and the resulting products. Enhanced evaluation capabilities are required both to identify opportunities for improved development practices and to establish the feedback necessary to manage the development process.
The development of information systems remains very labor-intensive and, unfortunately, quite difficult to manage. Despite over thirty years of experience, the management and measurement of software development remains problematic (Boehm 1987, Brooks 1987). Many have argued that a first step in mastering the management of I/S development is to identify and develop methodologies to allow description, measurement, and understanding (Conte et al. 1986, Boehm 1987). There are a number of reasons why such methodologies have been slow to win acceptance, but perhaps the most significant is that a definitive causal model of I/S development does not exist (Brooks 1987, Curtis et al. 1988, Elam and Thomas 1989).

The development environment is rich in alternative processes for converting its resources into usable applications. However, these various processes are not well understood. There is no systematic method for examining the relationship between the resources used and the products delivered (Boehm 1987). The very concept of I/S "productivity" is not well understood (Brancheau and Wetherbe 1987).
One factor that has limited the ability of organizations to effectively evaluate the I/S function is the lack of a systematic approach to measurement using multiple inputs and outputs (Boehm 1987). Many organizations have gathered numerous productivity measures but are finding it difficult to analyze the collected data in a meaningful way.

This research makes an initial step in the evaluation of the I/S function by presenting an approach for evaluating the performance of I/S development units and describing its use in a large international technology manufacturing firm. We first present a model for measuring software development from a management perspective. We then demonstrate an approach for using Data Envelopment Analysis (DEA)(1) to operationalize the model, using data collected from a group of software development sites in a large technology firm. The potential use of this measurement approach for evaluating I/S management practices is then discussed. Finally, we discuss the contributions of this research and recommend directions for further study.

(1) DEA is a linear programming-based technique that converts multiple input and output measures into a single comprehensive measure of production efficiency. See Section 5.0 for a description of DEA and its use.
2.0 Background

"You cannot measure what is not defined. You also cannot tell whether you have improved something if you have not measured its performance." (Strassman 1985, page 100)
Previous research attempting to evaluate I/S development performance has suggested a wide range of possible measures (Conte et al. 1986, Boehm 1987, Banker et al. 1987). However, this research has not generally provided the insight necessary to improve the development process. Hirschheim and Smithson (1987) state that I/S evaluation has been misdirected toward tools and techniques for measurement and away from understanding. For example, many researchers have based their productivity measurements on the number of lines of code produced for a given amount of effort. This approach has long been criticized because of a number of obvious problems. For example, each line of source code does not have the same value and does not require the same amount of time or effort to produce. However, most software productivity researchers still use lines of code as a primary focus (Boehm 1987). While performance measures such as "source lines of code per month" are relatively easy to gather and may provide some insight into the basic production processes of software development, their narrow focus may distort the overall productivity picture and cause managers to overlook promising avenues for performance improvement. The ultimate goal of I/S development is to provide applications that benefit the organization, not to create lines of code.
3.0 A Performance Evaluation Model

One measurement approach for performance analysis that is potentially significant is diagnostic and behavior-based. Since no definitive causal model of the software development process exists, a model of software development performance should not be expected to provide management with unambiguous signals. Rather, a development performance model should be used as a means for management to "diagnose" the development process: to gain awareness of trends, relationships, and useful classification schemes. A measurement model for management serves to raise issues, reveal preferences, provoke interest, and help surface assumptions (Einhorn and Hogarth 1982, Epstein and Henderson 1989). Mason and Swanson (1979) recommend an approach toward measurement for management that extends traditional "scientific" measurement to include behavioral dimensions. A management measurement model should meet these needs by tending to provide insights into the perceptions and behaviors underlying the processes being measured. Boehm (1981) demonstrates that the perceptions of upper management, middle management, and programmers are very different concerning the factors that have the most leverage for affecting development productivity. These perceptions can be important motivational factors for affecting I/S development productivity and they should be reflected in any performance measurement system.

Buss (1987) asserts that top executives, I/S managers, and users often have conflicting views of which project proposals should get approval. These conflicting views can be mitigated to some degree if the language (or measures, in this context) used by each of these perspectives is the same, or is at least translatable. In order to improve the management of the I/S development process, a measurement system tied to the actual behaviors of the process is required.

3.1 Process and Product
Figure 1 presents a general measurement model for the performance of I/S development units that reflects both process and product measures. Many researchers have suggested the usefulness of separate measures for the products and processes of I/S development. For example, Agresti (1981) uses an industrial engineering perspective to propose process and product measures of software development. Since a primary goal of performance measurement is to improve the production processes involved, not only must the final product of the development effort be evaluated, but the processes used to obtain that product must also be considered. Case (1985) argues that it is important to use both types of measures because there is a potential conflict between the efficiency of the process and the quality of the product.

This process-product dimension reflects the more general measurement traditions of organizational control theory. Ouchi (1979) and Eisenhardt (1985), for example, have categorized control measures as either behavior (process) based or outcome (product) based. The measurement model presented in Figure 1 can be viewed as an application of these ideas to the I/S development arena.
Task process measures reflect the economic perspective of efficiency (Packer 1983). Source lines of code or function points per work-month are typical I/S measures of this perspective. Task product measures evaluate the output of this production process. These measures often reflect the quality of the software product. Quality may be measured in terms of defects per unit output (e.g., bugs per thousand lines of code). Alternatively, quality measures may reflect the product's overall technical performance rather than its defect count. Examples of this form of task product measure include run-time speed and object code size.
These task-oriented measures dominate the I/S performance literature (Hirschheim and Smithson 1987). However, there are a number of problems with focusing exclusively on these types of measures. In general, they are measures of output (in an economic sense) rather than outcome (the total impact of the process). They are much more closely aligned with efficiency (how well input is converted to output) than with effectiveness (how the input is used to accomplish the goals of the organization). The addition of social and organizational perspectives (see Figure 1) provides a more complete performance evaluation framework.
Hirschheim and Smithson (1987) argue that both technical (task-oriented) and non-technical (social) criteria must be included for I/S evaluation to be meaningful. Social measures generally evaluate how well the development team performs as a whole. They are significant because members of development teams typically work as groups rather than as separate individuals. These measures are group oriented and, from a management perspective, are concerned with "doing the right things." Social process measures indicate how well members of the development team function together. From the standpoint of I/S development, the primary issue is process dependability. Internal to the team, these measures involve team maintenance (Gladstein 1984) and include such things as commitment, quality of work life, and satisfaction with the team. From an external perspective, the issue is the perceived dependability of the team. This is largely a function of the group's track record, its record of success or failure on past projects. Frequently used measures include the ability to meet delivery schedules and financial commitments.

Social product measures evaluate design validity. That is, does the development team meet project requirements that actually satisfy a real business need? These measures can be critical in providing what Bjorn-Andersen (1984) refers to as a "challenge to certainty." He states that I/S developers tend to spend more and more time and use ever more refined technological tools for solving the wrong problem more precisely. Social product measures must provide the ability to test design validity, and the best source of these tests comes from communication with non-team stakeholders (Churchman 1971, Henderson 1987). Measures for this perspective include the users' evaluation of I/S importance to their jobs and actual system usage. System usage has been operationalized in several ways and has long been advocated as an indicator of the extent to which I/S is meeting business requirements.
Performance measurement occurs within an organizational context, and, hence, should include explicit evaluation from an organizational perspective. Churchman (1968) refers to this as the "systemic" level of measurement: placing a manager's problem situation in a larger context. Very little progress has been made in discovering definitive organizational measures of performance (Crowston and Treacy 1986, Packer 1983). In this paper, the measures of I/S development performance from an organizational perspective involve the contribution of I/S to business success.
Organizational process measures are primarily concerned with the ability of I/S to quickly respond to changes in business need. The motivation for this view is the need to manage organizational resources in the face of significant environmental changes. There are two common approaches for these measures. The first approach evaluates user satisfaction. Cyert and March (1963) argue that satisfaction is a reasonable surrogate measure for how individuals perceive that their needs are being met. Nolan and Seward (1974), Elam and Thomas (1989), and others have argued that user satisfaction is one way to measure the responsiveness dimension of I/S performance.
The second perspective focuses on the organizational resource of time. The main concern in this perspective is the speed with which the organization reacts to changes in the environment. A fundamental measure in this perspective is time-to-break-even: the length of time it takes an organization to convert an I/S concept into a viable product that has earned back its development costs for the organization. This measure highlights the criticality of rapid response to environmental changes while directly incorporating aspects of quality and maintainability. It is rapidly becoming a key measure of I/S performance (Boddie 1987).

The organizational product issue is fundamentally one of business value. It is concerned with whether I/S products contribute to the success of the organization and/or improve its competitive position. Two common measures of this dimension are market share (did the system result in a position of competitive advantage for the organization?) and benefit/cost ratios (what was the payoff of the application?). These measures generally result from business and competitive analysis.
4.0 Operationalization of the Model

The performance evaluation model illustrated in Figure 1 was used as a guide for analyzing data collected during a major measurement initiative of a large international technology manufacturing and marketing organization. The firm had collected data on over sixty measures of I/S development productivity at each of its I/S development sites for a number of years. However, I/S management had no systematic way to use this data to evaluate development performance. The measurement model described above provided management with a framework for analyzing the collected data.
The data used in this research had been collected by the firm from 67 I/S development sites in 1986 and 1987. The sites for which data was collected represent I/S functional groups operating at a business unit/staff level. The average size of each development site is 114 employees, ranging from 9 to 513. The developers at each site report to a distinct management team and have a clear user clientele. These sites vary in terms of user type (i.e., marketing, R&D, corporate staff, etc.) and geographic region (a significant number of non-US sites are included). The data was collected as part of a corporate measurement initiative and involved a management review, clarification, and sign-off procedure.
As might be expected, due to organizational changes, some sites were not represented for both 1986 and 1987. Further, all sites did not report every data element. The data reported here includes only those sites that reported data for both years. An average of each site's data for these two years is used to reduce the effects of random variation across years (analysis of data for units reporting data in at least one of these two years results in a substantial increase in sample size, and the results are consistent with this more conservative approach, thereby indicating the analysis is fairly robust).
The inputs and outputs used for the performance analysis of each site are described below and are illustrated in Figure 2. While one may well imagine indicators that would enhance the operationalization of this model, these measures reflect the data that could be obtained for a reasonable cost. The potential gains from enhancing these measures are discussed in Section 7.0 (Conclusions).
[Figure 2: Operationalization of the model. Inputs: 1. Direct development expense; 2. Operations support expense. Outputs: task measures, social measures, organizational measures.]
4.1 Inputs

The inputs to the analysis are each site's I/S direct development expense and I/S operations support expense. The major portion of the direct development expense is labor costs, while the major share of the operations support expense is facility and machine costs. These two inputs represent surrogate measures for management's choice between investments in labor or capital. In this context, each site's manager must decide to invest in additional development people (labor: direct development costs) or in machine power or software tools (capital: operations support costs).
4.2 Outputs

Efficiency. The first output is function points per work-month. Each site was asked to report the total function points delivered during the year and the work-months expended to develop/deliver those function points. Projects to be included were new development, enhancements, and the selection, modification, and installation of shared or purchased applications delivered during the year. The output was calculated by dividing the total function points delivered by the work-months expended for them.

Quality. The second output, 1/(Defect Work-Months per Installed Function Point), is used as a measure of the quality of the products that have been produced by the site. Each site was asked to report the work-months spent during the year to correct application defects. Application defects were defined as deviations from user approved specifications that require unique corrections to applications in the installed application base (including purchased and shared applications). This excluded work to implement suggested changes and user errors. The reported work-months were divided by the total function points installed at the site at year-end to normalize for site size. The inverse of this result was then calculated so that this output would positively correlate with the models' inputs (an assumption of the Data Envelopment Analysis technique described below).
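For illustration only, the two task-level outputs reduce to simple calculations. The sketch below uses variable names and numbers of our own choosing (not the firm's reporting definitions) and assumes the quantities have already been collected for a single site.

    def efficiency_output(function_points_delivered, work_months_expended):
        # Task process output: function points delivered per work-month.
        return function_points_delivered / work_months_expended

    def quality_output(defect_correction_work_months, function_points_installed):
        # Task product output: inverse of defect-correction work-months per
        # installed function point, so that larger values indicate higher quality.
        defect_work_months_per_fp = defect_correction_work_months / function_points_installed
        return 1.0 / defect_work_months_per_fp

    # Hypothetical site: 1,200 function points delivered in 150 work-months,
    # 30 work-months of defect correction against 24,000 installed function points.
    print(efficiency_output(1200, 150))   # 8.0 function points per work-month
    print(quality_output(30, 24000))      # 800.0 installed function points per defect work-month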
Dependability. Direct measures of quality of work life characteristics within a development group were not available. As a social process variable, the percentage of the site's development commitments met is used as an indicator of dependability. The rationale for this is that positive social processes within a development group should result in the group's increased dependability, that is, a better track record. Each site was asked to report the total work-months spent on I/S development work (whole projects, releases, or phases) that met schedule, cost, and function commitments. This reported value was divided by the site's reported total work-months for all I/S development work with commitments scheduled for completion in the current year.

Validity. The fourth output measures the importance of the I/S applications to the user community. If developers are solving the right business problems, that is, if their solutions have validity, they will be providing applications that are important to the user. Each user was asked the question "How important is information processing to you in doing your job?" The value reported for this output is the percentage of respondents for the site that answered "of utmost importance" or "of considerable importance" to this question.
Responsiveness. The fifth output measures the level of satisfaction of the users with their I/S applications. The motivation for this output is that user satisfaction should be higher for more responsive I/S units. Each user was asked the question "Overall, how satisfied are you with the Information Processing services you use?" The value reported for this output is the percentage of respondents for the site that answered "very satisfied" or "satisfied" to this question.
Value. The final output measures the contribution of I/S to the organization by measuring the ratio of expected gross benefits to related I/S development expenses for projects completed during the year. Each site was asked to report a summary of the benefits and expenses of its business cases for projects completed during the current year. The benefits and expenses included applicable business case adjustments (e.g., the time value of money), and they were reviewed and approved by the controller of the reporting site. The benefits and expenses reported are the planned total aggregate gross benefits and the planned total aggregate related I/S expenses for the full period of the business cases for the projects completed in the current year. The benefits are divided by the expenses to calculate the ratio used for this output.
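As with the task outputs, the remaining outputs are simple ratios and percentages. The following sketch is illustrative only; the function names and figures are hypothetical and are not drawn from the firm's data.

    def dependability_output(committed_work_months_met, total_committed_work_months):
        # Share of work-months on development work that met schedule, cost,
        # and function commitments due in the current year.
        return committed_work_months_met / total_committed_work_months

    def survey_output(favorable_responses, total_responses):
        # Validity and responsiveness: percentage of users giving one of the
        # two most favorable answers to the survey question.
        return 100.0 * favorable_responses / total_responses

    def value_output(planned_gross_benefits, planned_related_expenses):
        # Ratio of planned aggregate gross benefits to planned aggregate
        # related I/S expenses for business cases completed during the year.
        return planned_gross_benefits / planned_related_expenses

    print(dependability_output(310, 400))   # 0.775
    print(survey_output(45, 60))            # 75.0
    print(value_output(5.4e6, 1.8e6))       # 3.0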
5.0 Analysis of Performance Data: DEA

The data collected by the firm for 1986 and 1987 was analyzed using Data Envelopment Analysis (DEA). DEA is a linear programming-based technique that converts multiple input and output measures into a single comprehensive measure of productive efficiency (Charnes, Cooper, and Rhodes 1978). This methodology uses observed outputs and inputs for each unit in an analysis to derive an empirically-based efficiency frontier rather than using a production function to generate a theoretical one. The choice of DEA for analyzing the performance of I/S development units is motivated by the need to simultaneously consider multiple inputs and outputs and to not impose an arbitrary parametric form on the underlying software production processes. In addition, a distinguishing property of DEA is that it optimizes on each unit in the analysis. This is in contrast with ratio analyses, which use no optimizing principle, and regression-based techniques, where optimization is across all observations and results in an average (rather than extremal) value. This attribute makes DEA a meaningful tool for management use and for efficiency evaluation purposes. DEA has been applied to a number of I/S-related settings (Elam and Thomas (1989), Banker, Datar, and Kemerer (1987), Chismar and Kriebel (1986), Kauffman and Kriebel (1988)).
DEA accomplishes its analysis by the construction of an empirically based production frontier and by the identification of peer groups. Each unit is evaluated by comparisons against a composite unit that is constructed as a convex combination of other decision-making units (DMUs) in its peer group (Banker and Morey 1986), and is assigned a DEA efficiency rating between zero and one. The efficiency rating is a ratio of a weighted sum of outputs to a weighted sum of inputs, with the weights associated with the inputs and outputs determined uniquely by solving a linear programming model that assigns the most favorable weights possible. If the computed weights result in an efficiency ratio of one, the unit is on the frontier. The reader should review Charnes, Cooper and Rhodes (1978), Bessent, Bessent, Elam, and Clark (1988), and Epstein and Henderson (1989) for a description of the details of DEA analysis.
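For reference, a minimal sketch of this ratio form (following Charnes, Cooper, and Rhodes 1978) for a unit j_0 with input quantities x_{ij} and output quantities y_{rj} is:

    \max_{u,v}\; h_{j_0} \;=\; \frac{\sum_{r} u_r\, y_{r j_0}}{\sum_{i} v_i\, x_{i j_0}}
    \quad\text{subject to}\quad
    \frac{\sum_{r} u_r\, y_{r j}}{\sum_{i} v_i\, x_{i j}} \;\le\; 1 \;\;\text{for every unit } j,
    \qquad u_r,\, v_i \;\ge\; \varepsilon > 0 .

The output weights u and input weights v are chosen separately for each unit so as to make its own rating h as favorable as possible, subject to no unit exceeding a rating of one under those weights.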
5.1 Applying DEA to the Performance Models

The evaluation model operationalization illustrated in Figure 2 above was utilized to analyze the performance of the reporting sites. In order to accomplish this, three separate DEA models were used. Each model reflects one of the levels of analysis of the overall performance model. By separating the models in this way, it is possible to analyze the performance and management practices of each site for each dimension independently. Figure 3 illustrates the inputs and outputs of each of the three models.

[Figure 3: Inputs and outputs of the three DEA models (model name, inputs, outputs).]
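For concreteness, a DEA efficiency rating of the kind described in Section 5.0 can be computed for each site with a general-purpose linear programming routine. The sketch below solves the input-oriented CCR multiplier form with scipy; it is not the specialized package used in the study, and the site data shown are entirely hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, o):
        """Input-oriented CCR efficiency of unit o, a value in (0, 1].
        X is (n_units, n_inputs); Y is (n_units, n_outputs)."""
        n, m = X.shape
        _, s = Y.shape
        # Decision variables: output weights u (length s), then input weights v (length m).
        c = np.concatenate([-Y[o], np.zeros(m)])              # maximize weighted outputs of unit o
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
        b_eq = [1.0]                                          # normalize weighted inputs of unit o to 1
        A_ub = np.hstack([Y, -X])                             # weighted outputs <= weighted inputs for all units
        b_ub = np.zeros(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (s + m))
        return float(-res.fun)

    # Hypothetical data: four sites, two inputs (expenses), two outputs.
    X = np.array([[100.0, 40.0], [80.0, 60.0], [120.0, 50.0], [90.0, 30.0]])
    Y = np.array([[800.0, 0.90], [700.0, 0.80], [850.0, 0.70], [820.0, 0.95]])
    ratings = [ccr_efficiency(X, Y, o) for o in range(len(X))]
    print(ratings)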
The limited number of sites reporting complete data suggests that the generalization of this analysis, even to this single firm, must be limited.(2)

(2) Follow-up interviews with the firm's senior management indicated that the response rate was not the result of a systematic performance bias. Rather, the consistent feedback was that the organization's extensive data collection initiative had not previously been combined with an analysis process usable by site management to improve performance. As a result, site I/S managers viewed the collection effort strictly as a cost rather than a benefit, and, therefore, many sites did not participate.

A specialized software package was used to perform the DEA analyses for this study.
Using the measures described above, each of the models was run for the development sites for which data was collected. Table 1 shows the results of the analyses for sites with both 1986 and 1987 data. (As indicated earlier, additional runs were made using sites that had data from a single year. This greatly increased the sample size but did not lead to significant changes in the analysis results. Therefore, this conservative approach is taken.)

Two points should be made about the results illustrated in this table. First, it should be noted that extremely few sites in the models have values that are inefficient and enveloped. Though there are a number of approaches for analyzing unenveloped sites (Bessent et al. 1988), there is some debate concerning the ability of DEA to reliably assess the performance of such unenveloped sites (Epstein and Henderson 1989). Second, in each of the models there are generally relatively few sites which are efficient or nearly efficient, and a large gap exists between the efficiency ratings of those sites and the ratings of the other sites. Banker, Gadh, and Gorr (1989) suggest that DEA analysis can suffer greatly when large measurement errors are present in observed data, especially for data sets with a small sample size. There was concern that such errors might be artificially raising the efficiency ratings of the few firms on the DEA frontier.
[Table 1: Site DEA efficiency ratings; all sites with data are included.]
Outliers can have a sizable adverse impact on a DEA analysis (Epstein and Henderson 1989). Because of these factors, there was concern that the derived DEA frontier was not truly representative of the larger set of sites. (It should be noted that some recent developments in DEA, stochastic DEA (Banker 1989), for example, have addressed this issue of outliers, measurement error, and small sample size, but they were not used here due to the study's exploratory nature.) After review with the organization's senior I/S management, it was determined that the efficient sites had unique characteristics: generally, they were exceptionally small and/or had exceptionally low wage rates, for example. For the purposes of this study, sites with these unique characteristics were removed from the data set. Twenty-seven sites were used for all further analyses. The three DEA models were run again with this reduced data set, and the results of this second analysis are presented in Table 2. It should be noted that a larger percentage of sites are enveloped and that there is a more even distribution of efficiency ratings.

The significance of the different dimensions of performance is highlighted by the results shown in Tables 1 and 2. In Table 2, for example, Site 1 has high efficiency ratings for each of the models. This can be interpreted as indicating that Site 1 has high performance on each of the dimensions of performance. Site 13, on the other hand, has high efficiency for every model but Task. Management attention at that site might be directed toward improving basic production capabilities, through the use of CASE tools or reusable code modules, for example. Site 5 has relatively low performance on each of the dimensions, and requires more general management attention.
[Table 2: Site DEA efficiency ratings; "outlier" sites removed from each model.]
6.0 Analysis of Management Practices

Further analyses were carried out to provide management with additional information for judging management practices. The DEA efficiency models directly relate inputs to outputs. What is missing from such analyses is any evidence of direct links between performance ratings and specific management practices. To establish this link, data was gathered on the investment in education and the strategic planning processes found at each site. In essence, these two variables are treated as controlled policy variables by the local I/S manager. Specifically, the analysis is intended to assess how these management practices relate to the overall performance of the I/S function. The two policy variables are:

Education. The annual number of student days invested in formal I/S technical and career development education was collected for each site. Formal education included classroom, guided self-study, and remote satellite education. This measure was collected in two parts: technical education and career education. Technical education included formal education on all I/S subjects. Career education included fundamental business process education not related to I/S (e.g., advanced management education, manufacturing, sales training, quality, etc.). These values were divided by the total number of I/S employees at the site to obtain a value for average technical student days per employee, career student days per employee, and total student days per employee.
Strategic Linkage. The general or line manager(s) at each site reported an evaluation of the involvement of the I/S organization in strategic planning activities. This ranking was on a scale from 0 to 4, with '0' meaning that no I/S strategy existed, and '4' indicating that an I/S strategy existed that was developed by joint "pro-active" participation of I/S and business function participants.
In order to analyze the impact of these policy variables on the different perspectives of performance analysis, a simple correlation was calculated between the values of each policy variable and the DEA efficiency ratings for each of the three models. From this analysis, the relationships between the policy variables and the development performance dimensions can be discussed.
Figure 4 shows the correlations between the DEA efficiency ratings of the models and the education policy variables. Several tentative conclusions can be drawn from these correlations.
First, education significantly correlates with performance. This significance holds for the task and social models, and the organizational model has a large positive correlation but with a sample size too small to establish statistical significance. This result is clearly indicative of the important role that education can play in improving the performance of I/S development units, and it is consistent with a range of research reporting on the effect of education on this dimension of performance (Chrysler 1978, Lutz 1984, Jones 1986, Banker et al. 1987). It also bears mentioning that career education, in particular, strongly correlates with each of the DEA efficiency ratings. This result highlights the value of educating I/S developers in business functional areas as well as technical topics, corroborating similar findings by Banker et al. (1987).

Figure 5 shows the correlations between the DEA efficiency ratings of the sites and their reported values for strategic linkage. While all correlations are positive, it is somewhat surprising that the only relationship that is statistically significant is between the line managers' assessment of strategic linkage and the task model efficiency ratings. It might have been expected that the social and, especially, organizational models would show significant correlations instead. Given the limited ability to obtain multiple measures, one must be careful not to eliminate potential problems in the measures as an explanation for these low correlations. Regardless, the positive correlations at all levels are indicative of the significance of strategic linkage to overall I/S development performance, even if the statistical significance of the relationship is not established.
7.0 Conclusions

This paper has presented an approach for evaluating the performance of I/S development units. The approach begins by establishing an overall model for analyzing development performance. This model uses two dimensions: process-product and level of analysis (task, social, and organizational). DEA is used as a technique for applying the model to an actual business situation. In order to apply DEA, the overall model is divided into separate models by level of analysis, and each model is analyzed. The results of these analyses were then correlated with various management policy variables to investigate the impact of specific policies on the various dimensions of performance.
This approach provides a systematic method for evaluating development unit performance and establishes a way to assess the performance of individual units as well as the impact of particular management practices. It highlights the importance of using a range of behavior-based measures for evaluating performance, and illustrates a technique for examining performance based on such measures.
Since this approach is comparative, the issue of appropriate input and output measures is critical. While the measures selected for this study provide useful examples, additional effort is clearly required to refine these measures and to develop additional measures. In particular, measures from an organizational perspective need further development, particularly in the area of I/S responsiveness. Similarly, more attention could be given to the effective measurement of the policy variables. Still, these results suggest that efforts to obtain better measures could be further justified by the ability to incorporate them in a meaningful assessment approach.
Generalizations about the specific results of this study or about the effectiveness of the approach have to be limited given the small sample size and the fact that only a single organization was involved. However, the I/S senior management team did evaluate and use these findings. As a result, management has committed to incorporate this approach in future performance analysis efforts. In addition, the results have served as a major motivation for future data collection efforts in the organization. This response is evidence that the approach can be a major element of larger programs to measure and improve I/S processes.
Bibliography
Agresti, W. "Applying Industrial Engineering to the Software Development Process," in Proceedings: IEEE Fall COMPCON, IEEE Computer Society Press, Washington D.C., 1981, pp. 264-270.

Banker, R. "Stochastic Data Envelopment Analysis," Management Science, forthcoming.

Banker, R., Datar, S., and Kemerer, C. "Factors Affecting Software Maintenance Productivity: An Exploratory Study," Proceedings of the Eighth International Conference on Information Systems, Pittsburgh, PA, December 1987, pp. 160-175.

Banker, R., Gadh, V., and Gorr, W. "A Monte Carlo Comparison of Efficiency Estimation Methods," Working Paper, Carnegie Mellon University, 1989.

Banker, R. and Morey, R. "The Use of Categorical Variables in Data Envelopment Analysis," Management Science (32:12), December 1986, pp. 1613-1627.

Bessent, A., Bessent, W., Elam, J., and Clark, T. "Efficiency Frontier Determination by Constrained Facet Analysis," Operations Research (36:5), May 1988, pp. 785-796.

Bjorn-Andersen, N. "Challenge to Certainty," in Beyond Productivity: Information Systems Development for Organizational Effectiveness, T. Bemelmans (ed.), North-Holland, Amsterdam, 1984.

Boddie, J. Crunch Mode: Building Effective Systems on a Tight Schedule, Prentice Hall, Englewood Cliffs, NJ, 1987.

Boehm, B. "Improving Software Productivity," in Proceedings: IEEE Fall COMPCON, IEEE Computer Society Press, Washington D.C., 1981, pp. 264-270.

Boehm, B. "Improving Software Productivity," IEEE Computer (20:9), September 1987, pp. 43-57.

Brancheau, J. and Wetherbe, J. "Key Issues in Information Systems Management," MIS Quarterly (11:1), March 1987, pp. 23-45.

Brooks, F. "No Silver Bullet: Essence and Accidents of Software Engineering," IEEE Computer (20:4), April 1987, pp. 10-19.

Buss, M. "How to Rank Computer Projects," in Information Analysis: Selected Readings, R. Galliers (ed.), Addison-Wesley, Sydney, 1987.

Case, A. "Computer-Aided Software Engineering," Database (17:1), Fall 1986, pp. 35-43.

Charnes, A., Cooper, W., and Rhodes, E. "Measuring the Efficiency of Decision Making Units," The European Journal of Operational Research (2:6), 1978, pp. 429-444.

Chismar, W. and Kriebel, C. "A Method for Assessing the Economic Impact of Information Systems Technology on Organizations," Proceedings of the Sixth International Conference on Information Systems, Indianapolis, December 1986, pp. 45-56.

Chrysler, E. "Some Basic Determinants of Computer Programming Productivity," Communications of the ACM (21:6), June 1978, pp. 472-483.

Churchman, C. "Suggestive, Predictive, Decisive, and Systemic Measurement," Paper presented at the Second Symposium on Industrial Safety Performance Measurement, National Safety Council, Chicago, Dec. 1968.

Churchman, C. The Design of Inquiring Systems, Basic Books, 1971.

Conte, S., Dunsmore, H., and Shen, V. Software Engineering Metrics and Models, The Benjamin/Cummings Publishing Co., Menlo Park, CA, 1986.

Crowston, K. and Treacy, M. "Assessing the Impact of Information Technology on Enterprise Level Performance," in Proceedings of the Seventh International Conference on Information Systems, San Diego, CA, December 1986, pp. 299-310.

Curley, K. and Henderson, J. "Evaluating Investments in Information Technology: A Review of Key Models," Proceedings of the 1989 ACM SIGOIS Workshop on the Impact and Value of Information Systems, Minneapolis, MN, June 1989.

Curtis, B., Krasner, H., and Iscoe, N. "A Field Study of the Software Design Process for Large Systems," Communications of the ACM (31:11), November 1988, pp. 1268-1286.

Cyert, R. and March, J. A Behavioral Theory of the Firm, Prentice-Hall, Englewood Cliffs, NJ, 1963.

Einhorn, H. and Hogarth, R. "Prediction, Diagnosis, and Causal Thinking in Forecasting," Journal of Forecasting (1:1), 1982, pp. 23-26.

Eisenhardt, K. "Control: Organizational and Economic Approaches," Management Science (31:2), February 1985, pp. 134-149.

Elam, J. and Thomas, J. "Evaluating the Productivity of Information Systems Organizations in State Government Environments," Public Productivity Review (12:3), Spring 1989, pp. 263-277.

Epstein, M. and Henderson, J. "Data Envelopment Analysis for Managerial Control and Diagnosis," Decision Sciences (20:1), Winter 1989, pp. 90-119.

Gladstein, D. "Groups in Context: A Model of Task Group Effectiveness," Administrative Science Quarterly, Vol. 29, 1984, pp. 499-517.

Henderson, J. "Managing the IS Design Environment," Center for Information Systems Research Working Paper No. 158, Sloan School of Management, M.I.T., Cambridge, MA, May 1987.

Hirschheim, R. and Smithson, S. "Information Systems Evaluation: Myth and Reality," in Information Analysis: Selected Readings, R. Galliers (ed.), Addison-Wesley, Sydney, 1987.

Jones, T.C. Programming Productivity, McGraw-Hill, New York, 1986.

Kauffman, R. and Kriebel, C. "Measuring and Modeling the Business Value of IT," in Measuring Business Value of Information Technologies, ICIT Research Study Team #2, ICIT Press, Washington, D.C., 1988.

Lutz, T. Foundation for Growth -- Productivity and Quality in Application Development, Nolan, Norton and Company, Lexington, MA, 1984.

Mason, R. and Swanson, E. "Measurement for Management Decision: A Perspective," California Management Review (21:3), Spring 1979, pp. 70-81.

Nolan, R. and Seward, H. "Measuring User Satisfaction to Evaluate Information Systems," in Managing the Data Resource Function, R. Nolan (ed.), West Publishing Co., Los Angeles, 1974.

Ouchi, W. "A Conceptual Framework for the Design of Organizational Control Mechanisms," Management Science (25:9), September 1979, pp. 833-848.

Packer, M. "Measuring the Intangible in Productivity," Technology Review, February/March 1983, pp. 48-57.

Strassman, P. Information Payoff: The Transformation of Work in the Electronic Age, The Free Press, New York, 1985.