WORKING PAPER
ALFRED P. SLOAN SCHOOL OF MANAGEMENT
DECOMPOSITION AND THE CONTROL OF ERRORS
IN DECISION ANALYTIC MODELS*
Don N. Kleinmuntz
Sloan School of Management
Massachusetts Institute of Technology
SSWP #2025-88
May 1988
MASSACHUSETTS
INSTITUTE OF TECHNOLOGY
50 MEMORIAL DRIVE
CAMBRIDGE, MASSACHUSETTS 02139
*For presentation at the Conference on Insights in Decision Making: Theory and Applications, University of Chicago, June 12-14, 1988. I would like to thank Jim Dyer and H.V. Ravinder for providing many suggestions and stimulating ideas about the topics covered in this paper. The comments and suggestions of the participants of the M.I.T. Judgment and Decision Making Research Seminar are appreciated.
Decomposition and the Control of Errors
in Decision Analytic Models*

Don N. Kleinmuntz
Sloan School of Management
Massachusetts Institute of Technology

May 1988

ABSTRACT:
Decision analytic models rely upon the general principle of problem decomposition: Large and complex decision problems are reduced to a set of relatively simple judgments. The component judgments are then combined using mathematical rules derived from normative theory. This paper discusses the value of decomposition as a procedure for improving the consistency of decision making. Various definitions of error and consistency are discussed. Linear decomposition models are argued to be particularly useful for the control of random response errors in the component judgments. Implications for decision analysis research and practice are considered, and decision makers' evaluations of the costs and benefits of decision analysis are discussed.
1 Introduction
One of the distinctive characteristics of decision research is the continuing interaction between descriptive and normative theories of judgment and choice. Historically, this involved a one-sided exchange, where the normative theory was taken as a given: intuitive responses were compared to normative standards of optimality or rationality and, often, were deficient (Edwards, 1961; Einhorn & Hogarth, 1981; Kahneman, Slovic, & Tversky, 1982; Rapoport & Wallsten, 1972; Slovic, Fischhoff, & Lichtenstein, 1977). More recently, the exchange of ideas has become a dialogue. For instance, psychologists and economists have raised important questions about the descriptive validity of rationality assumptions in economic theory (Hogarth & Reder, 1987; Simon, 1978; Thaler, 1985). Another exciting development has been the use of the results of descriptive studies of decision making to guide attempts to reformulate the axiomatic foundations of utility theory (Bell & Farquhar, 1986; Fishburn, 1982; Machina, 1987).
One area that blends normative logic with descriptive insight is the set of techniques known as decision analysis. These techniques represent an engineering approach to decision making, drawing on both normative and descriptive theory to prescribe methods intended to improve the effectiveness of decision making (von Winterfeldt & Edwards, 1986; Winkler, 1982). Decision analysis relies upon the general principle of problem decomposition: A large and complex decision problem is reduced to a set of relatively simple judgments about the alternatives, beliefs, and preferences of the decision maker. These component judgments can then be combined using mathematical rules derived from normative theory. An obvious advantage of this approach is the reduction of information processing demands, since the decision maker can focus on individual components of the problem in turn. Also, the cognitively demanding task of information combination can be performed by the model, typically implemented on a computer. Furthermore, the framework is general enough to incorporate information from diverse sources, including both 'hard' data and 'soft' subjective assessments.
One of the important contributions of psychological research to decision analysis has been in understanding the methodological problems associated with the assessment of the component judgments. In particular, research has emphasized the potential for serious errors in the assessment of uncertainty (Hogarth, 1975; Lichtenstein, Fischhoff, & Phillips, 1982; Wallsten & Budescu, 1983) and preference (Farquhar, 1984; Fischhoff, Slovic, & Lichtenstein, 1980; Hershey, Kunreuther, & Schoemaker, 1982). The general conclusion is that these judgments are extraordinarily sensitive to: (1) features of assessment procedures like response modes and question formats, (2) aspects of the problem context, (3) the decision maker's own preconceptions, and (4) the interaction between the decision maker and the analyst (Fischhoff, 1980). For a variety of reasons, no single assessment methodology has emerged or is likely to emerge as error-free (Fischhoff, 1982). Practical recommendations for coping with error include using multiple assessment procedures for consistency checking and using sensitivity analysis to determine whether a model is robust over a plausible range of assessment errors (von Winterfeldt & Edwards, 1986). Another approach that has been suggested is to build error theories that can help to predict and explain the cumulative impact of assessment errors (Fischer, 1976; Fischhoff, 1980).
The purpose of this paper is to contribute to the development of an error theory of this type. Specifically, I will argue that decompositions that use a linear aggregation rule have a built-in error control mechanism that can help decision makers achieve greater consistency despite the presence of errors in component assessments. Understanding the nature of this error control can provide useful insights into the theory and practice of decision analysis and can help to identify some unresolved issues related to the use of decomposition as a decision-aiding approach.

The paper is organized as follows: First, I will discuss various definitions of error in judgment. Second, the ability of linear decompositions to control response errors is discussed. Third, some practical issues related to the use of decomposition and some important issues for further research are discussed. The discussion concludes with an analysis of the conflicting objectives associated with the use of decomposition in decision analysis.
2 Definitions of Error in Judgment

There are numerous ways to define and measure the quality of judgment and choice. Much of the decision making literature is concerned in one way or another with the issue of consistency or inconsistency of behavior. Hogarth (1982) raises a number of key points about the meaning of consistency that are worth repeating here. First, consistency is a relative concept: behavior is consistent or inconsistent when compared to some criterion or standard. One possible standard is consistency with the logical premises of a normative system (e.g., utility theory, probability theory, deductive logic, and so on). Hogarth uses the term logical consistency to convey the notion that this comparison has an all-or-none quality, since responses are either consistent with the normative system at a particular point in time or not.
A different comparison can be obtained by examining the consistency of behavioral processes across time. Hogarth uses the term process consistency to refer to the extent to which an individual applies the same psychological rule or strategy when making judgments. One form of process consistency is the extent to which an individual applies the same underlying process across different problems. An example is provided by subjects who demonstrate different preferences depending on how a decision problem is described (Tversky & Kahneman, 1981). Note that Tversky and Kahneman have argued that these 'framing' effects are also a violation of logical consistency. Specifically, they identify a normative rule called invariance: "Two characterizations that the decision maker, on reflection, would view as alternative descriptions of the same problem should lead to the same choice even without the benefit of such reflection" (Tversky & Kahneman, 1987, p. 69). However, violations of process consistency do not necessarily lead to violations of logical consistency or vice versa. For instance, a decision maker might reflect on two problem descriptions and decide that they are not alternate descriptions of the same problem. The use of different decision processes in those two problems need not have any normative implications.
Another form of process consistency involves the consistency of responses to the same problem on two independent occasions. This form of response variability might be caused by unpredictable fluctuations in attention or shifts in processing strategy. One approach to analyzing the two forms of process consistency is to use a statistical model of the response process: Judgments are jointly determined by a systematic component and a random component (Eliashberg & Hauser, 1985; Hershey & Schoemaker, 1985; Laskey & Fischer, 1987; Wallsten & Budescu, 1983). The two components may be interpreted in a number of different ways. For instance, one could view the systematic component as the individual's 'true' internal opinion, and the random component as distortions introduced in the expression of those judgments. A strong version of this assumption is to assume that the internal opinions are logically consistent. Under this assumption, in principle it is possible to estimate the normatively appropriate true opinion from the observed judgments (Lindley, 1986; Lindley, Tversky, & Brown, 1979).
Unfortunately, there are persuasive arguments against the existence of logically consistent internal opinions. Judgments of both probabilities and preferences have been shown to suffer from systematic response errors as well as random variability. One of the most pervasive systematic effects is response mode bias: the use of different question forms or response scales will produce consistent, predictable differences in assessed probabilities (Hogarth, 1975; Wallsten & Budescu, 1983) and in assessed preferences (Goldstein & Einhorn, 1987; Hershey, Kunreuther, & Schoemaker, 1982; Hershey & Schoemaker, 1985; Slovic & Lichtenstein, 1983). Another problem with assuming well-formed internal opinion is that there seem to be many instances where both uncertainties and preferences are ambiguous (Einhorn & Hogarth, 1985; Fischhoff, Slovic, & Lichtenstein, 1980; March, 1978). It appears that both beliefs and preferences are not known in advance, but are constructed when needed.
The lack of fixed, precise internal judgments has important implications for measuring response error: If the same situation were repeated a number of times, and an individual was able to respond independently each time, then the set of responses would constitute a hypothetical statistical distribution. The expected value of this distribution is the systematic component of judgment, which should be viewed not as a true internal opinion but rather, as jointly determined by the person, task, and situation. The variance of this distribution is the random component of judgment, which reflects both unpredictable response errors and, perhaps, inherently stochastic variability in underlying opinions (Eliashberg & Hauser, 1985). This partition of judgment into systematic and random components is based on the psychometric concepts of validity and reliability, so there are a variety of techniques available for estimation (Wallsten & Budescu, 1983, pp. 152-154).

What is the appropriate standard for consistency in decision making? This is, of course, a 'loaded' question, since there is no reason to presuppose that only one standard exists.
Because decision analysis is firmly grounded in a particular normative theory, von Neumann-Morgenstern utility theory, logical consistency has usually been granted precedence. However, recent developments in utility theory have created an embarrassment of riches: Recent reviews report a flurry of research on alternative utility theories (Bell & Farquhar, 1986; Machina, 1987). Many of these developments involve weakening utility theory axioms related to either transitivity of preferences or independence of preferences and probabilities. The issue is no longer whether decisions ought to be consistent with a normative standard, but rather, which normative standard to be consistent with.
Unfortunately, to date no one has proposed a logical basis for choosing a logical basis for choice. However, an important consideration is related to the fact that many of the new formulations of utility theory are nonlinear. It is as yet uncertain whether the analytical convenience of linear models in decision analysis can be maintained with nonlinear models (Bell & Farquhar, 1986). In addition, linearity may have other important advantages. Specifically, in the next section it is argued that the use of linear decompositions in decision analysis can promote process consistency by controlling random response errors.
3 How Does Decomposition Control Error?

Consider two different strategies for making a judgment: One is an unaided intuitive holistic procedure, where the individual does the best he or she can given presumably limited information processing capabilities. The other strategy is a formal decomposition procedure, where a large number of intuitive component judgments are made and a mathematical rule is used to aggregate the judgments. What impact does the use of decomposition have on response error? Following the previous discussion, systematic and random error are considered in turn.

3.1 Systematic Error and Convergent Validity
One way to address the issue of systematic error is to compare the judgments obtained through decomposition to holistic judgments. These comparisons are tests of convergent validity: If the systematic portions of two different judgment procedures agree, then it should be possible to observe a high degree of correlation between judgments obtained using each procedure. A number of studies have examined the correlation between multiattribute utility models and intuitive ratings or rankings of a set of alternatives. For a set of eleven studies, the typical range of correlations was between .7 and .9, which supports the notion that the systematic components converge (von Winterfeldt & Edwards, 1986, pp. 364-366). Most of these studies focused on one type of multiattribute utility model, riskless additive models. However, there is also evidence indicating a high degree of convergence between riskless and risky utility models, as well as additive and multiplicative decompositions (Barron, von Winterfeldt, & Fischer, 1984; Fischer, 1977).

In those situations where decomposed and holistic judgments do tend to agree, it seems plausible to conclude that decomposition neither increases nor decreases the frequency or severity of systematic response errors. The limited evidence accumulated to date indicates a high degree of agreement between the systematic portion of decomposition models and the presumably defective systematic portion of holistic judgments. This raises serious questions about the efficacy of the decomposition approach for controlling systematic biases in judgment. Some caution in interpreting these results is needed: High correlations simply indicate that the systematic portions of the two judgments are linearly related. To actually determine that decomposition and holistic judgments are the same requires examining the intercept and slope of the regression of one on the other, a comparison that is not typically reported in convergent validity studies. Clearly, additional investigations of the influence of decomposition models on systematic biases are needed.
3.2 Random Error and Models of Judgment

There have been numerous studies that used statistical methods to develop descriptive models and measure random response errors in holistic judgment (Anderson, 1981; Einhorn, Kleinmuntz, & Kleinmuntz, 1979; Hammond, Stewart, Brehmer, & Steinmann, 1975; Slovic & Lichtenstein, 1971). In a typical study, an individual is presented with a series of multidimensional stimuli and responses are recorded. A statistical technique (e.g., multiple regression) is used to fit a model of the functional relationship between the stimuli and the judgments. The statistical model incorporates an explicit random error term, whose variability can be estimated along with other model parameters. Using either this measure of variability or a closely related goodness-of-fit measure (e.g., the correlation between predicted and actual judgments), it is possible to directly assess the consistency of the judgment process (Hammond, Hursch, & Todd, 1964; Hammond & Summers, 1972). In other words, the statistical model explicitly measures the relative contribution of the systematic and random components of holistic judgment.
The ability to separate the systematic and random components of judgments has an important practical implication: It is possible to improve the consistency of judgment by replacing the human decision maker with the systematic portion of the judgment process that was captured in the model (Bowman, 1963; Dawes, 1971; Goldberg, 1970). There is considerable evidence that replacing an expert's judgments with a linear model of the expert leads to superior performance because the model uses the judge's own knowledge, but uses it more consistently (Camerer, 1981; Dawes, 1979). On the other hand, this procedure of bootstrapping the expert is "vulnerable to any misconceptions or biases the judge may have. Implicit in the use of bootstrapping is the assumption that these biases will be less detrimental to performance than the inconsistency of unaided human judgment" (Slovic & Lichtenstein, 1971, p. 722). This assumption is plausible in large part because the systematic components of linear models tend to be insensitive to differences in model formulation or parameter estimation (Dawes & Corrigan, 1974; von Winterfeldt & Edwards, 1986, pp. 420-447).
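To make the bootstrapping idea concrete, here is a minimal sketch in Python. It is not from the original paper; the cue weights, noise level, and data are invented purely for illustration.

```python
# A hypothetical judge whose holistic ratings follow a linear policy plus
# random response error; 'bootstrapping' replaces the judge with a fitted
# linear model of the judge. All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 200, 3

X = rng.uniform(0, 1, size=(n_cases, n_cues))         # cue profiles
policy = np.array([0.5, 0.3, 0.2])                    # systematic component
judgments = X @ policy + rng.normal(0, 0.1, n_cases)  # plus random error

# Fit the linear model of the judge (multiple regression with intercept).
A = np.column_stack([np.ones(n_cases), X])
coef, *_ = np.linalg.lstsq(A, judgments, rcond=None)
model_predictions = A @ coef

# The model's outputs track the systematic policy more closely than the raw
# judgments do, because regression averages out the random response error.
print("raw judgments vs. policy:  ", np.corrcoef(judgments, X @ policy)[0, 1])
print("bootstrap model vs. policy:", np.corrcoef(model_predictions, X @ policy)[0, 1])
```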
The success of bootstrapping suggests that formal models, broadly defined, constitute a useful strategy for controlling one specific source of random error, the inconsistencies introduced by the judge's attempts to combine information intuitively. In fact, the psychological literature clearly supports the use of mathematical aggregation rules in place of intuitive information combination processes (Meehl, 1954; Sawyer, 1966; Einhorn, 1972). However, there is an additional source of error when using decomposition: The components themselves are judgments and are therefore subject to errors and inconsistencies. The critical issue is whether errors in the component judgments propagate through the aggregation process and influence the resulting judgment.
3.3 Error Propagation in Linear Decomposition

A model of this error propagation process was recently proposed by Ravinder, Kleinmuntz, and Dyer (1988, abbreviated as RKD). RKD originally developed this model to analyze the impact of decomposition on probability assessment. The specific question addressed was whether it would be better to intuitively assess the probability of some uncertain future event A or to use a decomposition that explicitly uses information about the relationship between that event and a set of background events, denoted B_1, ..., B_n. Each of the events B_i could be a single event or a scenario composed of the intersection of multiple events. For instance, one could assess different distributions for the future price of oil conditional on different developments in the political situation in the Persian Gulf, or conditional on a set of more complex scenarios. In addition, it is also necessary to assess the marginal probability of each conditioning event. The major technical requirement is that the background events form a mutually exclusive and exhaustive set of events. This implies that Pr(B_j \cap B_k) = 0 for j \neq k and \sum_{i=1}^{n} Pr(B_i) = 1, so the probability of the target event is:

    Pr(A) = \sum_{i=1}^{n} Pr(A | B_i) Pr(B_i).    (1)
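As an illustration of the arithmetic in equation 1, the following minimal Python sketch computes the decomposed probability from hypothetical assessments (all numbers are invented for illustration):

```python
# Hypothetical illustration of equation 1: assessing Pr(A) by conditioning
# on a mutually exclusive and exhaustive set of background scenarios B_i.

pr_b = [0.2, 0.5, 0.3]            # marginal probabilities Pr(B_i), sum to 1
pr_a_given_b = [0.9, 0.6, 0.2]    # conditional assessments Pr(A | B_i)

assert abs(sum(pr_b) - 1.0) < 1e-9, "background events must be exhaustive"

# Equation 1: Pr(A) = sum_i Pr(A | B_i) * Pr(B_i).
pr_a = sum(p_ab * p_b for p_ab, p_b in zip(pr_a_given_b, pr_b))
print(f"Pr(A) = {pr_a:.3f}")      # 0.9*0.2 + 0.6*0.5 + 0.2*0.3 = 0.54
```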
While the RKD model was originally intended to deal only with probability assessment, the assumptions are general enough to permit the analysis of almost any linear decomposition that can be formulated as a weighted average. This includes additive multiattribute utility or value functions, where an outcome or alternative X is described by a set of attributes x_1, ..., x_n. The component assessments consist of single-attribute utility assessments u_i(x_i) and scaling constants (or weights) k_i that sum to 1. The result is the aggregate utility assessment:

    U(X) = \sum_{i=1}^{n} k_i u_i(x_i).    (2)
Another common example is the evaluation of lotteries or other risky prospects using subjective expected utility. A lottery L consists of a set of uncertain outcomes denoted x_i, each having probability p_i of occurring, where \sum_{i=1}^{n} p_i = 1. If the preferences for the outcomes are described by a utility function u_i(x_i), then the expected utility is a linear combination of the utilities of each outcome weighted by the probabilities:

    EU(L) = \sum_{i=1}^{n} p_i u_i(x_i).    (3)
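Equations 2 and 3 share the same weighted-average form, which the following sketch makes explicit. The scaling constants, utilities, and probabilities are hypothetical values chosen only to show the computation:

```python
# Hypothetical illustration of equations 2 and 3: both are weighted sums of
# component evaluations, differing only in the interpretation of the weights.

# Equation 2: additive multiattribute utility U(X) = sum_i k_i * u_i(x_i).
k = [0.5, 0.3, 0.2]            # scaling constants, assumed to sum to 1
u = [0.8, 0.4, 0.6]            # single-attribute utilities u_i(x_i) on [0, 1]
U_X = sum(k_i * u_i for k_i, u_i in zip(k, u))
print(f"U(X)  = {U_X:.3f}")    # 0.40 + 0.12 + 0.12 = 0.64

# Equation 3: expected utility EU(L) = sum_i p_i * u_i(x_i).
p = [0.25, 0.50, 0.25]         # outcome probabilities, summing to 1
u_out = [1.0, 0.5, 0.0]        # utilities of the lottery outcomes
EU_L = sum(p_i * u_i for p_i, u_i in zip(p, u_out))
print(f"EU(L) = {EU_L:.3f}")   # 0.25 + 0.25 + 0.00 = 0.50
```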
The RKD model examines the way in which random errors in component judgments combine when the decomposition rule is a linear function. The model uses the psychometric perspective outlined in the previous section to analyze linear decomposition rules of the general form

    a = \sum_{i=1}^{n} b_i c_i,    (4)

where a denotes the result of aggregating component weights b_i and component evaluations c_i. For analytical convenience, all of the component judgments are assumed to be scaled on the interval from 0 to 1, and the weights (b_i) are assumed to sum to 1.

The RKD model examines the amount of random error in the decomposition estimate (the left hand side of equation 4) as a function of characteristics of the component judgments. Each one has an expected value (systematic portion) and a standard deviation (random portion). The random errors may be correlated. The main results of the analysis are derived from an expression for the variance of a, the measure of error in the decomposition estimate.
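As a simplified sketch of the kind of expression involved (an assumption here: the weights b_i are treated as fixed and error-free, which is stronger than the RKD assumptions, since RKD also allow random error in the weights), the variance of a weighted sum of components with error standard deviations \sigma_i and error correlations \rho_{ij} is the standard result:

    \operatorname{Var}(a) = \sum_{i=1}^{n} b_i^2 \sigma_i^2 + 2 \sum_{i<j} b_i b_j \rho_{ij} \sigma_i \sigma_j.

The factors discussed next can all be read off expressions of this kind.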
Four major factors help to determine the amount of decomposition error: First, as the component judgments become more precise (i.e., less error), the decomposition estimate also becomes more precise. Second, the value of the systematic portions of the components influences decomposition error, although the relationship is not simple. However, the expected values of the weights (b_i) appear to be the most important factor. In particular, when all the component evaluations c_i are assessed with equal amounts of error, then an equal-weighting scheme is desirable. However, if a particular component evaluation is known to be unreliable, then decomposition error is improved by shifting weight away from this component and towards components that are estimated with greater precision. A third factor that strongly influences decomposition error is dependency among the errors in the components. In particular, if the c_i components have positively correlated errors, then the error associated with decomposition will be seriously inflated. Finally, it is possible to analyze decomposition error as a function of the number of components used. Initially, as components are added to the decomposition, error decreases rapidly. However, this improvement only occurs up to a point: Eventually, decomposition error begins to increase, although the rate of increase is relatively slow.
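A small Monte Carlo sketch (with assumed error parameters, not values from the RKD analysis) illustrates two of these factors: the rapid initial decline of error as components are added, and the inflation caused by positively correlated component errors:

```python
# Monte Carlo sketch: random error in an equally weighted linear aggregate
# a = sum_i b_i * c_i with b_i = 1/n, as a function of the number of
# components n and the pairwise error correlation rho. Assumed values only.
import numpy as np

rng = np.random.default_rng(1)
sigma, trials = 0.1, 20_000

def decomposition_sd(n, rho):
    """Std. dev. of the aggregate when each component error has standard
    deviation `sigma` and each pair of errors has correlation `rho`."""
    cov = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
    errors = rng.multivariate_normal(np.zeros(n), cov, size=trials)
    return errors.mean(axis=1).std()   # equal weights b_i = 1/n

for n in (1, 2, 5, 10):
    print(f"n={n:2d}  sd(rho=0)={decomposition_sd(n, 0.0):.4f}  "
          f"sd(rho=0.6)={decomposition_sd(n, 0.6):.4f}")

# Independent errors shrink roughly as sigma/sqrt(n); positively correlated
# errors level off near sigma*sqrt(rho), limiting the benefit of averaging.
```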
An intuitive interpretation of the benefits of decomposition can be obtained by viewing the decomposition model (equation 4) as a weighted average of the components. Decomposition tends to reduce random error because a linear composite has less variability than its components. As the number of components is increased, the marginal error reduction value of each additional component decreases. At the same time, recall that the weights being used in the composite are themselves subject to random error, so that eventually the added error reduction benefit of another component c_i is offset by the error in the associated weight. Positive dependencies among the c_i errors undermine the value of decomposition because the quantities being averaged are to some extent redundant, reducing the potential benefit to be derived from averaging.¹
Finally, the RKD model can be used to identify conditions where decomposition successfully controls error versus conditions where holistic judgments are preferable. The critical factor is the precision of the components relative to the precision of holistic judgments. The case for decomposition is easiest to make when the component judgments are more reliable than the holistic judgment, perhaps because the components are relatively simple stimuli (e.g., single dimensions rather than multidimensional objects). If the components have small enough errors, decomposition is virtually guaranteed to decrease random error. However, even if the components are estimated with no greater amount of reliability than the holistic judgment, there is still considerable potential for the averaging process to improve judgment.

¹Technical issues related to the constraint that the weights must add up to 1 may complicate this analysis. These considerations are beyond the scope of this paper.
4 Implications for Research and Practice

What are the implications of the preceding discussion for the use of decomposition as a decision aiding technique? There are still a number of open issues surrounding decomposition, particularly with respect to the control of systematic biases. However, decomposition does appear to be an extremely effective method for controlling random errors. On the other hand, the way in which decomposition is implemented can influence the chance that decomposition will succeed. Four specific implementation issues will be discussed in turn: (1) methods for reducing the amount of error in the components; (2) the implications of selecting different weighting schemes; (3) the problem of dependent errors; and (4) the appropriate level of detail for decomposition.
1. Methods for reducing component errors. The reliability of the component judgments is largely determined by the ability of the decision maker and the characteristics of the problem. However, the analyst can use certain methods to reduce the component errors. One approach is to use the model fitting approach described above (section 3.2) to bootstrap the decision maker. For example, in a multiattribute evaluation problem, rather than intuitively assessing the weights for each attribute, a representative sample of alternatives can be presented to the decision maker and a statistical model used to estimate the weights. These model-based estimates are likely to be more precise than intuitive assessments (Hammond, Stewart, Brehmer, & Steinmann, 1975; Laskey & Fischer, 1987). Another approach that may be quite effective is to use nested decomposition: the components of a decomposition can themselves be estimated with decomposition. In principle, nesting can be extended to multiple levels. Of course, unless this approach is used with discretion, the number of judgments required from the decision maker could easily exceed any reasonable bounds.

A less promising approach is to try to obtain multiple independent judgments of the same component and take the average. This approach fails in practice because an individual cannot easily provide repeated independent opinions. A closely related alternative that has been proposed in the context of probability assessment is to ask multiple experts for their judgments. Of course, since the experts may have formulated their opinions using shared information, there still may be problems with dependent errors (Clemen & Winkler, 1985; Winkler, 1981).

A final technique also uses multiple judges: Selected component judgments can be delegated to experts who have specialized knowledge about that component. For instance, estimates of probabilities of decision outcomes might be provided by scientists or engineers, while judgments of values or preferences are provided by the decision maker (Hammond & Adelman, 1976). Although there is little direct evidence on this issue, it seems plausible that experts will be able to provide more reliable judgments in their area of expertise than a novice could (Wallsten & Budescu, 1983).
2. Picking the best weighting scheme. From the perspective of controlling decomposition error, the ideal set of weights is one that maximizes the influence of precisely estimated components and minimizes the influence of unreliable components. However, random assessment errors in the weights themselves are a concern. One method for dealing with this problem is to use a unit weighting scheme, where a predetermined set of equal weights is applied to the components. The advantage is that the random assessment errors associated with the weights are removed. Unfortunately, this advantage can be offset by the introduction of a systematic bias, since the weights that would have been selected judgmentally are unlikely to exactly match the unit weights. On the other hand, under certain conditions, the size of the systematic bias is likely to be small enough to make the reduction of random error appear relatively attractive (Einhorn, 1976; Einhorn & Hogarth, 1975).
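The following simulation sketch illustrates this trade-off under assumed parameters; the 'true' weights, noise level, and uniform evaluations are hypothetical choices, not estimates from the literature:

```python
# Sketch of the unit-weighting trade-off: noisy judgmental weights carry no
# systematic bias but add random error, while unit (equal) weights are
# error-free but systematically biased when the true weights are unequal.
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([0.5, 0.3, 0.2])     # assumed 'true' judgmental weights
n, weight_noise, trials = len(true_w), 0.15, 20_000

c = rng.uniform(0, 1, size=(trials, n))   # component evaluations on [0, 1]
target = c @ true_w                       # aggregate under the true weights

# Judgmental weights: true weights plus random assessment error, renormalized.
w_hat = np.clip(true_w + rng.normal(0, weight_noise, (trials, n)), 1e-6, None)
w_hat /= w_hat.sum(axis=1, keepdims=True)
judgmental = (c * w_hat).sum(axis=1)

unit = c.mean(axis=1)                     # unit weights b_i = 1/n

print("RMSE, judgmental weights:", np.sqrt(np.mean((judgmental - target) ** 2)))
print("RMSE, unit weights:      ", np.sqrt(np.mean((unit - target) ** 2)))
# Which scheme wins depends on how unequal the true weights are relative to
# the size of the random error in the assessed weights.
```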
3. Dealing with dependent errors. One of the most serious threats to the error-reduction potential of decomposition is the presence of dependent errors. Any time two judgments are based on shared information, there is considerable potential for dependent errors. This might be a particular problem for judgments that are obtained using an anchoring-and-adjustment strategy (Tversky & Kahneman, 1974). Previous research on this strategy has focused on problems with an inappropriate choice of an anchor or the tendency to adjust insufficiently from the anchor. However, if two different judgments share the same anchor, then any response errors in the anchor will be present in both judgments. In fact, in many preference assessment methods, a sequence of judgments are obtained and there are linkages between successive judgments. For instance, the results of one assessment may be used as an anchor for the next assessment, inducing a positive serial correlation in the response errors (Laskey & Fischer, 1987). Furthermore, factors like question presentation formats and response modes can strongly interact with the anchoring process (Johnson & Schkade, in press; Schkade & Johnson, 1987). Much more research is needed on the distribution of response errors associated with various assessment techniques.
4. Choosing the appropriate level of detail. A critical issue in implementing a decomposition is determining how detailed to make the model. For instance, in multiattribute evaluations, the analyst has considerable leeway in determining how many attributes to include in the model. The analysis of random error suggests that the relationship between the precision of decomposition and the number of components is single-peaked: Adding detail to the model helps control error, but only up to a point. Beyond that point, the incremental error associated with a new attribute may outweigh any error cancellation effect. On the margin, if a component is known to be assessed unreliably, it will probably not be worth incorporating in the analysis.

On the other hand, there may be systematic errors associated with choosing the wrong level of detail: Specifically, there are a number of studies that suggest that there are problems when models fail to incorporate enough components. For instance, in multiattribute evaluation models, using an incomplete set of factors may, under certain conditions, lead to biased evaluations (Aschenbrenner, 1977; Barron, 1987; Barron & Kleinmuntz, 1986). Another example involves using fault trees to estimate the probability that a complex system will fail. When trees are pruned by incorporating specific failure events into a catch-all 'other problems' category, decision makers systematically underestimate the probability of the missing events and are unaware that the problem was incompletely represented: what is out of sight is also out of mind (Fischhoff, Slovic, & Lichtenstein, 1978).

An intriguing discussion of the relative merits of simple versus complex analyses is provided by Politser and Fineberg (1988). They reviewed 24 published medical decision analyses that incorporated a decision tree and expected utility calculations and found a relationship between the complexity of the analysis and the clarity of the results. Complexity was measured by the number of terminal branches in the decision tree, while clarity was assessed in terms of the expected utility to be gained by performing the analysis (Lindley, 1986). The results indicated a strong positive correlation between the complexity of the tree and the clarity of the results. Although this relation held even after controlling for a variety of possible sources of spurious correlation, some additional research is needed. For instance, it is not clear if the simple trees were reflections of simple problems or of insufficiently decomposed complex problems.
Common sense dictates that more complexity and detail in analyses is not always a good thing. Larger models require greater expenditures of effort from both the analyst and the decision maker. As a model becomes more and more sophisticated, the decision maker may have to struggle to cope with the added detail and complexity. Since the benefits of added complexity are likely to diminish on the margin, and since the negative consequences are likely to escalate as more and more detail is heaped onto an already complex model, it seems plausible to argue that there is some optimal intermediate level of complexity that appropriately balances the good and the bad (Coombs & Avrunin, 1977). Of course, the tough question for decision analysts is to determine whether current modeling practices are above or below this level of optimal complexity. The answer does not seem obvious.
5 The Costs and Benefits of Decision Analysis

On the other hand, decisions often have to be made in the presence of severe limitations of time and money. Since decision analysis can be both costly and time consuming, comprehensive analysis may be ruled out under all but the most unusual circumstances. The relevant question for many decision makers is not whether to use simple or complex models, but whether to use a simplified model or intuition (Behn & Vaupel, 1982).
In this paper, decision analysis has been treated as an explicit decision strategy: Error-prone intuitive processes are augmented by a set of formal procedures. In reality, there is a continuum of available strategies, each relying to varying degrees on intuition and formal analysis. This perspective emphasizes the fact that it is the decision maker who ultimately must determine the extent to which decomposition and formal analysis should be used.
Recent research on the selection of decision strategies has emphasized the cognitive trade-offs between various positive and negative dimensions of alternative strategies. Different strategies are selected under different task conditions if the values of these dimensions change (Payne, 1982). A generalization of this cost-benefit view is to formulate the strategy selection process as a metadecision problem, in which one "decides how to choose" (Einhorn & Hogarth, 1981, p. 69). Thus, each decision strategy can be viewed as a multidimensional object, with the dimensions corresponding to the associated costs and benefits.
This paper has focused on two specific dimensions, the control of random and systematic response errors, and has touched on two others, logical consistency with normative principles and the effort required to implement decomposition. A number of other costs and benefits have been associated with decision analytic models. Negative dimensions include: resistance to the use of quantification and failure to appreciate the limitations of formal models (Howard, 1980); reluctance to reveal sensitive information in an analysis (Keeney, 1982); and reliance on the uncertain interpersonal and technical skills of the decision analyst (Fischhoff, 1980). Positive dimensions include: forcing assumptions to be scrutinized and justified (Hogarth, 1987, chap. 9); the development of insight and understanding of complex problems through "immersion" in the details of the problem (Howard, 1980); and the promotion of consensus formation, conflict resolution, and communication (Hammond, 1965; Hammond & Adelman, 1976; Raiffa, 1982).
How do we decide how to choose? The role of intuitive judgment is inescapable, since the decision maker must assess whether different approaches (e.g., decomposition, intuition) meet each objective. Ultimately, the decision maker must balance these conflicting objectives and decide what decision making strategy is best. This is complicated by many uncertainties and ambiguities associated with the cost-benefit dimensions. For example, many of the purported benefits of decision analysis can be notoriously difficult to evaluate objectively, if they are evaluated at all (Fischhoff, 1980). On the other hand, the costs of using decision analysis are readily apparent. Since decision makers often pay more attention to factors they know more about, the costs of decision analysis may outweigh the benefits in the minds of many decision makers (Kleinmuntz & Schkade, 1988). Furthermore, the difficulty of obtaining accurate measures of the benefits creates precisely the conditions of incomplete and ambiguous feedback that can lead to overconfidence in one's abilities (Einhorn, 1980; Einhorn & Hogarth, 1978). Therefore, the unverified claims of analysts and decision makers about the efficacy of decision analysis should be treated with a measure of skepticism.
An increased sensitivity to decision makers' perceptions of the costs and benefits of analysis can be helpful. For instance, recognizing that decision makers are more concerned with conserving cost and cognitive effort than with issues of strict logical consistency has led to the development of simplified modeling techniques (Einhorn & McCoach, 1977; Gardiner & Edwards, 1975). These linear decomposition techniques are valuable because: (1) the component judgments tend to be less complex, so they are less effortful and, perhaps, more reliable; (2) process consistency is promoted because the underlying decomposition is linear; (3) these models often do a good job approximating models constructed in a normative framework; and (4) when the normative model is nonlinear, using a linear approximation increases the chance that less technically sophisticated decision makers can understand and use the model (Dyer & Larsen, 1984).
A recurrent theme in psychological research is the adaptive interaction of the organism with the environment (Brunswik, 1952; Simon, 1981). Presumably focusing on this interaction will make it possible to understand and predict the effectiveness of human decision makers across a variety of tasks and settings. One goal of this research should be to identify those situations where decision aids like decomposition are needed. However, it is also important to extend our conceptual framework to include the mutual interaction of the decision maker, the environment, and the decision aid, since the aid mediates the interaction between the environment and the individual. This can help to improve the design and implementation of decision aiding techniques, and ultimately, lead to better decision making.
References

Anderson, N. H. (1981). Foundations of information integration theory. New York: Academic Press.

Aschenbrenner, K. M. (1977). Influence of attribute formulation on the evaluation of apartments by multiattribute utility procedures. In H. Jungermann & G. de Zeeuw (Eds.), Decision making and change in human affairs. Dordrecht, The Netherlands: Reidel.

Barron, F. H. (1987). Influence of missing attributes on selecting a best multiattributed alternative. Decision Sciences, 18, 194-205.

Barron, F. H., & Kleinmuntz, D. N. (1986). Sensitivity in value loss of linear models to attribute completeness. In B. Brehmer, H. Jungermann, P. Lorens, & G. Sevon (Eds.), New directions in research on decision making. Amsterdam: Elsevier.

Barron, F. H., von Winterfeldt, D., & Fischer, G. W. (1984). Theoretical and empirical relationships between risky and riskless utility functions. Acta Psychologica, 56, 233-244.

Behn, R. D., & Vaupel, J. W. (1982). Quick analysis for busy decision makers. New York: Basic Books.

Bell, D. E., & Farquhar, P. H. (1986). Perspectives on utility theory. Operations Research, 34, 179-183.

Bowman, E. H. (1963). Consistency and optimality in managerial decision making. Management Science, 9, 310-321.

Brunswik, E. (1952). Conceptual framework of psychology. Chicago: University of Chicago Press.

Camerer, C. (1981). General conditions for the success of bootstrapping models. Organizational Behavior and Human Performance, 27, 411-422.

Clemen, R. T., & Winkler, R. L. (1985). Limits for the precision and value of information from dependent sources. Operations Research, 33, 427-442.

Coombs, C. H., & Avrunin, G. S. (1977). Single-peaked functions and the theory of preference. Psychological Review, 84, 216-230.

Dawes, R. M. (1971). A case study of graduate admissions: Application of three principles of human decision making. American Psychologist, 26, 180-188.

Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34, 571-582.

Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81, 95-106.

Dyer, J. S., & Larsen, J. B. (1984). Using multiple objectives to approximate normative models. Annals of Operations Research, 2, 39-58.

Edwards, W. (1961). Behavioral decision theory. Annual Review of Psychology, 12, 473-498.

Einhorn, H. J. (1972). Expert measurement and mechanical combination. Organizational Behavior and Human Performance, 7, 86-106.

Einhorn, H. J. (1976). Equal weighting in multi-attribute models: A rationale, an example, plus some extensions. In M. Schiff & G. Sorter (Eds.), Proceedings of the conference on topical research in accounting. New York: New York University Press.

Einhorn, H. J. (1980). Learning from experience and suboptimal rules in decision making. In T. S. Wallsten (Ed.), Cognitive processes in choice and decision behavior. Hillsdale, NJ: Erlbaum.

Einhorn, H. J., & Hogarth, R. M. (1975). Unit weighting schemes for decision making. Organizational Behavior and Human Performance, 13, 171-192.

Einhorn, H. J., & Hogarth, R. M. (1978). Confidence in judgment: Persistence of the illusion of validity. Psychological Review, 85, 395-416.

Einhorn, H. J., & Hogarth, R. M. (1981). Behavioral decision theory: Processes of judgment and choice. Annual Review of Psychology, 32, 53-88.

Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review, 92, 433-461.

Einhorn, H. J., Kleinmuntz, D. N., & Kleinmuntz, B. (1979). Linear regression and process-tracing models of judgment. Psychological Review, 86, 465-485.

Einhorn, H. J., & McCoach, W. (1977). A simple multi-attribute procedure for evaluation. Behavioral Science, 22, 270-282.

Eliashberg, J., & Hauser, J. R. (1985). A measurement error approach for modeling consumer risk preferences. Management Science, 31, 1-25.

Farquhar, P. H. (1984). Utility assessment methods. Management Science, 30, 1283-1300.

Fischer, G. W. (1976). Multidimensional utility models for risky and riskless choice. Organizational Behavior and Human Performance, 17, 127-146.

Fischer, G. W. (1977). Convergent validation of decomposed multiattribute utility assessment procedures for risky and riskless decisions. Organizational Behavior and Human Performance, 18, 295-315.

Fischhoff, B. (1980). Clinical decision analysis. Operations Research, 28, 28-43.

Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Fischhoff, B., Slovic, P., & Lichtenstein, S. (1978). Fault trees: Sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance, 4, 330-344.

Fischhoff, B., Slovic, P., & Lichtenstein, S. (1980). Knowing what you want: Measuring labile values. In T. S. Wallsten (Ed.), Cognitive processes in choice and decision behavior. Hillsdale, NJ: Erlbaum.

Fishburn, P. C. (1982). Nontransitive measurable utility. Journal of Mathematical Psychology, 26, 31-67.

Gardiner, P. C., & Edwards, W. (1975). Public values: Multiattribute-utility measurement for social decision making. In M. F. Kaplan & S. Schwartz (Eds.), Human judgment and decision processes. New York: Academic Press.

Goldberg, L. R. (1970). Man vs. model of man: A rationale, plus some evidence, for a method of improving on clinical inferences. Psychological Bulletin, 73, 422-432.

Goldstein, W. M., & Einhorn, H. J. (1987). Expression theory and the preference reversal phenomena. Psychological Review, 94, 236-254.

Hammond, K. R. (1965). New directions in research on conflict resolution. Journal of Social Issues, 21, 44-66.

Hammond, K. R., & Adelman, L. (1976). Science, values, and human judgment. Science, 194, 389-396.

Hammond, K. R., Hursch, C. J., & Todd, F. J. (1964). Analyzing the components of clinical inference. Psychological Review, 71, 438-456.

Hammond, K. R., Stewart, T. R., Brehmer, B., & Steinmann, D. O. (1975). Social judgment theory. In M. F. Kaplan & S. Schwartz (Eds.), Human judgment and decision processes. New York: Academic Press.

Hammond, K. R., & Summers, D. A. (1972). Cognitive control. Psychological Review, 79, 58-67.

Hershey, J. C., Kunreuther, H. C., & Schoemaker, P. J. H. (1982). Sources of bias in assessment procedures for utility functions. Management Science, 28, 936-954.

Hershey, J. C., & Schoemaker, P. J. H. (1985). Probability versus certainty equivalence methods in utility measurement: Are they equivalent? Management Science, 31, 1213-1231.

Hogarth, R. M. (1975). Cognitive processes and the assessment of subjective probability distributions. Journal of the American Statistical Association, 70, 271-289.

Hogarth, R. M. (1982). On the surprise and delight of inconsistent responses. In R. M. Hogarth (Ed.), Question framing and response consistency. San Francisco: Jossey-Bass.

Hogarth, R. M. (1987). Judgement and choice: The psychology of decision (2nd ed.). Chichester, UK: Wiley.

Hogarth, R. M., & Reder, M. W. (Eds.). (1987). Rational choice: The contrast between economics and psychology. Chicago: University of Chicago Press.

Howard, R. A. (1980). An assessment of decision analysis. Operations Research, 28, 4-27.

Johnson, E. J., & Schkade, D. A. (in press). Bias in utility assessments: Further evidence and explanations. Management Science.

Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Keeney, R. L. (1982). Decision analysis: An overview. Operations Research, 30, 803-838.

Kleinmuntz, D. N., & Schkade, D. A. (1988). The cognitive implications of information displays in computer-supported decision making. Working paper 2010-88, Sloan School of Management, Massachusetts Institute of Technology.

Laskey, K. B., & Fischer, G. W. (1987). Estimating utility functions in the presence of response error. Management Science, 33, 965-980.

Lichtenstein, S., Fischhoff, B., & Phillips, L. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Lindley, D. V. (1986). The reconciliation of decision analyses. Operations Research, 34, 289-295.

Lindley, D. V., Tversky, A., & Brown, R. V. (1979). On the reconciliation of probability assessments (with discussion). Journal of the Royal Statistical Society, Series A, 142, 146-180.

Machina, M. J. (1987). Decision-making in the presence of risk. Science, 236, 537-543.

March, J. G. (1978). Bounded rationality, ambiguity, and the engineering of choice. Bell Journal of Economics, 9, 587-608.

Meehl, P. E. (1954). Clinical vs. statistical prediction. Minneapolis: University of Minnesota Press.

Payne, J. W. (1982). Contingent decision behavior. Psychological Bulletin, 92, 382-402.

Politser, P. E., & Fineberg, H. V. (1988). Toward predicting the value of a medical decision analysis. Unpublished paper, Harvard School of Public Health.

Raiffa, H. (1982). The art and science of negotiation. Cambridge, MA: Belknap/Harvard University Press.

Rapoport, A., & Wallsten, T. S. (1972). Individual decision behavior. Annual Review of Psychology, 23, 131-175.

Ravinder, H. V., Kleinmuntz, D. N., & Dyer, J. S. (1988). The reliability of subjective probabilities obtained through decomposition. Management Science, 34, 186-199.

Sawyer, J. (1966). Measurement and prediction: Clinical and statistical. Psychological Bulletin, 66, 178-200.

Schkade, D. A., & Johnson, E. J. (1987). Cognitive processes in preference reversals. Working paper, Department of Management, University of Texas at Austin.

Simon, H. A. (1978). Rationality as process and as product of thought. American Economic Review, 68, 1-16.

Simon, H. A. (1981). The sciences of the artificial (2nd ed.). Cambridge, MA: MIT Press.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1977). Behavioral decision theory. Annual Review of Psychology, 28, 1-39.

Slovic, P., & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744.

Slovic, P., & Lichtenstein, S. (1983). Preference reversals: A broader perspective. American Economic Review, 73, 596-605.

Thaler, R. (1985). Mental accounting and consumer choice. Marketing Science, 4, 199-214.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.

Tversky, A., & Kahneman, D. (1987). Rational choice and the framing of decisions. In R. M. Hogarth & M. W. Reder (Eds.), Rational choice: The contrast between economics and psychology. Chicago: University of Chicago Press.

von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. New York: Cambridge University Press.

Wallsten, T. S., & Budescu, D. V. (1983). Encoding subjective probabilities: A psychological and psychometric review. Management Science, 29, 151-173.

Winkler, R. L. (1981). Combining probability distributions from dependent information sources. Management Science, 27, 479-488.

Winkler, R. L. (1982). Research directions in decision making under uncertainty. Decision Sciences, 13, 517-533.