
A GOAL-ORIENTED DESIGN EVALUATION FRAMEWORK
FOR DECISION MAKING UNDER UNCERTAINTY
by
JUN BEOM KIM
Bachelor of Science, Aerospace Engineering
Seoul National University, Seoul, Korea, 1992
Master of Science, Mechanical Engineering
Massachusetts Institute of Technology, Cambridge, MA, 1995
Submitted to the Department of Mechanical Engineering
in Partial Fulfillment of the Requirements for the Degree of
DOCTOR OF PHILOSOPHY IN MECHANICAL ENGINEERING
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 1999
© 1999 Massachusetts Institute of Technology
All Rights Reserved
Signature of Author ......................................................
Department of Mechanical Engineering
May 9, 1999
Certified by........................................
David R. Wallace
Esther and Harold E. Edgerton Associate Professor of Mechanical Engineering
Thesis Supervisor
Accepted by......................................................
Ain A. Sonin
Chairman, Department Committee on Graduate Students
To my father for his perseverance
and
To my mother for her dedication
A GOAL-ORIENTED DESIGN EVALUATION FRAMEWORK
FOR DECISION MAKING UNDER UNCERTAINTY
by
JUN BEOM KIM
Submitted to the Department of Mechanical Engineering on May 9, 1999
in Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy in Mechanical Engineering
ABSTRACT
This thesis aims to provide a supportive mathematical decision framework for use during
the process of designing products. This thesis is motivated by: (a) the lack of descriptive
design decision aid tools used in design practice and (b) the necessity to emphasize the
role of time-variability and uncertainty in a decision science context.
The research provides a set of prescriptive and intuitive decision aid tools that will help
designers in evaluating uncertain, evolving designs in the design process. When
integrated as part of a design process, the decision framework can guide designers
toward the right development direction in the fluid design process.
A probabilistic acceptability-based decision analysis model is built upon characterizing
design as a specification satisfaction process. The decision model is mathematically
defined and extended into a comprehensive goal-oriented design evaluation framework.
The goal oriented evaluation framework provides a set of tools that constitute a coherent
value-based decision analysis system under uncertainty. Its four components are: (a) the
construction of multi-dimensional acceptability preference functions capturing
uncertainties in the designer's subjective judgment; (b) a new goal setting method, based
upon a discrete Markov chain, to help designers quantify product targets early in the
design process; (c) an uncertainty metric, based upon mean semi-variance analysis,
enabling designers to cope with uncertainties of single events by providing measures of
expectation, risk, and opportunity; and (d) a data aggregation model for merging
different and uncertain expert opinions about a design's engineering performance.
These four components are intended to form a comprehensive and practical decision
framework based upon the notion of a goal, enabling product designers to measure
uncertain designs in a systematic manner to make better decisions throughout the product
design process.
Doctoral Committee: Professor David R. Wallace (Chairman)
Professor Seth Lloyd
Professor Kevin N. Otto
ACKNOWLEDGEMENTS
This is a culmination of my four years' research at the MIT Computer-aided Design
Laboratory in the field of design and decision theory.
My appreciation goes to my advisor, Professor David R. Wallace. Countless times, he
patiently listened to my thoughts and provided me with much critical and constructive
feedback throughout the course of the research. I especially thank him for the trust and
encouragement that he showed me in all of our meetings.
I also would like to thank my committee members: Professor Kevin Otto for his
feedback on a few issues and Professor Seth Lloyd for the discussions in shaping two of
the topics presented in this thesis. I also have to thank Dr. Abi Sarkar in the Mathematics
Department for his inputs shaping Chapter 6 of this thesis.
I would like to mention all of my friends at CADLAB and in the MIT Korean community.
Some of the old members at CADLAB: Paul, Nicola, Krish, Ashley, and Chun Ho.
Current members at CADLAB: Nick, Ben, Shaun M., Shaun A., Manuel, Jaehyun, Jeff,
Pricilla, Chris, and Bill; don't stay here too long. You deserve a window.
The Korean community at MIT has been the main source of relaxation and refreshment
throughout the long days at MIT. To name a few: Shinsuk, DongSik, HunWook,
DongHyun, Taejung, JuneHee, SungHwan, KunDong, JaeHyun, SangJun, Steve, and
SokWoo. Without you guys, my life in the States would have been so boring.
Finally, I would like to express my deepest gratitude to my parents. Their unconditional
love and support throughout my life have shaped me and my thoughts. My brother is the
MAN. For all the support and encouragement that he has shown me while I have been
away, I cannot thank him enough. We have been the best of friends and will remain so as
we go through the rest of our lives. I am so blessed to be a part of such a family.
One is one's own refuge, who else could it be?
- Buddha
TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
1. INTRODUCTION
   1.1 MOTIVATION
   1.2 THESIS GOAL AND DELIVERABLE
      1.2.1 Goal-oriented preference function
         1.2.1.1 Goal setting
         1.2.1.2 Preference function construction
      1.2.2 Decisions under design evolution and changing uncertainty
         1.2.2.1 Uncertainty Measure
         1.2.2.2 Multiple estimate aggregation
   1.3 THESIS ORGANIZATION
2. BACKGROUND
   2.1 OVERVIEW
   2.2 DESIGN, DECISION MAKING, AND DESIGN PROCESS
      2.2.1 Design
      2.2.2 Decision Analysis
   2.3 DECISION ANALYSIS IN DESIGN
      2.3.1 Design Decision Making
      2.3.2 Decision Making as an Integral Part of Design Process
   2.4 RELATED WORK
      2.4.1 Qualitative Decision Support Tools
         2.4.1.1 Pugh chart
         2.4.1.2 House of Quality
      2.4.2 Quantitative Decision Support Tools
         2.4.2.1 Figure of Merit
         2.4.2.2 Method of Imprecision
         2.4.2.3 Utility Theory
         2.4.2.4 Acceptability Model
      2.4.3 Other Decision Aid Tools
      2.4.4 Discussion
   2.5 SUMMARY
3. ACCEPTABILITY-BASED EVALUATION MODEL
   3.1 OVERVIEW
   3.2 A GOAL-ORIENTED APPROACH
      3.2.1 Goal programming
      3.2.2 Goal-based design evaluation framework
   3.3 ASPIRATION AND REJECTION LEVELS
      3.3.1 Definition and Formulation
      3.3.2 Absolute Scale of Aspiration and Rejection Levels
   3.4 ONE DIMENSIONAL ACCEPTABILITY FUNCTION
      3.4.1 Formulation
      3.4.2 Operation: Lottery Method
   3.5 MULTI DIMENSIONAL ACCEPTABILITY FUNCTION
      3.5.1 Mutual Preferential Independence
      3.5.2 Two-dimensional Acceptability Function
         3.5.2.1 Formulation
         3.5.2.2 Solution
      3.5.3 N-dimensional Acceptability Function
      3.5.4 Discussion
   3.6 SUMMARY
4. GOAL SETTING USING DYNAMIC PROBABILISTIC MODEL
   4.1 OVERVIEW
      4.1.1 Goal-based Method
      4.1.2 Problem Statement
      4.1.3 Approach
   4.2 A DISCRETE MARKOV CHAIN: A STOCHASTIC PROCESS
      4.2.1 Markov Properties
      4.2.2 Transition Probabilities and Matrix
   4.3 DYNAMIC PROBABILISTIC GOAL SETTING MODEL
      4.3.1 Operational Procedure
      4.3.2 Identification of Candidate States
      4.3.3 Assessment of Individual States
         4.3.3.1 Local Probabilities
         4.3.3.2 Modeling Assumption on the Local Probabilities
         4.3.3.3 Interpretation of Subjective Probabilities
      4.3.4 Markov Chain Construction
         4.3.4.1 Decision Chain
         4.3.4.2 Transition Matrix Construction
      4.3.5 Limiting Probabilities and their Interpretation
         4.3.5.1 Operation
         4.3.5.2 Existence of Limiting Probabilities
         4.3.5.3 Interpretation of Limiting Probabilities
   4.4 POST-ANALYSIS
      4.4.1 Convergence Rate
      4.4.2 Statistical Test
   4.5 APPLICATION EXAMPLE
      4.5.1 Purpose
      4.5.2 Procedure
         4.5.2.1 Set-up
         4.5.2.2 Participants
      4.5.3 Goal-setting Software
      4.5.4 Result
         4.5.4.1 Perception
         4.5.4.2 Measurement Outcome
   4.6 SUMMARY
5. DECISION MAKING FOR SINGLE EVENTS
   5.1 OVERVIEW
   5.2 EXPECTATION-BASED DECISIONS
      5.2.1 Framework
      5.2.2 Allais paradox
         5.2.2.1 The Experiment
         5.2.2.2 Alternative analysis
         5.2.2.3 Its Implication to Design Decision Making
   5.3 UNCERTAINTY CLASSIFICATION
      5.3.1 Type I Uncertainty: Repeatable Event
      5.3.2 Type II Uncertainty: Single Event
      5.3.3 Discussion
   5.4 DECISION MAKING UNDER TYPE II UNCERTAINTY
      5.4.1 Mean-variance analysis in finance theory
      5.4.2 Quantification of Dispersion
      5.4.3 Opportunity and Risk
         5.4.3.1 Bi-directional measurement
         5.4.3.2 Definitions
         5.4.3.3 Quantification
      5.4.4 Interpretation of Opportunity/Risk in evolving design process
      5.4.5 Discussion: Risk attitude
   5.5 EXAMPLE: ALLAIS PARADOX REVISITED
      5.5.1 Pre-Analysis
      5.5.2 First Game: L1 vs. L1'
      5.5.3 Second Game: L2 vs. L2'
   5.6 SUMMARY
6. DATA AGGREGATION MODEL
   6.1 OVERVIEW
      6.1.1 Problem from Information-Context
   6.2 RELATED RESEARCH
      6.2.1 Point Aggregate Model
      6.2.2 Probability Distribution Aggregate Model
   6.3 PROBABILITY MERGING MECHANISM
      6.3.1 Approach: Probability mixture
      6.3.2 Analysis of difference among the Estimates
      6.3.3 Merging with variance-difference
         6.3.3.1 Prerequisite: N-sample average
         6.3.3.2 Modeling Assumption
         6.3.3.3 Pseudo-Sampling Size
         6.3.3.4 Confidence quantified in terms of Pseudo sample size
         6.3.3.5 Pseudo Sampling Size in Probability Mixture Model
      6.3.4 Mechanism for merging estimates with different means
         6.3.4.1 Modeling Assumption
      6.3.5 Quantification of Variability among estimates
      6.3.6 A Complete Aggregate Model
   6.4 EXAMPLE
   6.5 SUMMARY
7. APPLICATION EXAMPLES
   7.1 OVERVIEW
   7.2 ACCEPTABILITY AS AN INTEGRAL DECISION TOOL IN DOME
      7.2.1 DOME system and Acceptability component
      7.2.2 DOME Components
      7.2.3 Decision Support Component
         7.2.3.1 Acceptability Setting Module
         7.2.3.2 Criterion Module
         7.2.3.3 Aggregator Module
      7.2.4 An integrated Model
   7.3 ACCEPTABILITY AS AN ONLINE COMMERCE DECISION GUIDE
      7.3.1 Background
         7.3.1.1 Online Market Trend
         7.3.1.2 Shopping Decision Attribute Classification
         7.3.1.3 Decision Environment
         7.3.1.4 Operation
      7.3.2 Current Tools
         7.3.2.1 Constraint Satisfaction Problem
         7.3.2.2 Utility Theory
         7.3.2.3 Comparison of Decision Tools
      7.3.3 System Realization
         7.3.3.1 Architecture
         7.3.3.2 Implementation Result
   7.4 SUMMARY
8. CONCLUSIONS
   8.1 SUMMARY OF THESIS
   8.2 CONTRIBUTIONS
      8.2.1 Mathematical Basis for Acceptability Model
      8.2.2 A Formal Goal Setting Model
      8.2.3 Classification of Uncertainties in Design Process
   8.3 RECOMMENDATIONS FOR FUTURE WORK
      8.3.1 Dynamic Decision Model in Design Iteration Context
      8.3.2 Multidimensional Goal Setting Model
      8.3.3 Extension of Goal Setting Model as an Estimation Tool
APPENDIX A
APPENDIX B
BIBLIOGRAPHY
LIST OF FIGURES

Figure 1.1 Design iteration for a two-attribute decision problem over time
Figure 1.2 Research elements in the proposed goal-oriented design evaluation framework
Figure 2.1 Three main components of design
Figure 2.2 Design divergence and convergence
Figure 2.3 An evaluation matrix in Pugh chart
Figure 2.4 Main component of the House of Quality
Figure 2.5 A lottery method for utility function construction
Figure 2.6 A conditional lottery for utility independence
Figure 2.7 Recycling acceptability function
Figure 2.8 Recycling cost acceptability with uncertainty
Figure 3.1 A utility function for an attribute X
Figure 3.2 Acceptability function for attribute X in a two attribute problem (x, y)
Figure 3.3 Conceptual comparison of locally defined preference functions and acceptability functions
Figure 3.4 A reference lottery for evaluating intermediate values for a single design attribute
Figure 4.1 Saaty's ratio measurement in AHP
Figure 4.2 Break-down of the suggested dynamic probability model
Figure 4.3 Qualitative relationship between knowledge and uncertainty
Figure 4.4 Elicitation of local probabilities
Figure 4.5 Chains in the dynamic probability model. (T, si) represents an assessment of T against si
Figure 4.6 Quantification of information contents by second largest eigenvalue
Figure 4.7 A typical distribution of the second largest absolute value of eigenvalues of n by n matrices under null hypothesis
Figure 4.8 A null distribution of second largest eigenvalue of 7 x 7 stochastic matrices (2,000 simulations)
Figure 4.9 First Screen: Identification of state space
Figure 4.10 Second Screen: Elicitation of local probabilities
Figure 4.11 Expectation measurement result for time reduction (A) and quality improvement (B)
Figure 5.1 First Game: two lotteries for Allais paradox
Figure 5.2 Second Game: two lotteries for Allais paradox
Figure 5.3 Hypothetical weighting function for probabilities
Figure 5.4 Portfolio construction with two investments
Figure 5.5 Comparison of different statistics
Figure 5.6 Visualization for using the suggested metrics
Figure 5.7 (μ, σ) plot of two designs A and B
Figure 5.8 Construction of a preference function and uncertainty estimate
Figure 5.9 Deduced utility function for Allais paradox
Figure 6.1 Comparing different statistics proposed for differentiating distributions
Figure 6.2 Classification of difference among estimates
Figure 6.3 Concept on the underlying distribution and pseudo sampling size
Figure 6.4 Main stream data and a set of outliers
Figure 6.5 Merging probability distributions using the proposed model
Figure 7.1 DOME components
Figure 7.2 Acceptability Setting Module
Figure 7.3 GUI of a criterion module
Figure 7.4 GUI of an aggregator module
Figure 7.5 Acceptability related components in DOME model
Figure 7.6 Movable Glass System as a subsystem of car door system
Figure 7.7 Figure (A) shows part of a design model in DOME environment, while Figure (B) shows an aggregator module for overall evaluation
Figure 7.8 Online shopping search operation
Figure 7.9 Preference presentation using CSP
Figure 7.10 User interface for T@T project using multi attribute utility theory
Figure 7.11 System architecture of a simple online shopping engine using acceptability model as the decision guide
Figure 7.12 Acceptability setting interface. The left window shows the list of the decision attributes. The right panel allows the user to set and modify the acceptability function for an individual attribute
Figure 7.13 Figure (A) is the main result panel where (B) is a detail analysis panel for a specific product shown in the main panel
LIST OF TABLES

Table 2.1 Correlation between decision components and design activities
Table 2.2 Method of Imprecision General Axioms
Table 2.3 Comparison of different decision frameworks
Table 4.1 Threshold point for matrices (based upon 10,000 simulations)
Table 6.1 Comparison of differences among distributions
1. INTRODUCTION

1.1 MOTIVATION
Product design is becoming increasingly complex as the number of requirements that
must be met, both from within the company and from outside, increases. Customer
requirements need to be accurately identified and fulfilled. While underachievement in
some of the identified consumer needs will result in the launch of an unsuccessful product,
over-achievement at the cost of delayed product introduction is equally undesirable.
Additionally, designers must proactively consider internal needs such as manufacturing
or assembly engineers' needs to achieve a rapid time to market and desired quality levels.
Many Design for X (DFX) paradigms such as Design for Assembly reflect this trend.
From a pure decision-making viewpoint, modern design practice requires designers to be
well-trained multi-attribute decision-makers. Throughout design phases, designers have
to identify important attributes, generate design candidates, and evaluate them precisely.
This generation/evaluation activity will be repeated until a promising design candidate is
identified.
Design, from a process viewpoint, is iterative in nature, encompassing two main
components - design synthesis and design evaluation (see Figure 1.1). A design activity
usually starts with a set of preliminary design specifications or goals. Then, the design
task is to generate a set of solutions that may satisfy the initial specifications.
Figure 1.1 Design iteration for a two-attribute decision problem over time
Those generated alternatives are evaluated against the initial specifications and promising
designs will be chosen for further development. Throughout the process, either design
candidates or specifications may be modified or introduced. This generation/evaluation
pair will be repeated a number of times until a suitable solution is achieved. The design
activity is an iteration of controlled divergence (design synthesis) and convergence
(design evaluation), which will be terminated upon achievement of a satisfactory
outcome.
This thesis aims to provide a supportive decision framework for use during the process of
designing products. Special emphasis will be given to ensure that the decision framework
is prescriptive - normative as well as descriptive. From the normative viewpoint, the
framework should be sophisticated and mathematically rigorous enough to reflect the
designers' rationality in decision-making. On the other hand, from the descriptive
viewpoint, the framework should be based upon real design practices: how designers
work, perceive decision problems, and make intuitive decisions in the design process.
To be normative, the decision model must embrace a formal decision analysis paradigm
such as utility theory. For the model to be descriptive, designers' practice
must be observed and their processes should be fully reflected in the decision framework.
By incorporating these characteristics, it is hoped that designers will embrace the
suggested framework in real-world design applications.
Specifically, this research effort will construct a comprehensive and prescriptive
engineering decision framework that:
+ From a decision-analysis perspective, will emphasize the role of evolving uncertainties
over time.
+ From an engineering view, will serve as an intuitive and practical decision guide for
engineers or designers.
+ From a process perspective, will support all design phases, from concept selection to
numerical optimization.
1.2 THESIS GOAL AND DELIVERABLE
The outcome of this research should be an integrated design evaluation framework for
use in the context of an iterative design activity. The iterative design process, as
illustrated in Figure 1.1, introduces two main components that should be addressed: the
tradeoff between competing design attributes, and evolving uncertainty about design
properties.
The work will build on an acceptability model characterizing design activity as a
specification-satisfaction process (Wallace, Jakiela et al. 1995). Acceptability is quite
descriptive in its construction and one of the underlying goals throughout this research is
to develop a more solid mathematical basis for the acceptability model. In addition, the
effort will expand the acceptability-based decision model to form a comprehensive,
goal-oriented decision framework for designers to use during the design process.
The first part of the thesis will concentrate on the construction of a goal-oriented decision
metric and a goal setting method for the acceptability-based decision-making model. The
second part of the thesis will address metrics to support decisions subject to evolving
uncertainty. A data aggregation model will also be covered in this second part of the
thesis. These main elements are shown in Figure 1.2.

Figure 1.2 Research elements in the proposed goal-oriented design evaluation framework
1.2.1 Goal-oriented preference function
At the onset of a design process, designers or managers set target values (goals) for the
performance of a product under development, corresponding to customer needs, thereby
defining specifications. Assuming that these specifications are correct, the achievement
of these goals will result in the development of a successful design. In this part of the
research,
design decision making is characterized in terms of goal setting and the subsequent
construction of specifications (acceptability functions).
1.2.1.1 Goal setting
Although a significant body of literature emphasizes the benefit of using goal based
approaches, little work has yet been done on formal goal-setting methods - how to
quantify the goal level under uncertainty. This research will suggest a prescriptive way of
setting goals at the start of design activity. Since the goals are usually set with soft
information at the start of design activity, subjective probabilities will be used as basic
components for the constructed methodology.
1.2.1.2 Preference function construction
One-dimensional preference functions will then be constructed around the notion of the
"goal" and be properly extended to multi-dimensional cases. For this purpose, an
operating assumption based upon preferential independence will be used. The
proposed value analysis system will hopefully achieve both the desired operational
efficiency and mathematical sophistication for multiple attribute design decision
problems.
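To make this concrete, below is a minimal illustrative sketch of a one-dimensional goal-oriented preference function built from a rejection level and an aspiration level (the goal). The piecewise-linear ramp and all numeric values are assumptions for illustration only; the actual acceptability formulation is developed in Chapter 3.

```python
# Illustrative sketch: a one-dimensional goal-oriented preference function.
# An attribute value at or below the rejection level scores 0, at or above
# the aspiration level (the goal) scores 1. The linear ramp in between is
# an assumption for illustration, not the thesis's actual formulation.

def acceptability(x, rejection, aspiration):
    """Return a preference score in [0, 1] for attribute value x."""
    if x <= rejection:
        return 0.0
    if x >= aspiration:
        return 1.0
    return (x - rejection) / (aspiration - rejection)

# Hypothetical example: product mass (kg), where lower is better, so the
# ramp is reversed by negating the attribute and both levels.
def mass_acceptability(mass_kg):
    return acceptability(-mass_kg, rejection=-3.0, aspiration=-2.0)

print(acceptability(75.0, rejection=50.0, aspiration=100.0))  # 0.5
print(mass_acceptability(2.4))                                # 0.6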
1.2.2
Decisions under design evolution and changing uncertainty
1.2.2.1 Uncertainty Measure
The "value of information" is a well-established concept in decision science. It quantifies
the maximum amount a decision-maker should pay for a perfect piece of information that
will completely resolve the associated uncertainty in the decision situation. Although it
provides answers to what-if questions, the resolution of all relevant uncertainties during
the design process is hypothetical and it does not provide a guide to the pending decision.
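For reference, the sketch below works the standard value-of-information computation for a tiny two-alternative, two-state decision; all payoffs and probabilities are hypothetical.

```python
# Expected value of perfect information (EVPI), hypothetical numbers.
# Two design alternatives, two equally likely states of the world.
payoff = {                     # payoff[alternative][state]
    "A": {"good": 100.0, "bad": 20.0},
    "B": {"good": 60.0,  "bad": 50.0},
}
p = {"good": 0.5, "bad": 0.5}  # prior probabilities of the states

# Best expected payoff when choosing without further information.
ev_no_info = max(sum(p[s] * payoff[alt][s] for s in p) for alt in payoff)

# Expected payoff if the state were revealed before choosing.
ev_perfect = sum(p[s] * max(payoff[alt][s] for alt in payoff) for s in p)

evpi = ev_perfect - ev_no_info
print(ev_no_info, ev_perfect, evpi)  # 60.0 75.0 15.0
```

The gap between the two expectations (15.0 here) is the most a rational decision-maker should spend to resolve the uncertainty; as noted above, it answers a what-if question rather than guiding the pending decision itself.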
This research will address how to interpret and reflect the evolving uncertainties in the
decision. In order to achieve this, the proposed framework will first try to categorize
different kinds of uncertainties encountered in the design process. Based upon the
classification, an additional metric to supplement the metric of expectation will be
suggested for cases where uncertainty is driven by lack of knowledge. The metric will
facilitate a more meaningful comparison among uncertain designs. In some cases, the
suggested metrics might help designers to identify an initially unattractive but
potentially better design once the associated uncertainties are fully resolved.
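As a preview of such a supplementary metric, the sketch below computes the expectation of a set of performance samples together with downside and upside semi-deviations, in the spirit of the mean semi-variance analysis named in the abstract. The sample data, and the reading of the lower semi-deviation as risk and the upper as opportunity, are illustrative assumptions.

```python
import math

def expectation_risk_opportunity(samples):
    """Mean plus downside (risk) and upside (opportunity) semi-deviations."""
    n = len(samples)
    mean = sum(samples) / n
    down = sum((x - mean) ** 2 for x in samples if x < mean) / n
    up = sum((x - mean) ** 2 for x in samples if x > mean) / n
    return mean, math.sqrt(down), math.sqrt(up)

# Hypothetical performance estimates for two competing designs.
design_a = [8.0, 9.0, 10.0, 11.0, 12.0]   # symmetric spread
design_b = [9.5, 9.8, 10.0, 10.2, 15.0]   # mostly tight, one upside outcome

print(expectation_risk_opportunity(design_a))
print(expectation_risk_opportunity(design_b))
```

Unlike a single variance figure, the split makes the asymmetry visible: design_b carries more opportunity than risk, which an expectation-only comparison would hide.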
1.2.2.2 Multiple estimate aggregation
Design activities usually involve multiple designers. It is likely that designers will have
different judgments on how the intermediate design is likely to perform. Since each
estimate may possess different information, it is important that the different views are
properly used in the decision-making process. This component of the thesis will suggest a
systematic way of aggregating multiple estimates by different designers into a single
reference which, in turn, can be used for a more informed decision in an evolving design
process.
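A minimal sketch of one such scheme, a linear opinion pool, is shown below: each designer's estimate is treated as a probability distribution and the aggregate is their weighted mixture. The normal form of the estimates and the equal weights are assumptions for illustration; the full mechanism, including confidence-based weighting, is developed in Chapter 6.

```python
import math

def normal_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def mixture_pdf(x, estimates, weights):
    """Linear opinion pool: weighted mixture of the experts' distributions."""
    return sum(w * normal_pdf(x, m, s) for (m, s), w in zip(estimates, weights))

# Hypothetical estimates of one design attribute from three designers.
estimates = [(10.0, 1.0), (12.0, 2.0), (11.0, 0.5)]  # (mean, std) pairs
weights = [1 / 3] * 3                                # equal confidence assumed

# The mixture mean is the weighted mean of the component means.
agg_mean = sum(w * m for (m, _), w in zip(estimates, weights))
print(agg_mean)                            # 11.0
print(mixture_pdf(11.0, estimates, weights))
```

The mixture mean is simply the confidence-weighted mean of the individual means, while the mixture variance also picks up the disagreement among the experts, so a divided team yields a visibly more uncertain aggregate.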
1.3 THESIS ORGANIZATION
Chapter 2 will provide background on the design process and related work in the area of
design decision-making. A general discussion on design will be followed by the role of
decision-making tools in the design process, after which subsections will discuss existing
decision-making tools and design assessment methods. The end of the chapter provides a
qualitative comparison of different design evaluation tools. Based upon the comparison,
the chapter concludes that currently available tools fail to adequately balance operational
simplicity and modeling sophistication. Further, even the most sophisticated value
analysis tools, such as utility theory, minimally address the time-variability and
uncertainty aspects of a decision problem.
Chapter 3 begins the main body of the thesis. Key concepts in the acceptability model -
aspiration and rejection levels - will be reviewed and given mathematical definitions.
This chapter then shows how these concepts are used to build a single attribute
acceptability function. With the introduction of an operational assumption of
acceptability independence, the one-dimensional model will be expanded to a
two-dimensional case, and finally to a general n-dimensional case, using mathematical
induction. Advantages and limitations of the acceptability model will be discussed in the
last part of the chapter.
Chapter 4 presents a formal goal setting method using a dynamic probabilistic model. This
method is applicable when an expert tries to set a target value while lacking relevant
information. The goal will be used in defining acceptability functions. The task of goal
setting is, in a mathematical sense, defined as parameter estimation under uncertainty.
The operating philosophy behind the suggested model is to let designers explore their
knowledge base without worrying about maintaining consistency. The model uses a
discrete Markov chain approach. A detailed description, ranging from the elicitation of
transition probabilities and the construction of the transition matrix to the use of the
limiting probabilities, will be given in this chapter. A post-analysis at the end of the
chapter can be used to draw useful information about the states of decision-makers when
they use the suggested model.
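As a preview of the machinery, the sketch below takes a small row-stochastic transition matrix, of the kind built from elicited local probabilities, and iterates it to its limiting distribution over candidate goal states. The three states and all transition probabilities are hypothetical.

```python
# Limiting probabilities of a small discrete Markov chain by power iteration.
# Rows are current states, columns are next states; each row sums to 1.
# The states and numbers are hypothetical stand-ins for elicited judgments.
P = [
    [0.5, 0.3, 0.2],   # from "low" goal state
    [0.2, 0.5, 0.3],   # from "medium" goal state
    [0.1, 0.3, 0.6],   # from "high" goal state
]

def limiting_distribution(P, iters=1000):
    n = len(P)
    pi = [1.0 / n] * n                      # start from a uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = limiting_distribution(P)
print([round(p, 4) for p in pi])  # stationary weights over the goal states
```

For an irreducible, aperiodic chain such as this one, the iteration converges to the same limiting distribution regardless of the starting vector, which is what makes the limit usable as a consistency-free summary of the designer's judgments.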
Chapter 5 discusses decision making under uncertainties of single events. As a first step,
this chapter attempts to classify uncertainties in product development process. Based
upon different kinds of uncertainties encountered in design decision problems, this
chapter suggests that designers need a better understanding of their uncertainty structures
to make better informed decisions. Although the whole chapter is addressed in a
qualitative manner, a metric is suggested in order to supplement the popular expectation
measure.
While the preceding chapters address the preference side of decision making, Chapter 6
addresses a very practical issue in decision-making situations - the uncertainty side. This
chapter will develop a mechanism for merging multiple estimates by different experts. A
probability mixture scheme based upon an opinion pool concept will be used.
Chapter 7 describes two software implementation examples using the acceptability model
as an integrated decision guide. The acceptability decision model is currently used as a
decision guide in the MIT CADlab DOME software. Its implementation as part of a
systems model will be briefly discussed. As a second implementation example, the
acceptability model will be used as an online shopping guide. This application is
motivated by the characteristics of the acceptability model - measurement accuracy at a
rather modest operational effort. These application examples will also discuss how the
acceptability model's characteristics will benefit both the buyers and the vendors at the
same time.
2. BACKGROUND

2.1 OVERVIEW
This chapter reviews previous work relevant to this thesis. The first part of the chapter
will provide a brief description of design and design process in general. After briefly
introducing decision analysis, this chapter merges these two topics, decision-making
and design, emphasizing the role of decision analysis as an integral part of a
design process. The second part of the chapter introduces related work on design
decision-making tools, both qualitative and quantitative. Qualitative tools, which often
cover broader issues than decision analysis alone, are often used in the early design phase to
provide an environment for a design group to discuss and reach a consensus on major
issues. The quantitative tools provide a more rigorous framework for designers to carry
out sophisticated tradeoff studies among competing uncertain designs. The last part of
this chapter will provide a qualitative framework to compare the methods surveyed. The
chapter closes with a discussion of desirable properties that a design decision framework
should possess for practical applications.
2.2 DESIGN, DECISION MAKING, AND DESIGN PROCESS

2.2.1 Design
Design facilitates the creation of new products and processes in order to meet the
customer's needs and aspirations. Virtually every field of engineering involves and
depends on the design or synthesis process to create a necessary entity (Suh 1990).
Therefore, design is such a diverse and complex field that it is challenging to provide a
comprehensive review of every aspect of design. From a psychological view, design is a
creative activity that calls for a sound understanding of many disciplines. From a systems
view, design is an optimization of stated objectives with partly conflicting constraints.
From an organizational view, design is an essential part of the product life-cycle (Pahl
and Beitz 1996).
Although there are variants describing product design, the three main design components
are (Suh 1990):
+ problem definition
+ creative (synthetic) process
+ analytic process
The main task in the problem definition stage is to understand the problem in engineering
terms: the designers and managers must determine the product specifications based upon
the customer's needs as well as the resources allocated for the design task. The subsequent
creative process is then to devise an embodiment of the physical entity that will satisfy
the specifications envisioned in the problem definition stage. This creative process can be
decomposed into conceptual, embodiment, and/or detail design; it is the least understood
of the three, requiring the creative power of designers. The analytic process is associated
with the measurement process, where designers evaluate the candidate solutions. At this
stage designers determine whether the devised solution(s) are good or not, comparing the
devised designs against the initial specifications.

Figure 2.1 Three main components of design
2.2.2 Decision Analysis
Every decision analysis framework has a formalization process for the problem,
incorporating both objective and soft/subjective information. This explicit structuring
process allows the decision-maker to process assumptions, objectives, values,
possibilities, and uncertainties in a well summarized and consistent manner. The main
benefit of using decision analysis comes from this formalization process (Drake and
Keeney 1992; Keeney and Raiffa 1993), requiring the decision-maker to investigate the
problem in a thorough way to gain an understanding of the anatomy of the problem.
Decision analysis is useful if it provides a decision maker with increased confidence and
satisfaction in resolving important decision problems while requiring a level of effort that
is not excessive for the problem being considered (Drake and Keeney 1992).
A typical decision framework structures a decision problem into the following three main
constituents (Keeney and Raiffa 1993):
+ A decision maker is given a set of alternatives, X = {x_1, x_2, ..., x_n}, where each x_i
represents one alternative.
+ The decision maker has a set of criteria, C = {c_1, c_2, ..., c_m}.
+ Then, which one is the best solution among the alternatives based upon the set of
criteria?
Decision analysis helps the decision-makers in the following three areas.
+ Preference Elicitation: to identify the set of criteria and the quantification of each
criterion.
+ Performance Assessment: to assess the multi-dimensional performance of each
alternative.
+ Decision Metric: a decision rule for choosing the "best" solution.
In most applications, these tasks become complicated, mainly due to the uncertainties in
the preference construction and performance assessment. A preference function is
obtained by transforming a decision-maker's implicit and subjective judgment into an
explicit value system. In many practical cases, the decision-makers are not sure about
their own judgment, complicating the elicitation process. However, sophisticated decision
tools should capture and quantify this uncertainty. On the side of performance
assessment, lack of hard information makes the quantification task very challenging. As a
result, the performance is usually expressed with degrees of uncertainty. Based upon the
constructed decision-maker's preference and the performance estimate for each
alternative, the final decision metric suggests a course of action under both kinds of
uncertainty, as sketched below.
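The sketch below illustrates how the three pieces combine under uncertainty: a two-attribute preference function a(x, y) and a Monte Carlo estimate of the expected acceptability E[a(X, Y)] of an alternative whose performance (X, Y) is uncertain. The ramp shapes, the multiplicative combination of attributes, and all numbers are illustrative assumptions, not the thesis's actual formulation.

```python
import random

random.seed(0)

def ramp(x, rejection, aspiration):
    # Hypothetical one-dimensional preference score in [0, 1].
    return min(1.0, max(0.0, (x - rejection) / (aspiration - rejection)))

def acceptability(x, y):
    # Two-attribute preference a(x, y). The multiplicative combination is
    # an illustrative assumption only.
    return ramp(x, 50.0, 100.0) * ramp(-y, -30.0, -10.0)  # y: lower is better

def expected_acceptability(sample_xy, n=10000):
    # Monte Carlo estimate of E[a(X, Y)] under uncertain performance.
    return sum(acceptability(*sample_xy()) for _ in range(n)) / n

# A hypothetical alternative with uncertain performance on both attributes.
design = lambda: (random.gauss(80.0, 10.0), random.gauss(20.0, 5.0))
print(expected_acceptability(design))
```

Because the preference function is nonlinear, the expected acceptability generally differs from the acceptability of the expected performance, which is why the decision metric must integrate over the uncertainty rather than evaluate a point estimate.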
2.3 DECISION ANALYSIS IN DESIGN

2.3.1 Design Decision Making
Among the three design activities, decision analysis would be most applicable to the
problem definition and analytic process. The activity of specification generation, the
main activity at the problem definition stage, overlaps with the identification of the
criterion set in the decision-making framework. The House of Quality (HOQ), from
Quality Function Deployment (Hauser and Clausing 1988), is an extensive design tool
applicable to the problem definition process. The HOQ provides a framework where
designers can translate the customer attributes into engineering attributes. In addition to
its basic functionality, it allows engineers to set target values for the engineering
attributes using competitive analysis. In general, the preference elicitation component of
decision analysis can help designers to better understand and quantify the design task. In
the case of product development, the preference elicitation process can help designers to
accurately formulate the customer's needs.
The analytic process is another apparent design activity that can be helped by the use of
decision analysis. In the analytic process, the designers will be given a set of design
candidates that will be evaluated against the set of criteria elicited at the problem
definition stage. For this purpose, the multi-dimensional performance of each design
candidate has to be assessed, often under uncertainty. The formal assessment of alternatives' performance in decision analysis requires designers to measure each alternative more accurately. The performance thus obtained, together with the preferences elicited
earlier in the problem definition process, will be inputs to the metric system. The metric
system will suggest to the designer which design alternative to pursue for the subsequent
development stage.
These are two main visible areas where the use of decision analysis can improve the
design activities. Table 2.1 shows the correlation between design areas and decision
analysis components.
Table 2.1 Correlation between decision components and design activities

                            Problem Definition   Creative Process   Analytic Process
    Preference Elicitation          X
    Performance Assessment                                                 X
    Decision Metric                                                        X
2.3.2 Decision Making as an Integral Part of Design Process
In most practical cases, initial design efforts will not yield a design solution meeting the
initial specifications. In this case, the designer might change the initial specifications to
accommodate the realized designs and terminate the design activity. More commonly,
designers will devise new design candidates to meet the initial specifications. In most
cases, the succeeding iteration will be a mixture of these two strategies. The designer will
try to devise new designs based upon a newly modified set of specifications. Design
assessment between iterations will provide a feedback to the designers suggesting a
guideline to the succeeding design activity. With this feedback, the design activity will be
iterated multiple times, resulting in a design process. The design process will continue
until a promising solution is identified which satisfies the initial or modified
specifications.
In addition to its role in the design activity, the scope of decision analysis becomes
broader in the design process. In the design iteration context, decision analysis tools can
be used to provide designers with critical feedback necessary for succeeding iterations.
The analysis might even suggest an appropriate direction for the following iteration. For
instance, the decision analysis result can help designers identify weakly performing
attributes of the most promising candidate. In the next iteration the main effort can be
directed to improving the weakly performing attributes of the design. Extending the above argument, continuous feedback from the analysis, including measurement of the design candidates, would help designers to continuously monitor the strengths and weaknesses of each design. Ideally, design evaluations can be placed in many parts of the design process
to provide feedback to the designer. From the decision-making standpoint, design process
is often viewed as a controlled iteration of divergence and convergence (Pugh 1991).
Figure 2.2 Design divergence and convergence (time on the horizontal axis; the widths of the successive design generation and decision resolution cones indicate the degree of uncertainty)
Divergence corresponds to the Ideation/Creation part of the design, and convergence to
the evaluation/selection part. The overall role of decision analysis tool as an integrated
part of design process is abstracted and visualized in Figure 2.2. This is a modified and
expanded interpretation of the controlled convergence diagram by Pugh (Pugh 1991). The abscissa represents time in the design process, while the ordinate is a hypothetical multi-dimensional design attribute space. For instance, this design space might be a cost/performance pair. At the initiation of a design process, a target point A in the multi-dimensional design space will be set in the problem definition stage. A set of design
candidates will then be generated through the creative part of design. This part is again
modeled as each divergent part of the cone in Figure 2.2. The width of the cone
represents the uncertainties associated with the solutions. The ideation process may be
viewed as an exploration of uncertain design domains that may yield potentially
satisfactory designs. In the convergence part of design activity, the level of uncertainties
associated with design candidates will decrease through analysis and measurement.
The dots in the figure represent a trace of how the designer's aspiration level or
specification might undergo a change based upon the feedback from the
analysis/evaluation at the convergent part of the diagram. Given the newly set targets, a
new round of divergence is carried out. The critical role of analysis/evaluation is clear. It
guides the overall direction of the design process. This guidance can also direct the effort
and resource allocation in each succeeding design iteration.
2.4 RELATED WORK
So far, the role of decision analysis in both design and design process has been discussed.
This part of the chapter discusses current design decision-making tools. These existing
tools are classified as qualitative and quantitative. Well-known qualitative tools include
selection charts and some aspects of Quality Function Deployment. Among the well-known quantitative tools are the Figure of Merit, Fuzzy Decision-Making, and Utility
Theory. These quantitative tools provide a more sophisticated and complete value
analysis but are more expensive in terms of operational effort. The Acceptability model,
the basis of the goal-oriented model of this thesis, is intended to reduce this operational
effort.
2.4.1 Qualitative Decision Support Tools
2.4.1.1 Pugh chart
The Pugh chart is a selection matrix method often used in the early design phase. It provides a framework for a group of designers to discuss, compare, and evaluate uncertain design candidates (Pugh 1991). Figure 2.3 shows an exemplary evaluation matrix used in Pugh's total design method. In the chart shown, there are four concepts, from concept 1 to concept 4, under consideration and five criteria, from A to E. One of the design candidates serves as a datum point against which the other concepts are compared. In the assessment, the three legends +/-/S are used to specify "better than", "worse than", and "same as" the datum, respectively. The design with the highest aggregated sum will be chosen as the best among the candidates. The Pugh chart is qualitative and does not allow an in-depth analysis. However, it is intuitive enough for many designers to use in the concept selection process.
Figure 2.3 An evaluation matrix in a Pugh chart (concepts are rated +, S, or - against criteria A-E relative to the datum concept, and the column sums rank the concepts)
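As a minimal illustration of this scoring scheme, the following sketch aggregates +/S/- assessments into concept totals; the concepts, criteria, and ratings are invented for illustration only.

    # A minimal sketch of Pugh-chart scoring; all entries below are hypothetical.
    scores = {'+': 1, 'S': 0, '-': -1}   # better than / same as / worse than the datum

    # Assessments of each concept against the datum concept, for criteria A-E.
    matrix = {
        'Concept 2': ['+', '-', 'S', '+', 'S'],
        'Concept 3': ['-', '+', '+', 'S', '-'],
        'Concept 4': ['S', 'S', '+', '+', '+'],
    }

    totals = {name: sum(scores[mark] for mark in row) for name, row in matrix.items()}
    best = max(totals, key=totals.get)   # the highest aggregated sum wins
    print(totals, best)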
2.4.1.2 House of Quality
The basic aim of Quality Function Deployment (QFD) is to translate product requirements stated in the customer's own words into viable product specifications that can be designed and manufactured. The American Supplier Institute defines QFD as (ASI 1987),

"A system for translating customer requirements into appropriate company requirements at every stage, from research through production design and development, to manufacture, distribution, installation and marketing, sales and service."
The House of Quality is the major tool in QFD for performing the necessary operations. The HOQ is a versatile tool applicable to a broad range of design tasks; its decision-making aspect will be discussed in this section. Figure 2.4 shows the major components of an
HOQ. The main body of the house serves as a map between the customer attributes and
the corresponding engineering attributes. The column next to the customer attributes
allows the designers to specify the relative importance among the customer attributes.
Then the designers, based upon the translated engineering attributes and competitive
analysis, can determine a set of targets for evaluation. There are many variants of the HOQ that add more functionality (Ramaswamy and Ulrich 1993; Franceschini and Rossetto 1995).
Figure 2.4 Main components of the House of Quality (correlation matrix; degree of importance of requirements; customer attributes; product engineering/design requirements; relationship matrix; competitive benchmarking assessment; technical importance ranking)
2.4.2 Quantitative Decision Support Tools
2.4.2.1 Figure of Merit
The Figure of Merit (FOM), a commonly used design evaluation model, determines the
value of a certain design as a weighted average of individual attribute levels. A weighting
factor is often thought of as relative importance. A typical FOM for a design X_j expresses the overall value as

    FOM(X_j) = Σ_{i=1}^{N} w_i · v_i(x_{j,i}),    X_j = (x_{j,1}, x_{j,2}, ..., x_{j,N})    (2.1)

where,
    X_j : the j-th design candidate
    N : total number of attributes
    w_i : weighting factor assigned to the i-th attribute
    v_i : performance indicator function of the i-th criterion
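As a minimal sketch of equation (2.1), the following fragment computes a figure of merit for a three-attribute design; the weights and performance indicator functions are hypothetical.

    # A minimal sketch of eq. (2.1); weights and indicator functions are illustrative only.
    weights = [0.5, 0.3, 0.2]                        # w_i, interpreted as relative importance

    def v_cost(c): return max(0.0, 1.0 - c / 100.0)  # hypothetical performance indicator v_1
    def v_mass(m): return max(0.0, 1.0 - m / 10.0)   # hypothetical v_2
    def v_life(t): return min(1.0, t / 5.0)          # hypothetical v_3

    def figure_of_merit(cost, mass, life):
        indicators = (v_cost(cost), v_mass(mass), v_life(life))
        return sum(w * v for w, v in zip(weights, indicators))

    print(figure_of_merit(60.0, 4.0, 3.0))           # 0.5*0.4 + 0.3*0.6 + 0.2*0.6 = 0.50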
This method seems very popular since it is quite intuitive and easy to use. In fact,
variations of FOM seem to be in use in many decision situations.
Its operational simplicity and intuitiveness are its advantages. However, there are also limitations.
First, unless the functions v_i are defined in closed form, the designer will have to evaluate the i-th performance indicator function for every design candidate. As the number of designs subject to evaluation grows, the designer may have to evaluate the performance indicators in an exhaustive manner.

Second, the FOM method addresses uncertainty in a simplified way. The Figure of Merit method addresses preference and performance with one function, the performance indicator function. In practical cases, x_{j,i}, the i-th attribute level of the j-th design candidate, may be uncertain. In the FOM method, the performance indicator function v_i
will be used to express both the preference and the uncertain performance for a specific design. As the problem becomes larger and more complex, this approach will not yield accurate and consistent measurement results.
2.4.2.2 Method of Imprecision
Otto performed extensive research on different design metric systems in decision frameworks (Otto 1992). He argues that there are different kinds of uncertainty encountered in design: possibility, probability, and imprecision. Imprecision in the method refers to a designer's uncertainty in choosing among the alternatives (Otto and Antonsson 1991). The method itself is very formal. A preference is defined as a map from a space X_k to [0,1] ⊂ R,

    μ_k : X_k → [0, 1]    (2.2)

that preserves the designer's preferential order over X_k, where k = 1, 2, ..., n and n is the total number of attributes. The subsequent step is to combine the individual preferences μ_k into an aggregate preference. The first step taken for this aggregate preference construction was to define a set of characteristics that describe and emulate designers' behavior in design decision-making. This set of characteristics is then transformed into a set of axioms.
Table 2.2 shows the set of axioms proposed for the design decision metric. Among the many possible aggregation models investigated, Zadeh's extension principle is used to aggregate the individual preferences (Wood, Otto et al. 1992). Employing the fuzzy framework, the initial maps μ_k can be regarded as fuzzy membership functions. The concept of "fuzzy" is based upon the concept of possibility, not probability, and there is some debate on using this in decision science (French 1984).

Table 2.2 Method of Imprecision general axioms

    Boundary conditions    P(0, 0, ..., 0) = 0 and P(1, 1, ..., 1) = 1
    Monotonicity           P(μ_1, ..., μ_k, ..., μ_n) ≤ P(μ_1, ..., μ_k', ..., μ_n) for μ_k ≤ μ_k'
    Continuity             P(μ_1, ..., μ_k, ..., μ_n) → P(μ_1, ..., μ_k', ..., μ_n) as μ_k → μ_k'
    Annihilation           P(μ_1, ..., 0, ..., μ_n) = 0
    Idempotency            P(μ, μ, ..., μ) = μ
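The following sketch checks three of these axioms numerically for two candidate aggregation functions; the test points are arbitrary, and the check is meant only to illustrate why the choice of aggregation operator matters.

    # A sketch testing Table 2.2 axioms on two candidate aggregation operators.
    def agg_min(mu):                      # min-style aggregation
        return min(mu)

    def agg_prod(mu):                     # product-style aggregation
        p = 1.0
        for m in mu:
            p *= m
        return p

    def check(agg):
        boundary = agg([0, 0, 0]) == 0 and agg([1, 1, 1]) == 1
        annihilation = agg([0.7, 0.0, 0.9]) == 0     # one zero preference kills the design
        idempotency = all(abs(agg([m] * 3) - m) < 1e-12 for m in (0.2, 0.5, 0.8))
        return boundary, annihilation, idempotency

    print(check(agg_min))    # (True, True, True)
    print(check(agg_prod))   # (True, True, False): 0.5**3 != 0.5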
2.4.2.3 Utility Theory
Utility theory is the most sophisticated analytical method for value-based decision analysis, originally developed for economic decision making. Since its formal inception by von Neumann, it has been extensively developed by many researchers and has been applied to many disciplines including design (Howard and Matheson 1984; Tribus 1969; Thurston 1991).

Utility theory is built upon a set of axioms that describe the behavior of a rational decision-maker. Among the axioms are transitivity of preference and substitution. A detailed discussion on the construction of utility theory is found in the classic book on utility theory, Decisions with Multiple Objectives (Keeney and Raiffa 1993). Since the acceptability-based framework is to a large extent associated with utility theory, this section will discuss the basic formulation of utility theory and its relation to design decision making.
A single-attribute utility function quantifies the utility of an attribute level on a local scale of [0.0, 1.0]. The lottery method is used for the construction of a single-attribute utility function. In order to quantify the utility of $500 on a basis of ($0, $1,000), a decision-maker participates in a hypothetical lottery as shown in Figure 2.5.
Figure 2.5 A lottery method for utility function construction (a sure $500 versus a lottery yielding $1,000 with probability p and $0 with probability 1 − p)
The minimum probability, p, that will leave the decision-maker indifferent between the sure amount of $500 and the lottery of ($0, $1,000) is defined as the utility of $500. A continuous utility function over a certain range is constructed by interpolating a number of data points available in the range.
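A minimal sketch of this construction follows, assuming hypothetical indifference probabilities elicited at a few dollar amounts; the continuous function is obtained by piecewise-linear interpolation.

    # A sketch of single-attribute utility construction; the elicited points are hypothetical.
    import numpy as np

    amounts   = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])  # assessed attribute levels
    utilities = np.array([0.0, 0.45, 0.70, 0.88, 1.0])        # elicited indifference probabilities p

    def u(x):
        """Continuous utility via piecewise-linear interpolation between elicited points."""
        return float(np.interp(x, amounts, utilities))

    print(u(600.0))   # utility of an intermediate amount

The concave shape of these hypothetical points would correspond to a risk-averse decision-maker.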
Construction of a multi-attribute utility function is a challenging task compared to that of a one-dimensional function. Theoretically, an n-dimensional surface can be constructed by
interpolating and extrapolating a number of data points on the surface. However, this
direct assessment approach becomes impractical as the number of attributes or the surface
dimensionality increases.
Utility independence is an assumption often invoked regarding the decision-maker's preference structure in utility theory. Its role in multi-attribute utility theory is very similar to that of probabilistic independence in multivariate probability theory. Attribute X is utility independent of attribute Z when the conditional preferences for lotteries on X given z do not depend on the particular level of Z. The conditional lottery is illustrated
in Figure 2.6.
Figure 2.6 A conditional lottery for utility independence (a 50/50 lottery between the consequences (x_1, z^0) and (x_2, z^0))
If a decision maker's certainty equivalent value, x̂, for the lottery does not change with the level of Z, then attribute X is utility independent of attribute Z. In a surprisingly large number of cases, this turns out to be a pragmatic assumption for the construction of multi-attribute utility functions. If an attribute X is utility independent of Y, and Y is utility independent of X, the two attributes are mutually utility independent, and this mutual
independence dictates a special form for the multi-attribute utility function. For instance, for a two-dimensional utility function, one of the special forms is

    u(x, y) = k_x·u(x) + k_y·u(y) + k_xy·u(x)·u(y)    (2.3)

where k_x, k_y, and k_xy are scaling factors to be determined from a set of further hypothetical lotteries. A stricter assumption of additive independence further simplifies the above formula to

    u(x, y) = k_x·u(x) + k_y·u(y)    (2.4)
A general n-attribute utility function, where the attributes x_i are mutually utility independent, is

    u(x) = Σ_{i=1}^{n} k_i·u_i(x_i)
         + k·Σ_{i=1}^{n} Σ_{j>i} k_i·k_j·u_i(x_i)·u_j(x_j)
         + k^2·Σ_{i=1}^{n} Σ_{j>i} Σ_{l>j} k_i·k_j·k_l·u_i(x_i)·u_j(x_j)·u_l(x_l)
         + ... + k^{n-1}·k_1·k_2···k_n·u_1(x_1)·u_2(x_2)···u_n(x_n)    (2.5)

where,
1. u is normalized by u(x_1^0, x_2^0, ..., x_n^0) = 0 and u(x_1^*, x_2^*, ..., x_n^*) = 1
2. u_i(x_i^0) = 0 and u_i(x_i^*) = 1, i = 1, 2, ..., n
3. k_i = u(x_i^*, x̄_i^0), the utility when attribute i is at its best level and all other attributes are at their worst levels
4. k is a scaling factor that is a solution to 1 + k = Π_{i=1}^{n} (1 + k·k_i)
As the number of attributes increases, the complexity of the multi-attribute utility function, including the number of scaling factors, increases. Under the assumption of additive independence for the multi-attribute case, the above formula reduces to

    u(x) = Σ_{i=1}^{n} k_i·u_i(x_i)    (2.6)
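To make the scaling condition in item 4 above concrete, the following sketch solves for the master constant k by bisection and evaluates the multiplicative form; the k_i values are hypothetical, and the product expression 1 + k·u = Π(1 + k·k_i·u_i) is the standard closed form whose expansion gives equation (2.5).

    # A sketch for eq. (2.5); the single-attribute scaling factors below are hypothetical.
    k_i = [0.4, 0.3, 0.2]                 # their sum is below 1, so the constant k is positive

    def g(k):                              # the root of g gives the master scaling constant
        prod = 1.0
        for ki in k_i:
            prod *= 1.0 + k * ki
        return prod - (1.0 + k)

    lo, hi = 1e-6, 10.0                    # k = 0 is a trivial root, so start just above it
    for _ in range(100):                   # simple bisection
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)

    def u_total(u_i):                      # u_i = [u_1(x_1), ..., u_n(x_n)]
        prod = 1.0
        for ki, ui in zip(k_i, u_i):
            prod *= 1.0 + k * ki * ui
        return (prod - 1.0) / k            # closed form equivalent to the expansion in (2.5)

    print(k, u_total([1.0, 1.0, 1.0]))     # u at all best levels is 1.0 by construction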
Besides the operational complexity of a multi-attribute utility function, the trade-off strategy built into the utility function does not reflect typical engineering trade-off scenarios. Due to its additivity, utility theory assumes that a trade-off is always possible between attributes, even to the point of achieving a zero value in some of the attributes (Otto 1993). For instance, utility theory may prefer a low-cost design exceeding a stress limit to a safe design with an intermediate cost. In real design, the first case would not be acceptable at all.
However, utility theory is unique in its mathematical sophistication. The acceptability model shares many ideas with utility theory, especially in the process of formalization and construction. In effect, the acceptability function is a utility function on an absolute scale (Wallace 1994).
2.4.2.4 Acceptability Model
The Acceptability model is based upon observation of design practice and the intuitions behind the observations. It bears the design hypothesis that design is a specification-satisfaction process. The acceptability model intends to emulate how product designers use specifications in the development process.
Design activity starts with a design brief or specification. Although the initial
specification may be subject to change in the course of design, the specification will be
used to judge the intermediate designs in the process. The design activity will be over if a
design meeting the specification is identified. Acceptability is a notion translating the
design brief or specification into an evaluation metric system. Acceptability is defined as
a designer's subjective probability that a certain attribute level will be deemed acceptable
Figure 2.7 Recycling acceptability function (acceptability is 1.0 for ratios of material recycling cost to virgin material cost below 1.0 and drops to 0.0 above it)
(Wallace, Jakiela et al. 1995). Figure 2.7 emulates the behavior of a designer who has a binary logic about which attribute levels are acceptable or unacceptable. The designer will accept a recycling cost ratio of 0.9 with a probability of 1.0, while he/she will accept a ratio of 1.1 with a probability of zero. For many design attributes, designers will often not be able to specify a clear boundary between an attribute level being acceptable or unacceptable. In most cases, this comes from the designer's preferential uncertainty in selecting a single binary threshold for the decision. In Figure 2.7, a ratio of 0.99 will be totally acceptable, while a ratio of 1.01 will be totally unacceptable. This illustrates that the uncertainty about the threshold point can have a significant effect on the decision, and it would be more realistic to capture this kind of uncertainty in decision making. In effect, most decision-makers will have a vague region in the attribute level, and this intermediate region can be further quantified as subjective probabilities in the acceptability model.
Figure 2.8 Recycling cost acceptability with uncertainty (the probability of acceptance is 1.0 below a ratio of 1.0, decreases over the transitional range from 1.0 to 1.5, and is 0.0 above 1.5)
Figure 2.8 shows an acceptability function with a transitional boundary. The transitional range between ratios of 1.0 and 1.5 quantitatively models the designer's uncertainty about the threshold point. In this representation, ratios greater than 1.5 have probabilities of 0.0 of being acceptable. This model more realistically describes a decision-maker in practice. There is an important concept used in the construction of acceptability functions: since acceptability is defined in terms of probabilities, the acceptability function is on an absolute scale. Therefore, zero acceptability, or zero probability, in the recycling cost ratio will render the whole design unacceptable regardless of the other characteristics of the design under assessment.
The construction of a multi-attribute acceptability function involves two steps. First, individual acceptability functions a_i(x_i), i = 1, ..., n, are elicited on the absolute scale:

    a_i = a_i(x_i), with a_i(x_i^0) = 0.0 and a_i(x_i^*) = 1.0, i = 1, 2, ..., n    (2.7)

Then a multi-attribute acceptability function is given by

    a(x_1, x_2, ..., x_n) = Π_{i=1}^{n} a_i(x_i)    (2.8)

We can multiply the individual acceptability functions under the implicit assumption that the acceptability values, quantified using probabilities, are independent of each other.
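A minimal sketch of equations (2.7) and (2.8) follows; the single-attribute functions mirror Figures 2.7 and 2.8, and all threshold values are hypothetical.

    # A sketch of multi-attribute acceptability via multiplication, eq. (2.8).
    def recycling_acceptability(ratio):
        """Transitional boundary as in Figure 2.8: certain below 1.0, rejected above 1.5."""
        if ratio <= 1.0:
            return 1.0
        if ratio >= 1.5:
            return 0.0
        return (1.5 - ratio) / 0.5          # linear transition models threshold uncertainty

    def cost_acceptability(cost):
        """Hypothetical second attribute: acceptable below $30, rejected above $50."""
        if cost <= 30.0:
            return 1.0
        if cost >= 50.0:
            return 0.0
        return (50.0 - cost) / 20.0

    def design_acceptability(ratio, cost):   # eq. (2.8): product of individual acceptabilities
        return recycling_acceptability(ratio) * cost_acceptability(cost)

    print(design_acceptability(1.2, 35.0))   # 0.6 * 0.75 = 0.45
    print(design_acceptability(1.6, 20.0))   # 0.0: one rejected attribute kills the design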
The greatest advantage of the acceptability model is its intuitiveness, due to the descriptive nature embedded in the formulation process. It is based upon the premise of the design activity and is built with familiar design language. At the same time, the multiplicative metric reflects a design logic frequently found in design practice. The acceptability model described in this subsection will be further refined and rigorously formulated in chapter 3.
2.4.3 Other Decision Aid Tools
The qualitative and quantitative decision frameworks discussed in the previous two subsections have two main elements, either separate or combined: preference and performance. Designers implicitly combine these two components in qualitative tools such as the Pugh chart. For example, one element in the selection matrix is an assessment of both preference and performance for a design's specific attribute. On the other hand, more sophisticated quantitative frameworks treat preference and performance separately and combine the two components into a metric afterwards. Utility theory and the acceptability model fall into this category.
There are other decision tools available that address specific aspects of preference elicitation, performance assessment, or both. The Delphi method is a process-oriented method applicable to a broad range of group decision-making environments. It is a method for building consensus from a group of experts using a combination of anonymous surveys and feedback.
The Analytic Hierarchy Process (AHP) is a quantification method under uncertainty. It relaxes the decision-maker's burden of maintaining consistency in the assessment task. The method allows decision-makers to express possibly conflicting opinions, which are then processed into a set of consistent opinions. It was originally devised to determine priorities among competing attributes in a multiple-attribute problem. However, the method can be modified for general assessment problems under uncertainty.
2.4.4 Discussion
This subsection provides a qualitative comparison among the decision-making tools discussed in this chapter. The comparison is made along three dimensions that the author believes are important from a design engineer's viewpoint: mathematical sophistication, operability, and the incorporation of uncertainty in an evolving design environment.

Mathematical Sophistication: This is associated with the accuracy of the measurement method, both for preference elicitation and performance assessment. It is very difficult to empirically test the correctness of decision tools. Unlike in the natural sciences, the role of controlled experiments is very limited for testing a specific decision-making tool. Therefore, the accuracy of each tool must mainly be deduced from the modeling assumptions and the mathematical steps taken in the construction process.

Operability: This dimension addresses the ease of use of the tool in practice. A decision model with high operability should be intuitive to understand, and, at the same time, its measurement and elicitation steps should not be lengthy.
Incorporation of Evolving Uncertainty: From a pure decision-making standpoint, the design process is viewed as decision-making under evolving uncertainty. This dimension addresses how each decision framework handles the inherent components of the design process: uncertainty and time variability, that is, the evolving uncertainty.
Table 2.3 qualitatively compares these attributes for the decision tools addressed in this chapter. The Pugh chart is judged to be easy to operate, with minimal mathematical sophistication. At the same time, it has no mechanism to adequately address the evolving uncertainties in the design process. Utility theory is by far the most mathematically sophisticated: it is built on a set of axioms, and every construction step is mathematically rigorous. In our observations, however, utility theory is not typically used by practicing product design engineers. It involves multiple assessment steps, for example the elicitation of scaling factors, making it difficult for designers to apply accurately in the design process.
There is also the criticism that value-based decision analysis tools, such as utility theory, place too much emphasis on the multi-dimensionality of problems, while more important issues such as uncertainty and time variability are minimally addressed (Bauer and Wegener 1977).
This research effort was partly initiated with an intention to provide a comprehensive
decision framework achieving a balance between mathematical sophistication and
operability. The acceptability model is believed to provide a groundwork for this task.
However, the expanded model should have a component for addressing the evolving
uncertainties in the design process.
Table 2.3 Comparison of different decision frameworks

                       Mathematical       Operability   Incorporation of
                       Sophistication                   Evolving Uncertainty
    Pugh Chart         minimal            high          not addressed
    Figure of Merit    moderate           high          simplified
    Utility Theory     high               low           limited
2.5 SUMMARY
This chapter discussed design and the design process from a decision-making viewpoint. Design decision analysis is a subset of the design process. With decision analysis as an integrated part of the design process, designers can continuously monitor the design candidates and receive feedback during design iterations.

Related work was also presented in this chapter. Some of the well-known tools were qualitatively compared along three important dimensions. This thesis is motivated to provide a decision framework that balances operability, mathematical sophistication, and the incorporation of uncertainties.
3. ACCEPTABILITY-BASED EVALUATION MODEL
3.1 OVERVIEW
This chapter presents an acceptability-based evaluation model, a prescriptive goal-oriented decision-making tool applicable to most design phases. The approach uses goals or targets, as commonly observed in design practice. Section 3.2 provides the motivation for using a goal-oriented approach in the early stages of product design, compared to the traditional optimization-based framework. Section 3.3 introduces two important concepts used in the acceptability model. Based upon these two concepts, section 3.4 develops a one-dimensional acceptability function, and section 3.5 extends the one-dimensional model to a multi-dimensional acceptability model using mathematical induction.
3.2 A GOAL-ORIENTED APPROACH
The concepts of setting goals and satisficing are not new. Satisficing was first formally introduced to the operations research/management science (OR/MS) community by Simon (Simon and March 1958). Thereafter, Koopman introduced the idea of goal programming, which has subsequently formed a rich body of literature (Zeleny 1982). In engineering design, many parametric-design-phase optimization problems have been approached using goal programming. Dlesk (Dlesk and Lieman 1983) and Mistree et al. (Mistree and Karandikar 1987) have used goal programming in multiple-objective engineering design applications.
Massachusetts Institute of Technology - Computer-aidedDesign Laboratory
46
Keeney (Keeney 1993) provides a good example distinguishing the concepts of optimization and satisficing. The optimization framework is usually expressed as an objective indicating a direction in which the participants should strive to do better. For a postal service, the objective may be expressed as "minimize the total transit time for a certain category of mail". For the same problem, the satisficing framework is represented by a goal. A possible representation of a goal would be "deliver at least 90% of the parcels and letters within two days". The major difference between these two concepts is that, using the goal, the participants can say that the task is either achieved or not, while using the objective will keep the participants in a continuous improvement mode. Goals are useful for clearly stating a level of achievement to strive toward.
3.2.1 Goal programming
The primary motivation for goal programming in OR/MS is tied to the multi-dimensional aspect of decision problems: Keen (Keen 1977) noted that the optimization paradigm had stalled on "the curse of multidimensionality" of modern decision problems. In its essence, goal programming was proposed to simultaneously secure minimum levels in all attributes of interest. It has been adopted in engineering optimization problems for similar reasons. As the systems-oriented view has received more emphasis over time, engineering design problems frequently involve multiple attributes, and designers must simultaneously consider conflicting objectives.
3.2.2 Goal-based design evaluation framework
Goal programming has been used in many engineering applications to address the multi-dimensional aspect of engineering design problems. In addition to the multi-dimensionality of design problems, this thesis adopts the goal-based approach from one more perspective. As was discussed in chapter 1, design activity is characterized as a specification satisfaction process: the design process is terminated when designers can produce an artifact that meets the targets. Embracing this view, the design goal is a set of targets and the design activity is to achieve the goal.
In the acceptability evaluation framework proposed in this thesis, a goal-oriented approach is adopted because it closely emulates how design tasks are carried out in practice. There may be two reasons that goal setting is used in product design. First, as was previously described, goal-based approaches can be useful in multi-dimensional design problems. Second, the author believes that it provides a framework for designers to confront uncertainties during early design phases. Even in a single-attribute problem, designers may be uncertain about what can be achieved or what is desired. Under this uncertainty, aggressive but realistic goals can serve as a guide for designers to focus design effort (Ramaswamy and Ulrich 1993) without introducing artificial precision into the statement of preferences.
3.3 ASPIRATION AND REJECTION LEVELS

3.3.1 Definition and Formulation
In comparison or evaluation methods, a datum point serves as a reference for differentiating objects. In qualitative evaluation models, the datum point is often set as one randomly chosen design candidate. In Pugh's selection matrix, for example, all attributes of one randomly chosen design candidate serve as the reference against which other designs are compared. In quantitative methods, the notion of setting a datum point for a specific attribute often translates into a set of two points: the most and least favorable points in an interval. The interval is determined such that it contains the attribute levels of all designs under evaluation. In the acceptability model, these two threshold points are defined in such a way as to emulate the design activity. The starting point for constructing an acceptability preference function is to establish these two threshold points on the attribute axis.
In order to see how the acceptability model determines these two threshold points, in comparison to other evaluation frameworks, the threshold point setting process for utility theory is introduced first. The points x^* and x^0, shown in Figure 3.1, represent these thresholds for an attribute X in utility theory. In utility theory, the guideline for determining the threshold points (x^*, x^0) is that they should bound all possible values or worth of the attribute X over every alternative. For a two-attribute design problem (X, Y), this may be stated as

    [x^0, x^*] s.t. x^* ≽ x_i ≽ x^0, ∀i = 1, 2, ..., n
    [y^0, y^*] s.t. y^* ≽ y_i ≽ y^0, ∀i = 1, 2, ..., n    (3.1)

where the symbol ≽ is read as "preferred to or indifferent to" and n indicates the number of alternatives. Each alternative is represented by a set of attributes [x_i, y_i], i = 1, 2, ..., n. These thresholds serve as a reference set against which the intermediate attribute levels are evaluated. However, in design practice, a single design with the ideal attribute set [x^*, y^*] is usually unachievable, since it is the set of best possible performance levels for all attributes. Therefore, in this case, alternatives are measured against an impossible reference design [x^*, y^*].
A goal-based model might use a different approach for setting threshold points.
Alternatives are gauged against a standard (i.e., a set of goals) which is presumed to be
achievable although aggressive. It is a common practice to set such targets at the onset of
a project. Measurement against an achievable target provides a designer with a feedback
about how close a design alternative is to its desired state. The proposed acceptability
model assumes that, if projected targets are achieved, the designer will find the solution
to be acceptable with certainty or with probability of 1.0.
Figure 3.1 A utility function for an attribute X (the thresholds x^* and x^0 bound the attribute range)
In order to incorporate this idea, the two key concepts of an aspiration level and a rejection level are defined as follows.
Definition. An aspirationlevel for an attribute is such that, under current technological,
physical and other constraints, if this level is achieved for this attribute, the designer is
certain that he/she will accept the whole design assuming other attributes are acceptable.
Definition. A rejection level for an attribute is such that, under current technological,
physical and other constraints, if this level is not achieved for this attribute, the designer
is certain that he/she will reject the whole design even if all other attributes are
acceptable.
Figure 3.2 schematically shows the two threshold points defined as above. In the flat
region of the acceptability function at 1.0, the aspiration level or goal is exceeded
assuming the other attribute Y is acceptable. The flat region of the acceptability function
at 0.0 is delimited by the rejection threshold regardless of the other attribute Y's level.
The intermediate x levels are evaluated assuming the other attribute Y has achieved its
goal of y*.
Figure 3.2 Acceptability function for attribute X in a two-attribute problem (the function a(x, y^*) is 1.0 beyond the aspiration level x^* and 0.0 below the rejection level x^0)
3.3.2 Absolute Scale of Aspiration and Rejection Levels
The aspiration and rejection levels qualitatively differ from the threshold points usually
set in many scoring-based decision methods. In scoring based methods, the threshold
points are set on a "local" scale. Therefore, the intermediate attribute levels are also
evaluated on this local frame. When different attributes on individual local frames are
combined, an additional procedure is needed for comparing these "local" scales. As
shown in the left frame of Figure 3.3(A), attributes X and Y are first evaluated separately
on their local frames. In order to merge these two evaluation scores, scaling factors or
weighting factors are then introduced such that the separate evaluations of attributes X
and Y are commensurate.
In the proposed acceptability model, the rejection level is a baseline with the same worth
for all attributes. This is schematically shown in Figure 3.3(B). The worst-case values of
attributes X and Y are equally unacceptable. If either level fails to be achieved, the whole
design will not be acceptable. Similarly, the simultaneous attainment of all aspiration
levels provides a common reference for all individual attributes.
-
v(x)
-
Scaling
u(y)
y
v(x)
01
facto
o~
au(y)
0
0
1
I
)
0
1
(A) Scored-based model
a(xy") or
a(x ,y)
a(x*,y*)
a(x,y*)
a(x*,y)
0
(B) Acceptability function
Figure 3.3 Conceptual comparison of locally defined preference functions
and acceptability functions
Thus, different attributes are gauged against a common reference set. Later, this will
enable the aggregation of multiple individual acceptability functions without an
additional mapping procedure such as use of weighting or scaling factors. This may
appeal to designers in two respects. First, setting weighting factors is a difficult task in a
decision problem. Second, the absence of weighting factors reduces the number of steps necessary for the construction of a multi-attribute acceptability function.
3.4 ONE DIMENSIONAL ACCEPTABILITY FUNCTION

3.4.1 Formulation
Consider an n-attribute decision problem. Let the superscript * denote an aspiration level for each attribute and the superscript 0 denote a rejection level. Assume that the aspiration levels for all attributes are identified and that X^* denotes the set of all aspiration levels. Let X^0 denote the corresponding set of rejection levels for all attributes.

    X^* = [x_1^*, x_2^*, ..., x_n^*]    (3.2-a)
    X^0 = [x_1^0, x_2^0, ..., x_n^0]    (3.2-b)
Additionally, assume the existence of a value function, i.e., it is possible to construct a
preference function v(.) quantifying a decision maker's preference. This assumption is
identical to one of the axioms in utility theory - the existence of a preference function.
The rejection level and aspiration level for an attribute are mathematically expressed in
the preference function as
    v(x_1, x_2, ..., x_i^0, ..., x_n) = 0, ∀i ∈ {1, 2, ..., n}    (3.3-a)
    v(x_1^*, x_2^*, ..., x_n^*) = 1    (3.3-b)
Property (3.3-a) is a mathematical interpretation of the rejection level: failure to achieve a minimum level in one attribute will render the whole design unacceptable with certainty, regardless of the other design attribute levels. Property (3.3-b) indicates that, if the aspiration levels in all attributes are simultaneously achieved, the design is acceptable with certainty.
With the assumption of the existence of a preference function, it is possible to define the one-dimensional acceptability function as

    a(x_i) = v(x_1^*, ..., x_{i-1}^*, x_i, x_{i+1}^*, ..., x_n^*)    (3.4-a)

with boundary conditions of

    a(x_i^*) = v(x_1^*, ..., x_i^*, ..., x_n^*) = 1.0
    a(x_i^0) = v(x_1^*, ..., x_i^0, ..., x_n^*) = 0.0    (3.4-b)

The definition of equation (3.4) provides that one attribute is evaluated assuming that the other attributes are all acceptable. This is the essence of the one-dimensional acceptability function. The argument of this function can also be a vector. For example, if an acceptability function has two arguments, then

    a(x_i, x_j) = v(x_1^*, ..., x_i, ..., x_j, ..., x_n^*)    (3.5)

From the definitions of X^*, X^0 and v(.), we can derive the following properties of a one-dimensional function a(x):

    a(x_i^0) = v(x_1^*, ..., x_i^0, ..., x_n^*) = 0, ∀i ∈ {1, 2, ..., n}
    a(x_i^*) = v(x_1^*, x_2^*, ..., x_n^*) = 1, ∀i ∈ {1, 2, ..., n}
    a(x_i, x_j^*) = v(x_1^*, ..., x_i, ..., x_j^*, ..., x_n^*) = a(x_i), ∀j ≠ i    (3.6)
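A minimal sketch of definition (3.4) follows, assuming a hypothetical multiplicative preference function v (the form derived later in section 3.5); fixing all other attributes at their aspiration levels recovers the single-attribute acceptability as a slice of v.

    # A sketch of eq. (3.4-a); the preference function v below is hypothetical.
    def a1(x):                         # single-attribute acceptability: rejection 0.0, aspiration 1.0
        return min(max(x, 0.0), 1.0)

    def a2(x):                         # second attribute: rejection at 2.0, aspiration at 5.0
        return min(max((x - 2.0) / 3.0, 0.0), 1.0)

    def v(x1, x2):                     # assumed multiplicative preference function
        return a1(x1) * a2(x2)

    X2_ASPIRATION = 5.0                # x_2^*

    def a_of_x1(x1):                   # eq. (3.4-a): the other attribute held at its aspiration level
        return v(x1, X2_ASPIRATION)

    assert a_of_x1(0.4) == a1(0.4)     # the slice recovers the single-attribute function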
3.4.2 Operation: Lottery Method
The well-known reference lottery (Keeney and Raiffa 1993), also discussed in chapter 2, can be used to assess the one-dimensional form of a(x), as illustrated in Figure 3.4. The elicited subjective probability p is assigned to the attribute level x_i. The upper and lower options in the lottery have acceptabilities of 1 and 0, respectively. The acceptability of 1 is the subjective probability that the designer will be satisfied with the set X^*. Likewise, the acceptability of 0 is the subjective probability that the designer will be satisfied with a set containing the rejection level x_i^0. Therefore, the elicited subjective probability p is also on an absolute scale.
Figure 3.4 A reference lottery for evaluating intermediate values for a single design attribute (the sure consequence (x_1^*, ..., x_i, ..., x_n^*) is compared against a lottery yielding (x_1^*, ..., x_i^*, ..., x_n^*) with probability p and (x_1^*, ..., x_i^0, ..., x_n^*) with probability 1 − p)
3.5 MULTI DIMENSIONAL ACCEPTABILITY FUNCTION

3.5.1 Mutual Preferential Independence
Without introducing any operational assumptions on the designer's preference structure, construction of a multi-dimensional acceptability function is a very challenging task. In that case, interpolation or extrapolation techniques may be heavily used to construct the multi-dimensional acceptability function. This process is, in essence, an n-dimensional surface construction with a limited number of data points in the space. As the number of attributes increases, the n-dimensional surface construction quickly becomes complex. With the increasing number of necessary data points, the decision-maker will have to go through many hypothetical lotteries. In this case, it will be difficult to maintain the decision-maker's continuous attention for an accurate measurement. As a consequence, the overall surface may be constructed with false points, not accurately reflecting the designer's preferences.
There is one particular preferential structure easily found in many decision-makers, and that preferential structure will be introduced as an operating assumption. Consider a two-attribute decision case in which price (P) and mileage (M) are the only decision attributes for a car buyer. Given two alternatives A = ($17,000, 24 mpg) and B = ($18,000, 24 mpg), most economy-conscious buyers will prefer A to B,

    A = ($17,000, 24 mpg) ≻ B = ($18,000, 24 mpg)

where ≻ is read as "is preferred to". The same decision maker will probably prefer A' = ($17,000, 26 mpg) over B' = ($18,000, 26 mpg),

    A' = ($17,000, 26 mpg) ≻ B' = ($18,000, 26 mpg)

When A and A' are preferred to B and B', respectively (A ≻ B and A' ≻ B') for every fixed mileage level, the attribute "price" is preferentially independent of the other attribute, "mileage". In words, when "price" is preferentially independent of "mileage", a lower price will always be preferred at every fixed mileage level.
The following is a formal definition of preferential independence between attributes: an attribute set Y ⊂ X, where X is the set of all attributes, is preferentially independent of its complement Ȳ if the preference order of consequences involving only changes in the levels of Y does not depend on the levels at which the attributes in Ȳ are held fixed.

In the (price, mileage) example again, we can easily see that, for most economy-conscious buyers, "mileage" is also preferentially independent of "price". In this case, the two attributes are mutually preferentially independent. More formally: if X is preferentially independent of Y and Y is preferentially independent of X, then the two attributes X and Y are mutually preferentially independent.
This operational assumption of mutual preferential independence does not always hold, but appears to be pragmatic in design practice. Most of the frequently used engineering attributes satisfy this assumption when traditional design goals or targets are established. Preferential independence is important since it allows us to address a single attribute among the set of attributes.
3.5.2 Two-dimensional Acceptability Function

3.5.2.1 Formulation
As was pointed out in the preceding section, construction of a general multi-dimensional acceptability function using interpolation/extrapolation is technically challenging. Moreover, as the number of attributes increases, the quality of the constructed surface may no longer accurately reflect the preference structure of the designers. This section will construct a two-dimensional acceptability function for mutually preferentially independent attributes. The utility independence assumption introduced in the previous chapter is based upon the preferential independence discussed in this subsection. Acceptability independence and mutual acceptability independence can be defined in the same way as utility independence. Under the mutual acceptability independence assumption, a two-dimensional acceptability function a(x, y) should be represented by a proper combination of a(x) and a(y) (Keeney 1993). One candidate form is
    a(x, y) = k_x·a(x) + k_y·a(y) + k_xy·a(x)·a(y)    (3.7)

where k_x, k_y, and k_xy are scaling factors. Identifying these unknowns will give a two-dimensional acceptability function under mutual acceptability independence.
In order to find the values of the unknowns, the properties of single-attribute acceptability functions are applied:

    a(x^*, y) = a(y)    (3.8-a)
    a(x, y^*) = a(x)    (3.8-b)
    a(x^0, y) = 0       (3.8-c)
    a(x, y^0) = 0       (3.8-d)
The above set of equations appears to be over-constrained, since there are three unknowns (k_x, k_y, and k_xy) and four conditions. However, some of the equations derived from the set of conditions are linearly dependent, and a unique solution is found. This will be shown in the next section.
3.5.2.2 Solution
In order to determine the three unknowns k_x, k_y, and k_xy, let us sequentially apply the four conditions (3.8-a) through (3.8-d) to equation (3.7). Applying equation (3.8-a) yields

    a(x^*, y) = a(y)
              = k_x·a(x^*) + k_y·a(y) + k_xy·a(x^*)·a(y)
              = k_x + k_y·a(y) + k_xy·a(y)
              = k_x + (k_y + k_xy)·a(y),    ∀y

The above should hold for all values of y. Therefore,

    k_x = 0,  k_y + k_xy = 1    (3.9-a)

With equation (3.8-b), we obtain

    k_y = 0,  k_x + k_xy = 1    (3.9-b)

The substitution of (3.8-c) leads to

    a(x^0, y) = 0
              = k_x·a(x^0) + k_y·a(y) + k_xy·a(x^0)·a(y)
              = k_y·a(y),    ∀y

Therefore,

    k_y = 0    (3.9-c)

The same procedure for (3.8-d) in (3.7) yields

    k_x = 0    (3.9-d)
Recollecting all the substitution results,

    k_x = 0,  k_y = 0,  k_y + k_xy = 1,  k_x + k_xy = 1    (3.10)

The equations of (3.10) are dependent, and simple reasoning reveals that the scaling factor k_xy is unity while k_x = k_y = 0. Thus, the two-dimensional acceptability function a(x, y) becomes

    a(x, y) = a(x) · a(y)    (3.11)
Using the definitions of aspiration and rejection levels, the two-dimensional acceptability function for mutually acceptability-independent attributes is simply the product of the individual acceptability functions.
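The boundary conditions can also be checked symbolically; the following sketch collects the polynomial coefficients of conditions (3.8-a) through (3.8-d) and solves the resulting linear system, recovering k_x = k_y = 0 and k_xy = 1.

    # A sketch verifying the solution (3.10) with symbolic algebra.
    import sympy as sp

    kx, ky, kxy, ax, ay = sp.symbols('k_x k_y k_xy a_x a_y')
    a2 = lambda u, v: kx*u + ky*v + kxy*u*v          # candidate form, eq. (3.7)

    # Conditions (3.8-a)-(3.8-d); a(x^*) = 1 and a(x^0) = 0 are substituted directly.
    conditions = [a2(1, ay) - ay, a2(ax, 1) - ax, a2(0, ay), a2(ax, 0)]

    equations = []
    for c in conditions:                              # each must vanish for all a_x, a_y,
        poly = sp.Poly(sp.expand(c), ax, ay)          # so every polynomial coefficient is zero
        equations.extend(poly.coeffs())

    print(sp.solve(equations, [kx, ky, kxy]))         # {k_x: 0, k_y: 0, k_xy: 1}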
3.5.3 N-dimensional Acceptability Function
An acceptability function for mutually independent attributes with n ≥ 3 can be obtained in the same way (see Appendix A). However, the same result is obtained using mathematical induction:

    a(x, y, z) = a(x) · a(y, z) = a(x) · a(y) · a(z)    (3.12)

Equation (3.12) is obtained using the property of mutually preferentially independent attributes and the property of the acceptability function:

+ When attributes X and Y are mutually acceptability independent and attributes X and Z are mutually acceptability independent, attribute X and the pair of attributes (Y, Z) are mutually acceptability independent.
+ The argument of an acceptability function can be a vector.
Applying the above two properties to the construction of an n-dimensional acceptability function for mutually acceptability-independent attributes yields

    a(x_1, ..., x_n) = a(x_1) · a(x_2, ..., x_n)
                     = a(x_1) · a(x_2) · a(x_3, ..., x_n)
                     = ...
                     = Π_{i=1}^{n} a(x_i)    (3.13)

This completes the multi-dimensional acceptability function under the mutual acceptability independence assumption.
3.5.4 Discussion
The intuitive form of equation (3.13) is due to the absolute scale of the metric obtained through the definitions of the rejection and aspiration levels. The carefully defined threshold points bring operational simplicity to the model. Under the same operational assumption of mutual preferential independence (mutual utility independence), utility theory involves two steps in constructing an n-dimensional utility function. First, individual utility functions have to be constructed. Then, scaling factors are elicited through cross-evaluation of different attributes. These scaling factors, conceptually close to weighting factors, map the local value systems onto a common global scale, putting disparate measurements into one coherent value system. For example, for an n-attribute case, a total of Σ_{r=1}^{n} C(n, r) scaling factors will be obtained through cross-evaluation. This number grows quickly as the number of attributes n grows, and measuring all of the scaling factors with an acceptable consistency level is quite challenging.
In the acceptability model, the absence of scaling factors or weighting factors is seen to be a major advantage over the conventional weighting-factor-based framework. This reduces the steps necessary for the construction of a multi-attribute acceptability function. However, this advantage of using implicit weighting factors is not free. It should be noted that the acceptability approach is based upon the implicit notion that designers have enough domain knowledge to set the aspiration and rejection levels for the attributes of interest at the onset of the design activity. However, those threshold points may shift as the design progresses. If the designers do not have a good perspective on what they will accept and reject, the acceptability model may not accurately capture the decision-maker's preference. Also, while the approach to setting the threshold points has some advantages, the restrictive assumptions also introduce a limitation. The
acceptability model cannot distinguish designs exceeding goals. Only goal achievement
is valued. If several design alternatives are found to achieve all goals, it will be necessary
to formulate new and more challenging goals and redo the analysis to distinguish them.
Most scoring decision models do not have this characteristic.
3.6 SUMMARY
This chapter presented an important piece of this thesis. Acceptability, the subjective probability of acceptance, was used to quantify the preference structure of a designer under uncertainty. By modeling design activity as a specification satisfaction process and viewing designers as possessing the domain knowledge to quantify the aspiration and rejection levels, an acceptability-based design evaluation framework was constructed. Under the assumption of mutual preferential (acceptability) independence, the one-dimensional acceptability function was extended to construct a multi-dimensional acceptability function. Development of the aggregated acceptability with less restrictive independence assumptions should be the subject of future work.
4. GOAL SETTING USING DYNAMIC PROBABILISTIC MODEL
4.1 OVERVIEW
The acceptability evaluation model hinges on the notion of "goal", and this chapter provides a formal method to help designers set these important threshold points in the early design phase. The proposed goal setting method is constructed using a dynamic probabilistic model and can be used for setting product specifications or product targets early in the design process. Considering that these specifications will serve as a guideline to designers throughout the design process, it is important to set a goal level that accurately reflects all relevant issues associated with the design. The goal level in the early design phase is usually set with designers' soft information. This chapter provides a prescriptive method that helps designers quantify their goal level in the absence of sufficient information.
Section 4.2 introduces the discrete Markov chain, a stochastic process model. Section 4.3 then takes a step-by-step approach to formulating the goal setting task as a dynamic probabilistic model. This section also provides a solution to the formulated model, followed by an interpretation of the solution for real-world application. Section 4.4 discusses the calculation of the convergence rate of a discrete Markov chain and its implication for the goal setting model. The last section provides an application example measuring engineers' expectation level for adopting a new design technology.
4.1.1 Goal-based Method
The benefit of using a goal-based approach is well documented in the business literature (Sterdy 1962; Curtis 1994). After an assessment of the current status in terms of resources and capability, the goal-based approach helps people establish targets for a given task. A goal should reflect a compromise between what is realistic and what is desirable. Therefore, the goal level should be determined such that it is achievable and aggressive at the same time.

The process of setting goals requires that the decision-maker deliberate upon possible outcomes. In a group, target setting provides an environment for team members to discuss and review the given task. Although the final goal level is quite important, the major benefit of goal setting comes from the process of setting the level. In that spirit, the proposed goal setting method is process-oriented to allow decision-makers to deliberate on the outcomes.
Goal setting is widely used in the design process, especially when setting product specifications. Ulrich (Ulrich and Eppinger 1995) views the establishment of specifications as a goal setting process. From a systems approach, goal setting is part of problem definition (Pahl and Beitz 1996).
4.1.2 Problem Statement
The goal level is usually set by a team leader or a manager, often an expert in the specific domain. The expert is assumed to have the various kinds of soft information necessary for setting a reasonable goal level. It is also assumed that the team leader has a management mindset: the leader should have a clear understanding of the current product development status, in terms of the technologies and resources allocated for the development.

When setting a target at the onset of a design activity, the leader will resort to soft information and intangible knowledge of any kind to arrive at an estimate, such as a "ball park figure" or an "order of magnitude" analysis.
Extending such an idea, this chapter models the goal setting task as a parameter quantification process under uncertainties of single events: uncertainties due to lack of information or knowledge. It will not be applicable to parameter estimation subject to uncertainties of repeatable events.

The designer's soft information will be formulated into a discrete probability model. The model then generates a probability distribution of the parameter (a probability mass function), which represents a decision-maker's degree of belief for different parameter levels. At the same time, issues insightful for understanding the designer's behavior will be addressed. Although the main application area of the proposed model is goal setting in the early product development stage, it is extensible to more general parameter estimation problems subject to uncertainties due to lack of knowledge.
4.1.3 Approach
Maintaining consistency is an important issue when using probability-based methods. The probability axiom that all probabilities sum to one has been one important consistency check. However, it is noted that improving consistency does not necessarily mean getting an answer that best reflects the "real life" solution (Saaty 1980). Rice compares maintaining consistency to having a good bookkeeper (Rice 1995). In many practical cases, maintaining consistency constrains a decision-maker's assessment process (Saaty 1980).
The proposed dynamic probabilistic model is based upon the idea that a practical assessment tool should allow its users both to maintain consistency and to provide answers close to a "real life" solution. The suggested approach takes the following two steps:

+ Let a decision-maker explore all realms of knowledge to assess a parameter, worrying less about maintaining consistency.
+ The resulting broader pool of knowledge is later transformed into a set of consistent assessments via the dynamic probabilistic model.
The constraint relaxation concept is found in Saaty's well-known ratio measurement
technique in the Analytical Hierarchy Process (AHP). Using Saaty's method to measure
the ratios of likelihood of three events A, B and C (where the three events constitute a
complete space), the decision-maker is asked to assess

1. ω_{A,B} = lik(A) / lik(B)
2. ω_{A,C} = lik(A) / lik(C)
3. ω_{B,C} = lik(B) / lik(C)

where lik(A) is the likelihood of event A and so on.
Although the ratio ω_{B,C} is uniquely determined once the values of ω_{A,B} and ω_{A,C} are
determined, the ratio measurement technique asks the decision-maker to independently
assess the above three ratios. This provides a more flexible environment for the decision-maker
in the assessment task. Then, based upon the elicited ratios, the method constructs
a matrix. The technique transforms a set of possibly conflicting ratios into a set of
consistent ones using matrix operations. In doing so, Saaty assumes that the resulting
answer will be closer to "real life". The proposed dynamic probability model follows the
same motivation as the ratio measurement technique in AHP.
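To make the reconciliation step concrete, the following is a minimal sketch of the principal-eigenvector computation that underlies Saaty's technique, applied to a deliberately inconsistent set of elicited ratios. The ratio values and the use of Python with numpy are assumptions for illustration, not part of the original method description.

```python
import numpy as np

# Saaty-style reconciliation of independently elicited likelihood ratios:
# build the reciprocal ratio matrix and take its principal eigenvector as a
# consistent set of relative likelihoods. The ratio values are hypothetical
# and deliberately inconsistent (2 x 2 != 3).
w_AB, w_AC, w_BC = 2.0, 3.0, 2.0

R = np.array([
    [1.0,        w_AB,       w_AC],
    [1.0 / w_AB, 1.0,        w_BC],
    [1.0 / w_AC, 1.0 / w_BC, 1.0],
])

eigvals, eigvecs = np.linalg.eig(R)
principal = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
likelihoods = principal / principal.sum()   # consistent relative likelihoods
print(likelihoods)
```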
Figure 4.1 Saaty's ratio measurement in AHP: a reciprocal ratio matrix with unit diagonal, entries ω_{A,B}, ω_{A,C}, ω_{B,C} above the diagonal and their reciprocals below
4.2
A DISCRETE MARKOV CHAIN: A STOCHASTIC PROCESS
4.2.1
Markov Properties
The discrete Markov chain, a stochastic process, has been widely used in many applications
in social science, natural science, and engineering. A sequence of random variables
(X_i, i = 0, 1, ..., n) is a discrete Markov chain if

P(X_{n+1} = k | X_n = i, ..., X_0 = k_0) = P(X_{n+1} = k | X_n = i) = P(X_1 = k | X_0 = i) = p_{ik}    (4.1)
In order for a dynamic probability model to be a Markov chain (MC), the model should
have transition probabilities p_{ik} such that

• they are history-independent.
• they are time-wise homogeneous.
4.2.2
Transition Probabilities and Matrix
For a Markov chain X with a state space S, where |S| = n, the transition probabilities p_{ik}
are given by

p_{ik} = P(X_{n+1} = k | X_n = i), for n ≥ 0    (4.2)

The n by n matrix (p_{ik}) of transition probabilities is called the transition matrix and is
denoted by P. One obvious but important property of the transition probabilities is

p_{ik} ≥ 0 for all i and k    (4.3-a)

Σ_{k ∈ S} p_{ik} = 1.0    (4.3-b)
Any matrix satisfying the above two properties is deemed stochastic. This stochastic
property is often used in deriving many useful characteristics in the mathematics of
Markov chains.
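As a minimal illustration, the sketch below checks the two stochastic properties for a hypothetical three-state transition matrix (Python with numpy assumed):

```python
import numpy as np

# A minimal check of the stochastic properties (4.3-a) and (4.3-b) for a
# hypothetical three-state transition matrix.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.4, 0.6],
])

assert np.all(P >= 0.0)                 # (4.3-a): p_ik >= 0 for all i and k
assert np.allclose(P.sum(axis=1), 1.0)  # (4.3-b): each row sums to 1.0
print("P is a stochastic matrix")
```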
4.3 DYNAMIC PROBABILISTIC GOAL SETTING MODEL
4.3.1
Operational Procedure
A schematic of the proposed dynamic probability model is shown in Figure 4.2. A set of
candidate states for a parameter is first identified by a decision-maker. This activity
serves as a screening process for the decision-maker to determine a set of discrete
candidate values used in the parameter estimation. Then, each candidate state will be
assessed one by one in a probabilistic manner. These sets of individual assessments will
be conflicting unless the decision-maker can exclusively choose one single candidate
state as the value for the parameter. Those conflicting sets are processed via the discrete
Markov chain model, yielding a probability mass distribution on the candidate states.
Figure 4.2 Break-down of the suggested dynamic probability model: identification of candidate states, assessment of individual states in the space, Markov chain formulation and transition matrix construction, and solution by limiting probabilities
4.3.2
Identification of Candidate States
A state space S should be determined such that it contains all possible candidates for the
parameter.
S = {s_1, s_2, ..., s_n}    (4.4)
There are two important issues regarding the state space S. In the case of goal
setting, the state space should contain both the most optimistic and the most pessimistic values.
The most optimistic value reflects the most favorable scenario, while the most pessimistic
one reflects the worst scenario. Also, the resolution between states should be neither too
coarse nor too fine. This is important to facilitate a meaningful comparison among states
by the designer. However, the states do not have to be equally spaced in the state space.
4.3.3
Assessment of Individual States
4.3.3.1 Local Probabilities
Let T represent a parameter of interest that the decision-maker is trying to estimate: in a
goal setting context, it is a target level for a specific attribute of a design. The goal for the
nominal average fuel economy of a new engine is an engineering design example.
Assume that the states s_i are increasingly preferred in the following example. Under
certainty, the comparison of T against an arbitrary state s_i ∈ S should exclusively lead to
one of the following conclusions:

T ~ s_i    (4.5-a)

T < s_i    (4.5-b)

T > s_i    (4.5-c)

Under certainty, the value of T should be either equal to, less than, or greater than an
arbitrary state s_i. The above three events are complete and the judgment will lead to one
of them.
However, under uncertainty the decision-maker's judgment for the comparison is no
longer exclusive. In this case, it is likely that the three events may be valid to different
extents. We quantify this judgment under uncertainty with subjective probabilities as

Prob.(T ~ s_i) = p_i    (4.6-a)

Prob.(T > s_i) = q_i    (4.6-b)

Prob.(T < s_i) = r_i    (4.6-c)

where p_i + q_i + r_i = 1.0, i = 1, 2, ..., n, and n is the number of elements in the state space.

The set of probabilities U_i = {p_i, q_i, r_i}, i = 1, 2, ..., n, summarizes a decision-maker's
judgment on a single reference state s_i under uncertainty. Moreover, this set of
probabilities is assumed to be based upon the decision-maker's accumulated knowledge
up to or prior to that assessment. In words, the set U_i is a probabilistic answer to a
hypothetical comparison based upon the decision-maker's information available up to that
point in time.

The set of local probabilities U_i = {p_i, q_i, r_i}, i = 1, 2, ..., n, plays a key role in the
construction of the probabilistic model.
4.3.3.2 Modeling Assumption on the Local Probabilities
Important assumptions on the elicitation of the set U = {U_1, U_2, ..., U_n} are as follows:

• Existence: a decision-maker's uncertainty is quantifiable with subjective probabilities.

• Homogeneous probabilities: there is a unique set of probabilistic judgments for a single state that will remain unchanged as long as the information for the assessment is the same.
The second assumption will be discussed in depth. For this purpose, an observation
on the relationship between knowledge and uncertainty quantification will be addressed first.
It is assumed that a certain state of knowledge-content determines a unique uncertainty
structure for an uncertain parameter: qualitatively, this is roughly equivalent to saying
that there is a one-to-one correspondence between knowledge-content and quantified
uncertainty. This qualitative statement is shown in Figure 4.3. Another assumption is the
hierarchical decomposition of uncertainties into U = {U_i, i = 1, 2, ..., n}. The suggested
dynamic probability model hinges on this decomposition hypothesis.

In summary, as long as the basis knowledge-content remains unchanged, the overall
uncertainty, as well as the decomposed uncertainties, should remain qualitatively
unchanged.
Figure 4.3 Qualitative relationship between knowledge (information) and uncertainty
Figure 4.4 shows how the local probabilities are elicited in practice. Here, t represents a
discrete time in the assessment process. Each block with a component U_t(s_i) represents a
decision-maker's assessment of a state s_i at a discrete time t. In Figure 4.4, the
decision-maker starts an assessment on state s_3 at t=1, and then on s_3 and s_4 at t=2. At
t=2, it is possible to have U_1(s_i) ≠ U_2(s_i) for the specific state i under assessment.
This discrepancy is interpreted as the decision-maker modifying the previous assessment at
a later stage.
Figure 4.4 Elicitation of local probabilities: blocks U_t(s_i) over discrete times t = 1, ..., n
This implies that the decision-maker reviews and cross-checks their previous
judgment against other judgments (in this case, against U_2(s_4)) and, based on them, changes the
assessment of state s_3. By t=5, all of the candidate states have been assessed at least once.
During the discrete times between t=5 and t=n, the cross-checks and adjustments on
every state are made by the decision-maker. It is this repeated process of
checking and reviewing that provides decision-makers with more opportunities to
carefully quantify their subjective judgments in the parameter estimation. After repeated
sessions of cross-checking and reviewing, we assume the existence of a certain time t = n
such that

∀ k, l ≥ n, U_k(s_i) ~ U_l(s_i), ∀ i = 1, 2, ..., 5    (4.7)

After a certain number of reviewing sessions, a set of homogeneous (or steady) local
probabilities emerges. The time t=n is recognized as the point when a decision-maker feels
that he/she has fully exploited the realm of the available knowledge to a sufficient degree
and that, unless new information becomes available, he/she does not feel like modifying
the local probabilities. In a formal way, t=n is identified as
∀ ε > 0, ∃ n and ∃ U_∞(s_i) s.t.    (4.8-a)

∀ k > n and ∀ i ∈ {1, 2, ..., n}, d(U_k(s_i), U_∞(s_i)) < ε    (4.8-b)

where d(·) is a hypothetical metric measuring the distance between two probability
distributions. A suggested operational scenario summarizing the above assessment
procedure is as follows:
step 1: All knowledge relevant to the assessment has been accumulated.
step 2: The decision-maker randomly chooses one state s_i.
step 3: The decision-maker determines U_i = {p_i, q_i, r_i} for the state s_i.
step 4: The decision-maker repeats the above two steps (2 and 3) until all states have been considered at least once.
step 5: The decision-maker reviews and cross-checks the set of assessments U = {U_i, i = 1, 2, ..., n} until no further modifications are made.
The existence of convergent local probabilities U_∞(s_i) plays an important role in
extending the suggested dynamic probability model into a discrete Markov chain. The
final set of assessments obtained after step 5 will be a set U = { {p_i, q_i, r_i}, i = 1, 2, ..., n },
and it is a quantification of the decision-maker's judgment on the entire state space.
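A minimal sketch of how the elicited set U might be stored, enforcing only the per-state consistency condition discussed next; the states and probability values are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical container for the elicited local probabilities
# U_i = {p_i, q_i, r_i}; only the per-state condition p + q + r = 1.0
# is enforced, mirroring the discussion in the text.
@dataclass
class LocalAssessment:
    state: float  # candidate value s_i for the parameter T
    p: float      # Prob.(T ~ s_i)
    q: float      # Prob.(T > s_i)
    r: float      # Prob.(T < s_i)

    def __post_init__(self) -> None:
        assert abs(self.p + self.q + self.r - 1.0) < 1e-9

# hypothetical final assessments after step 5
U = [
    LocalAssessment(state=20.0, p=0.0, q=1.0, r=0.0),
    LocalAssessment(state=35.0, p=0.1, q=0.9, r=0.0),
    LocalAssessment(state=50.0, p=0.4, q=0.3, r=0.3),
]
```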
So far, the only consistency requirement imposed on the set U is the minimal condition
that the three judgmental probabilities for a single state must sum to one (p_i + q_i + r_i
= 1.0). However, at the whole-set level, the consistency issue is not addressed at all. As
a matter of fact, the sets U_i are in conflict with each other unless the decision-maker
can pinpoint one state as the choice. The next step is to construct a globally consistent
assessment based upon the set of individual assessments, U = {U_i, i = 1, 2, ..., n}.
4.3.3.3 Interpretation of Subjective Probabilities
A statistical interpretation of subjective probabilities will be used in this chapter. For
example, "the probability of throwing a 6 in the next throw of this loaded die is 1/4" is
interpreted as (Popper 1956)
"The next throw is a member of a sequence of throws, and the relative frequency within
this sequence is 1/4"
By using a statistical interpretation for the subjective probabilities, we can apply the
probability calculus in processing the elicited probabilities. The resultant final
distribution from the calculus is then interpreted as probabilities of single events.
4.3.4
Markov Chain Construction
4.3.4.1 Decision Chain
Hypothetical sequential decisions under uncertainty are modeled as a stochastic process,
a sequence of random variables. This stochastic process may be interpreted as an
automated decision analyst helping a decision-maker to estimate a parameter of interest,
T, via a sequence of hypothetical interviews.
Consider n candidates in the state space S. Starting from a state s_i, the decision-maker
will be asked to compare the parameter of interest, T, against the state s_i. Due to the
uncertainty, the decision-maker will not be able to provide a 100% confident answer
to this question. Assume that only a probabilistic answer is available for this comparison.
Further, depending on the judgment of T against s_i, the decision-maker will be
asked to assess an adjacent state at the next phase (assume that states are
increasingly preferred):

• s_j, i < j, if the prior judgment was (T > s_i): a shift to a better state
• s_j, i > j, if the prior judgment was (T < s_i): a shift to a worse state
• s_i, if the prior judgment was (T ~ s_i): a reassessment of the same state
Continuing this hypothetical process will generate a chain of sequential assessments,
sometimes requiring repeated assessments of previously considered states. This
hypothetical chain will be called a decision chain. More formally, a decision chain
G = (G_n ; n ≥ 0) is a sequence of random variables G_n taking values on the state space S.
A random variable G_n represents the n-th reference state used for the comparison by the
decision-maker. For example, we have a realization of decision chain A:
A = {G_1 = s_3, G_2 = s_4, G_3 = s_2, G_4 = s_3, G_5 = s_2}    (4.9)
The above chain indicates that, in the current decision process, the decision-maker
sequentially considers s_3, s_4, s_2, and s_3, and again s_2 at the fifth stage. The chain A is one
probabilistic realization from the uncertainty structures. Figure 4.5 provides a more
complete picture of how the chain can potentially develop, starting from state s_3, in a
probabilistic way. In this figure, each node represents a decision-maker's hypothetical
comparison of T against a reference state s_i. An edge, on the other hand, represents a
decision, a probabilistic transition between nodes, based upon the subjective judgment on
s_i. For instance, the chain A in Figure 4.5 indicates that, at time t=1, the designer assesses
T against s_3. As he/she judges that the state s_3 would underestimate the parameter, the
designer considers a more aggressive value, the state s_4, at t=2.
Figure 4.5 Chains in the dynamic probability model: a branching process of assessments converging to a distribution. (T, s_i) represents an assessment of T against s_i
Probabilistic transitions between states, under a set of operating conditions (which will be
discussed in detail later), yield a converging distribution amongst states, as on the left-hand
side of Figure 4.5. This is called a stationary distribution. It represents the
limiting probabilities if the decision-maker repeatedly carries out the assessment process
a sufficiently large number of times. Further, it captures the decision-maker's overall, or
global, uncertainty structure of the parameter T in the limiting case. The above process
describes how a decision-maker's idea might develop under lack of information.
However, Figure 4.5 supports a specific kind of transition: it only allows transitions
among neighboring states, and this will be addressed further in the following subsection.
4.3.4.2 Transition Matrix Construction
This section addresses how to construct the goal setting process as a discrete Markov
chain. At the center of the discussion is that the dynamic probabilistic model should
satisfy two critical conditions of the discrete Markov chain, i.e., the existence of
history-independent and time-independent transition probabilities.

Assume that at a stage n, a decision-maker probabilistically judges that (T > s_i). A
non-zero likelihood of (T > s_i) implies the existence of ε > 0 such that

(T > s_i) ↔ {T = s_i + ε, for some ε > 0}    (4.10)
The non-zero probability P(T > s_i) does not, however, provide in-depth information
such as the magnitude of the parameter ε and the corresponding likelihood.

In order to address this problem, we seek the most conservative, or minimum, level of
information that would let a designer pursue the decision of (T > s_i). In words:

If a decision-maker chooses a decision of (T > s_i), what would be the most conservative
information deducible from the decision?

Given that T is judged to be greater than s_i on the finite state space S, it is always true
that T would be greater than or equal to s_{i+1}:
Prob.(T ≥ s_{i+1} | T > s_i) = 1.0    (4.11)

In the most conservative sense, T would be equal to s_{i+1}. From a descriptive viewpoint,
where a decision analyst interviews a decision-maker, the non-zero probability of (T > s_i)
assures, with probability one, that the decision-maker will always shift to and compare
T against the next level, s_{i+1}. Based upon this conservative information context, we define
the transition probability from s_i to s_{i+1} as

Prob.(G_{n+1} = s_{i+1} | G_n = s_i) = q_i = Prob.(T > s_i)    (4.12-a)

Extending the same argument, the other two transition probabilities from a state s_i are
also defined as

Prob.(G_{n+1} = s_{i-1} | G_n = s_i) = r_i = Prob.(T < s_i)    (4.12-b)

Prob.(G_{n+1} = s_i | G_n = s_i) = p_i = 1.0 − r_i − q_i    (4.12-c)
This completes the construction of transition probabilities used in the discrete Markov
Chain.
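The construction of equations (4.12-a/b/c) is mechanical once the local probabilities are available. A sketch, with hypothetical local probabilities for a five-state space, assembling the banded transition matrix (Python with numpy assumed):

```python
import numpy as np

def transition_matrix(p, q, r):
    """Assemble the transition matrix of equations (4.12-a/b/c): from state
    s_i the chain moves up with probability q_i, down with probability r_i,
    and stays with probability p_i. Boundary states are assumed to have
    r_1 = q_n = 0 so that the chain cannot leave the state space."""
    n = len(p)
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = p[i]              # (4.12-c): reassessment of the same state
        if i + 1 < n:
            P[i, i + 1] = q[i]      # (4.12-a): shift to the next higher state
        if i > 0:
            P[i, i - 1] = r[i]      # (4.12-b): shift to the next lower state
    assert np.allclose(P.sum(axis=1), 1.0)
    return P

# hypothetical local probabilities for a five-state space
p = [0.2, 0.3, 0.4, 0.3, 0.5]
q = [0.8, 0.5, 0.3, 0.2, 0.0]
r = [0.0, 0.2, 0.3, 0.5, 0.5]
P = transition_matrix(p, q, r)
```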
4.3.5
Limiting Probabilities and their Interpretation
This subsection discusses the existence of the limiting probabilities and their use. After
an example of their operation, the conditions for the existence of limiting
probabilities will be discussed in detail.
4.3.5.1 Operation
In order to obtain the limiting probabilities of a stochastic matrix P, a decision-maker first
has to construct an n by n transition matrix, with each element p_ij representing the
assessed transition probability from one state (i) to another (j). For a ten-state case, a 10 by
10 transition matrix P is constructed as
    | 0    1.0  0    0    0    0    0    0    0    0   |
    | 0    0.1  0.9  0    0    0    0    0    0    0   |
    | 0    0    0.2  0.8  0    0    0    0    0    0   |
    | 0    0    0.1  0.3  0.6  0    0    0    0    0   |
P = | 0    0    0    0    0.4  0.6  0    0    0    0   |    (4.13)
    | 0    0    0    0    0.1  0.4  0.5  0    0    0   |
    | 0    0    0    0    0    0.2  0.45 0.35 0    0   |
    | 0    0    0    0    0    0    0.3  0.4  0.3  0   |
    | 0    0    0    0    0    0    0    0.5  0.3  0.2 |
    | 0    0    0    0    0    0    0    0    0.8  0.2 |
Suppose that a decision-maker starts the judgment process from state 1. Then this starting
point is represented by a vector:

G_1 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]    (4.14)

One probabilistic transition from state 1 subject to the transition matrix P yields

G_2 = G_1 · P    (4.15)

Thus, a general n-th step transition is

G_n = G_{n-1} · P = G_1 · P^{n-1}    (4.16)

If a decision-maker continues the transition process an infinite number of times,

π = G_∞ = G_1 · P^∞    (4.17)

There are two ways of interpreting the limiting probabilities π:

• steady state/equilibrium view: π = π · P
• limiting probability view: π = lim_{n→∞} G_1 · P^n
where π is the vector of limiting probabilities, P is the transition matrix, and G_1 is the
starting vector. Theoretically, the limiting probabilities are obtained after an infinite
number of iterations. However, in practice, they are obtained in a relatively small number of
iterations, as long as they exist.
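In practice the limiting probabilities can be approximated by simply iterating equation (4.16) until the vector stops changing. A minimal sketch, using a hypothetical three-state matrix rather than the 10-state example above:

```python
import numpy as np

def limiting_distribution(P, tol=1e-12, max_iter=100_000):
    """Iterate G_n = G_{n-1} . P (equation 4.16) until the vector stops
    changing, approximating the limiting probabilities of equation (4.17)."""
    G = np.zeros(P.shape[0])
    G[0] = 1.0                        # start from state 1, as in (4.14)
    for _ in range(max_iter):
        G_next = G @ P
        if np.max(np.abs(G_next - G)) < tol:
            return G_next
        G = G_next
    return G

# hypothetical three-state transition matrix
P = np.array([
    [0.1, 0.9, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.6, 0.4],
])
pi = limiting_distribution(P)
assert np.allclose(pi, pi @ P)   # steady-state view: pi = pi . P
print(pi)
```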
For a stochastic matrix P, every row of the limiting matrix P^∞ is identical. This
property is called "monodesmic" in a discrete Markov chain. Therefore, regardless
of the initial starting vector G_1, the limiting distribution G_∞ is identical. This may be
interpreted as a real situation where, regardless of the decision-maker's initial starting
point for the assessment, the estimation outcome should be identical. A related theorem
will follow shortly.
4.3.5.2 Existence of Limiting Probabilities
Up to this point, it was assumed without proof that limiting probabilities exist for a
corresponding transition matrix. There is a mathematical theorem stating the existence of
limiting probabilities under certain conditions.
Theorem: Given a finite, irreducible and aperiodic Markov chain, there exists a unique
stationary distribution π. Further,

∀ i, π_i > 0    (4.18-a)

π = lim_{n→∞} G_1 · P^n    (4.18-b)
A detailed proof of the above theorem is found in many advanced books on discrete
Markov chains. The focus of this chapter is, therefore, to verify whether the goal setting
model constructed using a discrete Markov chain meets the conditions for the existence
of a unique stationary distribution. For that purpose, definitions of the conditions will be
given first.
Definition: If a discrete Markov chain has a finite number of states, the chain is a finite
Markov chain.

The decision chain for the goal setting method is finite since the state space contains a
finite number of elements.

The following definition precedes the definition of an "irreducible" Markov chain.

Definition: States i and j communicate if ∃ t, t' s.t. p_ij^(t), p_ji^(t') > 0.

The states i and j communicate if the chain can move from state i to j, and from j to i, in a
finite number of transition trials.

Definition: A set of states S' is irreducible if

• ∀ i, j ∈ S', i and j communicate
• ∀ i ∈ S' and j ∈ S\S', p_ij = 0

The states in S' can communicate with each other, but any state originating from S\S' cannot
leave the irreducible set S' once it reaches S'. For the state space {s_1, ..., s_n} in the goal
setting matrix, the boundary probabilities p_{1,0} = p_{n,n+1} = 0. As long as some of the local
probabilities are not zero, the states with nonzero probabilities communicate with each
other. Therefore, there exists an irreducible set in the state space S.

The last definition, that of an aperiodic Markov chain, will be given after the definition of the
period of a Markov chain.

Definition: The period of a state i is the largest integer N such that, if p_ii^(n) ≠ 0, then N
divides n.

Definition: A Markov chain is aperiodic if its period is 1.

It can be shown that for a finite, irreducible Markov chain, all states have the same
period, which is then defined as the period of the whole chain.
Since the decision chain constructed for goal setting is finite and irreducible, all the states
in the state space have the same period. Moreover, if there exists at least one state i with
p_ii > 0, the chain will be aperiodic.

Therefore, the Markov chain constructed based upon the local probabilities is finite,
irreducible and aperiodic, and it will have unique limiting probabilities.
4.3.5.3 Interpretation of Limiting Probabilities
The outcome of the goal setting model based upon the sets of local probabilities is a
probability mass function on the state space. The probability mass function quantifies a
parameter of interest under uncertainties of single events, or lack of knowledge. A
challenging issue with the resulting probability distribution is how to use it to set a
nominal goal level in real-world applications. A general answer to this question is that the
resulting limiting probabilities should be interpreted and used in a flexible way, to fit
and reflect the designer's needs. For example, for an increasingly favorable attribute such
as mileage in a car, an aggressive designer may take the 75th percentile point as a
nominal goal level. On the other hand, a conservative designer may take the 30th to 40th
percentile point as a nominal target. The nominal goal value determined by the
corresponding cumulative probability,

P(X ≤ s_i) = Σ_{j=1}^{i} P(s_j)    (4.19)
is a meaningful guide in the goal setting process. Using this format will allow the
designers to quantitatively reflect upon their risk level in the goal setting process.
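A minimal sketch of this percentile-based reading of equation (4.19); the states and limiting probabilities below are hypothetical:

```python
import numpy as np

# Picking a nominal goal from the limiting probabilities via equation (4.19):
# the goal is the first state whose cumulative probability reaches the
# designer's chosen percentile. States and probabilities are hypothetical.
states = np.array([20.0, 25.0, 30.0, 35.0, 40.0])  # increasingly preferred
pi     = np.array([0.05, 0.20, 0.35, 0.30, 0.10])  # limiting probabilities

def nominal_goal(states, pi, percentile):
    cdf = np.cumsum(pi)                            # P(X <= s_i)
    return states[np.searchsorted(cdf, percentile)]

print(nominal_goal(states, pi, 0.75))  # aggressive designer: 75th percentile
print(nominal_goal(states, pi, 0.35))  # conservative designer: 30th-40th range
```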
4.4 POST-ANALYSIS
4.4.1
Convergence Rate
A fully deterministic n by n transition matrix in the dynamic probability model is
defined as a matrix which

• has (n−1) zero eigenvalues, and
• has only one nonzero eigenvalue, whose value is 1:

λ_1 = ... = λ_{n−1} = 0.0, λ_n = 1.0 for an n by n matrix    (4.20)

A stochastic matrix, where each row sums to one, always has eigenvalues such that
|λ_i| ≤ 1.0 ∀ i. Further, the largest eigenvalue is always one. Under this definition, the
transition matrices in a Markov chain are stochastic.
If a matrix is not fully deterministic, then it will have an eigenvalue of 1 and some other
nonzero eigenvalues. The nonzero eigenvalues may provide insight about the state of a
decision-maker.

A Z-transform decomposes the m-th power of an n by n matrix into

Φ(m) = P^m = Φ_0 + T(m), where m = 0, 1, 2, ...    (4.21)

where Φ_0 is a constant matrix (used to obtain the limiting probabilities) and T(m) represents
the transient part of the m-th power of the transition matrix. Further, the matrix T(m) is
expressed as

T(m) = λ_1^{m−1}·T_1 + λ_2^{m−1}·T_2 + ... + λ_u^{m−1}·T_u    (4.22)
where the λ_i are eigenvalues and the T_i are corresponding matrices (the matrices T_i need
not be eigenmatrices). Therefore, the convergence rate of Φ(m), the rate at which the transient
part approaches zero, is determined by the eigenvalues of the transition matrix P. In a
simplified manner, the convergence rate will be dominantly determined by the largest
eigenvalue besides λ = 1, where |λ_i| ≤ 1.0. Larger eigenvalue magnitudes imply slower
convergence of the matrix multiplication.
Figure 4.6 Quantification of information content by the second largest eigenvalue: degree of determinacy (more deterministic toward 0) versus the magnitude of the second largest eigenvalue
A large absolute eigenvalue is interpreted as the state of a decision-maker who
does not possess enough information to produce a deterministic transition matrix. In
words, the underlying hypothesis is that a decision-maker can provide a transition matrix
closer to a deterministic matrix in proportion to the amount of relevant information used
by the decision-maker. When more information is available, the transition matrix will
show quick convergence. Based upon this observation, we generalize that the absolute
magnitudes of the eigenvalues quantify the information level of a decision-maker's local
judgments. In order to simplify the analysis, we will focus on the magnitude of the
second largest eigenvalue.
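A short sketch of this post-analysis step, computing the magnitude of the second largest eigenvalue of a hypothetical transition matrix (numpy assumed):

```python
import numpy as np

def second_largest_eigenvalue(P):
    """Return the magnitude of the second largest eigenvalue of a stochastic
    matrix P. The largest magnitude is always 1; a small second magnitude
    suggests a near-deterministic, information-rich matrix, while a value
    near 1 suggests slow convergence and little information."""
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return mags[1]

# hypothetical elicited transition matrix
P = np.array([
    [0.1, 0.9, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.6, 0.4],
])
print(second_largest_eigenvalue(P))
```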
4.4.2
Statistical Test
This section addresses how to statistically judge whether an elicited transition matrix P is
backed by an acceptable amount of information. A decision-maker, in an extreme case,
could just give out random numbers. The objective of this section is to screen out such
random assessments in a statistical manner. The underlying idea is to test the level of
information using the second largest eigenvalue, comparing it to a pool of second
largest eigenvalues of randomly generated n by n transition matrices. Through a statistical
comparison, a quantitative conclusion can be drawn. The basic underlying
assumption is that, if the decision-maker had enough information, the resulting transition
matrix will be far from random in terms of the absolute magnitude of its second
largest eigenvalue. The Neyman-Pearson approach will be used for the testing.

The null hypothesis and the alternative hypothesis are as follows:

H0: The elements in the transition matrix are randomly chosen.
HA: The elements are not random.
Under the null hypothesis, the elicited transition matrix is subject to a null distribution.
We can generate this null distribution by numerical simulation. Let X be a random
variable representing the second largest eigenvalue of an n by n stochastic matrix
subject to the null hypothesis. Then, a one-sided α-test provides a threshold value x* such
that

P(X > x*) = 1 − α    (4.23)

If the absolute value of the second largest eigenvalue of the transition matrix is smaller
than or equal to x*, we reject the null hypothesis at the (1−α)·100% confidence level.
Figure 4.7 A typical distribution of the second largest absolute eigenvalue of n by n matrices under the null hypothesis
In other words, if the second largest absolute eigenvalue of the elicited transition
matrix is smaller than or equal to x*, we can say that the decision-maker's transition
matrix is not random, with a confidence level of (1−α)·100%. Table 4.1 shows the threshold
values for random stochastic matrices of different sizes. For example, if the second
largest eigenvalue of a 5-state stochastic matrix is smaller than or equal to 0.5622, we can
say, with a 99% confidence level, that the decision-maker's matrix is not random.
Figure 4.8 A null distribution of the second largest eigenvalue of 7 by 7 stochastic matrices (2,000 simulations), with the region below the threshold marked "Reject H0"
Table 4.1 Threshold points for matrices (based upon 10,000 simulations, α = 0.01)

# of states    x*: 1% value
3              0.2449
4              0.4509
5              0.5622
6              0.7267
7              0.7291
8              0.7715
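The thesis does not spell out how the random matrices were generated; the sketch below assumes each row is drawn uniformly from the probability simplex (a flat Dirichlet distribution) and estimates the threshold x* of equation (4.23) by Monte Carlo simulation:

```python
import numpy as np

def null_threshold(n_states, alpha=0.01, n_sim=10_000, seed=0):
    """Monte Carlo estimate of the threshold x* in equation (4.23): simulate
    random row-stochastic matrices and take the alpha-quantile of the second
    largest eigenvalue magnitude. Sampling each row uniformly from the
    probability simplex (a flat Dirichlet) is an assumption; the thesis does
    not specify the sampling scheme."""
    rng = np.random.default_rng(seed)
    stats = np.empty(n_sim)
    for k in range(n_sim):
        P = rng.dirichlet(np.ones(n_states), size=n_states)
        mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
        stats[k] = mags[1]
    return np.quantile(stats, alpha)

# reject H0 (pure randomness) when the elicited matrix's second largest
# eigenvalue magnitude is smaller than or equal to this threshold
print(null_threshold(5))
```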
4.5 APPLICATION EXAMPLE
DOME (Distributed Object-based Modeling Environment) is software developed at the
MIT CADLAB (Computer-aided Design Laboratory) (CADLab 1999). It is an internet-based
software environment that enables designers to collaborate in a very heterogeneous
environment, both in terms of geography and of deployed software. The Ford Motor
Company has been an industry sponsor, and this relationship provided the author with an
opportunity to use the developed goal setting model to measure Ford engineers'
expectation levels for using DOME in their product development process. From Ford's
standpoint, DOME may be a new facilitating factor in their product development process,
and it would be helpful for them to quantify goal levels for their expectations in
adopting the new technology.
4.5.1
Purpose
There were two main objectives in testing the goal setting model with Ford engineers. The
first point of interest was the users' perception of the developed goal setting model. The
model itself has been developed mainly upon observational assumptions and
mathematical reasoning. Therefore, the interviews served as an excellent opportunity to
examine its assumptions and applicability. Another point of interest was the measurement
outcome. Engineers at Ford Motor Company were very enthusiastic about the
functionality and features of DOME. This survey served as an opportunity to measure
the Ford engineers' expectation levels in key attributes, such as development time
reduction, with a potential adoption of DOME technology.
4.5.2
Procedure
4.5.2.1 Set-up
The MIT CADLAB gave a software demonstration at the Ford Motor Company in
Dearborn, MI in March 1999. The purpose of the demonstration was to illustrate the
functionality and features of DOME to Ford engineers. During a break in the
demonstration, prepared handouts were given to potential interviewees. The handout
contained exemplary questions that can be addressed by the method and the basic idea of
how the local probabilities would be used for a dynamic probability simulation.
Appendix B contains the handout. The interviewees read the handout and returned to the
remaining part of the software demonstration. The interviews were then conducted after the
demonstration.
4.5.2.2 Participants
Two engineers participated in the interviews. Since the objective of the developed tool is
to help designers set their goals or targets in the early design process, it is assumed that
the designers are experts in their application areas with extensive prior experience in their
fields. The resulting local probabilities should be answers based upon their domain
expertise. In order to meet this assumption, two senior engineers were chosen for the
interviews. One was a senior IT (Information Technology) manager and the other was an
engineer for the Movable Glass System (MGS) of the door system. The MGS engineer has 13+
years of experience in the automotive industry. Considering their extensive experience,
they were judged to be good candidates for the interviews. After the DOME
demonstration, they were given three topics for the interview and chose one based upon
their field of interest. The senior manager chose the question of "average product
development time reduction using DOME", while the engineer chose the question of
"average product quality improvement using DOME".
4.5.3
Goal-setting Software
Each interview was conducted using a JAVA program that allows the users to
interactively express the quantities necessary to build the transition matrix. The program
has three main GUI (Graphical User Interface) parts. On the first screen, the designers are
asked to identify the candidate states for assessment. The user decides the most optimistic
and most pessimistic values for the assessment. Upon determining the two points, the user is
asked to decide the interval between the extreme points. Figure 4.9 shows a snapshot
of the first GUI. With the second part of the GUI, the user can express the local probabilities
that will lead to the transition matrix. It has a pop-up window with which the user can
interactively set the local probabilities at each state. After an initial assessment of each state,
the user can edit the prior assessments state-by-state. This functionality is considered to be
very important in the overall elicitation process, since these multiple revisions allow the
user to arrive at a set of stationary local probabilities. After all the local probabilities are
elicited, the user can go to the next GUI, which shows the limiting probabilities computed from
the transition matrix built in the previous GUI part.

Figure 4.9 First Screen: Identification of state space
Figure 4.10 Second Screen: Elicitation of local probabilities
4.5.4
Result
4.5.4.1 Perception
Both of the interviews were carried out without significant operational difficulties. The
self-explanatory handout provided enough information for the interviewees to understand
the purpose and the elicitation process. The following observations were made during and
after the interview process.

• The concept of local probabilities on one state was not difficult either conceptually or operationally. The engineers could elicit the local probabilities without much difficulty.

• One main objective of the developed model is to let designers think hard about the assessment problem at hand. This is inherently built into the model by letting the designers think about local states, worrying less about maintaining consistency in the assessment. This was observed with the senior IT manager. Even though the author did not reveal that the GUI allows the users to edit the local probabilities, the manager, after finishing the random assessment, expressed the desire to revise some of the previous local judgments.
4.5.4.2 Measurement Outcome
As was previously discussed, the objective is to measure the engineers' expectation levels
regarding the impact of using DOME. The limiting probabilities for both cases are shown in
Figure 4.11. The time reduction is shown as a probability mass function, while the quality
improvement is shown as a cumulative distribution.
Figure 4.11 Expectation measurement results for time reduction (A) and quality improvement (B)
4.6 SUMMARY
This chapter presented a goal setting method using a discrete Markov chain. The goal
setting problem was formulated as a parameter estimation problem under soft information
or subjective judgment. The rationale behind the model is that maintaining consistency
may constrain designers from fully exploring their knowledge pool in the estimation task.
The model uses a consistency relaxation technique that allows the designers to worry less
about maintaining consistency in the assessment process. The parameter estimation
process is then modeled using a discrete Markov chain, with designers' local knowledge
defining the transition probabilities among states. The details of the transition matrix
construction process and the existence of limiting probabilities were discussed. An
analysis mechanism using the second largest absolute eigenvalue was suggested to
qualitatively measure the information content of a transition matrix. An example illustrating the
application of the suggested model was shown at the end of the chapter.
5. DECISION MAKING FOR SINGLE EVENTS
5.1
OVERVIEW
Design decision making under uncertainty often uses probabilities to represent and
quantify uncertainties associated with early design alternatives. In this case, expected
value, the most compact statistic to summarize a probability structure, is often used to
compare design candidates each of which may have distinct uncertainty structures.
However, there may be a discrepancy between the decisions based upon expectation and
the final realization after all uncertainties are resolved. This chapter discusses the
possibility of using supplementary metrics other than expectation alone, to help designers
identify promising ideas in a fluidic design process.
Section 5.2 reviews the expectation-centric decision making framework and introduces
Allais' paradox in expected utility maximization. Section 5.3 then classifies the prevailing
uncertainties encountered in the design process. With the insights gained from the
classification, section 5.4 suggests a descriptive set of metrics to supplement an
expectation-based decision rule. The concepts of risk and opportunity will be introduced
and mathematically defined. Section 5.5 will illustrate that the mean-semivariance
framework can be used to partially address the Allais paradox.
5.2
EXPECTATION-BASED DECISIONS
5.2.1
Framework
In design decision-making, a designer structures and quantifies their preferences and all
associated uncertainties for a specific decision situation. Typically, a decision-maker uses
a value-based concept to quantify preferences, while using probabilities, often subjective
in a strict sense, to formalize and represent the circumstantial uncertainties associated with
early designs. Design alternatives with the highest expected values are then chosen
because they offer the highest expected performance in a statistical sense. In utility
theory, the maximization of expected utility is inferred from a set of axioms which define
the behavior of a rational decision-maker (Pratt, Raiffa et al. 1965). Under uncertainty, a
rational decision-maker should follow the rule of maximization of expected utility.
However, some economics literature argues that, in some operational cases, expected
utility maximization has limitations as a descriptive theory (Kreps 1992). The Allais paradox
and Machina's paradox constitute the oldest and most famous challenges to the expected
utility theorem. The following section introduces the Allais paradox, introduced in the mid-1950s.
5.2.2
Allais paradox
5.2.2.1 The Experiment
The most famous violation of the expectation-based argument in utility theory is the Allais
paradox, named after its discoverer, Maurice Allais (Mas-Colell, Whinston et al. 1995).
One modified version is presented in this section. It is comprised of two hypothetical
gambles, each offering a choice between two lotteries; the first gamble is shown in Figure 5.1,
while the second is shown in Figure 5.2. Empirical evidence in the literature shows that a
majority of people choose lottery L11 over L12 in the first gamble.
Figure 5.1 First Game: two lotteries for the Allais paradox. Lottery L11 yields $500,000 with certainty; lottery L12 yields $2,500,000 with probability 0.10, $500,000 with probability 0.89, and $0 with probability 0.01
For the second gamble, with two lotteries L21 and L22, the empirical result indicates that a
majority of people prefer L22 to L21.
Figure 5.2 Second Game: two lotteries for the Allais paradox. Lottery L21 yields $500,000 with probability 0.11 and $0 with probability 0.89; lottery L22 yields $2,500,000 with probability 0.10 and $0 with probability 0.90
Let us analyze these two decisions using a utility function u(x), $0 ≤ x ≤ $2,500,000.
Interpreting the first gambling result between L11 and L12 using expected utility theory,

u(0.5M) > 0.1·u(2.5M) + 0.89·u(0.5M) + 0.01·u(0)    (5.1)

where M stands for million.

On the other hand, the second gambling result between L21 and L22 is interpreted using
expected utility theory as
0.11·u(0.5M) + 0.89·u(0) < 0.1·u(2.5M) + 0.9·u(0)    (5.2)

Adding 0.89·u(0) − 0.89·u(0.5M) to both sides of equation (5.1) yields

0.11·u(0.5M) + 0.89·u(0) > 0.1·u(2.5M) + 0.9·u(0)    (5.3)

which is in contradiction with equation (5.2).
Therefore, in this hypothetical application, the majority of decision-makers violate the
maximization of expected utility theory.
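The contradiction can also be checked numerically. The sketch below evaluates the expected utilities of the four lotteries under several illustrative utility functions (the functional forms are assumptions); for every choice of u, at most one of the two majority preferences holds, never both:

```python
import numpy as np

# Expected utilities of the four Allais lotteries under illustrative
# (assumed) utility functions; no utility function can rationalize both
# majority choices (L11 over L12 and L22 over L21) simultaneously.
utilities = {
    "linear": lambda x: x,
    "sqrt":   lambda x: np.sqrt(x),
    "cara":   lambda x: 1.0 - np.exp(-x / 1.0e5),   # strongly risk-averse
}
for name, u in utilities.items():
    eu_L11 = u(0.5e6)
    eu_L12 = 0.10 * u(2.5e6) + 0.89 * u(0.5e6) + 0.01 * u(0.0)
    eu_L21 = 0.11 * u(0.5e6) + 0.89 * u(0.0)
    eu_L22 = 0.10 * u(2.5e6) + 0.90 * u(0.0)
    prefers_L11 = eu_L11 > eu_L12
    prefers_L22 = eu_L22 > eu_L21
    assert not (prefers_L11 and prefers_L22)  # Allais pattern is impossible
    print(name, prefers_L11, prefers_L22)
```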
One interesting point in this paradox is that the utility function does not influence the
result at all. The paradox exists for all possible utility functions. Therefore, the paradox
originates not from the utility functions but from the use of expectation as a decision
metric. The next subsection reviews previous work regarding the paradox in other disciplines.
5.2.2.2 Alternative analysis
The discovery of the Allais paradox provoked different reactions within the decision science
community and other academic disciplines.

At first, the paradox was dismissed as an isolated example. However, research efforts
traced its origin to the independence axiom in utility theory. More specifically, this
empirical violation of the independence axiom is known as the common consequence effect.
A more detailed discussion of the common consequence effect is found in (Machina 1987).

Kahneman and Tversky explained this paradox from a descriptive viewpoint (Kahneman
and Tversky 1979). One of the arguments in their prospect theory is that
people over-emphasize certain events when using probabilistic representations of
uncertainty. This view is shared by other behavioral economists (Baron 1988).
Figure 5.3 Hypothetical weighting function for probabilities: decision weight π(p) versus stated probability p
Figure 5.3 schematically shows a hypothetical weighting function for probabilities. Based
upon this hypothetical non-linearity in the probabilities, many researchers have suggested
nonlinear functional forms. The simplest form, suggested by Kahneman and Tversky, is
(Kahneman and Tversky 1979)

Σ_i v(x_i)·π(p_i)    (5.4)

where v(·) is a generalized value function, such as a utility function, and π is the decision
weight. A more complete coverage of many other functional forms is found in Machina
(Machina 1987).
Another explanation of this paradox is "regret theory". Regret theory argues that
people make decisions in the direction that minimizes regret after the uncertainties are
completely resolved. Therefore, in the above examples, receiving $0 as the lottery outcome
would bring the highest regret to decision-makers, and this fact qualitatively governs the
decision.
5.2.2.3 Its Implication to Design Decision Making
A designer may choose a preliminary concept with a high expected value that may not
turn out to be the best choice at the end of the design activity. Thus, one might infer that
preliminary design concepts selected solely upon the highest expectation may not
always be the best choice. Given such discrepancies, it would be of interest for a designer
to quantify, in advance, the potential gap between current expectations and possible final
outcomes. This might lead to the development of additional metrics that would supplement the
metric of expectation in design decision making. This interpretation is qualitatively
related to the regret theory discussed in the previous section.
The source of the discrepancy between the initial expectation and the final result is at least
twofold. First, preliminary subjective probability distributions quantifying the uncertainties
associated with design alternatives may be unreliable. This has been an important issue in
the use of subjective probabilities in many decision science disciplines. Although design
activity is an on-going and continuous process, a specific decision often has to be made
with temporal information that will have an influence on the rest of the design activity.
Therefore, a designer tries to secure high quality information to construct appropriate
subjective probabilities. Although the use of imperfect information for temporal
decision making is an important source of discrepancy, this issue will not be addressed in
this thesis.
A second possibility is the use of probabilities to represent different kinds of uncertainties
encountered in design. Objective, or frequency-based, probabilities represent and quantify
the uncertainties of an infinitely repeatable event. However, many decisions involve
uncertainties for single events. For example, the lotteries in the Allais paradox are offered
just once to the participants. In the paradox, the events are not repeatable. Probabilities
are used to model the state of knowledge and quantify the uncertainty of single events.
The interpretation of single-event probabilities, often subjective probabilities, is addressed in the
literature (Popper 1956; French 1986), and it is widely accepted that probabilities (often
subjective probabilities) are used to represent and quantify single events subject to
uncertainty.
However, the interpretation of expectation is more difficult and hypothetical for single
events. Expectation is the average value of a random variable and, at the same time, is an
answer to a very hypothetical and abstract question about an infinitely repeatable
experiment (Bodie 1989). The meaning of the expected worth in design decision-making
is in many cases abstract, because the relevant decision event (i.e., the experiment) is
usually realized only once. The statistic of expectation, initially devised for repeatable
events, may not fully address the implications of probabilities used to describe
uncertainties of single events. The expected value may not even be one of the possible
values. For example, a random variable that takes on values of either 1 or −1 with equal
probabilities can never realize zero in a single trial. An interesting anecdote is found
in (Schiller 1997).
Samuelson told a story which he believes demonstrates a violation of expected
utility theory. Samuelson reported that he asked a lunch colleague whether he
would accept a bet that paid him $200 with a probability of 0.5 and lost him $100
with a probability of 0.5. The colleague said that he would not take the bet, but he
would take a hundred of them. With 100 such bets, his expected total winnings are
$5,000, and he has virtually no chance of losing money. It seems intuitively
compelling that one would readily take the complete set of bets, even if any
element of the set is unattractive.
In summary, two issues have been raised. Probability is often used to represent and
quantify both repeatable and single events. The statistic of expectation effectively
differentiates the uncertainties of repeatable events. However, expectation may not fully
differentiate the implications of single events that are quantified through probabilities.
This chapter will continue to investigate these two different kinds of uncertainty. It will
extend to a survey of methodologies that can be used to differentiate single events. Based
on this, a supplementary metric, in addition to expectation, will be devised.
5.3
UNCERTAINTY CLASSIFICATION
5.3.1
Type I Uncertainty: Repeatable Event
This type of uncertainty, also called aleatory uncertainty, describes natural variability
(Apostolakis 1990). This uncertainty is due to the random or stochastic variability of the
phenomena and is contained in the formulation of the model under investigation. Other
terms used for this uncertainty are randomness or stochastic uncertainty. Exemplary
uncertainties in this category are the dimensional variability of manufactured goods, the
variability found in material properties, the lifetime of machinery, and so on. The uncertainty
in this category is best described as the variability in the outcome of a repeatable
experiment. This is associated with the frequentist's view of probability. The classical
probability theory was developed to describe and quantify this kind of uncertainty.

Another interesting observation about this type of uncertainty is its relationship with
information. This uncertainty is due to humanly inaccessible information. Uncertainty in
this category is not reducible to a deterministic form, no matter how much information is
available. For example, there are too many variables governing the dimensional
variability in manufacturing processes; it is virtually impossible to control all the variables
and reduce the variability to zero. From an informational context, aleatory uncertainty is
characterized as irreducible uncertainty (Veneziano 1994).
5.3.2
Type II Uncertainty: Single Event
This type of uncertainty is also called epistemic uncertainty. This uncertainty is due to
our ignorance and represents the state of knowledge regarding a certain parameter. Many
economic or social events, such as the interest rate on the last day of 1998 or next year's total
car sales in the US, fall into this category. Another famous example would be the weather
forecast one week ahead. From a design perspective, all the nominal values subject to
estimation in the early design phase fall into this category.

This type of uncertainty is completely resolved if the decision-maker has access to all the
necessary information. The decision-maker can pinpoint one single number for the
interest rate on the last day of 1998 after this date. For the case of a future event, the
uncertainty will be resolved completely as time goes on. As the days pass, more
information becomes available for a more reliable forecast. On the day itself, all the
uncertainty will be resolved; only one event will be realized. Reflecting this time-based
resolution property, the uncertainty in this category is also referred to as being reducible.
Uncertainty in this category is evolutionary; it can be completely resolved in accordance
with the amount of accessible information, which in many cases is temporal.

In most cases, probability is used to model and quantify the uncertainty in this category.
However, the probability distribution in this case should eventually have zero variability
with the addition of more information (within the desired level of resolution). Based upon
this observation, we refer to the uncertainty in this category as the uncertainty of single events.
5.3.3
Discussion
Since the underlying implications of the uncertainties of repeatable and single events are
different, the probabilities in the two cases should be treated in a different manner.
Considerable research effort has been focused on providing interpretations of
subjective probabilities expressed in terms of objective probabilities. In this way, the
uncertainties of single events can still be subject to the probability calculus devised for
objective probabilities. The next section discusses decision making based solely on the
uncertainties of the second kind, the uncertainties of single events.
5.4
DECISION MAKING UNDER TYPE II UNCERTAINTY
Portfolio construction in finance theory provides a good example of decision making
under the uncertainties of single events. The yearend return for a specific equity is subject to
uncertainty, and the associated uncertainty in this case is an uncertainty of a single event.
Although there is much uncertainty about the yearend return before the end of the year,
there will be just one number realized at the end of the year. And the financial managers,
if their time horizon is the end of this year, are interested in that specific number.
5.4.1
Mean-variance analysis in finance theory
In finance theory, probabilities are used for encoding the uncertainties of single events.
Figure 5.4 schematically shows a portfolio construction process for two investment
opportunities, A and B. In this process, the probability density functions for the
ROI (Return On Investment) of A and B are first constructed based upon accumulated
historical data. It is a clear assumption that only one point in the probability distribution
will be realized for the decision period. The expected return and risk are defined by the
expectation and variance of the distribution, respectively. Variance, in turn, can be
interpreted as the degree of surprise, quantifying the deviation of the actual return from
the expected value (Bodie 1989). Then, the pairs of expectation and variance for
investments A and B are plotted in μ-σ (mean-standard deviation) space (Figure 5.4 (B)).
A proper portfolio, a weighted combination of these two investments, yields a parabola
in the diagram.

Figure 5.4 Portfolio construction with two investments: (A) probability density functions for the ROI of investments A and B; (B) the resulting portfolio curve in mean-standard deviation space

The solid portion of the portfolio line is called an efficient portfolio.
At the same risk, or standard deviation, level, this curve yields the highest expected return.
Once the efficient frontier is identified, the final portfolio selection depends on the risk
attitude of the investor. Usually, a higher return is expected only with higher risk.
Therefore, a risk-averse investor would choose P1 over P2. On the other hand, a
risk-seeking investor might choose portfolio P2, which may yield a higher return at greater risk.

The variance is always considered in the portfolio construction process. The uncertainty
of a specific equity is quantified with mean and variance, and the final portfolio point
is also determined in the (μ-σ) space.

Portfolio construction is a good example of how a surprise factor should be included
in decision making under the uncertainties of single events.
Under the premise that variability should be accounted for in the early design phase, the
first step is to determine how to measure the variability of the uncertainties in an evolving
design process. Then its interpretation and use should be determined.
5.4.2
Quantification of Dispersion
There are many statistics that address distinct aspects of probability distributions. Expectation is by far the most compact way of summarizing the information contained in a probability distribution. Other statistics such as variance and skewness, the 2nd and 3rd moments respectively, extract information that expectation does not provide. Figure 5.5 schematically compares different statistics in two attributes - information abstraction and information intensity. Expectation achieves the highest level of abstraction at the cost of information intensity. On the other hand, the original distribution delivers the most complete information at the highest complexity level. It is true that expectation allows an efficient mechanism for comparing different uncertain design candidates. However, when the probabilities represent single events, the statistic of expectation may not convey sufficient information for designers to deal with possible discrepancies between what is expected and what may be realized.
[Figure 5.5 Comparison of different statistics - expectation; expectation + variance; box plot; whole distribution - along axes of information abstraction and information intensity, with a compromised region in between]
The interquartile range (IQR) is one of the most robust statistics for measuring the dispersion of a probability distribution. It is defined as the difference between two quartiles. The Median Absolute Deviation from the median, or MAD, is another robust statistic. Assume the presence of discrete data points x_1, ..., x_n with median x̃; MAD is defined to be the median of the numbers |x_i − x̃|.

The most commonly used measure of dispersion is the standard deviation, which is the square root of the variance. It is defined as

σ² = ∫ f(x)·(x − m)² dx,  m = ∫ x·f(x) dx    (5.5)

5.4.3 Opportunity and Risk
5.4.3.1 Bi-directional measurement
Mean-variance analysis, as applied to finance theory, assumes that the underlying uncertainty in ROI is expressed as a normal distribution. This assumption about the underlying distribution is often valid for financial data accumulated over a relatively short period of time (Brealey and Meyers 1996). However, in design decision-making the underlying uncertainty may be asymmetric. Thus, the mean-variance analysis approach should be adjusted to account for more generic uncertainty structures.

5.4.3.2 Definitions

In mean-variance analysis, risk is defined and quantified as the degree of surprise that may occur if the outcome fails to meet the expectation of an underlying distribution. In design decision making, we would like to distinguish between good and bad surprises. These two different kinds of uncertainties are defined as:

* Risk: possible single-event outcomes with lower worth than expectation.
* Opportunity: possible single-event outcomes with higher worth than expectation.

Symmetric distributions will always have the same degree of risk and opportunity.
5.4.3.3 Quantification

From the notion of semi-variance in finance theory, a modified concept based upon the 2nd moment is defined as (Fishburn 1977)

σ_T² = E[min(0, r − T)²]    (5.6)

where E is an expectation operator and r is a random variable representing ROI. A target return for an investment is represented by T. This formulation uses the target return as a reference point for defining risk. Further, it quantifies the risk as the 2nd moment of possible realizations below the target value T.
The definitions of risk and opportunity suggest the use of expectation as a reference point for uncertainty classification in design decision making. By doing this, the expectation can still be used as a normative tool in the current decision-making framework. Risk is quantified in terms of σ_inf, while opportunity is quantified in terms of σ_sup. They are

σ_sup² = E[max(0, a(X) − E[a(X)])²]    (5.7-a)

σ_inf² = E[min(0, a(X) − E[a(X)])²]    (5.7-b)

where σ² = σ_inf² + σ_sup², X is a random variable for an attribute level, and a(X) is the corresponding preference function for X. After calculating the expectation and semi-standard deviations
for each design alternative, the result might be visualized as is shown in Figure 5.6.
[Figure 5.6 Visualization for using the suggested metrics: for designs A and B, a marker at the expectation of value or utility, with an upward whisker for opportunity and a downward whisker for risk]
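To make the metrics concrete, the following sketch (in Python, with illustrative names; not part of the thesis implementation) estimates the expectation, opportunity, risk, the opportunity/risk ratio of equation (5.8), and the third moment of equation (5.9) from Monte Carlo samples of the preference value a(X):

    import numpy as np

    def uncertainty_metrics(a_samples):
        """Expectation, opportunity (sigma_sup), risk (sigma_inf), their ratio
        eta, and the third moment M3, estimated from samples of a(X)."""
        a = np.asarray(a_samples, dtype=float)
        mu = a.mean()                                             # expectation E[a(X)]
        dev = a - mu
        sigma_sup = np.sqrt(np.mean(np.maximum(0.0, dev) ** 2))  # opportunity, eq. (5.7-a)
        sigma_inf = np.sqrt(np.mean(np.minimum(0.0, dev) ** 2))  # risk, eq. (5.7-b)
        # eta = 1 by definition for a point value (both semi-deviations zero)
        eta = 1.0 if sigma_sup == sigma_inf == 0.0 else sigma_sup / sigma_inf
        m3 = np.mean(dev ** 3)                                    # skewness sign, eq. (5.9)
        return mu, sigma_sup, sigma_inf, eta, m3

    # A right-skewed preference distribution: more opportunity than risk
    samples = np.random.default_rng(0).lognormal(sigma=0.5, size=100_000)
    print(uncertainty_metrics(samples))                           # eta > 1, m3 > 0

For any symmetric distribution the two semi-deviations coincide and η = 1, and the identity σ² = σ_inf² + σ_sup² can serve as a consistency check on the computation.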
5.4.4 Interpretation of Opportunity/Risk in an evolving design process
Expectation, opportunity, and risk, measured in terms of semi-standard deviations and defined as in equations (5.7-a) and (5.7-b), respectively, may be used as a primary aid for decision-making under evolving uncertainty. In Figure 5.6, design B's uncertainty structure indicates that its expected worth is slightly lower than that of design A, but its opportunity is greater than its risk. Design A's uncertainty structure indicates a slightly higher expectation with lower opportunity and risk. Therefore, qualitatively speaking, a risk-taking design decision maker might choose design B while a risk-averse design decision maker may choose design A. A useful index for providing insight into the uncertainty structure might be

η = σ_sup / σ_inf    (5.8)

the ratio of opportunity to risk. A two-dimensional plot of (μ, η) might be used to understand the opportunity and risk of design alternatives. This is shown in Figure 5.7 for the designs A and B (Figure 5.6). Note that for η = 1, the current uncertainty structure is symmetric in terms of risk and opportunity. On the other hand, when η is greater than one it suggests that the current uncertainty structure has more opportunity than risk. The third moment of a random variable (a measure of skewness) might also be used in place of η:

M₃ = E[(a(X) − E[a(X)])³]    (5.9)

The resulting sign indicates whether opportunity or risk dominates in the corresponding uncertainty structure. Any odd moment will have this characteristic, with the exception of the first moment (expectation).
[Figure 5.7 (μ, η) plot of the two designs A and B, with the line η = 1.0 marking a symmetric uncertainty structure]
The opportunity/risk ratio η might also serve as a normative metric to predict the implications of new additional information in the context of continuing design activity. As a design develops, more information becomes available and the level of uncertainty gradually decreases. The opportunity/risk ratio, in advance of additional information, may indicate how a design might evolve during the design process. When η = 1, information that is equally positive or negative in a qualitative sense will have the same impact on the expected worth of the evolving design. On the other hand, when η > 1, positive information will increase the expected worth estimate to a greater extent than equally negative information would decrease the design's expected worth. The opposite argument holds when η < 1. This interpretation assumes that the next piece of information is subject to a stochastic randomness of equal probabilities.
5.4.5 Discussion: Risk attitude
The shape of a quantitative preference function (Figure 5.8(A)), such as a utility or acceptability function, determines the risk attitude of a decision-maker (Keeney and Raiffa 1993). Figure 5.8(B) shows a probability density function reflecting the uncertainty associated with a certain decision alternative. The integration of the preference function and the probability density function describing uncertainty yields the expectation and a corresponding variance. By interpreting the corresponding variance through mean-variance analysis, the value and uncertainty structure of each design alternative can be plotted in (μ, σ) space. Regardless of the decision-maker's risk attitude encoded in the preference function, the final decision will be made on (μ, σ), depending upon the decision maker's risk attitude. We might question, then: if a decision maker's risk attitude is already expressed in the prescribed preference function as in Figure 5.8(A), why is another risk attitude necessary to make a decision in (μ, σ) space? The two kinds of risk seem to originate from different sources, as follows.
[Figure 5.8 Construction of a preference function and uncertainty estimate: (A) a preference function u(x) over attribute x; (B) a probability density function f(x) over the same attribute]
The risk expressed in a preference function is a decision-maker's degree of satisfaction with attribute levels, under an implicit assumption that each attribute level is achievable to the same extent. This information is mainly used for tradeoffs in multi-attribute problems. On the other hand, the risk attitude for a decision in (μ, σ) space is associated with the different degrees of likelihood of realizing distinct attribute levels.

This chapter identifies single events as the source of the second kind of risk. For a more realistic decision in an evolving design, these sources of different uncertainties should be identified and their implications incorporated into the decision-making context.
5.5 EXAMPLE: ALLAIS PARADOX REVISITED
This subsection applies the mean-semivariance analysis to the Allais paradox.
5.5.1 Pre-Analysis
The first task for this example is to deduce the utility function of a decision-maker based upon the utility analysis of the first gambling result. For the people who favored L11 over L12, expected utility theory dictates that

E[L11] = u(0.5M) > 0.1·u(2.5M) + 0.89·u(0.5M) + 0.01·u(0) = E[L12]    (5.10)

Since a linear transformation does not influence the outcome of utility analysis, assume

u(0) = 0,    (5.11-a)
u(0.5M) = a,    (5.11-b)
u(2.5M) = 1.0    (5.11-c)

Substituting the above three conditions into equation (5.10) yields

a > 0.1 + 0.89·a    (5.12)

Combining the above result with the condition that u(0.5M) < u(2.5M) yields the range for a, i.e., 0.91 < a < 1.0. Based upon the value of a, the utility of 0.5M, we can deduce that the decision-maker is risk averse, as is shown in Figure 5.9.
[Figure 5.9 Deduced utility function for the Allais paradox: utility versus dollars (in millions), a concave curve with u(0.5M) = a between 0.91 and 1.0]
5.5.2 First Game: L11 vs. L12

Applying the mean-semivariance analysis to the first game, the lotteries L11 and L12:

Lottery L11: E[L11] = a; σ_sup² = 0.0; σ_inf² = 0.0; η11 = 1.0 (by definition).
Lottery L12: E[L12] = 0.1 + 0.89a; σ_sup² = 0.1·(0.9 − 0.89a)²; σ_inf² = 0.01·(0.1 + 0.89a)²; η12 = 3.16·(0.9 − 0.89a)/(0.1 + 0.89a).
For the range of a, 0.91 < a < 1.0, the expected values for the two lotteries are very close,

E[L11] ≈ E[L12]    (5.13)

However, the opportunity/risk ratios are quite different,

η11 = 1 >> η12    (5.14)

Although the expected values are very close, the ratios are not: η11 >> η12. In words, there is a much larger "bad" surprise component built into the lottery L12. Therefore, people might avoid this lottery. In the lab experiment, the majority of people choose L11 over L12. The argument with η11 >> η12 predicts the choice of the majority in the experiment. For uncertainties of single events, people try to avoid "bad" surprise in the final outcome.
5.5.3 Second Game: L21 vs. L22

Application of the mean-semivariance analysis to the second game yields:

Lottery L21: E[L21] = 0.11a; σ_sup² = 0.087a²; σ_inf² = 0.011a²; η21 = 2.81.
Lottery L22: E[L22] = 0.1; σ_sup² = 0.081; σ_inf² = 0.009; η22 = 3.0.
In this case, again, the expected values for both lotteries are very close for 0.91 < a < 1.0. However, the comparison of the ratios, η22 > η21, suggests that the decision maker perceives more opportunity, or less risk, with the second lottery L22 than with the first one, L21. This interpretation is consistent with the empirical result that the majority of people choose L22 over L21.
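These numbers can be verified directly. The sketch below (Python; names illustrative) treats each lottery as a discrete distribution over utilities, with u(0) = 0, u(0.5M) = a, and u(2.5M) = 1 as in equations (5.11-a) through (5.11-c); unlike the closed forms above, it also counts the tiny positive deviation of the 0.5M outcome in L12:

    import math

    def lottery_metrics(probs, utils):
        """Expectation, semi-variances, and eta for a discrete lottery."""
        mu = sum(p * u for p, u in zip(probs, utils))
        sup2 = sum(p * max(0.0, u - mu) ** 2 for p, u in zip(probs, utils))
        inf2 = sum(p * min(0.0, u - mu) ** 2 for p, u in zip(probs, utils))
        eta = 1.0 if sup2 == inf2 == 0.0 else math.sqrt(sup2 / inf2)
        return mu, sup2, inf2, eta

    a = 0.95  # any value in the deduced range 0.91 < a < 1.0
    print(lottery_metrics([1.0], [a]))                        # L11: eta = 1
    print(lottery_metrics([0.10, 0.89, 0.01], [1.0, a, 0.0])) # L12: eta << 1
    print(lottery_metrics([0.11, 0.89], [a, 0.0]))            # L21: eta ~ 2.8
    print(lottery_metrics([0.10, 0.90], [1.0, 0.0]))          # L22: eta = 3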
5.6 SUMMARY

This chapter provided a mean-semivariance analysis for decision making under uncertainties of single events. After classifying the kinds of uncertainties encountered in the evolving design phase, it was proposed that variability offers additional insight about decisions when they are subject to single events and when the associated uncertainty is large. The main concepts characterizing such a decision situation were defined as risk and opportunity. Various interpretations using the mathematical definitions of risk and opportunity were given with the purpose of providing more insight into decisions under uncertainties of single events. The mean-semivariance analysis was used to explain the empirical result of the Allais paradox. The ratio of opportunity to risk seems to partly explain the contradiction that cannot be explained using an expectation-based framework.
6. DATA AGGREGATION MODEL

6.1 OVERVIEW
Design activities usually involve multiple designers. In the product development process, it is common for designers to have different judgmental opinions on an identical assessment task. In other words, the decision-maker faces different estimates from different experts. This chapter provides a systematic method addressing this data aggregation issue. There are two main reasons that call for the development of such a method.

First, from practical considerations, most decision-making tools incorporate a single performance estimate for decision making under uncertainty. Second, since each estimate may be based upon a different knowledge pool, it may be important that the different views are properly incorporated into the decision-making process for a more informed decision.

This chapter suggests a systematic way of merging multiple estimates from different designers into a single reference. The aggregated result, then, can be used for a more comprehensive decision in an evolving design process.

After defining the problem from an information-based context, related work in the area of data aggregation will be reviewed. Thereafter, the suggested data aggregation model will be built in three steps. Each step will be discussed in detail, followed by a simple example illustrating the use of the aggregation method.
6.1.1 Problem from an Information Context

In the suggested framework, designers represent their judgments in a probabilistic way. It is assumed that the probability distributions from different designers represent one and the same information with noise. In a symbolic representation,

I_i = I + ε_i,  i = 1, 2, ..., n    (6.1)

where I is all of the accessible information at a specific time in the process, I_i is the information used by the i-th expert for the estimate, and ε_i is the noise in the information I_i compared to I.

The different probability distributions P_i produced by different experts are assumed to be based upon different information pools I_i. On the other hand, using all the accessible information I would generate P*, a hypothetical, hyper probability distribution. The distribution P*, therefore, is a hypothetical but best estimate obtainable at a specific time t = T*:

P_i = P_i(I_i) = P_i(I + ε_i),  P* = P*(I)    (6.2)

This chapter will construct an aggregating mechanism,

Merging Mechanism (M): P̂ = M(P_1, ..., P_n) = P(I_1, ..., I_n)

by systematically merging the different estimates P_i into P̂. It is hoped that the resultant P̂ will be closer to P*. From the information context, the merging mechanism behaves as a filter for the individual pieces of information I_i. The imaginary filter emphasizes the core information I while diminishing the noise part ε_i in each information pool I_i. Therefore, the resulting merged estimate will be based upon an information pool with minimum noise.
6.2 RELATED RESEARCH

A rich body of research literature exists on aggregating different data or expert opinions. A complete survey is found in work by Genest (Genest and Zidek 1986). While methods for aggregating objective data sets emphasize the purely statistical side of the problem, aggregating expert opinions or subjective knowledge emphasizes both statistical technique and problem-specific issues.

An aggregation model can perform either point aggregation or distribution aggregation. Point aggregation methods combine a set of expert opinions given as single points, while distribution aggregation merges a set of different probability distributions.

The motivation for this chapter is the construction of a statistical framework for combining different designers' opinions given as probability distributions. Therefore, this chapter will mainly review related work in distribution aggregation, but relevant point aggregation methods will also be discussed briefly.
6.2.1 Point Aggregation Model

Meta-analysis is a point aggregation model often used in social and clinical studies (Glass, McGaw et al. 1981). The method organizes and extracts information from large masses of data that are nearly incomprehensible by other means. It is targeted at combining point data sets whose differences are so large or critical that they look almost impossible to integrate. Such situations are quite often encountered in clinical or social studies based upon literature searches. The common criticism of meta-analysis is that it sometimes looks too illogical to mix findings from studies that are not the same.

Another interesting method for aggregating point data, the virtual sampling method, comes from artificial intelligence (Hamburger 1986). The virtual sampling method suggests a mechanism for combining n pairs of estimate points given by

(m_i, v_i),  i = 1, 2, ..., n    (6.3)

where m_i is an estimate for a parameter and v_i is the i-th expert's variance on the estimate. The aggregation mechanism is then formed based upon a set of desired characteristics for a proper merging mechanism. In this model, the variance represents the uncertainty level of the i-th expert. The method is built upon the virtual sampling technique, which rests on the well-established statistical relationship between sample size and the standard deviation of the sample mean.

A more general discussion of other point aggregation models is found in work by Hogarth (Hogarth 1977).
6.2.2 Probability Distribution Aggregation Model

Aggregating distributions, as opposed to point estimates, is more technically challenging (Raiffa 1968). However, aggregating distributions has been an important issue in decision-making under uncertainty, since a formal decision-making framework incorporates only one distribution for uncertainty representation.

There are two main views on distribution aggregation, or group assessment. The first approach views a set of different estimates as an opinion pool and represents the aggregated form as a weighted average of the individual distributions. This is associated with the probability mixture in probability theory. In symbols,

P̂(P_1, ..., P_n) = Σ_{i=1}^{n} ω_i·P_i    (6.4)

The main task in using the probability mixture is then to determine the weighting factors characterizing the context of the problem. An engineering application using this model is found in Probabilistic Risk Assessment (Hora and Iman 1989).
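As a small illustration of equation (6.4), the mixture is a pointwise weighted average of the component densities - not a density fitted to pooled samples. A minimal sketch, with illustrative weights and components:

    import numpy as np
    from scipy.stats import norm

    def mixture_pdf(x, weights, components):
        """Pointwise weighted average of component densities, equation (6.4)."""
        return sum(w * c.pdf(x) for w, c in zip(weights, components))

    x = np.linspace(-4.0, 6.0, 500)
    experts = [norm(0.0, 1.0), norm(2.0, 0.5)]    # two expert estimates
    merged = mixture_pdf(x, [0.5, 0.5], experts)  # equal opinion-pool weights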
An alternative to this opinion pool view is the natural conjugate approach based upon Bayesian statistics (Winkler 1968). Each expert opinion is viewed as sample evidence which can be represented by a conjugate prior. The final aggregate is then formed by combining each expert's prior distribution using Bayes' theorem (Hogarth 1977). However, this approach is practically difficult since one must determine the degree of dependency between the different distributions.

Besides the quantitative aggregation models, the Delphi method can be modified for aggregating experts' opinions (Sackman 1974).
6.3 PROBABILITY MERGING MECHANISM

6.3.1 Approach: Probability mixture

The opinion pool view is adopted in this research for three reasons. First, the model characterization of the opinion pool is deemed to describe more closely the problem that designers face in practice. Second, from an operational viewpoint, the opinion pool view is both intuitive and practical compared to the natural conjugate model: the identification of dependencies among the distributions for the conjugate model is very challenging and is subject to considerable noise. Third, and perhaps most importantly, there are studies documenting that composite distributions using simple weighting schemes show greater predictability than most of the individual ones (Sanders 1963). Although the author could not find comparative studies regarding the predictive powers of the opinion pool approach and the natural conjugate model, the gathered information suggests that the opinion pool provides significant predictive power with modest operational effort.

The main task in the probability mixture framework is then to identify the weighting factors for each distribution, and this task should be based upon group decision-making behavior.
6.3.2 Analysis of Difference among the Estimates

The first logical step towards the construction of any distribution aggregation model is to analyze the differences among estimates. In essence, these differences simultaneously serve as a source of confusion and, at the same time, of opportunity to designers. The difference analysis process involves an investigation of how the distributions differ in both qualitative and quantitative terms. After identifying the key differences among the probability distributions, these differences can be used to decompose the aggregation model.

There are many ways to quantitatively differentiate probability distributions using different statistics. For example, the mean is the most widely used statistic for comparing distributions. At the other extreme, using the entire distribution provides the most information-rich but least intuitive way of comparing distributions. To construct an aggregation mechanism with both intuitiveness and information richness, the pair of statistics (mean, variance) is used for differentiating probability distributions. Qualitatively, the total difference among probability distributions will be characterized based upon the mean difference and the variance difference. In words,

Total difference = mean difference + variance difference + higher-order terms

where the higher-order terms may include skewness, etc.

Analysis of Variance (ANOVA) also uses these two concepts for quantifying the difference among sample data points. In ANOVA, variabilities between and within the sample data points are used to characterize the difference.
[Figure 6.1 Comparing different statistics proposed for differentiating distributions - mean; (mean, variance); (mean, variance, skewness); entire distribution - along axes of degree of intuitiveness and information richness]
Consider the three sets of distributions in Figure 6.2. In Figure 6.2(A), the variance of each probability distribution, X and Y, is the same, while their mean values are different. Figure 6.2(B) provides the opposite case. In Figure 6.2(A), the mean shift, or mean difference, between X and Y contributes 100% towards the total difference, while their variance difference contributes zero. In Figure 6.2(B), on the other hand, the variance difference contributes 100% towards the total difference. This is also shown in Table 6.1.

In most cases, the total difference between estimates will be a combination of mean and variance differences, as is shown in Figure 6.2(C). There, the variance difference between the probability distributions is assumed to account for A% of the total difference, while the mean difference accounts for the remaining (100 − A)%. This qualitative comparison is also shown in Table 6.1.

So far the focus has been on qualitatively analyzing the difference among the probability distributions using the statistical pair of mean and variance. Based upon this qualitative analysis, the following steps are proposed to determine the weighting factors for the probability mixture.
[Figure 6.2 Classification of differences among estimates: (A) distributions X and Y with equal variance but different means; (B) equal means but different variances; (C) both mean and variance differences]
Table 6.1 Comparison of differences among distributions

         Contribution to total       Contribution to total      Total
         difference by variance      difference by mean         difference
case A   0%                          100%                       100%
case B   100%                        0%                         100%
case C   A%                          (100 − A)%                 100%
"
Merging mechanism for pure mean-different distributions: This merging mechanism
is represented by M,,e,, . This method will be exclusively used when the meandifference contributes 100 % towards the total difference as in the Figure 6.2(A).
" Merging mechanism for pure variance-differentdistributions: This merging
mechanism is represented by Mva,. This method will be exclusively used when
variance-difference contributes 100 %towards the total difference as in the Figure
6.2(B).
" Quantification of total difference: The difference factor (a) that quantifies variancedifference's contribution towards the total difference will be calculated. The quantity
(1- a), therefore, characterizes the contribution of mean-difference towards the total
difference.
The mechanisms of Mvar and Mnean will then be combined together in an appropriate way
to yield the final weighting factors, as
Mtotl
= a Mdev(X, Y) +
-
totl
(X
Y)(6.5)
(1- a)- Mean(X, Y)
where, X and Y are two different probabilities. The completed model will be a proper
combination of two separate merging mechanisms through the difference factor (a).
The following subsections will show the construction of two separate merging
mechanisms and calculation of the coefficient, a, for the proposed probability mixture
model.
6.3.3 Merging with variance-difference

This subsection addresses the exclusive case in which the variance difference among the estimates contributes 100% towards the total difference.
6.3.3.1 Prerequisite: N-sample average

Let a random variable X be subject to a probability distribution function F. Further, assume that it has a mean and a standard deviation of μ_F and σ_F, respectively. This will be written as

X ~ F(μ_F, σ_F²)    (6.6)

Now let (x_1, ..., x_n) be a random sample of size n drawn from the distribution F. The mean of the n-sample, X̄ = Σ x_i / n, will have the same mean μ_F, but a smaller variance of σ_F²/n:

X̄ ~ (μ_F, σ_F²/n)    (6.7)

The variance of X̄ is 1/n times the original variance of X. This is the main reason for taking an n-sample average in parameter estimation problems such as manufacturing process control: the larger the sample size n, the smaller the corresponding variance. Therefore, the confidence interval for the parameter μ_F becomes narrower with a bigger sample size n. Hamburger's virtual sampling model, introduced in the previous subsection, is based upon this sample size-variance relationship. This research also draws a conceptual analogy with this well-established relationship.
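The sample size-variance relationship is easy to verify numerically; a short sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    n, trials = 25, 20_000
    # 20,000 independent n-sample means drawn from an F with sigma_F = 2
    x = rng.normal(loc=0.0, scale=2.0, size=(trials, n))
    print(x.mean(axis=1).var())   # close to sigma_F^2 / n = 4 / 25 = 0.16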
6.3.3.2 Modeling Assumption

The basic modeling assumption underlying the pseudo-sampling approach is threefold:

* An artificial sample size characterizing the variance of a distribution will be used as a measure of the distribution's uncertainty.
* The uncertainty level of a distribution quantifies the confidence level of an expert.
* Estimates made with more confidence will be weighted more heavily.

In a group decision-making environment, it is logical to weight a more confident opinion in the pool more heavily. In a probability-based framework, the variance of a distribution is a good indicator of the sureness of the data or, in a subjective sense, the confidence level. In the suggested model, the variance will be used to indirectly quantify the confidence level behind an estimate.
6.3.3.3 Pseudo-Sampling Size

Let a random variable X have an unknown probability mass function F. Suppose an estimate P_i is provided by the i-th expert. It is hypothesized that the estimate distribution P_i is obtained by drawing an unknown sample of size N_i from F. This is interpreted as

σ_i² = σ²/N_i    (6.8)

where σ² is the variance of the unknown distribution F and σ_i² is the variance of the distribution P_i.

[Figure 6.3 Concept of the underlying distribution and pseudo sampling size: a hypothetical, hyper distribution F yields estimate X by sampling with size N_x and estimate Y by sampling with size N_y]
6.3.3.4 Confidence quantified in terms of Pseudo sample size

The confidence level is qualitatively correlated with a distribution's degree of dispersion. More specifically, in the proposed model it is measured in terms of the pseudo sampling size. An estimate with a larger sample size N is equivalent to a smaller dispersion and is interpreted as being derived with more confidence. The relationship in section 6.3.3.3 uses an existing statistical relationship to correlate the variance of a distribution with the sampling size. Because of the desired relationship between variance and confidence level, i.e., less variance implying more confidence, the sampling size can serve as an excellent quantitative indicator of the confidence level of an estimate.

Combining all these ideas, we can now devise the weighting factors corresponding to M_var. For estimates X and Y, the corresponding sampling sizes are determined by

n_x / n_y = σ_Y² / σ_X²    (6.9)

where n_x + n_y = 1.0.
6.3.3.5 Pseudo Sampling Size in the Probability Mixture Model

Incorporating the above weighting factors into the probability mixture model, the weighting factors for M_var are constructed as

M_var(P_1, ..., P_n) = Σ_{i=1}^{n} n_var,i·P_i    (6.10)

where n_var,i ∝ 1/var(P_i) and Σ_{i=1}^{n} n_var,i = 1.0.
6.3.4 Mechanism for merging estimates with different means

Subsection 6.3.3 provided a merging mechanism for distributions that are exclusively subject to variance difference. This part of the chapter addresses the counterpart: a merging mechanism for use when the distributions are exclusively subject to mean difference.
6.3.4.1 Modeling Assumption

This subsection discusses an aggregation model for probability distributions when the mean difference contributes 100% towards the total difference.

The merging mechanism in this part is characterized as a consensus-based approach; see Figure 6.4. Given a mainstream distribution A and a set of outliers B, there are two strategies for dealing with the outliers:

* Ignore the presence of the outliers: discard the data that violate the mainstream data set A.
* Try to transform the outliers: in order not to discard information possibly underlying the outliers, try to transform data B to fit into the mainstream data set A.

[Figure 6.4 Mainstream data A and a set of outliers B along an attribute axis x]

The second strategy more closely reflects the design decision context, since the outliers may be based upon important information that the other estimates miss. Ignoring outliers in the early design phase may exclude an important scenario as the design develops. At the same time, the resulting probability should not be too sensitive to the outliers. The preceding idea translates into the following modeling statement:

If the mean of a certain estimate is closer to the mean of the overall estimates, the estimate will be more influential on the final distribution construction.
The above seems to be a natural thought in dealing with multiple different estimates: if a specific estimate shows a high level of consensus with the other estimates, it will be more influential. However, even if an estimate does not show a high level of consensus, there is a need to capture it in the final distribution in a less influential way. This concept may be interpreted as a discount factor for each estimate. An estimate with more consensus will have a small discount factor, while one with little consensus will have a large discount factor. The above argument can be expressed with the following coefficient for quantifying the degree of consensus,

n_mean,i = φ / (X̄_i − X̄)²,  φ = min_j (X̄_j − X̄)²    (6.11)

where the X̄_i are the means of the estimates and X̄ is the mean of all the estimate means. If the value of φ is zero, we can assign 1 to every n_mean,i for practicality. As a matter of fact, as will be shown in the following section, if φ is zero there is no need to obtain the n_mean,i's at all.

With the weighting factors defined as above, the mixture of estimates exclusively subject to 100% mean difference is

M_mean(P_1, ..., P_n) = Σ_{i=1}^{n} n_mean,i·P_i    (6.12)
6.3.5 Quantification of Variability among estimates

Sections 6.3.1 and 6.3.2 provided a qualitative analysis of the differences between estimates. Based upon that qualitative analysis, two separate merging mechanisms were developed: M_var and M_mean. The next step is to combine these two separate models into one complete model using a more rigorous quantitative analysis of the difference. The main question is:

How much does the variance difference or the mean difference contribute towards the total difference?

One way of answering this question is to use a modified version of the Fisher statistic. A statistic sharing the same qualitative characteristics as the Fisher statistic is defined as

F = (variability between estimates) / (variability within estimates)    (6.13)

If F = 0, all the mean values of the estimates are the same; in other words, the total difference comes exclusively from the variability difference among the estimates. At the other extreme, when F is infinite, all the estimates are point values and their mean difference contributes 100% towards the total difference. In between, when F is between zero and infinity, the difference comes partially from the variability between and partially from the variability within estimates.
A modified Fisher statistic will be used to quantify the difference among estimates in the suggested framework. Here, the numerator and denominator are computed from the estimate distributions themselves, with x_i a random variable representing each estimate, and the statistic is related to the difference parameter α through

F = (variability between estimates) / (variability within estimates) = (1 − α) / α    (6.14)

The derived quantity α, called the difference parameter, quantitatively describes the relationship between the two dominant differences - mean and variance. Two critical characteristics of α are:

* When the variability between estimates is zero, α is 1.0.
* When the variability within estimates is zero (Σ var(x_i) = 0.0), α is 0.0.

6.3.6 A Complete Aggregate Model
The two separate aggregation models M_var and M_mean are combined through the derived difference parameter α, forming a complete estimate aggregation model. The final formulation is

M_total(P_1, ..., P_n) = α·M_var(P_1, ..., P_n) + (1 − α)·M_mean(P_1, ..., P_n) = Σ_{i=1}^{n} (α·n_var,i + (1 − α)·n_mean,i)·P_i    (6.15)

When the variability between estimates is zero (α = 1.0), the overall aggregation model M_total reduces to M_var. On the other hand, if the variability within estimates is zero (α = 0.0), M_total reduces to M_mean. It was mentioned in the preceding section that if all the means of the estimates are the same, the value of φ will be zero and it is not possible to derive the n_mean,i's from equation (6.11), whose denominator is then zero. In this case, the value of α will be 1.0; with α = 1.0, M_total reduces to M_var only and the user need not derive the ratios in equation (6.12).
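A minimal sketch of the complete model follows (Python; illustrative names). It summarizes each estimate by its (mean, variance) pair, uses weights inversely proportional to variance for M_var (equation (6.10)), the consensus weights of equation (6.11) normalized to sum to one, and α = 1/(1 + F) as implied by equation (6.14); the exact composition of the between- and within-estimate variabilities in F is an assumption here:

    import numpy as np

    def aggregate_weights(means, variances):
        """Final mixture weights of equation (6.15):
        w_i = alpha * n_var_i + (1 - alpha) * n_mean_i.
        Assumes strictly positive variances unless all are zero."""
        means = np.asarray(means, dtype=float)
        variances = np.asarray(variances, dtype=float)
        between = means.var()          # variability between estimates (assumed form)
        within = variances.mean()      # variability within estimates (assumed form)
        if within == 0.0:
            alpha = 0.0                # point estimates: M_total -> M_mean
        else:
            F = between / within       # modified Fisher statistic, eq. (6.14)
            alpha = 1.0 / (1.0 + F)    # from F = (1 - alpha) / alpha
        # Variance-difference weights, eq. (6.10): proportional to 1/variance
        if alpha == 0.0:
            n_var = np.zeros_like(means)
        else:
            inv = 1.0 / variances
            n_var = inv / inv.sum()
        # Consensus weights, eq. (6.11), normalized (phi = 0 gives equal weights)
        d = (means - means.mean()) ** 2
        phi = d.min()
        raw = np.ones_like(d) if phi == 0.0 else phi / d
        n_mean = raw / raw.sum()
        return alpha * n_var + (1.0 - alpha) * n_mean

When all means coincide, between = 0 gives α = 1 and the result reduces to M_var; when all estimates are point values, α = 0 and it reduces to M_mean, matching the limiting behavior described above.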
6.4 EXAMPLE

This subsection shows a simple example illustrating the outcome of the proposed probability mixture model. Given two estimates modeled as normal distributions, N(2, 0.5²) and N(3, 1.0²), the task is to merge the two probability distributions. These two inputs exhibit both a mean difference and a variance difference. The two thin curves in Figure 6.5 indicate the two input distributions, while the thick line denotes the combined result.
[Figure 6.5 Merging probability distributions using the proposed model: the two thin input densities N(2, 0.5²) and N(3, 1.0²) and the thick merged density, plotted over the range 0.0 to 6.0]
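Using the aggregate_weights sketch from section 6.3.6 on these two inputs gives, under its stated assumptions:

    # Two expert estimates: N(2, 0.5^2) and N(3, 1.0^2)
    w = aggregate_weights([2.0, 3.0], [0.25, 1.0])
    # w is approximately [0.71, 0.29]: the tighter estimate dominates, and the
    # merged density is the mixture w[0]*N(2, 0.5^2) + w[1]*N(3, 1.0^2)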
6.5 SUMMARY

This chapter presented a mechanism for combining different estimates made by multiple experts, a problem often encountered in early design phases. First, a number of existing merging mechanisms, both for points and for probability distributions, were reviewed. The model proposed in this chapter is conceptually based upon an opinion pool and statistically uses a probability mixture methodology. The model is qualitatively characterized by the assumption that estimates with more confidence and higher consensus will be more influential on the resulting distribution. Three distinct steps were taken for the determination of the weighting factors:

* Analysis and quantification of the variability among the estimates.
* Construction of separate merging mechanisms for variance-different and mean-different estimates.
* Completion of a final model combining the two mechanisms through the difference parameter α.
7. APPLICATION EXAMPLES

7.1 OVERVIEW
This chapter provides two implementation examples: two software applications using the acceptability decision model as an integrated decision tool. The first example shows the use of the acceptability model in an engineering application. It illustrates the use of the acceptability model in the DOME (Distributed, Object-based Modeling Environment) system being developed at the MIT Computer-aided Design Laboratory. An overview of DOME and its use of the acceptability-based design evaluation model is given in section 7.2. The acceptability model in DOME allows designers to receive real-time feedback on design performance as they create or change engineering models. Section 7.3 shows the use of the acceptability model as a decision guide for online retail sites. As the number of online retail sites grows, the number of selections available to buyers grows accordingly. Although this serves as a good opportunity for buyers to purchase better products at lower prices, buyers are easily overwhelmed simply by the large number of selections. The use of formal decision analysis tools in this situation can greatly help buyers to judge a large pool of selections systematically and consistently. In the second part of this chapter, a detailed discussion of currently available online decision tools is first provided. After the usability of the acceptability model as an online decision tool is discussed, the architecture and implementation of a "laptop computer" commerce engine with a built-in acceptability model as a decision guide are discussed in detail.
7.2 ACCEPTABILITY AS AN INTEGRAL DECISION TOOL IN DOME

7.2.1 DOME system and Acceptability component
Development of the DOME system has been a major research initiative at the MIT Computer-aided Design Laboratory. DOME aims to provide designers with a web-based modeling environment where designers, working in a heterogeneous environment in every sense, can collaborate with each other by creating, sharing, and optimizing designs in real time. The DOME framework views design activity as a service network in which participants create, modify, and exchange their design models. Each designer may work on a specific aspect of the design, and when those models are linked together over the Internet the entire model under development emerges. The acceptability model serves as an integral decision-making tool for the DOME system. From a functional viewpoint, the subsystems associated with the acceptability model allow designers to evaluate an ongoing design at either a local or a global scale in real time. Based upon this feedback, designers can identify problems and make tradeoffs among competing design attributes (Pahl and Beitz 1996).
7.2.2 DOME Components
The DOME system has a list of variables that designers can use in building an engineering model. Among them are real modules, PDF modules, an Excel module, etc. A user can use a real module to hold a discrete value in the model. A PDF (Probability Distribution Function) module is used when a user wants to specify a random variable as part of the engineering model being built. The DOME system can also incorporate other application software as objects, providing inputs to and receiving outputs from the external software. This functionality allows the integration of heterogeneous computing environments. Common to all of these modules is their basic architecture. Figure 7.1 shows a simple diagram of the DOME components. DOME is a thin client/server system written in the Java™ and C++ languages; the Java client, the Java server, and the C++ engines are the main components. The engineering model is built in C++, and heavy computing is done in C++. The Java parts mainly provide the Graphical User Interface (GUI) for visualization of the model and its building process. The C++ engine was written by M. Wall and is being used in the CAD Laboratory for research purposes. The Java template was written by B. Linder and has been shared by CADlab members for research purposes. Owing to the object-oriented technology used for the DOME implementation, new functional modules are easily created and added to the existing system to extend its functionality.

[Figure 7.1 DOME components: Java client, Java server, and C++ engine]
7.2.3 Decision Support Component

There are three modules associated with the acceptability decision framework: the acceptability setting module, the criterion module, and the acceptability aggregator module. This section briefly discusses the functionality of each module.
7.2.3.1 Acceptability Setting Module

Designers can set a one-dimensional acceptability function using this module. The corresponding GUI is shown in Figure 7.2. The buttons on the left-hand side of the GUI panel allow users either to create a new acceptability function or to modify an existing function. This Java front end was written by J. Chung, one of the CAD lab members.

[Figure 7.2 Acceptability Setting Module]
7.2.3.2 Criterion Module

The criterion module shows the current achievement of a specific design attribute with a corresponding acceptability level. It has two inputs: one for the acceptability function of a specific attribute, and the other for the attribute level of a certain design under development. As an output, it displays three pieces of information - expectation, opportunity, and risk - as defined in Chapter 5. Figure 7.3 shows the GUI of the criterion module. In this figure, the achieved expectation is shown as 0.88 while the opportunity and risk are zero, since the attribute level is assigned as a delta function.

[Figure 7.3 GUI of a criterion module]
7.2.3.3 Aggregator Module

An aggregator module shows the multi-dimensional acceptability of a design under development. All the attributes in the aggregator module are assumed to be mutually acceptability independent, and this assumption validates calculating the overall acceptability as the multiplication of the individual acceptability values. The GUI of an aggregator module is shown in Figure 7.4. From a functional viewpoint, an aggregator module allows designers to collect all the criterion modules dispersed in different parts of the whole design and to show the overall acceptability as the multiplication of all expectation values from every criterion module. In Figure 7.4, each acceptability value is visualized as a bar in the table and the overall score is shown in the text field on top of the table.

[Figure 7.4 GUI of an aggregator module]
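Under the mutual acceptability independence assumption, the aggregation itself is a plain product of the criterion expectations; a one-line sketch with illustrative values:

    from math import prod

    def overall_acceptability(criterion_expectations):
        """Overall acceptability as the product of individual criterion
        expectations, valid under mutual acceptability independence."""
        return prod(criterion_expectations)

    print(overall_acceptability([0.88, 0.95, 0.90]))   # 0.7524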
7.2.4 An Integrated Model

A simple DOME model having an acceptability setting module and a criterion module is shown in Figure 7.5. Here, a designer evaluates cost - a design performance - against the cost specification, a predefined acceptability function. A real module is used to set the attribute level of "cost". An acceptability setting module named "cost_spec" and the real value "cost" are the two inputs to the criterion module. A designer can set the values in both modules using the GUI, which pops up upon double-clicking the icon before the name of a module. In Figure 7.5(A), the criterion module is collapsed, hiding the three pieces of information inside. In Figure 7.5(B), the same criterion module is unfolded, showing the inside information; the cost real module and the cost_spec acceptability setting module outside the criterion module are hooked up to the corresponding modules inside it.
[Figure 7.5 Acceptability-related components in a DOME model: (A) a collapsed criterion module; (B) the same module unfolded, showing an expectation of 0.88 with zero opportunity and risk for a cost of 3.7E3 $]
Due to the object-oriented modeling technique used in the DOME system implementation, the decision module can easily be replaced by other decision models if designers want. For instance, a utility-based decision model or other decision models could be constructed and integrated into the DOME system. For the time being, the acceptability module has been used in building engineering models in the CADlab.

Figure 7.6 shows a Movable Glass System (MGS), part of an automobile door system.

[Figure 7.6 Movable Glass System as a subsystem of the car door system]
The main components of the MGS system are the glass, cross-arm linkages, motor, and window sealing system. Figure 7.7(A) shows the MGS engineering model built in DOME. The bottom part of Figure 7.7(A) shows several criterion modules for evaluating the current configuration of the design. Figure 7.7(B) shows an aggregator module included in the model. The aggregator module collectively shows the criterion modules dispersed in the model as well as the overall acceptability of the model.
[Figure 7.7 (A) part of the MGS design model in the DOME environment, with modules such as BPillarHeight, GlassRadius, SealDrag, MotorTorque, and MotorSpeed, plus several tradeoff specifications; (B) an aggregator module for overall evaluation]
7.3 ACCEPTABILITY AS AN ONLINE COMMERCE DECISION GUIDE

The second implementation illustrates the use of the acceptability decision model as an online shopping decision guide. The explosion of the Internet has brought much broader choice to buyers. As the number of online retailers increases, buyers have access to more selections than ever before. In order to fully exploit this large selection pool, buyers should ideally be able to compare each item carefully and identify the product that best fits their needs. However, faced with the large pool, most buyers find it difficult to compare all the alternatives systematically. The entire comparison process is not only time consuming and tedious, but also technically difficult if a buyer has multiple buying criteria. For example, if a buyer's criteria include both price and performance, the buyer has to determine the relative values of competing products with different attribute levels. Such comparison buying can easily overwhelm users and keep them away from best buying practice. Noticing this problem, vendors have been providing tools that allow users to flexibly and efficiently search the selections offered at their sites. Rough decision tools have been serving buyers so far; the purpose of this example is to implement the acceptability decision model as part of a simplified shopping engine usable over the network.
7.3.1 Background

7.3.1.1 Shopping Decision Attribute Classification

Criteria for online buyers are largely classified into two categories: product attributes and merchant attributes. Product attributes are characteristics of the physical product, such as its various performances and its price. Merchant attributes, on the other hand, describe characteristics of the merchant or broker, such as shopping convenience, product variety, product availability, delivery time, etc. Although these two attribute groups are in most cases equally important, it will be assumed in the following sections that buyers are concerned only with product attributes and make decisions based upon them.
[Figure 7.8 Online shopping search operation: criteria setting and search result display on the client side, criteria translation and search on the server side, connected over the Internet]
7.3.1.2 Operation

Figure 7.8 shows an overview of shopping site operation from both the user and the vendor sides. A buyer sets a set of criteria provided at the web browser. The vendor should provide a set of intuitive GUIs so that buyers can easily express their decision criteria. Afterwards, the set of specified criteria is transferred over the network to the vendor's server. The server has two components: business logic and a central database. The business logic translates the customer's criterion set into a query, which is then fed into the database. The database contains all the product information. There are now web sites that do not maintain a central database; using agent technology, the vendor site sends out a query to manufacturers' sites to collect the product information. However, in this implementation a central database is assumed. Upon receiving a query result from the database, the business logic summarizes the search result in an informative way. The result is then sent back to the client side so that the buyer can browse the result and decide. After that, the buyer will either go ahead and purchase from the set of suggested selections or redo the search.
7.3.1.3 Online Market Trend

The number of product categories offered at online shopping sites is increasing quickly. Although books and CDs were the mainstream products offered on web sites for a while, diverse products are now available in the web market. Judging from product attributes only, buying a book or a CD is more or less a single-attribute problem: as long as buyers know the book or album title, price is the only attribute that concerns them. However, most products are bought based upon multiple product attributes. When buying a computer, for example, buyers will consider diverse product attributes, such as CPU speed, RAM size, and hard disk size, in the decision. When buying decisions are made based upon a number of attributes, decision analysis can benefit both buyers and vendors in analyzing the situation.
7.3.1.4 Decision Environment

Although the online commerce example can be cast as a well-defined decision framework, the situation introduces additional dimensions to the traditional decision problem definition:

* The limited communication channel between buyer and vendor.
* The lack of mutual communication between buyer and vendor.

Unlike in physical markets, the decision analysis is the only channel through which buyer and seller can communicate online. The buyer's preference quantified through the decision analysis is the only information that a vendor will have about potential buyers. Based upon this information, the vendor will provide a product set intended to attract buyers. The other dimension in online shopping is the lack of bi-directional communication between buyers and vendors. In a physical market, a buyer is likely to be helped by salespeople in identifying products and arriving at a specific product. The presence of a salesperson can lead buyers to identify or change the preferences that will ultimately drive the buying decision. From a decision analysis viewpoint, this is almost equivalent to decision makers identifying new decision attributes or modifying an existing preference through a rigorous analysis. Considering that vendors maintain a finite set of attributes for a certain product in their databases, identifying new attributes for a product category through online decision analysis is not likely. However, a more realistic decision analysis tool would accommodate the possibility that buyers modify the predetermined attribute levels. The next section reviews a tool currently used as a decision guide and one that will be available on the WWW as soon as spring 1999.
7.3.2 Current Tools

7.3.2.1 Constraint Satisfaction Problem

The Constraint Satisfaction Problem (CSP) approach analyzes decision problems more qualitatively, through constraints (Guttman, Viegas et al. 1997). Finite-domain CSP is one approach composed of three main parts: a finite set of variables, each of which is associated with a finite domain, and a set of constraints that define relationships among the variables and restrict the values the variables can simultaneously take.
From an operational viewpoint, it involves two discrete steps. First, the user determines a binary representation for each attribute: in Figure 7.9(A), a user can decide a range for each attribute. From a pure decision analysis viewpoint, CSP is a rough form of Figure of Merit, where an individual value function is represented as a binary function that sets a crisp boundary between what is desired and what is not. In the panel shown in Figure 7.9(B), a user can set weighting factors for each attribute. The overall score is then determined as a weighted average of the values set in Figure 7.9(A).

[Figure 7.9 Preference presentation using CSP: (A) a panel for setting a desired range per attribute, e.g. processor speed in MHz; (B) a panel for weighting the relative importance of factors such as system memory (RAM) and modem speed]
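A sketch of this two-step scoring - a binary range test per attribute followed by a weighted average - with hypothetical attribute names and weights:

    def csp_score(product, ranges, weights):
        """CSP-style score: each attribute passes (1) or fails (0) its range;
        the overall score is the weighted average of the pass/fail values."""
        total = sum(weights.values())
        score = sum(weights[attr] * (1.0 if lo <= product[attr] <= hi else 0.0)
                    for attr, (lo, hi) in ranges.items())
        return score / total

    laptop = {"cpu_mhz": 400, "ram_mb": 64, "price": 2200}
    ranges = {"cpu_mhz": (300, 900), "ram_mb": (64, 256), "price": (0, 2500)}
    weights = {"cpu_mhz": 3, "ram_mb": 2, "price": 5}
    print(csp_score(laptop, ranges, weights))   # 1.0: every criterion satisfied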
The CSP formulation seems to be the only sophisticated decision guide currently available. PersonaLogic™ provides a decision guide to sites including AOL™ and other shopping sites (PersonaLogic 1999). Its popularity stems from the fact that retailers began to provide more than a simple "search" method on their retail sites, and this is the first-generation product available as an online decision guide. However, as was discussed in depth in section 2.5, the Figure of Merit may not accurately express the buyer's preference in the shopping decision. This is apparent in the sense that FOM fails to capture the transitional ranges of attributes. The application of multi-attribute utility theory seems to be motivated by the need to provide more sophisticated decision tools to both buyers and vendors.
7.3.2.2 Utility Theory
There is a group of people trying to deliver more sophisticated decision tools to
online retail sites. A research group at the MIT Media Laboratory devised a comprehensive
shopping guide engine using agent technology (Guttman, Viegas et al. 1997). As part of
the system, multiple attribute utility theory was chosen as the decision support tool. Titled
Tete-a-Tete (T@T), the support tool can be used to effectively differentiate both products
and merchants in an otherwise homogeneous-looking marketplace. The major benefit of
using utility theory over CSP is the improved mathematical sophistication, which often
translates into measurement accuracy. However, this benefit comes at the cost of a more
complicated and lengthy assessment process. The buyer has to provide utility
functions for the individual attributes, and also has to determine a number of scaling factors
via cross-reference lotteries. To fully exploit utility theory, the buyer not only
has to spend more time in the elicitation process but is also assumed to have at least a minimal
knowledge of utility theory. Figure 7.10 shows the graphical user interface for the T@T
project. The product attributes are displayed in the left column of the panel, while the
user's utility functions are shown in the right column. In between are
the individual scores of the corresponding product attributes. After appropriate steps are
taken to determine the scaling factors, the aggregated utility of the product is shown as
the bright line at the top of the center column.
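For contrast with the CSP scheme, the following sketch illustrates one common additive form of multi-attribute utility aggregation. T@T's internal model is not specified here, so the utility shapes, scaling constants, and attribute values below are illustrative assumptions only.

import math

def exp_utility(x, lo, hi, risk=1.0):
    # Single-attribute utility on [lo, hi], normalized to [0, 1].
    # risk > 0 gives a concave (risk-averse) shape.
    t = (x - lo) / (hi - lo)
    return (1 - math.exp(-risk * t)) / (1 - math.exp(-risk))

scaling = {"speed_mhz": 0.5, "ram_mb": 0.3, "price_usd": 0.2}  # e.g., from lottery questions

def overall_utility(product):
    u = {
        "speed_mhz": exp_utility(product["speed_mhz"], 200, 600, risk=2.0),
        "ram_mb": exp_utility(product["ram_mb"], 32, 512, risk=1.0),
        # Lower price is better, so flip the scale before applying the utility.
        "price_usd": exp_utility(3000 - product["price_usd"], 0, 3000, risk=0.5),
    }
    return sum(scaling[a] * u[a] for a in scaling)

print(overall_utility({"speed_mhz": 350, "ram_mb": 128, "price_usd": 2200}))

Note that, unlike the CSP weights, the scaling constants here are meant to come from cross-reference lotteries rather than being set directly, which is part of the heavier elicitation burden discussed above.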
The research group at the MIT Media Lab then launched a start-up company based upon
the T@T prototype; the company has been in operation in Cambridge, MA as of early 1999
(Kleiner 1999).
Figure 7.10 User interface for the T@T project using multi-attribute utility theory
7.3.2.3 Comparison of Decision Tools
Based upon the discussion provided in Chapters 2 and 3, the advantage of using the
acceptability model as a decision guide for retail sites is clear. In terms of operability
and mathematical sophistication, multiple attribute acceptability places itself between the
Figure of Merit method and multiple attribute utility theory. The mathematical sophistication
of the multi-attribute acceptability model follows from the rigorous construction
steps taken compared to those of the Figure of Merit method; the construction process of the
acceptability model shares much with that of multiple attribute utility theory. The
operational simplicity of the acceptability model follows from the smaller number
of steps needed to construct the multi-attribute acceptability function compared to the
multi-attribute utility function.

A balance between these two dimensions will be beneficial to both parties in that:

• Buyers can express their key decision factors rather completely without many
complex steps.

• Vendors can secure a relatively complete picture of a potential buyer's key preference
structure without imposing a significant operational burden on the buyer side. With this
rather complete picture obtained, the vendor can offer more potentially attractive
selections to the buyers.
7.3.3 System Realization
This section discusses the realization of a simple online shopping engine using the
acceptability model as the decision engine and Microsoft® Access as the central database.
After a brief discussion of the system architecture, the implementation and its
functionality are discussed in detail.
7.3.3.1 Architecture
Figure 7.11 shows a diagram of the main components in the implementation. There
are two main parts in the system: client and server. The client part is written in Java™
since the system should be deployable over the Internet. The server side is implemented
in Java™, C++ and Visual Basic, with Microsoft® Access as a relational database.
The client side has two main components: Graphical User Interfaces (GUIs) for data
visualization and a Client Module (CM) for communication to and from the server. The
CM, in turn, has a component that communicates with the server via RMI (Remote
Method Invocation). The GUI part is also decomposed into two main components:

• Acceptability Setting Panel: a main panel showing the list of decision attributes,
together with a set of popup modules that allow a buyer to set individual acceptability
functions.

• Result Panel: a main panel showing the result, product names, and their overall
acceptability values, together with a set of popup windows showing a detailed analysis
of a specific product's individual acceptability levels.
The server has four main components:

• Java Server: This part facilitates communication with the client; it receives and
sends events from and to the client. Part of it is implemented with JNI (the Java Native
Interface) to allow the Java server to communicate with the C++ engine.
Figure 7.11 System architecture of a simple online shopping engine using the
acceptability model as the decision guide
• C++ Engine: This part is responsible for evaluating each record retrieved from the
database against the set of acceptability functions. It also determines and sorts the
finite number of highest-scoring records that will be shown to the user (a simple sketch
of this evaluate-and-sort step follows this list). This part has a component in COM
(Component Object Model) that lets the C++ objects communicate with the Access
Data Model written in Visual Basic.

• Access Data Model: This model retrieves records from Microsoft® Access. It is
written in Visual Basic.

• Microsoft® Access: A relational database. The database is constructed using product
descriptions found on one of the online shopping web sites.
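As referenced above, the following sketch illustrates the evaluate-and-sort step, written in Python for brevity. The individual acceptability shapes and records are illustrative; the overall acceptability is taken as the product of the individual acceptabilities, the multiplicative form derived in Appendix A.

def ramp(value, reject, aspire):
    # A simple individual acceptability: 0 at the rejection level,
    # 1 at the aspiration level, linear in between (illustrative shape).
    if aspire >= reject:  # larger-is-better attribute
        t = (value - reject) / (aspire - reject)
    else:                 # smaller-is-better attribute
        t = (reject - value) / (reject - aspire)
    return max(0.0, min(1.0, t))

acceptability = {
    "speed_mhz": lambda v: ramp(v, reject=200, aspire=500),
    "weight_lb": lambda v: ramp(v, reject=10, aspire=4),  # lighter is better
}

def overall(record):
    score = 1.0
    for attr, fn in acceptability.items():
        score *= fn(record[attr])  # multiplicative aggregation (Appendix A)
    return score

records = [
    {"name": "laptop A", "speed_mhz": 400, "weight_lb": 6},
    {"name": "laptop B", "speed_mhz": 300, "weight_lb": 5},
    {"name": "laptop C", "speed_mhz": 450, "weight_lb": 9},
]
for r in sorted(records, key=overall, reverse=True)[:2]:  # keep the top-scoring records
    print(r["name"], round(overall(r), 3))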
7.3.3.2 Implementation Result
Figures 7.12 and 7.13 show the GUI parts constructed for the application example.
Figure 7.12 Acceptability setting interface. The left window shows the list of
the decision attributes. The right panel allows the user to set and modify
the acceptability function for an individual attribute
Figure 7.12 shows the first GUI, with the list of the decision attributes and a panel that
allows the user to set and modify individual acceptability functions. The acceptability
function set in this figure shows that the buyer is risk averse toward the attribute weight. After
setting all the individual acceptability functions, the buyer presses the button at the
bottom of the panel to send a service request to the server. Figure 7.13 shows two panels
associated with the search result received from the server. Figure 7.13(A) displays the
set of products that showed the highest overall acceptability. Figure 7.13(B) shows a
detailed analysis panel for one product listed in (A). With this panel, a user can view the
individual acceptabilities of a specific product and the tradeoffs amongst the attributes.
Figure 7.13 (A) is the main result panel; (B) is a detailed analysis panel for a
specific product shown in the main panel
7.4 SUMMARY
This chapter presented two implementation examples using the acceptability model as an
integrated decision analysis tool. The first example illustrates the use of the acceptability
model as a decision module in the MIT DOME system. DOME provides a generic modeling
framework in which designers collaborate in a heterogeneous environment. With the
acceptability model serving as a decision guide, designers can view individual design
performance attributes, decide tradeoffs among competing attributes, and make overall
decisions.
The second example illustrates the use of the acceptability model as a decision guide for a
simplified commerce engine implementation. The role of decision analysis on retail sites
is broader than in traditional decision situations since the decision analysis tool is the only
communication channel between buyers and vendors. Tools currently available and under
development were discussed, and the advantage of using the acceptability model as a
decision guide was presented. A simplified commerce engine was then implemented as a
client/server system with Microsoft® Access as the relational database.
8. CONCLUSIONS
8.1 SUMMARY OF THESIS
This thesis presents a goal-oriented design evaluation framework for use throughout the
product design process. A set of prescriptive, intuitive decision tools is built into a
coherent decision framework that will help designers in evaluating uncertain, evolving
designs. The unified framework encompasses four areas.

In the area of multi-dimensional value analysis, a previously proposed acceptability-based
decision model was reviewed and further developed. The key concepts of aspiration and
rejection levels are refined and given mathematical definitions. A multi-dimensional
acceptability function is constructed from individual acceptability functions using a
pragmatic operational assumption of preferential independence. It was deduced that the
intuitive form of the multi-dimensional acceptability function follows from the mathematical
definitions of aspiration and rejection levels. Advantages and limitations of the
acceptability-based evaluation model are discussed in comparison with other scoring methods.
A goal setting method, based upon a decomposition/recomposition framework, provides a
formal model that helps designers quantify a goal level. This method is useful for defining
aspiration levels in the acceptability function. The goal setting task is mathematically
modeled as a parameter assessment process with incomplete knowledge. The method
models a parameter assessment task as a hypothetical decision chain subject to stochastic
uncertainty. In this model, designers' sublevel judgments are elicited as sets of
probabilities. Unless designers can completely resolve the uncertainties, these sets of
probabilities are in conflict with each other. However, a consistency relaxation method
argues that they serve as a good practical starting point for securing answers closer to
real-life answers. Under appropriate operational assumptions, the sublevel probability sets are
translated into a transition matrix to construct a discrete Markov chain. The conditions
for the existence of limiting probabilities and the use of limiting probabilities in practical
applications are discussed. As a post analysis, the second-largest eigenvalue of a transition
matrix is analyzed to gain insight into the information content of the transition matrix.
A specific transition matrix is statistically compared against a pool of randomly selected
transition matrices. This post analysis can help the decision analyst assess the information
status of the transition matrix used in the model.
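For concreteness, the limiting probabilities and the second-largest eigenvalue can be computed in a few lines. The sketch below is illustrative only: the 3x3 transition matrix is hypothetical, and the closing comment gives the standard mixing-rate reading of the second eigenvalue rather than the statistical comparison procedure described above.

import numpy as np

P = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
    [0.3, 0.3, 0.4],
])  # a hypothetical transition matrix; each row sums to 1

eigvals, eigvecs = np.linalg.eig(P.T)
order = np.argsort(-np.abs(eigvals))

# Limiting probabilities: the left eigenvector for eigenvalue 1, normalized.
pi = np.real(eigvecs[:, order[0]])
pi /= pi.sum()
print("limiting probabilities:", np.round(pi, 4))

# |lambda_2| governs how quickly the chain forgets its starting state:
# values near 1 indicate slow mixing, values near 0 indicate fast mixing.
print("second-largest |eigenvalue|:", round(float(np.abs(eigvals[order[1]])), 4))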
Mean semi-variance analysis, the third component of the framework, addresses in a
descriptive manner the role of uncertainties of single events in decisions made by designers.
Two different kinds of uncertainties present in design processes are classified:
uncertainties of repeatable events and uncertainties of single events. Although these two
uncertainties carry different implications for decision-makers, most formal decision
analysis tools fail to adequately account for this factor in the decision framework. Relevant
previous work, such as the Allais paradox and portfolio construction in finance theory, was
reviewed to further justify the need to address this issue. This part of the research argues
that a decision-maker's risk attitude towards uncertainty is not fully quantified in the
preference structure. Based on those findings, a conceptual framework on uncertainties of
single events is provided and a metric is suggested to complement the current
expectation-based framework.
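The metric itself is developed earlier in the thesis; the sketch below only illustrates a mean semi-variance summary under the stated assumption that risk is measured as below-target semi-variance and opportunity as its above-target counterpart, in the spirit of Fishburn's below-target analysis. The outcome distribution and target are hypothetical.

import numpy as np

outcomes = np.array([80.0, 95.0, 100.0, 110.0, 130.0])  # hypothetical performance values
probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
target = 100.0                                           # e.g., an aspiration level

expectation = float(np.dot(probs, outcomes))
below, above = outcomes < target, outcomes > target
risk = float(np.dot(probs[below], (target - outcomes[below]) ** 2))         # downside semi-variance
opportunity = float(np.dot(probs[above], (outcomes[above] - target) ** 2))  # upside counterpart

print(f"expectation={expectation:.1f}, risk={risk:.1f}, opportunity={opportunity:.1f}")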
The final element of the framework is a data aggregation model. It develops a systematic
method for combining multiple estimates or subjective judgments made by different experts
in the design process. This method is practically important since most formal decision
analysis tools incorporate only a single probability distribution as an uncertainty component.
The model assumes that an expert opinion draws on a personal information pool that is
modeled as an unknown, ideal pool of information disturbed by noisy, personal bias.
Under this assumption, the method develops a probability mixture mechanism that
diminishes the noisy part of the personal information pool while emphasizing the ideal
pool of information. The model uses a step-by-step approach: quantifying the difference
amongst the estimates; merging estimates with exclusive mean-difference; merging
estimates with exclusive variance-difference; and combining the two distinct merging
mechanisms via the quantified difference parameter among the estimates.
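The step-by-step mixture mechanism itself is not reproduced here; as a simple baseline against which it can be compared, the sketch below shows standard inverse-variance (precision-weighted) pooling of two Gaussian expert estimates. All numbers are hypothetical.

import numpy as np

def pool_gaussians(estimates):
    # estimates: list of (mean, variance) pairs, one per expert.
    precisions = np.array([1.0 / var for _, var in estimates])
    means = np.array([mu for mu, _ in estimates])
    pooled_var = 1.0 / precisions.sum()
    pooled_mean = pooled_var * float(np.dot(precisions, means))
    return pooled_mean, pooled_var

# Two hypothetical expert opinions about a performance parameter.
mean, var = pool_gaussians([(12.0, 4.0), (15.0, 1.0)])
print(f"pooled mean={mean:.2f}, pooled variance={var:.2f}")  # pulled toward the more confident expert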
These four research topics, combined with the previously developed acceptability model,
form a comprehensive decision framework based upon the notion of a goal. One research
theme behind the four topics is to emphasize the role of uncertainties in two decision
analysis components: the decision-maker's preference and the uncertainties of alternatives.
8.2 CONTRIBUTIONS
The major contribution of this thesis is the provision of a comprehensive value-based
decision analysis framework in an engineering and design process context. The four research
topics discussed in detail are closely linked together and serve as guidance in diverse
decision situations. The following subsections describe some of the major contributions.
8.2.1 Mathematical Basis for Acceptability Model
The starting point for this thesis was that a decision model should be context-dependent.
Therefore, a decision analysis method used in the design process should be built
from observations of design practice and the viewpoint of design practitioners. In this
context, the acceptability model serves as a starting point for the development of a
comprehensive, design context-dependent framework for design decision making under
uncertainty. This thesis contributed to the solidification and expansion of the existing
acceptability model to form a comprehensive design evaluation framework, mapping the
design language of the acceptability model into a set of definitions and mathematical
steps. The steps taken for the construction of the multi-dimensional acceptability function
are similar to those of utility theory. This thesis takes the necessary steps to mathematically
show the intuitiveness of the previously proposed acceptability model. The first
contribution is that:

• Designers have a decision framework based upon design practice and built on design
languages.

• Designers have a design evaluation model that achieves mathematical sophistication
at modest operational effort.
8.2.2 A Formal Goal Setting Model
The goal setting method provides a formal framework for helping designers quantify a
goal level in the early design phase. This research models the goal setting task as a parameter
estimation process under uncertainties of single events; there are not many tools
available for helping designers in this quantification process.

A decomposition/recomposition model offers designers a practical method for
establishing goal levels. Using this method for setting a goal level:

• Designers can effectively use soft information in a quantification task.

• Designers can explore their knowledge pool to a greater extent, increasing the chance
of securing a more carefully considered parameter.

This method is extensible to a general parameter estimation task subject to uncertainties
of single events.
8.2.3 Classification of Uncertainties in Design Process
Another contribution of this research is the provision of a new perspective on the
uncertainties encountered in general decision making. The crux of this part of the
contribution is that two different kinds of uncertainties, uncertainties of single events and
of repeatable events, should be interpreted differently in the decision making process.
Although they are formalized and quantified using probabilities, these two kinds of
uncertainties pose different implications for decision-makers in the decision context. This
research brings attention to this issue and provides conceptual arguments from various
viewpoints. Finally, it suggests a supplementary, descriptive metric to partially bolster the
current expectation-centric framework. This metric may be interpreted as measuring the
evolving value of a design in the design process. The contribution in this perspective is:

• Designers can classify the uncertainties encountered in the design process.

• Designers have a metric to supplement the current expectation-based metric for decision
making under uncertainties of single events. This metric can be flexibly used to
account for the temporal impact of uncertainties.
8.3 RECOMMENDATIONS FOR FUTURE WORK
8.3.1 Dynamic Decision Model in Design Iteration Context
In essence, most design decision-making tools provide a snapshot of a decision
situation at a single point in time. However, in the design iteration context, a designer's
preference changes over time, as do the associated uncertainties. Most currently available
decision frameworks fail to address, or only minimally address, this time-variability of a
decision problem. As a matter of fact, most quantitative decision analysis tools are mainly
focused on the multi-attribute nature of decision problems. A design evaluation framework
accounting for time variability, including the dynamics of designers' preference change, is
a challenging and important issue in design decision making.
8.3.2 Multidimensional Goal Setting Model
The theoretical side of the goal setting process modeled as a discrete Markov chain is valid
for the multi-dimensional case. As long as designers can express a set of sublevel
probabilities for a certain state, a corresponding transition matrix can be built and the
limiting probabilities calculated. In this thesis, an example is given for the one-dimensional
goal setting case.

However, it is foreseen that as the number of attributes increases, the sublevel probability
elicitation part of the model becomes complex. If the attributes are independent in a
certain operational way, a concept similar to probabilistic independence in multivariate
probability theory may be introduced to make the elicitation process more tractable.
However, if the attributes are not operationally independent, the complex elicitation
process may easily overwhelm the participating designers. The operational issues
regarding a multi-dimensional goal setting model need further investigation.

Another recommendation is to examine whether the elicitation sequence matters in a
multi-dimensional goal level setting task, and if it does, how the sequence should be decided.
This topic is also associated with the multi-dimensional acceptability function since the
acceptability function hinges on the notion of a goal.
8.3.3 Extension of Goal Setting Model as an Estimation Tool
As discussed in the preceding sections, the goal setting task was mathematically
formulated as a parameter estimation process under uncertainties of single events. In many
cases, designers face the same kind of problem, which can be formulated as parameter
assessment under uncertainties of single events. The basic framework of the goal setting
method can be used as a tool for helping designers in such cases. It is recommended to
extend the current model to a more general assessment tool for use under incomplete
knowledge.
APPENDIX A
This appendix shows the construction of a three dimensional acceptability function using
simple mathematical steps. Although the multiplicative acceptability for a three
dimensional case is deduced using mathematical induction, this appendix shows that an
identical result is obtained directly using the mathematical definitions of aspiration and
rejection levels.
Under mutual acceptability independence among the attributes x, y, and z, the three
dimensional acceptability function should be a proper combination of the individual
acceptability functions a(x), a(y), and a(z):
$$a(x, y, z) = k\,a(x) + l\,a(y) + m\,a(z) + p\,a(x)\,a(y) + q\,a(x)\,a(z) + r\,a(y)\,a(z) + s\,a(x)\,a(y)\,a(z) \qquad (A.1)$$
In addition, from the properties of the one dimensional acceptability functions,
$$a(x_0, y, z) = 0 \quad \forall\, y, z$$
$$a(x, y_0, z) = 0 \quad \forall\, x, z$$
$$a(x, y, z_0) = 0 \quad \forall\, x, y$$
$$a(x^*, y^*, z^*) = 1 \qquad (A.2)$$

Substituting the first property of (A.2) into equation (A.1),
$$a(x_0, y, z) = 0 = l\,a(y) + m\,a(z) + r\,a(y)\,a(z) \quad \forall\, y, z \;\Rightarrow\; l = m = r = 0 \qquad (A.3)$$
From the second condition in (A.2),
$$a(x, y_0, z) = 0 = k\,a(x) + m\,a(z) + q\,a(x)\,a(z) \quad \forall\, x, z \;\Rightarrow\; k = m = q = 0 \qquad (A.4)$$
Also from the third condition in (A.2),

$$a(x, y, z_0) = 0 = k\,a(x) + l\,a(y) + p\,a(x)\,a(y) \quad \forall\, x, y \;\Rightarrow\; k = l = p = 0 \qquad (A.5)$$
Substituting the results from (A.3) through (A.5), equation (A.1) simplifies into

$$a(x, y, z) = s\,a(x)\,a(y)\,a(z) \qquad (A.6)$$
The value of the unknown s is determined from the last property in (A.2),

$$a(x^*, y^*, z^*) = 1 = s\,a(x^*)\,a(y^*)\,a(z^*) = s \qquad (A.7)$$
Therefore,

$$a(x, y, z) = a(x)\,a(y)\,a(z) \qquad (A.8)$$
An identical result is obtained using the definition of mutual acceptability independence
and mathematical induction.
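As a quick numerical check of (A.8), the sketch below (in Python, with illustrative piecewise-linear individual acceptability functions) verifies that the product form satisfies all four boundary properties in (A.2).

# Individual acceptabilities normalized so that a(x0) = 0 and a(x*) = 1.
a_x = lambda x: min(max((x - 1.0) / (4.0 - 1.0), 0.0), 1.0)  # x0 = 1, x* = 4
a_y = lambda y: min(max((y - 2.0) / (5.0 - 2.0), 0.0), 1.0)  # y0 = 2, y* = 5
a_z = lambda z: min(max((z - 0.0) / (3.0 - 0.0), 0.0), 1.0)  # z0 = 0, z* = 3

a = lambda x, y, z: a_x(x) * a_y(y) * a_z(z)

assert a(1.0, 3.7, 2.1) == 0.0  # a(x0, y, z) = 0 for any y, z
assert a(2.5, 2.0, 2.1) == 0.0  # a(x, y0, z) = 0
assert a(2.5, 3.7, 0.0) == 0.0  # a(x, y, z0) = 0
assert a(4.0, 5.0, 3.0) == 1.0  # a(x*, y*, z*) = 1
print("all boundary properties in (A.2) hold for the product form")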
APPENDIX B
This appendix contains the handout used for the goal setting interviews at Ford. The purpose
of the survey was to observe the users' perception of the method and to measure the
engineers' expected benefit levels from using a new technology in the product development
process. The purpose of this handout was to give the interviewees a brief description of
the method before the interview was conducted. The survey was conducted on March 23,
1999 in Dearborn, MI.
Purpose
This is a part of my PhD research in the area of design decision making under
uncertainty. For every decision, ranging from everyday life to sophisticated engineering
work, you often have to estimate important quantities with only a limited amount of
information.

One example is weather forecasting: we often encounter statements such as "the
chance that it will rain tomorrow is 30%". The meteorologist, with his data and personal
experience, is predicting an uncertain future event. Probability is used to represent the
meteorologist's uncertainty in the prediction.
The "goal-setting method", developed at MIT CADLab for the past years, is designed to
help engineers set appropriate goals for uncertain events. In words, the tool will help
engineers set product target specifications at early design phase. It might also help to
answer questions such as
In which year did US population exceed 200 million?
What will the Dow Jones Industrial Average be on Jan. 2nd, 2001?

By which year will 20% of the new passenger cars manufactured in the US be powered
by alternative technologies rather than gasoline-based engines?

Common to all the questions above is that each has an exact answer if all the necessary
information is available. For example, on Jan. 3, 2001, you could provide an exact
number. However, all you can do today is guess with all the information now available.
The Method and How it works
Consider the last question in the previous paragraph (the car question).

In year T, 20% of the new passenger cars manufactured in the US will be powered by
alternative technologies other than gasoline-based engines.

Let T be the year that you think will make the above statement true.

Compare the year T against an arbitrary year, say, 2010. There are three possible
outcomes for this comparison:

T will be less than 2010
T will be about 2010
T will be greater than 2010

And if you are not 100% sure about the result, you can assign a probability to each event.
For instance, my personal opinion is as follows:

Chance that (T is less than 2010) = 0.1
Chance that (T is about 2010) = 0.2
Chance that (T is greater than 2010) = 0.7
(NB: the three probabilities have to sum up to 1.0)
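This handout does not spell out how such judgments feed the goal-setting model; the sketch below shows one hypothetical translation of elicited (less / about / greater) triples into a transition matrix, assuming a simple random-walk structure over candidate years. The actual translation rule used in the method may differ, and all numbers are illustrative.

import numpy as np

years = [2005, 2010, 2015, 2020]  # hypothetical candidate states
elicited = {                      # (less, about, greater) triples per anchor year
    2005: (0.0, 0.1, 0.9),
    2010: (0.1, 0.2, 0.7),
    2015: (0.3, 0.4, 0.3),
    2020: (0.6, 0.3, 0.1),
}

n = len(years)
P = np.zeros((n, n))
for i, y in enumerate(years):
    less, about, greater = elicited[y]
    P[i, max(i - 1, 0)] += less           # mass toward earlier years
    P[i, i] += about                      # mass staying put
    P[i, min(i + 1, n - 1)] += greater    # mass toward later years

# The limiting distribution summarizes where the hypothetical chain settles.
pi = np.linalg.matrix_power(P, 200)[0]
print({y: round(float(p), 3) for y, p in zip(years, pi)})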
Engineering Application
DOME (Distributed Object-based Modeling Environment) is a client/server application
developed at the MIT CADLab. It enables engineers in an environment that is heterogeneous
in every sense to collaborate on design tasks over the Internet. We hope this software will
bring measurable value to engineering organizations.

This survey is aimed at measuring the benefits that you expect from using DOME in
your organization. More specifically:
Compared to the current practice, the use of DOME will bring

X% reduction in average development time,
Y% reduction in average development cost, and
Z% improvement in average quality.

Quantifying the numbers (X, Y, Z) in the above statements using the goal setting method
is the main topic of this survey. The survey will be conducted using an interactive Java
program.
BIBLIOGRAPHY
Apostolakis, G. (1990). The Concept of Probability in Safety Assessment of
Technological Systems. Science. 250: 1359-1364.
ASI, Ed. (1987). Quality Function Deployment. Executive Briefing. Dearborn, Michigan,
American Supplier Institute Inc.
Baron, J. (1988). Thinking and deciding. Cambridge, New York, Cambridge University
Press.
Bauer, V. and M. Wegener (1977). Application of Multi-Attribute Utility Theory.
Decision Making and Change in Human Affairs. H. Jungermann and G. d. Zeeuw.
Dordrecht-Holland, Reidel Publishing Company.
Bodie, Z. (1989). Investments. Horwood, Illinois, Irwin.
Brealey, R. A. and S. C. Myers (1996). Principles of Corporate Finance, McGraw-Hill.
CADLab (1999). DOME System, MIT CADLab. http://cadlab.mit.edu
Curtis, K. (1994). From management goal setting to organizational results : transforming
strategies into action. Westport, Conn., Quorum Books.
Dlesk, D. C. and J. S. Lieman (1983). "Multiple Objective Engineering Design."
Engineering Optimization 6.
Drake, A. and R. R. Keeney (1992). Decision Analysis, MIT Center for Advanced
Engineering Study.
Fishburn, P. (1977). "Mean-Risk Analysis with Risk Associated with Below-Target
Returns." The American Economic Review 67.
Franceschini, F. and S. Rossetto (1995). "QFD: The problem of Comparing
Technical/Engineering Design Requirements." Research in Engineering Design 7.
French, S. (1984). Fuzzy Decision Analysis: Some Criticisms. Fuzzy Sets and Decision
Analysis. H.-J. Zimmermann and L. A. Zadeh. Amsterdam; New York, North-Holland.
French, S. (1986). Decision Theory: an introduction to the mathematics of rationality.
Chichester, West Sussex, Halsted Press.
Genest, C. and J. V. Zidek (1986). "Combining Probability Distributions: A Critique and
an Annotated Bibliography." Statistical Science 1(1): 114-148.
Glass, G. V., B. McGaw, et al. (1981). Meta-analysis in Social Research. Beverly Hills;
London, Sage Publications.
Guttman, R. H., F. Viegas, et al. (1997). Tete-a-Tete, MIT Media Laboratory.
Hamburger, H. (1986). Representing, combining and using uncertain estimates.
Uncertainties in Artificial Intelligence. L. N. Kanal and J. F. Lemmer, Elsevier Science
Publishers B.V.
Hauser, J. R. and D. Clausing (1988). "The House of Quality." Harvard Business Review
66(3).
Hogarth, R. M. (1977). Methods for Aggregating Opinions. Dordrecht, Holland, Reidel
Publishing Company.
Hora, S. C. and R. L. Iman (1989). "Expert Opinion in Risk Analysis: The NUREG-1150 Methodology." Nuclear Science and Engineering 102: 323-331.
Howard, R. and J. Matheson (1984). The Principles and Applications of Decision
Analysis, Strategic Decision Group.
Kahneman, D. and A. Tversky (1979). "Prospect Theory: An analysis of Decision under
Risk." Econometrica 47(2).
Keen, P. G. W. (1977). The evolving Concept of Optimality. Multiple Criteria Decision
Making. M. K. Starr and M. Zeleny. Amsterdam, North-Holland Publishing.
Keeney, R. L. and H. Raiffa (1993). Decisions with Multiple Objectives: Preferences and
Value tradeoffs, Cambridge University Press.
Kleiner, A. F. (1999). Frictionless Inc.
Kreps, D. (1992). A course in microeconomic theory, Princeton University Press.
Machina, M. J. (1987). "Choice Under Uncertainty: Problems Solved and Unsolved."
Journal of Economic Perspectives 1(1).
Mas-Colell, A., M. D. Whinston, et al. (1995). Microeconomic Theory. New York;
Oxford, Oxford University Press.
Mistree, F. and H. M. Karandikar (1987). "Conditional Post-solution Analysis of
Multiple-objective Compromise Decision Support Problem." Engineering Optimization
12.
Otto, K. N. (1992). A Formal Representation Theory for Engineering Design. Mechanical
Engineering Department. Pasadena, California, California Institute of Technology.
Otto, K. N. (1993). Measurement Foundations for Design Engineering Methods. ASME
Design Theory and Methodology.
Otto, K. N. and E. K. Antonsson (1991). "Trade-Off Strategies in Engineering Design."
Research in Engineering Design 3(2).
Pahl, G. and W. Beitz (1996). Engineering Design: a Systematic Approach, Springer.
PersonaLogic (1999). http://www.personalogic.com.
Popper, K. R. (1956). "Three Views Concerning Human Knowledge." Contemporary
British Philosophy.
Pratt, J. W., H. Raiffa, et al. (1965). Introduction to Statistical Decision Theory. New
York, McGraw-Hill.
Pugh, S. (1991). Total design : integrated methods for successful product engineering.
Wokingham, England ; Reading, Mass., Addison-Wesley Pub. Co.
Raiffa, H. (1968). Decision Analysis: Introductory Lectures on Choices under Uncertainty.
Reading, Mass., Addison-Wesley.
Ramaswamy, R. and K. Ulrich (1993). "Augmenting the House of Quality with
Engineering Models." Research in Engineering Design 5.
Rice, J. A. (1995). Mathematical Statistics and Data Analysis. Belmont, California,
Duxbury Press.
Saaty, T. (1980). The analytic hierarchy process: planning, priority setting, resource
allocation. New York; London, McGraw-Hill International Book Co.
Sackman, H. (1974). Delphi Assessment: expert opinion, forecasting, and group process.
Santa Monica, California, RAND corporation.
Sanders, F. (1963). "On subjective probability forecasting." Journal of Applied
Meteorology 2.
Shiller, R. J. (1997). Human Behavior and the Efficiency of the Financial System. Recent
Developments in Macroeconomics, Federal Reserve Bank of New York.
Simon, H. A. and J. G. March (1958). Organizations, Wiley.
Stedry, A. C. (1962). Aspiration levels, attitudes, and performance in a goal-oriented
situation, Industrial Management Review, MIT.
Suh, N. P. (1990). The Principles of Design. New York, Oxford, Oxford University
Press.
Thurston, D. L. (1991). "A Formal Method for Subjective Design Evaluation with
Multiple Attributes." Research in Engineering Design 3(2).
Tribus, M. (1969). Rational Description, Decisions, and Designs. New York, Pergamon
Press.
Ulrich, K. and S. D. Eppinger (1995). Product Design and Development, McGraw-Hill.
Veneziano, D. (1994). Uncertainty and Expert Opinion in Geologic Hazards. Whitman
Symposium, MIT.
Wallace, D. R. (1994). A Probabilistic Specification-based Design Model: applications to
design search and environmental computer-aided design. Mechanical Engineering
Department. Cambridge, MA, MIT.
Wallace, D. R., M. Jakiela, et al. (1995). "Design Search under Probabilistic
Specification using Genetic Algorithm." Computer-Aided Design.
Winkler, R. L. (1968). "The consensus of subjective probability distributions."
Management Science 15.
Wood, K. L., K. N. Otto, et al. (1992). "Engineering design calculations with fuzzy
parameters." Fuzzy Sets and Systems 52.
Zeleny, M. (1982). Multiple Criteria Decision Making, McGraw-Hill.